Defining the research agenda in Machine Learning interpretability, explainability and trustability
Time: 14:00 - 15:00
Venue: Room 3.009, Alliance Manchester Business School, Booth Street West, Manchester, M15 6PB
Speaker: Mihaela van der Schaar (University of Cambridge & Alan Turing Institute)
Title: Defining the research agenda in Machine Learning interpretability, explainability and trustability
The ability to interpret the predictions of a machine learning model builds user trust and supports understanding of the underlying processes being modelled. In many application domains, such as medicine, insurance and criminal justice, model interpretability and explainability are a crucial requirement for deploying machine learning, since a model's predictions inform critical decision-making. Unfortunately, most state-of-the-art models, such as ensemble models, kernel methods and neural networks, are perceived as complex "black boxes" whose predictions are too difficult to interpret. In this talk, I will define a research agenda for achieving machine learning model interpretability, explainability and trustability. I will then present the extensive progress our group has recently made towards this agenda.
Professor van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Turing Fellow at The Alan Turing Institute in London, where she leads the effort on data science and machine learning for personalised medicine. She is an IEEE Fellow (2009) and received the Oon Prize on Preventative Medicine from the University of Cambridge (2018). She has also received an NSF CAREER Award, three IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best-paper awards, including the IEEE Darlington Award. She holds 35 granted US patents.