You are all cordially invited to the AMLab seminar on Tuesday August 28 at 16:00 in C3.163, where Tameem Adel will give a talk titled “On interpretable representations and the tradeoff between accuracy and interpretability”. Afterwards there are the usual drinks and snacks!
As machine learning models grow in size and complexity, and as applications reach critical social, economic and public health domains, learning interpretable data representations is becoming ever more important. Most current methods jointly optimize an objective combining accuracy and interpretability. However, this may reduce accuracy, and it is not applicable to models that have already been trained. In our recent ICML-2018 paper, we proposed two contrasting interpretability frameworks. The first aims to control the accuracy vs. interpretability tradeoff by providing an interpretable lens for an existing model (which has already been optimized for accuracy). We use a generative model which takes as input the representation of an existing (generative or discriminative) model, weakly supervised by limited side information. Applying a flexible and invertible transformation to this input leads to an interpretable representation with no loss in accuracy. We extend the approach with an active learning strategy to choose the most useful side information to obtain, allowing a human to guide what “interpretable” means. The second framework relies on joint optimization for a representation which is both maximally informative about the side information and maximally compressive about the non-interpretable data factors. This leads to a novel perspective on the relationship between compression and regularization. We also propose an interpretability evaluation metric based on our frameworks. Empirically, we achieve state-of-the-art results on three datasets using the two proposed algorithms.
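To see why an invertible transformation preserves accuracy, consider this minimal sketch (not the paper's method, which learns the transform under weak supervision): a fixed representation is mapped through an invertible linear map, and the original decision rule is recovered exactly in the new coordinates. All data, the matrix `A`, and the labeling rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen representation from a pre-trained model:
# 100 samples with 4-dimensional features, labels from a fixed linear rule.
Z = rng.normal(size=(100, 4))
w = np.array([1.0, -2.0, 0.5, 3.0])
y = (Z @ w > 0).astype(int)

# A flexible, invertible transformation. Here it is a random but
# well-conditioned linear map; the paper learns a more general
# invertible transform.
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)
Z_new = Z @ A.T  # representation in the new coordinates

# Because the map is invertible, the original decision rule transfers
# exactly to the new coordinates: accuracy is unchanged.
w_new = np.linalg.inv(A).T @ w
y_new = (Z_new @ w_new > 0).astype(int)
print((y == y_new).all())
```

The key point is that invertibility means no information is discarded, so any predictor defined on the original representation has an exact counterpart on the transformed one.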
Tameem Adel is currently a research fellow in the Machine Learning Group at the University of Cambridge, advised by Prof. Zoubin Ghahramani. He was previously an AMLab postdoctoral researcher advised by Prof. Max Welling. He obtained his PhD from the University of Waterloo, Ontario, Canada, advised by Prof. Ali Ghodsi. His main research interests revolve around probabilistic graphical models, Bayesian learning and inference, medical (especially MRI-based) applications of machine learning, interpretability of deep models, and domain adaptation.