Talk by Zeynep Akata

You are all cordially invited to the AMLab seminar on Tuesday, April 24 at 16:00 in C3.163, where Zeynep Akata will give a talk titled “Representing and Explaining Novel Concepts with Minimal Supervision”. Afterwards there will be the usual drinks and snacks!

Abstract: Clearly explaining the rationale behind a classification decision to an end user can be as important as the decision itself. Existing approaches to deep visual recognition are generally opaque and do not output any justification text, while contemporary vision-language models can describe image content but fail to take into account the class-discriminative image aspects that justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Artificial Intelligence, covering (1) how we can generalize image classification models to cases where no visual training data is available, (2) how to generate images and image features from detailed visual descriptions, and (3) how our models focus on the discriminating properties of the visible object, jointly predict a correct and an incorrect class label, and explain why the correct label is appropriate for the image and why the incorrect label is not.
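To give a flavor of point (1), here is a minimal sketch of the zero-shot learning idea: images of unseen classes are classified by scoring image features against per-class semantic embeddings (e.g. attribute vectors) through a compatibility function learned on seen classes. This is only an illustration with made-up names and random data, not the speaker's actual implementation; the bilinear compatibility form is one common choice in the zero-shot literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: image feature size, semantic (attribute) size,
# and number of unseen classes with no visual training data.
d_img, d_sem, n_unseen = 2048, 85, 10

# Compatibility matrix W: in practice learned from *seen* classes;
# random here purely for illustration.
W = rng.normal(size=(d_img, d_sem))

# Semantic embeddings (e.g. attribute vectors) of the unseen classes.
S_unseen = rng.normal(size=(n_unseen, d_sem))

def zero_shot_predict(x_img: np.ndarray) -> int:
    """Predict an unseen class by maximizing the bilinear compatibility
    score x^T W s over the unseen classes' semantic embeddings s."""
    scores = S_unseen @ (W.T @ x_img)  # one score per unseen class
    return int(np.argmax(scores))

x = rng.normal(size=d_img)  # a (fake) image feature vector
print(zero_shot_predict(x))
```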