Author Archives: Daniel Worrall

Talk by Will Grathwohl

You are all cordially invited to the special AMLab seminar on Tuesday 15th October at 12:00 in C1.112, where Will Grathwohl, from David Duvenaud’s group in Toronto, will give a talk titled “The many virtues of Incorporating energy-based generative models into discriminative learning”.

Will is one of the authors behind many great recent papers.

Abstract: Generative models have long been promised to benefit downstream discriminative machine learning applications such as out-of-distribution detection, adversarial robustness, uncertainty quantification, semi-supervised learning and many others. Yet, with a few notable exceptions, methods for these tasks based on generative models are considerably outperformed by hand-tailored methods for each specific task. In this talk, I will advocate for the incorporation of energy-based generative models into the standard discriminative learning framework. Energy-Based Models (EBMs) can be much more easily incorporated into discriminative models than alternative generative modeling approaches and can benefit from network architectures designed for discriminative performance. I will present a novel method for jointly training EBMs alongside classifiers and demonstrate that this approach allows us to build models which rival the performance of state-of-the-art generative models and discriminative models within a single model. Further, we demonstrate that our joint model gains many desirable properties such as a built-in mechanism for out-of-distribution detection, improved calibration, and improved robustness to adversarial examples — rivaling or improving upon hand-designed methods for each task.
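The joint energy-based reading of a classifier that the abstract alludes to can be sketched roughly as follows. This is a hedged illustration, not the speaker’s actual training method: the random-weight `logits` function stands in for any trained classifier network, and all names here are illustrative.

```python
import numpy as np

# A classifier produces logits f(x)[y]. The energy-based reading treats
#   p(y | x) = softmax(f(x))           (the usual classifier), and
#   p(x)    ∝ sum_y exp(f(x)[y])       (an EBM over inputs),
# which gives the input energy E(x) = -logsumexp_y f(x)[y].
rng = np.random.default_rng(0)
W = rng.normal(size=(784, 10))  # random weights stand in for a trained 10-class net

def logits(x):
    return x.reshape(len(x), -1) @ W

def class_probs(x):
    # ordinary softmax over the same logits
    f = logits(x)
    e = np.exp(f - f.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def energy(x):
    # lower energy = the model assigns higher unnormalised density to x
    f = logits(x)
    mx = f.max(axis=1)
    return -(mx + np.log(np.exp(f - mx[:, None]).sum(axis=1)))

x = rng.normal(size=(4, 784))
print(class_probs(x).sum(axis=1))  # each row sums to 1
print(energy(x))
```

Note how the out-of-distribution mechanism mentioned in the abstract falls out for free: inputs receiving unusually high energy under the jointly trained model can be flagged as out-of-distribution, with no second model needed.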

Talk by Andy Keller

You are all cordially invited to the AMLab seminar on Thursday 10th October at 14:00 in D1.113, where Andy Keller will give a talk titled “Approaches to Learning Approximate Equivariance”. There are the usual drinks and snacks!

Abstract: In this talk we will discuss a few proposed approaches to learning approximate equivariance directly from data. These approaches range from weakly supervised to fully unsupervised, relying on mutual information bounds or inductive biases, respectively. Critical discussion will be encouraged, as much of the work is in its early phases. Preliminary results will be shown to demonstrate the validity of the concepts.

Talk by Bhaskar Rao

You are all cordially invited to the AMLab seminar on Thursday 3rd October at 14:00 in B0.201, where Bhaskar Rao (visiting researcher; bio below) will give a talk titled “Scale Mixture Modeling of Priors for Sparse Signal Recovery”. There are the usual drinks and snacks!

Abstract: This talk will discuss Bayesian approaches to solving the sparse signal recovery problem. In particular, methods based on priors that admit a scale mixture representation will be discussed, with emphasis on Gaussian scale mixture modeling. In the context of MAP estimation, iterative reweighted approaches will be developed. The scale mixture modeling naturally leads to a hierarchical framework, and empirical Bayesian methods motivated by this hierarchy will be highlighted. The pros and cons of the two approaches, MAP versus Empirical Bayes, will be a subject of discussion.