Author Archives: Daniel Worrall

Herke van Hoof

You are all cordially invited to the AMLab seminar on Thursday 21st November at 14:00 in C3.163, where Herke van Hoof will give a talk titled “Gradient estimation algorithms”. There are the usual drinks and snacks!

Abstract: In many cases, we cannot calculate exact gradients. This happens when we cannot evaluate how well the model would have done for different parameter values, for example when the model generates a sequence of stochastic decisions. Many gradient estimators have therefore been developed, ranging from classical techniques from reinforcement learning to modern ones such as the RELAX estimator. In settings such as meta-learning, estimators of second derivatives have also been proposed. In this talk, I will attempt to give an overview of the properties of these techniques.
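
The estimators covered in the talk are not spelled out here, but the basic contrast can be made concrete. Below is a minimal NumPy sketch (the toy objective is assumed for illustration) comparing the classical score-function (REINFORCE) estimator with the reparameterization estimator for d/dθ E_{x~N(θ,1)}[x²], whose exact value is 2θ; the score-function estimate is unbiased but typically noisier.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n = 1.5, 100_000

    # Score-function (REINFORCE) estimator: E[f(x) * d/dtheta log p(x; theta)].
    x = rng.normal(theta, 1.0, n)
    score_grad = np.mean(x**2 * (x - theta))  # d log N(x; theta, 1) / d theta = x - theta

    # Reparameterization estimator: write x = theta + eps and differentiate f directly.
    eps = rng.normal(0.0, 1.0, n)
    reparam_grad = np.mean(2 * (theta + eps))

    print(score_grad, reparam_grad, 2 * theta)  # both estimates approach 2*theta = 3.0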

Maurice Weiler

You are all cordially invited to the AMLab seminar on Thursday 24th October at 14:00 in D1.113, where Maurice Weiler will give a talk titled “Gauge Equivariant Convolutional Networks”. There are the usual drinks and snacks!

Abstract: The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. We extend this principle beyond global symmetries to local gauge transformations, thereby enabling the development of equivariant convolutional networks on general manifolds. We show that gauge equivariant convolutional networks give a unified description of equivariant and geometric deep learning by deriving a wide range of models as special cases of our theory. To illustrate our theory on a simple example and highlight the interplay between local and global symmetries, we discuss an implementation for signals defined on the icosahedron, which provides a reasonable approximation of spherical signals. We evaluate the Icosahedral CNN on omnidirectional image segmentation and climate pattern segmentation, and find that it outperforms previous methods.
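
The icosahedral construction itself is involved, but the global-symmetry special case the theory subsumes is easy to demonstrate. Below is a small NumPy/SciPy sketch (an illustration of a group-equivariant lifting convolution, not the paper's implementation) for the rotation group C4: the filter bank consists of the four 90-degree rotations of one filter, and rotating the input rotates each feature map and cyclically permutes the rotation channels.

    import numpy as np
    from scipy.signal import correlate2d

    def c4_lift(image, filt):
        # One output channel per 90-degree rotation of the filter.
        return np.stack([correlate2d(image, np.rot90(filt, k), mode="same")
                         for k in range(4)])

    rng = np.random.default_rng(0)
    img, filt = rng.normal(size=(8, 8)), rng.normal(size=(3, 3))

    out = c4_lift(img, filt)
    out_rot = c4_lift(np.rot90(img), filt)

    # Equivariance check: rotating the input spatially rotates the feature maps
    # and cyclically shifts the rotation channels.
    assert np.allclose(np.rot90(out, axes=(1, 2))[[3, 0, 1, 2]], out_rot)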

Sindy Löwe

You are all cordially invited to the AMLab seminar on Thursday 14th November at 14:00 in C3.163, where Sindy Löwe will give a talk titled “Putting An End to End-to-End: Gradient-Isolated Learning of Representations”. There are the usual drinks and snacks!

Abstract: We propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation, exploiting the natural order in data instead. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domains. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
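
As a rough illustration of the training scheme, here is a minimal PyTorch sketch (toy linear modules and random paired "views" standing in for the paper's sequential patches): each module is trained with its own InfoNCE-style loss that scores matching pairs against in-batch negatives, and only detached activations are passed on, so no gradients flow between modules.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        # Cross-entropy over pairwise similarities; the matching row is the positive.
        logits = z1 @ z2.t() / temperature
        labels = torch.arange(z1.size(0))
        return F.cross_entropy(logits, labels)

    modules = nn.ModuleList(
        [nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)])
    optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

    x1, x2 = torch.randn(16, 32), torch.randn(16, 32)  # stand-ins for two views
    for module, opt in zip(modules, optims):
        z1, z2 = module(x1), module(x2)
        loss = info_nce(F.normalize(z1, dim=1), F.normalize(z2, dim=1))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Gradient isolation: the next module only ever sees detached activations.
        x1, x2 = z1.detach(), z2.detach()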

Talk by Will Grathwohl

You are all cordially invited to the special AMLab seminar on Tuesday 15th October at 12:00 in C1.112, where Will Grathwohl from David Duvenaud’s group in Toronto will give a talk titled “The many virtues of incorporating energy-based generative models into discriminative learning”.

Will is one of the authors behind many great recent papers.

Abstract: Generative models have long been promised to benefit downstream discriminative machine learning applications such as out-of-distribution detection, adversarial robustness, uncertainty quantification, semi-supervised learning, and many others. Yet, with a few notable exceptions, methods for these tasks based on generative models are considerably outperformed by hand-tailored methods for each specific task. In this talk, I will advocate for the incorporation of energy-based generative models into the standard discriminative learning framework. Energy-Based Models (EBMs) can be much more easily incorporated into discriminative models than alternative generative modeling approaches and can benefit from network architectures designed for discriminative performance. I will present a novel method for jointly training EBMs alongside classifiers and demonstrate that this approach allows us to build a single model which rivals the performance of both state-of-the-art generative models and discriminative models. Further, we demonstrate that our joint model gains many desirable properties, such as a built-in mechanism for out-of-distribution detection, improved calibration, and improved robustness to adversarial examples, rivaling or improving upon hand-designed methods for each task.
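
To make the joint training idea concrete, here is a minimal PyTorch sketch (toy MLP; the sampler and all hyperparameters are illustrative, not the talk's recipe). The classifier's logits f(x) are reinterpreted so that E(x) = -logsumexp_y f(x)[y] defines an energy: the same network then gives p(y|x) via the softmax and an unnormalized log p(x), and the generative term is trained contrastively against model samples.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

    def energy(x):
        # E(x) = -log sum_y exp(f(x)[y]); low energy corresponds to high p(x).
        return -torch.logsumexp(net(x), dim=1)

    def sgld_sample(x0, steps=20, step_size=1.0, noise=0.01):
        # Crude short-run SGLD on the energy; hyperparameters are illustrative.
        x = x0.clone().requires_grad_(True)
        for _ in range(steps):
            g = torch.autograd.grad(energy(x).sum(), x)[0]
            x = (x - step_size * g + noise * torch.randn_like(x))
            x = x.detach().requires_grad_(True)
        return x.detach()

    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    x_neg = sgld_sample(torch.randn(8, 32))  # "negative" samples from the model

    ce_loss = F.cross_entropy(net(x), y)                # discriminative term, p(y|x)
    gen_loss = energy(x).mean() - energy(x_neg).mean()  # pushes data energy down
    (ce_loss + gen_loss).backward()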

Talk by Andy Keller

You are all cordially invited to the AMLab seminar on Thursday 10th October at 14:00 in D1.113, where Andy Keller will give a talk titled “Approaches to Learning Approximate Equivariance”. There are the usual drinks and snacks!

Abstract: In this talk we will discuss a few proposed approaches to learning approximate equivariance directly from data. These approaches range from weakly supervised to fully unsupervised, relying on either mutual information bounds or inductive biases respectively. Critical discussion will be encouraged as much of the work is in early phases. Preliminary results will be shown to demonstrate validity of concepts.
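
The specific approaches are left for the talk, but the object being learned can be stated concretely: approximate equivariance means f(T x) ≈ T f(x) rather than exact equality. The toy NumPy check below (the map and transformation are assumed purely for illustration) measures that mismatch for a random map under 90-degree rotations.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(9, 9))
    f = lambda x: np.tanh(W @ x)                     # a toy "learned" map on 3x3 patches
    T = lambda x: np.rot90(x.reshape(3, 3)).ravel()  # the transformation: 90-degree rotation

    # Mean equivariance error over random inputs: 0 means exactly equivariant,
    # small values mean approximately equivariant.
    errs = [np.linalg.norm(f(T(x)) - T(f(x))) for x in rng.normal(size=(100, 9))]
    print(np.mean(errs))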

Talk by Bhaskar Rao

You are all cordially invited to the AMLab seminar on Thursday 3rd October at 14:00 in B0.201, where Bhaskar Rao (visiting researcher; bio below) will give a talk titled “Scale Mixture Modeling of Priors for Sparse Signal Recovery”. There are the usual drinks and snacks!

Abstract: This talk will discuss Bayesian approaches to solving the sparse signal recovery problem. In particular, methods based on priors that admit a scale mixture representation will be discussed, with emphasis on Gaussian scale mixture modeling. In the context of MAP estimation, iterative reweighted approaches will be developed. The scale mixture modeling naturally leads to a hierarchical framework, and empirical Bayesian methods motivated by this hierarchy will be highlighted. The pros and cons of the two approaches, MAP versus empirical Bayes, will also be discussed.
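
As a rough sketch of the MAP branch, here is an iteratively reweighted l2 scheme in the FOCUSS style (illustrative NumPy code, not necessarily the speaker's algorithm): each iteration solves a weighted minimum-norm problem, and re-estimating the per-coefficient scales, which play the role of the Gaussian scale-mixture variances, drives the solution toward sparsity.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 20, 50, 3
    A = rng.normal(size=(n, m))
    x_true = np.zeros(m)
    x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true

    gamma = np.ones(m)  # per-coefficient scales (scale-mixture variances)
    lam = 1e-6          # small ridge term for numerical stability
    for _ in range(50):
        # Weighted minimum-norm fit: x = Gamma A^T (A Gamma A^T + lam I)^{-1} y.
        G = A * gamma   # A @ diag(gamma)
        x = gamma * (A.T @ np.linalg.solve(G @ A.T + lam * np.eye(n), y))
        gamma = x**2 + 1e-12  # re-estimate scales from the current solution

    print(np.linalg.norm(x - x_true))  # the error should shrink toward zero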