Talk by Pim de Haan

Hi everyone, you are all cordially invited to the AMLab Seminar on Thursday 30th July at 16:00 CEST on Zoom, where Pim de Haan will give a talk titled “Natural Graph Networks”.

Paper link: https://arxiv.org/abs/2007.08349

Title: Natural Graph Networks

Abstract: Conventional neural message passing algorithms are invariant under permutation of the messages and hence forget how the information flows through the network. Studying the local symmetries of graphs, we propose a more general algorithm that uses different kernels on different edges, making the network equivariant to local and global graph isomorphisms and hence more expressive. Using elementary category theory, we formalize many distinct equivariant neural networks as natural networks, and show that their kernels are ‘just’ a natural transformation between two functors. We give one practical instantiation of a natural network on graphs which uses an equivariant message network parameterization, yielding good performance on several benchmarks.
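
To make the idea concrete, here is a minimal Python sketch (using numpy) contrasting standard message passing, where one shared kernel is used on every edge, with the more general scheme in which each edge carries its own kernel. The graph, feature dimensions, and kernels below are made up for illustration; the paper's natural networks additionally constrain how kernels on locally isomorphic neighbourhoods must relate, which this sketch does not implement.

import numpy as np

# Toy contrast between standard message passing (one shared weight matrix
# for every edge) and a scheme where each edge carries its own kernel.
# Everything here is illustrative; the paper additionally ties together
# kernels on edges whose local neighbourhoods are isomorphic.

rng = np.random.default_rng(0)
num_nodes, feat_dim = 4, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # a small cycle graph
x = rng.normal(size=(num_nodes, feat_dim))         # node features

# Standard GNN layer: the same kernel W on every edge.
W_shared = rng.normal(size=(feat_dim, feat_dim))
h_shared = np.zeros_like(x)
for (u, v) in edges:
    h_shared[v] += x[u] @ W_shared

# Edge-dependent kernels: a kernel W_e per edge (random here; in the paper
# they are produced by an equivariant message network).
W_edge = {e: rng.normal(size=(feat_dim, feat_dim)) for e in edges}
h_local = np.zeros_like(x)
for (u, v) in edges:
    h_local[v] += x[u] @ W_edge[(u, v)]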

Emiel Hoogeboom

You are all cordially invited to the AMLab seminar on Thursday 2nd July at 16:00 on Zoom, where Emiel Hoogeboom will give a talk titled “The Convolution Exponential”.

Title: The Convolution Exponential

Paper link: https://arxiv.org/abs/2006.01910

Abstract: We introduce a new method to build linear flows, by taking the exponential of a linear transformation. This linear transformation does not need to be invertible itself, and the exponential has the following desirable properties: it is guaranteed to be invertible, its inverse is straightforward to compute and the log Jacobian determinant is equal to the trace of the linear transformation. An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers. Using this insight, we develop new invertible transformations named convolution exponentials and graph convolution exponentials, which retain the equivariance of their underlying transformations. In addition, we generalize Sylvester Flows and propose Convolutional Sylvester Flows which are based on the generalization and the convolution exponential as basis change. Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows. In addition, we show that Convolutional Sylvester Flows improve performance over residual flows as a generative flow model measured in log-likelihood.
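
The identities behind the method are easy to verify on a toy example. The sketch below (numpy/scipy, with a small dense matrix standing in for the convolution and made-up dimensions) applies exp(W) through its truncated power series, checks that exp(-W) inverts it, and checks that the log-determinant equals the trace of W; the paper's contribution is doing this implicitly for convolutional and graph-convolutional W.

import numpy as np
from scipy.linalg import expm

# A small dense matrix stands in for the convolution; dimensions are toy.
rng = np.random.default_rng(0)
d = 5
W = rng.normal(size=(d, d)) * 0.3        # need not be invertible itself
x = rng.normal(size=d)

# Apply exp(W) to x implicitly via the power series:
# y = x + Wx + W(Wx)/2! + W(W(Wx))/3! + ...
y, term = x.copy(), x.copy()
for k in range(1, 20):
    term = W @ term / k
    y += term

assert np.allclose(y, expm(W) @ x)       # matches exp(W) applied to x
assert np.allclose(expm(-W) @ y, x)      # exp(-W) gives the exact inverse
sign, logdet = np.linalg.slogdet(expm(W))
assert np.isclose(logdet, np.trace(W))   # log |det exp(W)| = trace(W)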

Victor Garcia Satorras

You are all cordially invited to the AMLab seminar on Thursday 25th June at 16:00 on Zoom, where Victor Garcia Satorras will give a talk titled “Neural Enhanced Belief Propagation on Factor Graphs”.

Note: a recording of the talk will be uploaded to YouTube afterwards.

Paper link: https://arxiv.org/pdf/2003.01998.pdf

Abstract: A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data generating process, belief propagation can infer the optimal posterior probability estimates in tree structured factor graphs. However, in many cases we may only have access to a poor approximation of the data generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that runs conjointly a FG-GNN with belief propagation. The FG-GNN receives as input messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
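
As a rough illustration of the hybrid scheme, the toy Python sketch below only mimics the control flow: a belief propagation update produces messages, a stand-in "FG-GNN" outputs a correction, and the corrected messages feed the next iteration. The update rules and dimensions are invented placeholders, not the paper's sum-product implementation or message network.

import numpy as np

# All components are toy stand-ins that only show the control flow of the
# hybrid algorithm, not a real sum-product or FG-GNN implementation.
rng = np.random.default_rng(0)
num_edges, msg_dim = 6, 4
messages = np.ones((num_edges, msg_dim)) / msg_dim    # uniform init

def bp_update(msgs):
    # Stand-in for one belief propagation message update.
    mixed = 0.5 * msgs + 0.5 * msgs.mean(axis=0, keepdims=True)
    return mixed / mixed.sum(axis=1, keepdims=True)

W_corr = rng.normal(size=(msg_dim, msg_dim)) * 0.1    # stand-in "FG-GNN"

def gnn_correction(msgs):
    # Stand-in for the graph network that refines the BP messages.
    return msgs @ W_corr

for it in range(10):
    bp_msgs = bp_update(messages)
    corrected = bp_msgs + gnn_correction(bp_msgs)     # neural refinement
    corrected = np.clip(corrected, 1e-6, None)
    messages = corrected / corrected.sum(axis=1, keepdims=True)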

Virtual talk by Jens Kober on Robots Learning (Through) Interactions

Following RIVM guidelines, we will host a completely virtual seminar in the Delta Lab Deep Learning Seminar Series. We will livestream the talk at the brand-new AMLab YouTube channel, starting May 7th at 11:00 CEST:
https://www.youtube.com/channel/UC-UamuSbKi_Dcaa4wlEyqlA
(Note: in case we need to change the streaming link because of technical problems, look for updates here.)

Abstract:
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.
In this seminar, Jens Kober will discuss various learning techniques we developed that can handle complex interactions with the environment. Complexity arises from non-linear dynamics in general and contacts in particular, taking multiple reference frames into account, dealing with high-dimensional input data, interacting with humans, etc. A human teacher is always involved in the learning process, either directly (providing demonstrations) or indirectly (designing the optimization criterion), which raises the question: How to best make use of the interactions with the human teacher to render the learning process efficient and effective?
All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (unscrewing light bulbs).

Jens Kober is an associate professor at TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany, and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD degree in engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the annually awarded Georges Giralt PhD Award for the best PhD thesis in robotics in Europe, the 2018 IEEE RAS Early Academic Career Award, and an ERC Starting Grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.

Jan Günter Wöhlke

You are all cordially invited to the AMLab seminar on Thursday 20th February at 16:00 in C3.163, where Jan Günter Wöhlke from Bosch will give a talk titled “Tackling Sparse Rewards in Reinforcement Learning”.

Abstract: Sparse reward problems present a challenge for reinforcement learning (RL) agents. Previous work has shown that choosing start states according to a curriculum can significantly improve the learning performance. Many existing curriculum generation algorithms rely on two key components: Performance measure estimation and a start selection policy. In our recently accepted AAMAS paper, we therefore propose a unifying framework for performance-based start state curricula in RL, which allows analyzing and comparing the influence of the key components. Furthermore, a new start state selection policy is introduced. With extensive empirical evaluations, we demonstrate state-of-the-art performance of our novel curriculum on difficult robotic navigation tasks as well as a high-dimensional robotic manipulation task.
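
For intuition, here is a hedged Python sketch of a generic performance-based start-state curriculum of the kind the framework unifies: keep a running success estimate per candidate start state and preferentially sample starts of intermediate difficulty. The environment, estimator, and selection rule below are toy stand-ins, not the specific components or the new selection policy from the AAMAS paper.

import numpy as np

# Toy 1-D task: candidate start states in [0, 1], where starts close to 0
# are easy and starts close to 1 are hard. Success estimates drive which
# start state the agent is reset to next.
rng = np.random.default_rng(0)
start_states = np.linspace(0.0, 1.0, 11)        # candidate starts
success = np.full(len(start_states), 0.5)       # running success estimates
alpha = 0.1                                     # estimate update rate

def select_start():
    # Prefer starts whose estimated success is neither ~0 nor ~1.
    scores = success * (1.0 - success) + 1e-3
    probs = scores / scores.sum()
    return rng.choice(len(start_states), p=probs)

def run_episode(idx):
    # Toy environment: harder starts succeed less often.
    return float(rng.random() > start_states[idx])

for episode in range(200):
    idx = select_start()
    outcome = run_episode(idx)
    success[idx] = (1 - alpha) * success[idx] + alpha * outcome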

Herke van Hoof

You are all cordially invited to the AMLab seminar on Thursday 21st November at 14:00 in C3.163, where Herke van Hoof will give a talk titled “Gradient estimation algorithms”. There are the usual drinks and snacks!

Abstract: In many cases, we cannot calculate exact gradients. This happens when we cannot evaluate how well the model would have done for different parameter values, for example when the model generates a sequence of stochastic decisions. Many gradient estimators have therefore been developed, from classical techniques from reinforcement learning to modern techniques such as the RELAX estimator. In meta-learning, for example, estimators of second derivatives have also been proposed. In this talk, I will attempt to give an overview of the properties of these techniques.
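
As one concrete example of the classical techniques mentioned above, the short numpy sketch below implements the score-function (REINFORCE) estimator for a Bernoulli decision and compares it with the exact gradient; the objective and parameterisation are made up for illustration, and RELAX or second-derivative estimators are not shown.

import numpy as np

# Estimate d/dtheta E_{b ~ Bernoulli(sigmoid(theta))}[f(b)] without
# differentiating through the sampling step. Objective f and theta are toy.
rng = np.random.default_rng(0)
theta = 0.3
f = lambda b: (b - 0.7) ** 2

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
p = sigmoid(theta)

# Exact gradient, available here because E[f] = p * f(1) + (1 - p) * f(0).
true_grad = (f(1.0) - f(0.0)) * p * (1 - p)

# Score-function estimate: average of f(b) * d/dtheta log p(b | theta).
b = (rng.random(200_000) < p).astype(float)     # b ~ Bernoulli(p)
score = b - p                                   # d log p(b) / d theta
estimate = np.mean(f(b) * score)

print(true_grad, estimate)                      # close for large sample sizes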

Maurice Weiler

You are all cordially invited to the AMLab seminar on Thursday 24th October at 14:00 in D1.113, where Maurice Weiler will give a talk titled “Gauge Equivariant Convolutional Networks”. There are the usual drinks and snacks!

Abstract: The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. We extend this principle beyond global symmetries to local gauge transformations, thereby enabling the development of equivariant convolutional networks on general manifolds. We show that gauge equivariant convolutional networks give a unified description of equivariant and geometric deep learning by deriving a wide range of models as special cases of our theory. To illustrate our theory on a simple example and highlight the interplay between local and global symmetries we discuss an implementation for signals defined on the icosahedron, which provides a reasonable approximation of spherical signals. We evaluate the Icosahedral CNN on omnidirectional image segmentation and climate pattern segmentation, and find that it outperforms previous methods.
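
The sketch below illustrates only the familiar global-symmetry special case that the gauge construction generalises: a C4 "lifting" convolution that correlates an image with the four 90-degree rotations of a filter, checked to be equivariant to rotating the input. The gauge-equivariant kernels and the icosahedral implementation themselves are not reproduced here; the data is random and the check uses plain scipy correlation.

import numpy as np
from scipy.signal import correlate2d

# Random square image and filter; everything here is illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
w = rng.normal(size=(3, 3))

def lift_conv(img, filt):
    # Correlate img with filt rotated by 0, 90, 180 and 270 degrees.
    return [correlate2d(img, np.rot90(filt, r), mode="valid") for r in range(4)]

out = lift_conv(x, w)
out_rot = lift_conv(np.rot90(x), w)

# Equivariance: rotating the input rotates every output map and cyclically
# shifts the rotation channel by one.
for r in range(4):
    assert np.allclose(out_rot[r], np.rot90(out[(r - 1) % 4]))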

Sindy Löwe

You are all cordially invited to the AMLab seminar on Thursday 14th November at 14:00 in C3.163, where Sindy Löwe will give a talk titled “Putting An End to End-to-End: Gradient-Isolated Learning of Representations”. There are the usual drinks and snacks!

Abstract: We propose a novel deep learning method for local self-supervised representation learning that does not require labels nor end-to-end backpropagation but exploits the natural order in data instead. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domain. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
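
The gradient-isolation idea can be sketched in a few lines of PyTorch: each module is trained on its own InfoNCE-style loss and passes a detached output to the next module, so no error signal propagates end-to-end. The encoders, the positive/negative pair construction, and all dimensions below are simplified toy choices rather than the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Three toy "modules", each with its own optimiser and its own loss.
torch.manual_seed(0)
modules = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])
opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

def infonce_loss(z, z_pos):
    # Toy InfoNCE: the matching row of z_pos is the positive for each row
    # of z; the other rows in the batch act as negatives.
    logits = z @ z_pos.t()
    targets = torch.arange(z.size(0))
    return F.cross_entropy(logits, targets)

x = torch.randn(32, 16)                          # a "current" input batch
x_pos = x + 0.1 * torch.randn_like(x)            # toy positive views

h, h_pos = x, x_pos
for module, opt in zip(modules, opts):
    z, z_pos = module(h), module(h_pos)
    loss = infonce_loss(z, z_pos)
    opt.zero_grad()
    loss.backward()                              # gradients stay inside the module
    opt.step()
    h, h_pos = z.detach(), z_pos.detach()        # isolate the next module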

Talk by David Blei on The Blessings of Multiple Causes

You are all cordially invited to the UvA-Bosch Delta Lab seminar on Thursday 17th October at 15:00 on the Roeterseilandcampus in room A2.11, where David Blei, well known for his fantastic work on LDA, Bayesian nonparametrics, and variational inference, will give a talk on “The Blessings of Multiple Causes”.

Abstract:

Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.
How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the correlation among multiple causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We demonstrate the deconfounder on real-world data and simulation studies, and describe the theoretical requirements for the deconfounder to provide unbiased causal estimates.
This is joint work with Yixin Wang.
Paper link: https://arxiv.org/abs/1805.06826
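
The intuition that correlation among multiple causes carries evidence about an unobserved confounder is easy to illustrate. In the toy Python simulation below, a one-factor model (plain PCA standing in for a probabilistic factor model) fit to the causes alone recovers a substitute that tracks the true, unobserved confounder; the outcome model, the predictive model checking step, and the paper's assumptions are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA

# Simulate many causes that share one unobserved confounder.
rng = np.random.default_rng(0)
n, num_causes = 5000, 20
z = rng.normal(size=(n, 1))                           # unobserved confounder
loadings = rng.normal(size=(1, num_causes))
causes = z @ loadings + 0.5 * rng.normal(size=(n, num_causes))

# Fit a one-factor model to the causes alone (plain PCA as a stand-in for
# e.g. probabilistic PCA) and use the factor as a substitute confounder.
substitute = PCA(n_components=1).fit_transform(causes)[:, 0]

# The inferred factor tracks the true confounder up to sign and scale.
corr = np.corrcoef(substitute, z[:, 0])[0, 1]
print(f"|correlation| between substitute and true confounder: {abs(corr):.2f}")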


Biography


David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and application. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), a Guggenheim fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research. He is a fellow of the ACM and the IMS.

Talk by Will Grathwohl

You are all cordially invited to the special AMLab seminar on Tuesday 15th October at 12:00 in C1.112, where Will Grathwohl from David Duvenaud’s group in Toronto will give a talk titled “The many virtues of incorporating energy-based generative models into discriminative learning”.

Will is one of the authors behind many great recent papers.

Abstract: Generative models have long been promised to benefit downstream discriminative machine learning applications such as out-of-distribution detection, adversarial robustness, uncertainty quantification, semi-supervised learning and many others.  Yet, except for a few notable exceptions, methods for these tasks based on generative models are considerably outperformed by hand-tailored methods for each specific task. In this talk, I will advocate for the incorporation of energy-based generative models into the standard discriminative learning framework. Energy-Based Models (EBMs) can be much more easily incorporated into discriminative models than alternative generative modeling approaches and can benefit from network architectures designed for discriminative performance. I will present a novel method for jointly training EBMs alongside classifiers and demonstrate that this approach allows us to build models which rival the performance of state-of-the-art generative models and discriminative models within a single model. Further, we demonstrate our joint model gains many desirable properties such as a built-in mechanism for out-of-distribution detection, improved calibration, and improved robustness to adversarial examples — rivaling or improving upon hand-designed methods for each task.
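
A hedged sketch of the central reinterpretation, as described in the abstract rather than the paper's exact recipe: a classifier's logits f(x) define both p(y|x) via the softmax and an unnormalised log p(x) via logsumexp over the logits, so a cross-entropy term and a contrastive energy term can be trained jointly. In the toy PyTorch code below, the network, data, and the short Langevin sampling loop are all illustrative placeholders.

import torch
import torch.nn as nn

# Toy classifier on 2-D inputs with 3 classes; data and sampler are illustrative.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def log_px(x):
    # Unnormalised log-density implied by the classifier logits.
    return torch.logsumexp(net(x), dim=1)

x_data = torch.randn(32, 2)
y_data = torch.randint(0, 3, (32,))

# Crude short-run Langevin dynamics to get approximate model samples.
x_samp = torch.randn(32, 2, requires_grad=True)
for _ in range(20):
    grad = torch.autograd.grad(log_px(x_samp).sum(), x_samp)[0]
    x_samp = x_samp + 0.1 * grad + 0.01 * torch.randn_like(x_samp)
    x_samp = x_samp.detach().requires_grad_(True)

# Discriminative term for p(y|x) plus a generative term that raises
# log p(x) on data relative to model samples.
ce_loss = nn.functional.cross_entropy(net(x_data), y_data)
gen_loss = log_px(x_samp.detach()).mean() - log_px(x_data).mean()
loss = ce_loss + gen_loss

opt.zero_grad()
loss.backward()
opt.step()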