Talk by Pim de Haan

Hi everyone, you are all cordially invited to the AMLab seminar on Thursday 30th July at 16:00 CEST on Zoom, where Pim de Haan will give a talk titled “Natural Graph Networks”.

Paper link: https://arxiv.org/abs/2007.08349

Title: Natural Graph Networks

Abstract: Conventional neural message passing algorithms are invariant under permutation of the messages and hence forget how the information flows through the network. Studying the local symmetries of graphs, we propose a more general algorithm that uses different kernels on different edges, making the network equivariant to local and global graph isomorphisms and hence more expressive. Using elementary category theory, we formalize many distinct equivariant neural networks as natural networks, and show that their kernels are ‘just’ a natural transformation between two functors. We give one practical instantiation of a natural network on graphs which uses an equivariant message network parameterization, yielding good performance on several benchmarks.
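To make “different kernels on different edges” concrete, here is a toy sketch, not the paper’s construction (which derives the kernels from local graph isomorphisms): edges are bucketed by a crude local invariant, the degree pair of their endpoints, and each bucket gets its own learned kernel. All class and helper names below are hypothetical.

```python
import torch
import torch.nn as nn

class EdgeTypedMessagePassing(nn.Module):
    """Toy message-passing layer whose kernel depends on the edge."""

    def __init__(self, dim, num_edge_types):
        super().__init__()
        # one learned linear kernel per local edge type
        self.kernels = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_edge_types)
        )

    def forward(self, x, edges, edge_types):
        # x: (num_nodes, dim); edges: list of (src, dst); edge_types: one int per edge
        out = torch.zeros_like(x)
        for (src, dst), t in zip(edges, edge_types):
            out[dst] += self.kernels[t](x[src])  # kernel chosen by local structure
        return out

def degree_pair_types(edges, num_nodes):
    """Bucket edges by the unordered degree pair of their endpoints
    (a crude stand-in for the local isomorphism classes the paper uses)."""
    deg = [0] * num_nodes
    for s, d in edges:
        deg[s] += 1
        deg[d] += 1
    pairs = sorted({tuple(sorted((deg[s], deg[d]))) for s, d in edges})
    index = {p: i for i, p in enumerate(pairs)}
    types = [index[tuple(sorted((deg[s], deg[d])))] for s, d in edges]
    return types, len(pairs)
```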

Emiel Hoogeboom

You are all cordially invited to the AMLab seminar on Thursday 2nd July at 16:00 on Zoom, where Emiel Hoogeboom will give a talk titled “The Convolution Exponential”.

Title: The Convolution Exponential

Paper link: https://arxiv.org/abs/2006.01910

Abstract: We introduce a new method to build linear flows, by taking the exponential of a linear transformation. This linear transformation does not need to be invertible itself, and the exponential has the following desirable properties: it is guaranteed to be invertible, its inverse is straightforward to compute and the log Jacobian determinant is equal to the trace of the linear transformation. An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers. Using this insight, we develop new invertible transformations named convolution exponentials and graph convolution exponentials, which retain the equivariance of their underlying transformations. In addition, we generalize Sylvester Flows and propose Convolutional Sylvester Flows which are based on the generalization and the convolution exponential as basis change. Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows. In addition, we show that Convolutional Sylvester Flows improve performance over residual flows as a generative flow model measured in log-likelihood.
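As a concrete reading of the key insight (the exponential can be computed implicitly), here is a minimal sketch, assuming a square-channel 3x3 convolution with circular padding; the function names and truncation scheme are ours for illustration, not the paper’s code. It applies exp(M) by reusing the convolution for each term of the truncated power series, and computes the log Jacobian determinant via the trace identity:

```python
import torch
import torch.nn.functional as F

def conv_exp(x, weight, terms=8):
    """Apply exp(M) to x implicitly, where M is the linear map of a
    circularly padded 3x3 convolution with kernel `weight` of shape (C, C, 3, 3).
    exp(M) x = sum_k M^k x / k!  -- each term reuses the convolution, so the
    dense matrix M is never materialized. The truncation level needed in
    practice depends on the norm of M."""
    out, term = x, x
    for k in range(1, terms + 1):
        term = F.conv2d(F.pad(term, (1, 1, 1, 1), mode="circular"), weight) / k
        out = out + term
    return out

def conv_exp_logdet(weight, height, width):
    """log|det exp(M)| = trace(M): for a circular convolution this is the
    number of spatial positions times the sum of the kernel's centre-tap
    channel-diagonal entries."""
    idx = torch.arange(weight.shape[0])
    return height * width * weight[idx, idx, 1, 1].sum()
```

Because exp(M)^{-1} = exp(-M), the inverse can be applied in exactly the same way with the negated kernel.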

Victor Garcia Satorras

You are all cordially invited to the AMLab seminar on Thursday 25th June at 16:00 on Zoom, where Victor Garcia Satorras will give a talk titled “Neural Enhanced Belief Propagation on Factor Graphs”.

Note: a recording of the talk will be uploaded to YouTube afterwards.

Paper link: https://arxiv.org/pdf/2003.01998.pdf

Abstract: A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data generating process, belief propagation can infer the optimal posterior probability estimates in tree structured factor graphs. However, in many cases we may only have access to a poor approximation of the data generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that conjointly runs an FG-GNN with belief propagation. The FG-GNN receives as input messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
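As a hedged sketch of the hybrid loop (deliberately simplified: a pairwise model instead of a general factor graph, and a small per-message MLP standing in for the FG-GNN), the code below runs sum-product belief propagation and lets a learned network output a correction to every message at each iteration:

```python
import torch
import torch.nn as nn

class NeuralEnhancedBP(nn.Module):
    """Toy hybrid of sum-product BP and a learned message correction.
    The paper's model operates on general factor graphs with an FG-GNN;
    here a small MLP simply refines each BP message."""

    def __init__(self, num_states):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Linear(num_states, 32), nn.ReLU(), nn.Linear(32, num_states)
        )

    def forward(self, unary, pairwise, edges, iters=10):
        # unary: (N, S) node potentials; pairwise[(i, j)]: (S, S) edge potentials
        N, S = unary.shape
        directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
        msgs = {e: torch.ones(S) / S for e in directed}
        for _ in range(iters):
            new = {}
            for i, j in directed:
                # standard sum-product update for the message i -> j
                b = unary[i].clone()
                for k, l in directed:
                    if l == i and k != j:
                        b = b * msgs[(k, i)]
                psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
                m = psi.T @ b
                m = m / m.sum()
                # learned correction applied to the BP message every iteration
                m = torch.softmax(torch.log(m + 1e-9) + self.refine(m), dim=0)
                new[(i, j)] = m
            msgs = new
        # node marginals: unary potential times all incoming messages
        beliefs = unary.clone()
        for (i, j), m in msgs.items():
            beliefs[j] = beliefs[j] * m
        return beliefs / beliefs.sum(dim=1, keepdim=True)
```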

Virtual talk by Jens Kober on Robots Learning (Through) Interactions

Following RIVM guidelines, we will host a completely virtual seminar in the Delta Lab Deep Learning Seminar Series. We will livestream the talk on the brand-new AMLab YouTube channel, starting May 7th at 11:00 CEST:
https://www.youtube.com/channel/UC-UamuSbKi_Dcaa4wlEyqlA
(Note: in case we need to change the streaming link because of technical problems, look for updates here.)

Abstract:
The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.
In this seminar, Jens Kober will discuss various learning techniques he and his colleagues developed that can handle complex interactions with the environment. Complexity arises from non-linear dynamics in general and contacts in particular, taking multiple reference frames into account, dealing with high-dimensional input data, interacting with humans, etc. A human teacher is always involved in the learning process, either directly (providing demonstrations) or indirectly (designing the optimization criterion), which raises the question: How to best make use of the interactions with the human teacher to render the learning process efficient and effective?
All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (unscrewing light bulbs).

Jens Kober is an associate professor at TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany, and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD degree in engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the annually awarded Georges Giralt PhD Award for the best PhD thesis in robotics in Europe and the 2018 IEEE RAS Early Academic Career Award, and he has received an ERC Starting Grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.

Jan Günter Wöhlke

You are all cordially invited to the AMLab seminar on Thursday 20th February at 16:00 in C3.163, where Jan Günter Wöhlke from Bosch will give a talk titled “Tackling Sparse Rewards in Reinforcement Learning”.

Abstract: Sparse reward problems present a challenge for reinforcement learning (RL) agents. Previous work has shown that choosing start states according to a curriculum can significantly improve the learning performance. Many existing curriculum generation algorithms rely on two key components: Performance measure estimation and a start selection policy. In our recently accepted AAMAS paper, we therefore propose a unifying framework for performance-based start state curricula in RL, which allows analyzing and comparing the influence of the key components. Furthermore, a new start state selection policy is introduced. With extensive empirical evaluations, we demonstrate state-of-the-art performance of our novel curriculum on difficult robotic navigation tasks as well as a high-dimensional robotic manipulation task.
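As a hedged illustration of the two components named in the abstract (performance measure estimation and a start selection policy), the toy sketch below keeps a running success estimate per candidate start state and samples starts whose estimated success is intermediate, so the agent trains at the frontier of its current ability. All names and thresholds are hypothetical, not the paper’s method.

```python
import random

class StartCurriculum:
    """Generic performance-based start-state curriculum (illustrative only)."""

    def __init__(self, starts, lo=0.1, hi=0.9):
        # starts must be hashable, e.g. tuples of coordinates
        self.stats = {s: [0, 0] for s in starts}  # per start: [successes, attempts]
        self.lo, self.hi = lo, hi

    def estimate(self, s):
        # performance measure estimation: empirical success rate
        succ, n = self.stats[s]
        return succ / n if n else 0.5  # optimistic prior for unseen starts

    def sample(self):
        # start selection policy: prefer starts that are neither trivially
        # solved nor hopeless for the current agent
        frontier = [s for s in self.stats
                    if self.lo <= self.estimate(s) <= self.hi]
        return random.choice(frontier or list(self.stats))

    def update(self, s, success):
        self.stats[s][0] += int(success)
        self.stats[s][1] += 1
```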

Herke van Hoof

You are all cordially invited to the AMLab seminar on Thursday 21st November at 14:00 in C3.163, where Herke van Hoof will give a talk titled “Gradient estimation algorithms”. There are the usual drinks and snacks!

Abstract: In many cases, we cannot compute exact gradients. This happens when we cannot evaluate how well the model would have done for different parameter values, for example because the model generates a sequence of stochastic decisions. Many gradient estimators have therefore been developed, ranging from classical techniques from reinforcement learning to modern techniques such as the RELAX estimator. In meta-learning, for example, estimators of second derivatives have also been proposed. In this talk, I will attempt to give an overview of the properties of these techniques.
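As a concrete instance of the classical reinforcement-learning technique mentioned above, the score-function (REINFORCE) estimator rewrites the gradient of an expectation as an expectation of the objective times the gradient of the log-probability, which needs no gradient of the objective and so also handles discrete stochastic decisions. A minimal PyTorch sketch (the objective f is an arbitrary stand-in):

```python
import torch

# Score-function (REINFORCE) estimator: for x ~ p_theta,
#   d/dtheta E[f(x)] = E[ f(x) * d/dtheta log p_theta(x) ],
# so no gradient of f is needed and x may be discrete.
theta = torch.tensor(0.3, requires_grad=True)
f = lambda x: (x - 2.0) ** 2              # any black-box objective

dist = torch.distributions.Bernoulli(torch.sigmoid(theta))
x = dist.sample((10000,))                 # stochastic (discrete) decisions
surrogate = (f(x) * dist.log_prob(x)).mean()
surrogate.backward()                      # theta.grad estimates the true gradient

# analytic check: E[f(x)] = p*1 + (1-p)*4 with p = sigmoid(theta),
# so d/dtheta E[f(x)] = -3 * p * (1 - p)
print(theta.grad)
```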

Maurice Weiler

You are all cordially invited to the AMLab seminar on Thursday 24th October at 14:00 in D1.113, where Maurice Weiler will give a talk titled “Gauge Equivariant Convolutional Networks”. There are the usual drinks and snacks!

Abstract: The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. We extend this principle beyond global symmetries to local gauge transformations, thereby enabling the development of equivariant convolutional networks on general manifolds. We show that gauge equivariant convolutional networks give a unified description of equivariant and geometric deep learning by deriving a wide range of models as special cases of our theory. To illustrate our theory on a simple example and highlight the interplay between local and global symmetries, we discuss an implementation for signals defined on the icosahedron, which provides a reasonable approximation of spherical signals. We evaluate the Icosahedral CNN on omnidirectional image segmentation and climate pattern segmentation, and find that it outperforms previous methods.
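Schematically, the central constraint (our paraphrase, not a verbatim equation from the paper) is that the convolution kernel K must intertwine the feature representations under every element g of the structure group G:

```latex
K(g^{-1} v) \;=\; \rho_{\mathrm{out}}(g)^{-1}\, K(v)\, \rho_{\mathrm{in}}(g)
\qquad \text{for all } g \in G,
```

where ρ_in and ρ_out are the representations according to which input and output features transform under gauge transformations; solving this constraint yields the equivariant kernel spaces from which models such as the Icosahedral CNN are built.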

Sindy Löwe

You are all cordially invited to the AMLab seminar on Thursday 14th November at 14:00 in C3.163, where Sindy Löwe will give a talk titled “Putting An End to End-to-End: Gradient-Isolated Learning of Representations”. There are the usual drinks and snacks!

Abstract: We propose a novel deep learning method for local self-supervised representation learning that does not require labels nor end-to-end backpropagation but exploits the natural order in data instead. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domain. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
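A minimal sketch of the gradient-isolation mechanism described above: each module gets its own loss and optimizer, and detaching the activations stops any gradient from flowing to the module below, so there is no end-to-end backpropagation. The per-module loss here is only a placeholder; the paper trains each module with the InfoNCE bound, and nothing below is the paper’s code.

```python
import torch
import torch.nn as nn

# a stack of gradient-isolated modules, each with its own optimizer
modules = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU())
                         for _ in range(3)])
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

x = torch.randn(64, 32)               # a batch of inputs
for m, opt in zip(modules, optims):
    h = m(x)
    loss = -h.std()                   # placeholder for the module's InfoNCE loss
    opt.zero_grad()
    loss.backward()                   # gradients stay inside this module
    opt.step()
    x = h.detach()                    # isolate: no gradient to earlier modules
```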

Talk by Stephan Alaniz

You are all cordially invited to the second AMLab seminar this week, on Thursday November 1 at 16:00 in C3.163, where Stephan Alaniz will give a talk titled “Iterative Binary Decision”. Afterwards there are the usual drinks and snacks!

Abstract: The complexity of the functions a neural network approximates makes it hard to explain what a classification decision is based on. In this work, we present a framework that exposes more information about this decision-making process. Instead of producing a classification in a single step, our model iteratively makes binary sub-decisions which, when combined as a whole, ultimately produce the same classification result while revealing a decision tree as its thought process. While there is generally a trade-off between interpretability and accuracy, the insights our model generates come at a negligible loss in accuracy. The decision tree resulting from the sequence of binary decisions of our model reveals a hierarchical clustering of the data and can be used as learned attributes in zero-shot learning.
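A hypothetical sketch of the decision process described above (not the paper’s model): a K-way classification is reached through a sequence of binary sub-decisions that trace a path in a binary tree whose leaves are the classes.

```python
import torch
import torch.nn as nn

class IterativeBinaryClassifier(nn.Module):
    """Classify by descending a binary tree one learned sub-decision at a
    time; the sequence of left/right choices exposes the 'thought process'."""

    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.depth = (num_classes - 1).bit_length()   # ceil(log2(num_classes))
        # one binary decision head per level of the tree
        self.heads = nn.ModuleList(
            nn.Linear(in_dim, 1) for _ in range(self.depth)
        )

    def forward(self, x):
        path = torch.zeros(x.shape[0], dtype=torch.long)
        decisions = []
        for head in self.heads:
            bit = (torch.sigmoid(head(x)).squeeze(-1) > 0.5).long()
            decisions.append(bit)            # one interpretable binary sub-decision
            path = path * 2 + bit            # descend left (0) / right (1)
        return path, decisions               # leaf index serves as the class
```

The hard threshold makes this sketch inference-only; training would require a differentiable relaxation or a gradient estimator for the discrete choices.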

PhD student Noud de Kroon joined AMLab

Noud de Kroon joined the UvA in October 2018 as a PhD student of AMLab, under the joint supervision of Dr. Joris Mooij and Dr. Danielle Belgrave (Microsoft Research Cambridge). Previously, he obtained a bachelor’s degree in software science at Eindhoven University of Technology and a master’s degree in computer science at the University of Oxford. His research focuses on combining causality and reinforcement learning in order to make better decisions and improve data efficiency, with applications for example in the medical domain.