
Miles Cranmer’s Talk

Hi, everyone! We have a guest speaker for our Seminar, and you are all cordially invited to the AMLab Seminar on Thursday 3rd December at 16:00 CET on Zoom, where Miles Cranmer will give a talk titled “Lagrangian Neural Networks”.

Title: Lagrangian Neural Networks

Abstract: Accurate models of the world are built upon notions of its underlying symmetries. In physics, these symmetries correspond to conservation laws, such as for energy and momentum. Yet even though neural network models see increasing use in the physical sciences, they struggle to learn these symmetries. In this paper, we propose Lagrangian Neural Networks (LNNs), which can parameterize arbitrary Lagrangians using neural networks. In contrast to models that learn Hamiltonians, LNNs do not require canonical coordinates and thus perform well in situations where canonical momenta are unknown or difficult to compute. Unlike previous approaches, our method does not restrict the functional form of learned energies and will produce energy-conserving models for a variety of tasks. We test our approach on a double pendulum and a relativistic particle, demonstrating energy conservation where a baseline approach incurs dissipation and modeling relativity without canonical coordinates where a Hamiltonian approach fails. Finally, we show how this model can be applied to graphs and continuous systems using a Lagrangian Graph Network, and demonstrate it on the 1D wave equation.

Paper Link: https://arxiv.org/pdf/2003.04630.pdf
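To make the idea concrete, here is a minimal PyTorch sketch of the core mechanism (our own illustration, not the authors' implementation; the two-dimensional state and network sizes are assumptions): a neural network plays the role of the Lagrangian L(q, q̇), and accelerations follow from the Euler-Lagrange equations via automatic differentiation.

```python
import torch
from torch.func import grad, jacrev

# Hypothetical setup: 2 degrees of freedom (e.g. the two angles of a
# double pendulum), so the network sees the 4-vector (q, q_dot).
net = torch.nn.Sequential(
    torch.nn.Linear(4, 128), torch.nn.Softplus(), torch.nn.Linear(128, 1)
)

def L(q, qdot):
    # Scalar Lagrangian parameterized by the network.
    return net(torch.cat([q, qdot])).squeeze()

def acceleration(q, qdot):
    # Euler-Lagrange equations solved for q_ddot:
    #   q_ddot = H^{-1} (dL/dq - (d^2 L / dq dqdot) qdot),
    # where H = d^2 L / dqdot^2 is the Hessian w.r.t. velocities.
    dL_dq = grad(L, argnums=0)(q, qdot)
    H = jacrev(grad(L, argnums=1), argnums=1)(q, qdot)
    mixed = jacrev(grad(L, argnums=1), argnums=0)(q, qdot)
    return torch.linalg.solve(H, dL_dq - mixed @ qdot)

q, qdot = torch.randn(2), torch.randn(2)
print(acceleration(q, qdot))  # feed into any ODE integrator to roll out
```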

To gain deeper insights into Lagrangian Neural Networks, feel free to join and discuss it! See you there!

David Duvenaud’s Talk

Hi, everyone! We have a guest speaker for our Seminar, and you are all cordially invited to the AMLab Seminar on Tuesday 24th November at 16:00 CET on Zoom, where David Duvenaud will give a talk titled “Latent Stochastic Differential Equations for Irregularly-Sampled Time Series”.

Title: Latent Stochastic Differential Equations for Irregularly-Sampled Time Series

Abstract: Much real-world data is sampled at irregular intervals, but most time series models require regularly-sampled data. Continuous-time models address this problem, but until now only deterministic (ODE) models or linear-Gaussian models were efficiently trainable with millions of parameters. We construct a scalable algorithm for computing gradients of samples from stochastic differential equations (SDEs), and for gradient-based stochastic variational inference in function space, all with the use of adaptive black-box SDE solvers. This allows us to fit a new family of richly-parameterized distributions over time series. We apply latent SDEs to motion capture data, and to construct infinitely-deep Bayesian neural networks.

The technical details are in this paper: https://arxiv.org/abs/2001.01328 And the code is available at: https://github.com/google-research/torchsde
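As a taste of the torchsde interface, here is a minimal example (an Ornstein-Uhlenbeck process with learnable parameters, not the latent model from the paper): a drift f and a diffusion g define an SDE, and sdeint produces sample paths that gradients can flow through.

```python
import torch
import torchsde

class OrnsteinUhlenbeck(torch.nn.Module):
    noise_type = "diagonal"  # attributes torchsde expects on an SDE object
    sde_type = "ito"

    def __init__(self):
        super().__init__()
        self.theta = torch.nn.Parameter(torch.tensor(1.0))
        self.sigma = torch.nn.Parameter(torch.tensor(0.5))

    def f(self, t, y):       # drift term
        return -self.theta * y

    def g(self, t, y):       # diffusion term (diagonal noise)
        return self.sigma.expand_as(y)

sde = OrnsteinUhlenbeck()
y0 = torch.zeros(8, 1)               # batch of 8 paths, state dimension 1
ts = torch.linspace(0.0, 1.0, 20)    # the time grid may be irregular
ys = torchsde.sdeint(sde, y0, ts)    # shape (20, 8, 1), differentiable
```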

To gain deeper insights into neural stochastic differential equations, feel free to join and discuss it! See you there!

Bio: David Duvenaud is an assistant professor in computer science at the University of Toronto. His research focuses on continuous-time models, latent-variable models, and deep learning. His postdoc was done at Harvard University, and his Ph.D. at the University of Cambridge. David also co-founded Invenia, an energy forecasting and trading company.

Leon Lang’s Talk

Hi everyone! You are all cordially invited to the AMLab Seminar on Thursday 12th November at 16:00 CET on Zoom, where Leon Lang will give a talk titled “A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels”.

Title: A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels

Abstract: Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors, which can lead to a considerably improved performance. Recent advances in the theoretical description of GCNNs revealed that such models can generally be understood as performing convolutions with G-steerable kernels, that is, kernels that satisfy an equivariance constraint themselves. While the G-steerability constraint has been derived, it has to date only been solved for specific use cases – a general characterization of G-steerable kernel spaces is still missing. This work provides such a characterization for the practically relevant case of G being any compact group. Our investigation is motivated by a striking analogy between the constraints underlying steerable kernels on the one hand and spherical tensor operators from quantum mechanics on the other hand. By generalizing the famous Wigner-Eckart theorem for spherical tensor operators, we prove that steerable kernel spaces are fully understood and parameterized in terms of 1) generalized reduced matrix elements, 2) Clebsch-Gordan coefficients, and 3) harmonic basis functions on homogeneous spaces.

Link to paper: https://arxiv.org/pdf/2010.10952.pdf
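In schematic notation (the indices below are illustrative; see the paper for the precise statement), the result characterizes all kernels satisfying the G-steerability constraint:

```latex
% G-steerability: for all g in G, a kernel \kappa : X \to \mathrm{Hom}(V_{\mathrm{in}}, V_{\mathrm{out}})
% must satisfy
\kappa(g \cdot x) \;=\; \rho_{\mathrm{out}}(g)\,\kappa(x)\,\rho_{\mathrm{in}}(g)^{-1},
% and the generalized Wigner-Eckart theorem expands every solution as
\kappa \;=\; \sum_{j,m,s} c_{j,s}\;
  \underbrace{C^{jm}_{s}}_{\text{Clebsch-Gordan coefficients}}\;
  \underbrace{Y^{j}_{m}}_{\text{harmonic basis functions}},
% with the coefficients c_{j,s} playing the role of generalized reduced
% matrix elements.
```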

To gain deeper insights into Group Equivariance, feel free to join and discuss it 🙂!

Tim Bakker’s Talk

Hi everyone,

You are all cordially invited to the AMLab Seminar on Thursday 5th November at 16:00 CET on Zoom, where Tim Bakker will give a talk titled “Experimental design for MRI by greedy policy search”.

Title: Experimental design for MRI by greedy policy search

Abstract: In today’s clinical practice, magnetic resonance imaging (MRI) is routinely accelerated through subsampling of the associated Fourier domain. Currently, the construction of these subsampling strategies – known as experimental design – relies primarily on heuristics. We propose to learn experimental design strategies for accelerated MRI with policy gradient methods. Unexpectedly, our experiments show that a simple greedy approximation of the objective leads to solutions nearly on par with the more general non-greedy approach. We offer a partial explanation for this phenomenon rooted in greater variance in the non-greedy objective’s gradient estimates, and experimentally verify that this variance hampers non-greedy models in adapting their policies to individual MR images. We empirically show that this adaptivity is key to improving subsampling designs.

Paper Link: https://arxiv.org/pdf/2010.16262.pdf
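For intuition, a generic sketch of the greedy policy-gradient loop is below (names, shapes, and the reward function are hypothetical placeholders, not the paper's code): the policy scores the still-unobserved k-space lines, one line is sampled, and the immediate gain in reconstruction quality serves as a REINFORCE reward, so no long-horizon credit assignment is needed.

```python
import torch

n_lines = 128  # hypothetical number of candidate k-space lines
policy = torch.nn.Sequential(torch.nn.Linear(n_lines, 256), torch.nn.ReLU(),
                             torch.nn.Linear(256, n_lines))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def greedy_step(obs, mask, quality_gain):
    """obs: features of the current reconstruction; mask: 1 for lines already
    acquired; quality_gain(line): e.g. the SSIM improvement from acquiring it."""
    logits = policy(obs).masked_fill(mask.bool(), float("-inf"))
    dist = torch.distributions.Categorical(logits=logits)
    line = dist.sample()
    loss = -dist.log_prob(line) * quality_gain(line)  # REINFORCE, greedy reward
    opt.zero_grad(); loss.backward(); opt.step()
    return line

obs, mask = torch.randn(n_lines), torch.zeros(n_lines)
line = greedy_step(obs, mask, quality_gain=lambda l: torch.rand(()))  # dummy reward
```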

To gain deeper insights into MRI research using Reinforcement Learning, feel free to join and discuss it! See you there 🙂!

Emiel Hoogeboom

You are all cordially invited to the AMLab seminar on Thursday 2nd July at 16:00 on Zoom, where Emiel Hoogeboom will give a talk titled “The Convolution Exponential”.

Title: The Convolution Exponential

Paper link: https://arxiv.org/abs/2006.01910

Abstract: We introduce a new method to build linear flows, by taking the exponential of a linear transformation. This linear transformation does not need to be invertible itself, and the exponential has the following desirable properties: it is guaranteed to be invertible, its inverse is straightforward to compute and the log Jacobian determinant is equal to the trace of the linear transformation. An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers. Using this insight, we develop new invertible transformations named convolution exponentials and graph convolution exponentials, which retain the equivariance of their underlying transformations. In addition, we generalize Sylvester Flows and propose Convolutional Sylvester Flows which are based on the generalization and the convolution exponential as a basis change. Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows. In addition, we show that Convolutional Sylvester Flows improve performance over residual flows as a generative flow model measured in log-likelihood.
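The key computational trick can be sketched in a few lines (a simplified illustration under assumed shapes; the paper additionally normalizes the weights so the series converges quickly): exp(M)x is evaluated by repeatedly applying the convolution M, so the exponential is never materialized as a matrix.

```python
import torch
import torch.nn.functional as F

def conv_exp(x, weight, terms=8):
    """Apply exp(M) to x implicitly, where M is convolution with `weight`:
    exp(M) x = sum_k M^k x / k!  Each power is just another convolution,
    so no matrix is ever built. The inverse uses -weight, and
    log|det exp(M)| = trace(M): the summed center taps times H*W."""
    out, term = x, x
    for k in range(1, terms + 1):
        term = F.conv2d(term, weight, padding=weight.shape[-1] // 2) / k
        out = out + term
    return out

x = torch.randn(1, 3, 8, 8)
w = torch.randn(3, 3, 3, 3) * 0.1  # small norm -> the series converges fast
y = conv_exp(x, w)                  # forward
x_rec = conv_exp(y, -w)             # inverse, since exp(-M) exp(M) = I
print((x - x_rec).abs().max())      # should be near zero
```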

Victor Garcia Satorras

You are all cordially invited to the AMLab seminar on Thursday 25th June at 16:00 on Zoom, where Victor Garcia Satorras will give a talk titled “Neural Enhanced Belief Propagation on Factor Graphs”.

Note: you can access the video afterwards, as it will be uploaded to YouTube.

Paper link: https://arxiv.org/pdf/2003.01998.pdf

Abstract: A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data generating process, belief propagation can infer the optimal posterior probability estimates in tree structured factor graphs. However, in many cases we may only have access to a poor approximation of the data generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that runs an FG-GNN conjointly with belief propagation. The FG-GNN receives as input messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
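A toy sketch of the hybrid scheme is below (a binary chain rather than an LDPC factor graph, and a plain MLP standing in for the FG-GNN; all shapes are assumptions): belief propagation produces messages, and a learned network proposes corrections to them.

```python
import torch

torch.manual_seed(0)
n = 5
unary = torch.rand(n, 2)        # unary potentials phi_i(x_i)
pair = torch.rand(n - 1, 2, 2)  # pairwise potentials psi_i(x_i, x_{i+1})

def bp_forward(unary, pair):
    """One forward sum-product sweep; msgs[i] is the message into node i+1."""
    msgs, m = [], torch.ones(2)
    for i in range(n - 1):
        m = (pair[i] * (unary[i] * m).unsqueeze(1)).sum(0)
        m = m / m.sum()
        msgs.append(m)
    return torch.stack(msgs)

# A small network stands in for the FG-GNN: it reads the BP messages and
# outputs a correction, yielding the "neural enhanced" messages.
correction = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(),
                                 torch.nn.Linear(16, 2))

msgs = bp_forward(unary, pair)
enhanced = torch.softmax(torch.log(msgs) + correction(msgs), dim=-1)
print(enhanced)
```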

Virtual talk by Jens Kober on Robots Learning (Through) Interactions

Following RIVM guidelines, we will host a completely virtual seminar in the Delta Lab Deep Learning Seminar Series. We will livestream the talk at the brand-new AMLab YouTube channel, starting May 7th at 11:00 CEST:
https://www.youtube.com/channel/UC-UamuSbKi_Dcaa4wlEyqlA
(Note: if we need to change the streaming link because of technical problems, look for updates here.)

Abstract:
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.
In this seminar, Jens Kober will discuss various learning techniques he and his colleagues developed that can handle complex interactions with the environment. Complexity arises from non-linear dynamics in general and contacts in particular, taking multiple reference frames into account, dealing with high-dimensional input data, interacting with humans, etc. A human teacher is always involved in the learning process, either directly (providing demonstrations) or indirectly (designing the optimization criterion), which raises the question: How to best make use of the interactions with the human teacher to render the learning process efficient and effective?
All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (unscrewing light bulbs).

Jens Kober is an associate professor at the TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD Degree in Engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the annually awarded Georges Giralt PhD Award for the best PhD thesis in robotics in Europe, the 2018 IEEE RAS Early Academic Career Award, and has received an ERC Starting grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.

Jan Günter Wöhlke

You are all cordially invited to the AMLab seminar on Thursday 20th February at 16:00 in C3.163, where Jan Günter Wöhlke from Bosch will give a talk titled “Tackling Sparse Rewards in Reinforcement Learning”.

Abstract: Sparse reward problems present a challenge for reinforcement learning (RL) agents. Previous work has shown that choosing start states according to a curriculum can significantly improve the learning performance. Many existing curriculum generation algorithms rely on two key components: Performance measure estimation and a start selection policy. In our recently accepted AAMAS paper, we therefore propose a unifying framework for performance-based start state curricula in RL, which allows analyzing and comparing the influence of the key components. Furthermore, a new start state selection policy is introduced. With extensive empirical evaluations, we demonstrate state-of-the-art performance of our novel curriculum on difficult robotic navigation tasks as well as a high-dimensional robotic manipulation task.
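The two key components can be illustrated with a generic sketch (ours, not the paper's algorithm; the 1-D start states and update rule are assumptions): a running success estimate per candidate start state, and a selection policy that prefers starts of intermediate difficulty.

```python
import numpy as np

rng = np.random.default_rng(0)
starts = np.linspace(0.0, 1.0, 50)  # hypothetical 1-D candidate start states
success = np.full(50, 0.5)          # running success estimate per start

def select_start(temperature=0.1):
    # Score peaks where the agent succeeds about half the time:
    # too easy or too hard starts provide little learning signal.
    score = success * (1.0 - success)
    probs = np.exp(score / temperature)
    probs /= probs.sum()
    return rng.choice(len(starts), p=probs)

def update(idx, succeeded, lr=0.1):
    # Exponential moving average of rollout outcomes from this start.
    success[idx] += lr * (float(succeeded) - success[idx])

idx = select_start()
update(idx, succeeded=True)  # after a rollout from starts[idx]
```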

Herke van Hoof

You are all cordially invited to the AMLab seminar on Thursday 21st November at 14:00 in C3.163, where Herke van Hoof will give a talk titled “Gradient estimation algorithms”. There are the usual drinks and snacks!

Abstract: In many cases, we cannot calculate exact gradients. This is the case if we cannot evaluate how well the model would have done for different parameter values, for example if the model generates a sequence of stochastic decisions. Thus, many gradient estimators have been developed, from classical techniques from reinforcement learning to modern techniques such as the RELAX estimator. In meta-learning, for example, second-derivative estimators have also been proposed. In this talk, I will attempt to give an overview of the properties of these techniques.
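As a concrete starting point, the classical score-function (REINFORCE) estimator from reinforcement learning fits in a few lines; the Bernoulli decision and objective below are illustrative stand-ins, not examples from the talk.

```python
import torch

# Score-function identity: grad E_{z~p_theta}[f(z)] = E[f(z) * grad log p_theta(z)],
# usable when f is a black box over stochastic decisions.
torch.manual_seed(0)
theta = torch.zeros(1, requires_grad=True)

def f(z):                    # black-box objective over a stochastic decision
    return (z - 0.5) ** 2

dist = torch.distributions.Bernoulli(logits=theta)
z = dist.sample((1000,))     # samples carry no gradient themselves
surrogate = (f(z) * dist.log_prob(z)).mean()
surrogate.backward()         # grad of surrogate = Monte Carlo gradient estimate
print(theta.grad)            # subtracting a baseline from f(z) would cut variance
```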

Maurice Weiler

You are all cordially invited to the AMLab seminar on Thursday 24th October at 14:00 in D1.113, where Maurice Weiler will give a talk titled “Gauge Equivariant Convolutional Networks”. There are the usual drinks and snacks!

Abstract: The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. We extend this principle beyond global symmetries to local gauge transformations, thereby enabling the development of equivariant convolutional networks on general manifolds. We show that gauge equivariant convolutional networks give a unified description of equivariant and geometric deep learning by deriving a wide range of models as special cases of our theory. To illustrate our theory on a simple example and highlight the interplay between local and global symmetries we discuss an implementation for signals defined on the icosahedron, which provides a reasonable approximation of spherical signals. We evaluate the Icosahedral CNN on omnidirectional image segmentation and climate pattern segmentation, and find that it outperforms previous methods.