
Talk by Abubakar Abid

Hi everyone! We have a remote visitor, Abubakar Abid, and you are all cordially invited to the AMLab Seminar on Thursday 17th September at 16:00 CEST on Zoom, where Abubakar will give a talk titled “Interactive UIs for Your Machine Learning Models”.

Title: Interactive UIs for Your Machine Learning Models

Abstract: Accessibility is a major challenge of machine learning (ML). Typical ML models are built by specialists and require specialized hardware/software as well as ML experience to validate. This makes it challenging for non-technical collaborators and endpoint users (e.g. physicians) to easily provide feedback on model development and to gain trust in ML. The accessibility challenge also makes collaboration more difficult and limits the ML researcher’s exposure to realistic data and scenarios that occur in the wild. To improve accessibility and facilitate collaboration, we developed an open-source Python package, Gradio, which allows researchers to rapidly generate a visual interface for their ML models. Gradio makes accessing any ML model as easy as opening a URL in your browser. Our development of Gradio’s features was informed by interviews with a number of machine learning researchers who participate in interdisciplinary collaborations, and we carried out a case study to understand Gradio’s usefulness and usability in the setting of a machine learning collaboration between a researcher and a cardiologist.
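For a flavor of what this looks like in practice, the core Gradio pattern is just a few lines. Here is a minimal sketch (the `classify` function and its labels are made up for illustration; `gr.Interface` and `launch` are the library's actual entry points):

```python
import gradio as gr

def classify(image):
    # stand-in for a real model call, e.g. model.predict(image)
    return {"cat": 0.7, "dog": 0.3}

# wrap the function in a web UI: image upload in, class confidences out
demo = gr.Interface(fn=classify, inputs="image", outputs="label")
demo.launch(share=True)  # share=True also serves the UI at a public URL
```

The `share=True` flag is what enables the "open a URL in your browser" workflow for remote collaborators such as the cardiologist in the case study.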

To gain deeper insight into your machine learning models, feel free to join and discuss! See you there!

Talk by Elise van der Pol

Hi everyone,

You are all cordially invited to the AMLab Seminar on Thursday 10th September at 16:00 CEST on Zoom, where Elise van der Pol will give a talk titled “MDP Homomorphic Networks for Deep Reinforcement Learning”.

Paper links: https://arxiv.org/pdf/2006.16908.pdf and https://arxiv.org/pdf/2002.11963.pdf

Title: MDP Homomorphic Networks for Deep Reinforcement Learning

Abstract: This talk discusses MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done.

We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong.
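To make the numerical construction concrete: the equivariance constraint rho_out(g) W = W rho_in(g) is linear in the weights W, so a basis for all admissible weight matrices can be found with an SVD. Below is a minimal sketch of this idea (not the authors' code, which uses a symmetrizer-based variant; the toy group at the end is chosen purely for illustration):

```python
import numpy as np

def equivariant_basis(rho_in, rho_out, tol=1e-10):
    """Numerically find a basis of weight matrices W satisfying
    rho_out(g) @ W == W @ rho_in(g) for every group element g.
    rho_in / rho_out: lists of representation matrices, one per element."""
    n_out, n_in = rho_out[0].shape[0], rho_in[0].shape[0]
    # Row-major vectorization: flat(Ro @ W) = kron(Ro, I) @ flat(W)
    #                          flat(W @ Ri) = kron(I, Ri.T) @ flat(W)
    A = np.concatenate([np.kron(Ro, np.eye(n_in)) - np.kron(np.eye(n_out), Ri.T)
                        for Ri, Ro in zip(rho_in, rho_out)])
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    # rows of vt beyond the rank span the null space = equivariant maps
    return vt[rank:].reshape(-1, n_out, n_in)

# Toy group C2 acting on R^2 by swapping coordinates
swap = np.array([[0., 1.], [1., 0.]])
basis = equivariant_basis([np.eye(2), swap], [np.eye(2), swap])
print(basis.shape)  # (2, 2, 2): equivariant W's have the form [[a, b], [b, a]]
```

Layers are then parameterized as learned linear combinations of the returned basis matrices, which keeps them equivariant throughout training.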

To gain deeper insight into deep reinforcement learning, feel free to join and discuss! See you there!

Talk by Didrik Nielsen

You are all cordially invited to the AMLab Seminar on Thursday 3rd September at 16:00 CEST on Zoom, where Didrik Nielsen will give a talk titled “SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows”.

Paper link: https://arxiv.org/abs/2007.02731

Title: SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows

Abstract: Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions. However, they both impose constraints on the models: Normalizing flows use bijective transformations to model densities whereas VAEs learn stochastic transformations that are non-invertible and thus typically do not provide tractable estimates of the marginal likelihood. In this paper, we introduce SurVAE Flows: A modular framework of composable transformations that encompasses VAEs and normalizing flows. SurVAE Flows bridge the gap between normalizing flows and VAEs with surjective transformations, wherein the transformations are deterministic in one direction (thereby allowing exact likelihood computation) and stochastic in the reverse direction (hence providing a lower bound on the corresponding likelihood). We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows. Finally, we introduce common operations such as the max value, the absolute value, sorting and stochastic permutation as composable layers in SurVAE Flows.
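As a concrete instance, the dequantization layer mentioned in the abstract fits the surjection template: rounding is deterministic in the generative direction and stochastic in inference. A minimal sketch (class and method names are illustrative, not the paper's released code):

```python
import torch

class RoundingSurjection(torch.nn.Module):
    """Dequantization as a SurVAE layer: deterministic generative direction
    (floor), stochastic inference direction (add uniform noise)."""

    def forward(self, x):
        # inference: discrete x -> continuous z = x + u, u ~ Uniform[0, 1)
        z = x.float() + torch.rand_like(x.float())
        # likelihood contribution: -log q(u|x) = 0 for uniform noise
        ldj = torch.zeros(x.shape[0], device=x.device)
        return z, ldj

    def inverse(self, z):
        # generative: continuous z -> discrete x, exactly
        return torch.floor(z)
```

Composing such layers with ordinary bijections gives a single likelihood objective that covers flows, VAEs, and the hybrids in between.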

Talk by Pim de Haan

Hi everyone, you are all cordially invited to the AMLab Seminar on Thursday 30th July at 16:00 CEST on Zoom, where Pim de Haan will give a talk titled “Natural Graph Networks”.

Paper link: https://arxiv.org/abs/2007.08349

Title: Natural Graph Networks

Abstract: Conventional neural message passing algorithms are invariant under permutation of the messages and hence forget how the information flows through the network. Studying the local symmetries of graphs, we propose a more general algorithm that uses different kernels on different edges, making the network equivariant to local and global graph isomorphisms and hence more expressive. Using elementary category theory, we formalize many distinct equivariant neural networks as natural networks, and show that their kernels are ‘just’ a natural transformation between two functors. We give one practical instantiation of a natural network on graphs which uses an equivariant message network parameterization, yielding good performance on several benchmarks.
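To make the "different kernels on different edges" idea concrete, here is a deliberately crude sketch in which edges share a kernel whenever their endpoint degrees match. The paper instead ties kernels across local graph isomorphisms and parameterizes them with an equivariant message network, so treat this only as an illustration of per-edge kernel sharing:

```python
import torch
import torch.nn as nn

class DegreeSharedMessagePassing(nn.Module):
    """Message passing where edges get different kernels, shared between
    edges whose (sender degree, receiver degree) pair matches. Endpoint
    degrees are only the simplest stand-in for local graph structure."""

    def __init__(self, dim, max_degree=8):
        super().__init__()
        # one message kernel per (sender-degree, receiver-degree) class
        self.kernels = nn.Parameter(
            0.1 * torch.randn(max_degree + 1, max_degree + 1, dim, dim))

    def forward(self, x, edge_index, degrees):
        # x: (N, dim) node features; edge_index: (2, E) long; degrees: (N,) long
        src, dst = edge_index
        d = degrees.clamp(max=self.kernels.shape[0] - 1)
        W = self.kernels[d[src], d[dst]]              # (E, dim, dim): kernel per edge
        msgs = torch.einsum('eij,ej->ei', W, x[src])  # transform sender features
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msgs)                  # aggregate at receivers
        return out
```

The point of the real construction is that kernels related by a graph isomorphism are tied together, which is exactly what makes the network equivariant.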

Emiel Hoogeboom

You are all cordially invited to the AMLab seminar on Thursday 2nd July at 16:00 on Zoom, where Emiel Hoogeboom will give a talk titled “The Convolution Exponential”.

Title: The Convolution Exponential

Paper link: https://arxiv.org/abs/2006.01910

Abstract: We introduce a new method to build linear flows, by taking the exponential of a linear transformation. This linear transformation does not need to be invertible itself, and the exponential has the following desirable properties: it is guaranteed to be invertible, its inverse is straightforward to compute and the log Jacobian determinant is equal to the trace of the linear transformation. An important insight is that the exponential can be computed implicitly, which allows the use of convolutional layers. Using this insight, we develop new invertible transformations named convolution exponentials and graph convolution exponentials, which retain the equivariance of their underlying transformations. In addition, we generalize Sylvester Flows and propose Convolutional Sylvester Flows which are based on the generalization and the convolution exponential as basis change. Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows. In addition, we show that Convolutional Sylvester Flows improve performance over residual flows as a generative flow model measured in log-likelihood.
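The key trick, computing exp(M)x implicitly when M is "convolve with a kernel", can be sketched in a few lines via the truncated power series exp(M)x = sum_k M^k x / k!. This is a simplified sketch; the paper's version treats boundary conditions and the log-determinant computation more carefully:

```python
import torch
import torch.nn.functional as F

def conv_exp(x, kernel, terms=12):
    """Apply exp(M) to x, where M = 'convolution with kernel', without
    ever materializing M. kernel: (C, C, k, k) with equal in/out channels,
    so the underlying linear map is square. exp(M) is invertible even if
    M is not, and log det exp(M) = trace(M)."""
    result, term = x, x
    for k in range(1, terms):
        # term <- M^k x / k!, built iteratively from the previous term
        term = F.conv2d(term, kernel, padding=kernel.shape[-1] // 2) / k
        result = result + term
    return result
```

Since exp(M)^(-1) = exp(-M), the inverse is computed by the same routine with the negated kernel.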

Victor Garcia Satorras

You are all cordially invited to the AMLab seminar on Thursday 25th June at 16:00 on Zoom, where Victor Garcia Satorras will give a talk titled “Neural Enhanced Belief Propagation on Factor Graphs”.

Note: a recording of the talk will be uploaded to YouTube afterwards.

Paper link: https://arxiv.org/pdf/2003.01998.pdf

Abstract: A graphical model is a structured representation of locally dependent random variables. A traditional method to reason over these random variables is to perform inference using belief propagation. When provided with the true data generating process, belief propagation can infer the optimal posterior probability estimates in tree structured factor graphs. However, in many cases we may only have access to a poor approximation of the data generating process, or we may face loops in the factor graph, leading to suboptimal estimates. In this work we first extend graph neural networks to factor graphs (FG-GNN). We then propose a new hybrid model that runs conjointly a FG-GNN with belief propagation. The FG-GNN receives as input messages from belief propagation at every inference iteration and outputs a corrected version of them. As a result, we obtain a more accurate algorithm that combines the benefits of both belief propagation and graph neural networks. We apply our ideas to error correction decoding tasks, and we show that our algorithm can outperform belief propagation for LDPC codes on bursty channels.
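Architecturally, the hybrid can be summarized as a loop in which a belief propagation update and a GNN correction alternate. The sketch below is schematic: the `bp_step` and `fg_gnn` callables are placeholders for a sum-product update and a factor-graph GNN, not the paper's implementation:

```python
import torch.nn as nn

class NeuralEnhancedBP(nn.Module):
    """Hybrid inference: standard belief propagation proposes messages,
    a factor-graph GNN reads them and outputs a learned correction."""

    def __init__(self, bp_step, fg_gnn):
        super().__init__()
        self.bp_step = bp_step  # one round of sum-product message updates
        self.fg_gnn = fg_gnn    # GNN over the factor graph

    def forward(self, messages, n_iters=10):
        for _ in range(n_iters):
            proposed = self.bp_step(messages)
            messages = proposed + self.fg_gnn(proposed)  # refined messages
        return messages
```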

Virtual talk by Jens Kober on Robots Learning (Through) Interactions

Following RIVM guidelines, we will host a completely virtual seminar in the Delta Lab Deep Learning Seminar Series. We will livestream the talk on the brand-new AMLab YouTube channel, starting May 7th at 11:00 CEST:
https://www.youtube.com/channel/UC-UamuSbKi_Dcaa4wlEyqlA
(Note: in case we need to change the streaming link because of technical problems, look for updates here.)

Abstract:
The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.
In this seminar, Jens Kober will discuss various learning techniques he and his colleagues have developed that can handle complex interactions with the environment. Complexity arises from non-linear dynamics in general and contacts in particular, taking multiple reference frames into account, dealing with high-dimensional input data, interacting with humans, etc. A human teacher is always involved in the learning process, either directly (providing demonstrations) or indirectly (designing the optimization criterion), which raises the question: how best to make use of the interactions with the human teacher to render the learning process efficient and effective?
All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (unscrewing light bulbs).

Jens Kober is an associate professor at TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany, and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD in Engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the annually awarded Georges Giralt PhD Award for the best PhD thesis in robotics in Europe and the 2018 IEEE RAS Early Academic Career Award, and he has received an ERC Starting Grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.

Jan Günter Wöhlke

You are all cordially invited to the AMLab seminar on Thursday 20th February at 16:00 in C3.163, where Jan Günter Wöhlke from Bosch will give a talk titled “Tackling Sparse Rewards in Reinforcement Learning”.

Abstract: Sparse reward problems present a challenge for reinforcement learning (RL) agents. Previous work has shown that choosing start states according to a curriculum can significantly improve the learning performance. Many existing curriculum generation algorithms rely on two key components: Performance measure estimation and a start selection policy. In our recently accepted AAMAS paper, we therefore propose a unifying framework for performance-based start state curricula in RL, which allows analyzing and comparing the influence of the key components. Furthermore, a new start state selection policy is introduced. With extensive empirical evaluations, we demonstrate state-of-the-art performance of our novel curriculum on difficult robotic navigation tasks as well as a high-dimensional robotic manipulation task.
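The two key components can be made concrete with a small sketch: keep a running success-rate estimate per candidate start state (the performance measure) and sample starts of intermediate difficulty (the selection policy). The thresholds below follow earlier reverse-curriculum work and are only illustrative, not the paper's specific policy:

```python
import numpy as np

def select_start_state(success_rate, low=0.1, high=0.9, rng=None):
    """Pick a start-state index whose estimated success rate is neither
    near 0 (hopeless) nor near 1 (already mastered); otherwise fall back
    to sampling uniformly over all start states."""
    rng = rng or np.random.default_rng()
    candidates = np.where((success_rate > low) & (success_rate < high))[0]
    if candidates.size == 0:
        candidates = np.arange(success_rate.size)
    return int(rng.choice(candidates))

# after each episode from start s, update the running estimate, e.g.:
# success_rate[s] += alpha * (float(episode_succeeded) - success_rate[s])
```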

Herke van Hoof

You are all cordially invited to the AMLab seminar on Thursday 21st November at 14:00 in C3.163, where Herke van Hoof will give a talk titled “Gradient estimation algorithms”. There will be the usual drinks and snacks!

Abstract: In many cases, we cannot calculate exact gradients. This happens when we cannot evaluate how well the model would have done for different parameter values, for example when the model generates a sequence of stochastic decisions. Thus, many gradient estimators have been developed, from classical techniques from reinforcement learning to modern techniques such as the RELAX estimator. In meta-learning, for example, estimators of second derivatives have also been proposed. In this talk, I will attempt to give an overview of the properties of these techniques.
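As a reference point for the talk, the classical score-function (REINFORCE) estimator handles exactly the stochastic-decision case: for L(theta) = E_{z ~ p_theta}[f(z)] it uses grad L = E[f(z) grad log p_theta(z)]. A minimal Monte Carlo sketch (the objective f here is arbitrary, chosen only for illustration):

```python
import torch

theta = torch.zeros(3, requires_grad=True)
dist = torch.distributions.Categorical(logits=theta)

z = dist.sample((10_000,))  # stochastic decisions; no gradient flows through them
f = (z == 2).float()        # toy objective: reward 1 when class 2 is chosen
surrogate = (f * dist.log_prob(z)).mean()
surrogate.backward()        # theta.grad now estimates grad E[f(z)]
print(theta.grad)
```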

Maurice Weiler

You are all cordially invited to the AMLab seminar on Thursday 24th October at 14:00 in D1.113, where Maurice Weiler will give a talk titled “Gauge Equivariant Convolutional Networks”. There will be the usual drinks and snacks!

Abstract: The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. We extend this principle beyond global symmetries to local gauge transformations, thereby enabling the development of equivariant convolutional networks on general manifolds. We show that gauge equivariant convolutional networks give a unified description of equivariant and geometric deep learning by deriving a wide range of models as special cases of our theory. To illustrate our theory on a simple example and highlight the interplay between local and global symmetries we discuss an implementation for signals defined on the icosahedron, which provides a reasonable approximation of spherical signals. We evaluate the Icosahedral CNN on omnidirectional image segmentation and climate pattern segmentation, and find that it outperforms previous methods.
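Gauge equivariance generalizes the global-symmetry equivariance that regular group CNNs provide. As a point of comparison, here is a minimal global example, a C4 "lifting" convolution that shares one kernel across its four rotations (a standard group-CNN construction, not the icosahedral gauge CNN from the talk):

```python
import torch
import torch.nn.functional as F

def c4_lifting_conv(x, weight):
    """x: (B, C_in, H, W); weight: (C_out, C_in, k, k) with odd k.
    Convolving with all four rotations of one kernel makes the output
    equivariant to 90-degree rotations of the input (up to a cyclic
    shift along the new rotation axis)."""
    outs = [F.conv2d(x, torch.rot90(weight, k, dims=(2, 3)),
                     padding=weight.shape[-1] // 2)
            for k in range(4)]
    return torch.stack(outs, dim=1)  # (B, 4, C_out, H, W)
```

The gauge-equivariant construction in the talk replaces this single global rotation group with position-dependent transformations between local coordinate frames on a manifold.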