Category Archives: Talk

Talk by Dmitry Vetrov

Next week, Dmitry Vetrov (Higher School of Economics & Samsung AI center, Moscow) will be visiting us, and will give a talk titled “Interesting properties of the variational dropout framework”. You are all cordially invited to this talk on Thursday morning July 26, at 11:00 in C1.112 (FNWI, Amsterdam Science Park).

Abstract: Recently it was shown that dropout, a popular regularization technique, can be treated as a Bayesian procedure. This Bayesian interpretation allows us to extend the initial model and to set individual dropout rates for each weight of a DNN. Variational inference automatically sets the rates to their optimal values, which surprisingly leads to very strong sparsification of the DNN. The effect is similar in spirit to the well-known ARD procedure for linear models and neural networks. By exploiting a different extension, one may show that DNNs can be trained with extremely large dropout rates, even when the traditional signal-to-noise ratio is zero (e.g. when all weights in a layer have zero means and tunable variances). Coupled with recent discoveries about the loss landscape, these results provide a new perspective on building much more powerful yet compact ensembles and/or removing the redundancy in modern deep learning models. In the talk we will cover these topics and present our most recent results in exploring these models.
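The sparsification mechanism can be sketched in a few lines of numpy (toy values throughout, not the speaker's implementation): each weight gets Gaussian multiplicative noise with its own rate α, and weights whose learned α grows large have vanishing signal-to-noise ratio and can be pruned.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
theta = rng.normal(size=d)               # weight means
log_alpha = rng.uniform(-4, 4, size=d)   # per-weight log dropout rates (toy values)
alpha = np.exp(log_alpha)

# Gaussian dropout: w = theta * (1 + sqrt(alpha) * eps), with eps ~ N(0, 1)
eps = rng.normal(size=d)
w = theta * (1.0 + np.sqrt(alpha) * eps)

# signal-to-noise ratio |E[w]| / std(w) = 1 / sqrt(alpha): weights whose alpha
# was driven up by variational inference carry almost no signal
snr = 1.0 / np.sqrt(alpha)
pruned = snr < 1.0
sparsity = pruned.mean()
```

In the actual method the α's are not random but optimized via the variational objective; the pruning rule by thresholding the signal-to-noise ratio is the same.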

Bio: Dmitry Vetrov (graduated from Moscow State University in 2003, PhD in 2006) is a research professor at the Higher School of Economics, Moscow, and head of the deep learning lab at the Samsung AI center in Moscow. He is the founder and head of the Bayesian methods research group, which has become one of the strongest research groups in Russia. Three of his former PhD students became researchers at DeepMind. His research focuses on combining the Bayesian framework with deep learning models. His group is also actively involved in building scalable tools for stochastic optimization, applying tensor decomposition methods to large-scale ML, constructing cooperative multi-agent systems, and more.

Talk by Bela Mulder

UPDATE: This talk will be rescheduled to a new date after the summer.

You are all cordially invited to the AMLab seminar on Tuesday June 12 at 16:00 in C3.163, where Bela Mulder (AMOLF) will give a talk titled “Pitting man against machine in the arena of bottom-up design of crystal structures”. Afterwards there are the usual drinks and snacks!

Abstract: In this highly informal seminar I would like to pitch the question “Can a machine learning system develop a theory?” One of the much-touted properties of deep learning networks is that their deeper layers develop higher-order, generalizing representations of their inputs. This raises the question of whether they are able to hit upon the kind of hidden structures in physical problems that are the cornerstone of effective physical theories. I would like to propose to test this idea in a concrete setting related to the highly relevant question of inverse design of self-assembling matter. I have recently formulated a novel approach to inferring the specific short-range isotropic interactions between particles of multiple types on lattices of given geometry, such that they spontaneously form specified periodic states of essentially arbitrary complexity. This approach rests on the subtle intertwining between the group of transformations that leave the lattice structure invariant and the group of permutations of the set of particle types induced by these same transformations on the target ordered structure. The upshot of this approach is that the number of independent coupling constants in the lattice can be systematically reduced from O(N²), where N is the number of distinct species, to O(N). The idea would be to see whether a machine learning approach that uses the space of possible patterns and their trivial transforms under symmetry operations as input, the set of possible coupling constants as output, and feedback based on the degree to which the target structure is realized with these constants, is able to “learn” the symmetry-based rules in a way that also generalizes to similar patterns not included in the training set.
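The O(N²) → O(N) reduction can be illustrated with a hedged toy example (not the talk's actual construction): suppose a lattice symmetry induces a cyclic relabelling of the N particle types. Then the unordered pair couplings J_{ab} fall into orbits, and only one constant per orbit is independent.

```python
from itertools import product

N = 6  # number of distinct particle species (toy value)

def orbit(pair):
    """Orbit of an unordered type pair under the cyclic relabelling t -> (t+1) % N."""
    a, b = pair
    seen = set()
    while (a, b) not in seen:
        seen.add((a, b))
        a, b = (a + 1) % N, (b + 1) % N
    return frozenset(tuple(sorted(p)) for p in seen)

# all N(N+1)/2 = 21 unordered couplings J_{ab} ...
pairs = {tuple(sorted((a, b))) for a, b in product(range(N), repeat=2)}
# ... collapse to N//2 + 1 = 4 independent constants, one per orbit
orbits = {orbit(p) for p in pairs}
```

For a single cyclic symmetry the orbit count grows linearly in N, which is the flavour of reduction the abstract describes.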

Talk by Diederik Roijers

You are all cordially invited to the AMLab seminar on Tuesday May 29 at 16:00 in C3.163, where Diederik Roijers (VU) will give a talk titled “Multiple objectives: because we (should) care about the user”. Afterwards there are the usual drinks and snacks!

Abstract: Multi-objective reinforcement learning is on the rise. In this talk, we discuss why multi-objective models and methods are a natural way to model real-world problems, can be highly beneficial, and can be essential if we want to optimise for actual users. First, we discuss both the intuitive and formal motivation for multi-objective decision making. Then, we introduce the utility-based approach, showing that we can make better decisions by putting user utility at the centre of our models and methods. Finally, we discuss two example methods for two different scenarios for using multi-objective models and methods, as well as open challenges.
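A tiny sketch of the utility-based idea (toy numbers, and a linear utility function is an assumption; real user utilities may be non-linear): once the user's utility over objective values is made explicit, the "best" policy on a Pareto front is well defined.

```python
import numpy as np

# value vectors of four Pareto-optimal policies, e.g. (speed, cleanliness) -- toy numbers
values = np.array([
    [10.0, 0.0],
    [7.0, 5.0],
    [3.0, 9.0],
    [0.0, 10.0],
])

def best_for_user(values, w):
    """Assuming a linear utility u(v) = w . v, return the policy this user prefers."""
    return int(np.argmax(values @ w))

best_for_user(values, np.array([0.9, 0.1]))   # speed-focused user -> policy 0
best_for_user(values, np.array([0.3, 0.7]))   # cleanliness-focused user -> policy 2
```

Different users with different weight vectors rationally pick different Pareto-optimal policies, which is the sense in which a single-objective scalarisation fixed in advance cannot serve all users.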

Talk by Taco Cohen

You are all cordially invited to the AMLab seminar on Tuesday May 22 at 16:00 in C3.163, where Taco Cohen will give a talk titled “The Quite General Theory of Equivariant Convolutional Networks”. Afterwards there are the usual drinks and snacks!

Abstract: Group equivariant and steerable convolutional neural networks (regular and steerable G-CNNs) have recently emerged as a very effective model class for learning from signal data such as 2D and 3D images, video, and other data where symmetries are present. In geometrical terms, regular G-CNNs represent data in terms of scalar fields (“feature channels”), whereas the steerable G-CNN can also use vector and tensor fields (“capsules”) to represent data. In this paper we present a general mathematical framework for G-CNNs on homogeneous spaces like Euclidean space or the sphere. We show that the layers of an equivariant network are convolutional if and only if the input and output feature spaces transform like a field. This result establishes G-CNNs as a universal class of equivariant network architectures. Furthermore, we study the space of equivariant filter kernels (or propagators), and show how an understanding of this space can be used to construct G-CNNs for general fields over homogeneous spaces. Finally, we discuss several applications of the theory, such as 3D model recognition, molecular energy regression, analysis of protein structure, omnidirectional vision, and others.

The goal of this talk is to explain this new mathematical theory in a way that is accessible to the machine learning community.
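To make the regular G-CNN idea concrete, here is a minimal numpy sketch (a special case, not the paper's general homogeneous-space construction) of a first-layer p4 "lifting" correlation: the output has one channel per planar rotation, and rotating the input rotates each map while cyclically shifting the channels.

```python
import numpy as np

def corr2(img, ker):
    """Valid-mode 2D cross-correlation."""
    H, W = img.shape
    h, w = ker.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * ker)
    return out

def lift_p4(img, ker):
    """First-layer p4 lifting correlation: one output channel per 90-degree rotation."""
    return [corr2(img, np.rot90(ker, r)) for r in range(4)]

rng = np.random.default_rng(0)
x, psi = rng.random((8, 8)), rng.random((3, 3))
fx, frx = lift_p4(x, psi), lift_p4(np.rot90(x), psi)
# equivariance law: rotating the input rotates each feature map and
# cyclically permutes the rotation channels
for r in range(4):
    assert np.allclose(frx[r], np.rot90(fx[(r - 1) % 4]))
```

The asserted identity is a small instance of the talk's general statement: the feature space transforms like a field over the group, and the layer is equivariant precisely because it is a (group) correlation.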

Talk by Emiel Hoogeboom

You are all cordially invited to the AMLab seminar on Tuesday May 15 at 16:00 in C3.163, where Emiel Hoogeboom will give a talk titled “G-HexaConv”. Afterwards there are the usual drinks and snacks!

Abstract: The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.

Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models.
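A minimal sketch of the implementation trick (axial hexagonal coordinates assumed; not the paper's actual code): in axial coordinates the six hexagonal neighbours of a cell occupy six of the eight square-grid neighbours, so a planar hexagonal convolution is just a square 3×3 correlation with two opposite corner weights masked out.

```python
import numpy as np

# two opposite corners of the 3x3 stencil are not hexagonal neighbours
HEX_MASK = np.ones((3, 3))
HEX_MASK[0, 0] = 0.0
HEX_MASK[2, 2] = 0.0

def hexa_conv(img, ker):
    """Planar hexagonal convolution as a masked valid-mode square correlation."""
    ker = ker * HEX_MASK
    H, W = img.shape
    out = np.empty((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * ker)
    return out
```

Because the hexagonal lattice is stored as an ordinary array, the masking lets highly optimized square-convolution routines do the heavy lifting, which is the re-use the abstract refers to.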

Talk by Zeynep Akata

You are all cordially invited to the AMLab seminar on Tuesday April 24 at 16:00 in C3.163, where Zeynep Akata will give a talk titled “Representing and Explaining Novel Concepts with Minimal Supervision”. Afterwards there are the usual drinks and snacks!

Abstract: Clearly explaining the rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image aspects that justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Artificial Intelligence, covering (1) how we can generalize image classification models to cases where no visual training data is available, (2) how to generate images and image features from detailed visual descriptions, and (3) how our models focus on discriminative properties of the visible object, jointly predict a correct and an incorrect class label, and explain why the predicted correct label is appropriate for the image and why the predicted incorrect label is not.
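A toy sketch of idea (1), zero-shot classification through class attributes (synthetic data, and a least-squares regressor as a crude stand-in for a learned compatibility model; not the speaker's actual method): images of unseen classes are classified by embedding them into attribute space and matching the nearest class description.

```python
import numpy as np

rng = np.random.default_rng(0)
A, D = 4, 16                          # attribute and image-feature dimensions (toy)
attrs_seen = rng.random((5, A))       # attribute vectors of 5 seen classes
attrs_unseen = rng.random((2, A))     # ... and of 2 classes with no training images
M = rng.normal(size=(D, A))           # hidden "true" generator of image features

def make_data(attrs, n):
    ys = rng.integers(len(attrs), size=n)
    X = attrs[ys] @ M.T + 0.05 * rng.normal(size=(n, D))
    return X, ys

# train an attribute regressor on seen-class images only
Xtr, ytr = make_data(attrs_seen, 200)
W, *_ = np.linalg.lstsq(Xtr, attrs_seen[ytr], rcond=None)

# zero-shot: embed unseen-class images into attribute space, match nearest class
Xte, yte = make_data(attrs_unseen, 50)
emb = Xte @ W
d2 = ((emb[:, None, :] - attrs_unseen[None, :, :]) ** 2).sum(-1)
acc = float((np.argmin(d2, axis=1) == yte).mean())
```

The shared attribute space is what lets knowledge transfer from seen to unseen classes; the talk's methods learn far richer compatibility functions than this linear stand-in.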

Talk by Tineke Blom

You are all cordially invited to the AMLab seminar on Tuesday April 17 at 16:00 in C3.163, where Tineke Blom will give a talk titled “Causal Modeling for Dynamical Systems using Generalized Structural Causal Models”. Afterwards there are the usual drinks and snacks!

Abstract: Structural causal models (SCMs) are a popular tool for describing causal relations in systems in many fields, such as economics, the social sciences, and biology. Complex (cyclic) dynamical systems, such as chemical reaction networks, are often described by a set of ODEs. We show that SCMs are in general not flexible enough to give a complete causal representation of equilibrium states in these dynamical systems. Since such systems form an important modeling class for real-world data, we extend the concept of an SCM to a generalized structural causal model. We show that this allows us to capture the essential causal semantics that characterize dynamical systems. We illustrate our approach on a basic enzymatic reaction.
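As a hedged illustration of the kind of system the talk considers (assumed rate constants and a simple forward-Euler integration; not the speaker's actual example), here is the basic enzymatic reaction E + S ⇌ C → E + P driven to its equilibrium state:

```python
import numpy as np

k1, km1, k2 = 1.0, 0.5, 0.3   # assumed mass-action rate constants

def step(state, dt=0.01):
    E, S, C, P = state
    v_bind, v_unbind, v_cat = k1 * E * S, km1 * C, k2 * C
    dE = -v_bind + v_unbind + v_cat
    dS = -v_bind + v_unbind
    dC = v_bind - v_unbind - v_cat
    dP = v_cat
    return state + dt * np.array([dE, dS, dC, dP])

state = np.array([1.0, 2.0, 0.0, 0.0])   # initial E, S, C, P
for _ in range(20000):                   # integrate to t = 200
    state = step(state)
# at equilibrium all substrate is converted (S ~ 0, C ~ 0, P ~ 2), while the
# conservation laws E + C = const and S + C + P = const hold throughout
```

The conserved quantities tie the equilibrium values of all variables together, which hints at why a plain SCM over the equilibrium variables can fail to capture the full causal semantics, as the abstract argues.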

Two talks: Avital Oliver and Petar Veličković

Next week Monday and Tuesday, the AMLab seminar will host two talks at FNWI, Amsterdam Science Park:

On Monday April 9 at 16:00 in room C1.112, Avital Oliver (Google Brain) will give a talk titled “Realistic Evaluation of Semi-Supervised Learning Algorithms”;

On Tuesday April 10 at 16:00 in room F1.02, Petar Veličković (University of Cambridge) will give a talk titled “Keeping our graphs attentive”.

Abstracts and bios are included below. Afterwards there will be the usual drinks and snacks. (Note that room F1.02 for Petar’s talk is a several-minute walk away from the main entrance.)


Talk by Karen Ullrich

You are all cordially invited to the AMLab seminar on Tuesday April 3 at 16:00 in C3.163, where Karen Ullrich will give a talk titled “Variational Bayes Wake-Sleep algorithm for expressive latent representations in 3D protein reconstruction”. Afterwards there are the usual drinks and snacks!

Abstract: Reconstructing three dimensional structures from noisy two dimensional orthographic projections is a central task in many scientific domains, examples range from medical tomography to single particle electron microscopy.
We treat this problem from a Bayesian point of view. Specifically, we regard a specimen’s structure and its pose as latent factors which are marginalized over. This allows us to express uncertainty in pose and even local uncertainty in the sample’s structure. This information can serve to detect unstable sub-structures or multiple configurations of a specimen. In particular, we apply amortized deep neural networks to encode observations into latent factors. This bears the advantage of transferability across multiple structures. To this end, we propose to train the model alternately in observation space and latent space, resulting in a generalized version of the wake-sleep algorithm.
We focus our experiments on cryogenic electron microscopy (CryoEM) single particle analysis, a technique that enables deep understanding of structural biology and chemistry by inspecting single proteins. We show our model to be competitive while predicting reasonable uncertainties. Moreover, we empirically demonstrate that the model is more data efficient than competitive methods and that it is transferable between molecules.

Talk by Wouter Kool

You are all cordially invited to the AMLab seminar on Tuesday March 27 at 16:00 in C3.163, where Wouter Kool will give a talk titled “Attention Solves Your TSP”. Afterwards there are the usual drinks and snacks!

Abstract: We propose a framework for solving combinatorial optimization problems whose output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.
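A miniature of the training signal on a toy instance (a plain table of next-node scores stands in for the attention model; this is a sketch of the idea, not the paper's implementation): REINFORCE, where the baseline is the length of a greedy rollout of the same policy, so a sampled tour is reinforced exactly when it beats what the policy would do deterministically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
logits = np.zeros((n, n))   # stand-in policy: score of moving cur -> next

def rollout(greedy=False):
    tour = [0]
    while len(tour) < n:
        s = logits[tour[-1]].copy()
        s[tour] = -np.inf                  # mask visited nodes
        if greedy:
            nxt = int(np.argmax(s))
        else:
            p = np.exp(s - s.max())
            p /= p.sum()
            nxt = int(rng.choice(n, p=p))
        tour.append(nxt)
    return tour

def tour_len(tour):
    return float(sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n)))

for _ in range(300):   # REINFORCE with the greedy-rollout baseline
    sample = rollout()
    advantage = tour_len(sample) - tour_len(rollout(greedy=True))
    for i in range(n - 1):
        cur, nxt = sample[i], sample[i + 1]
        s = logits[cur].copy()
        s[sample[:i + 1]] = -np.inf
        p = np.exp(s - s.max())
        p /= p.sum()
        g = -p
        g[nxt] += 1.0                       # gradient of log softmax at chosen node
        logits[cur] -= 0.1 * advantage * g  # shorter-than-greedy tours are reinforced
```

In the paper the policy is an attention-based encoder-decoder and the baseline policy is periodically replaced by the best policy found so far; the advantage structure is as above.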