You are all cordially invited to the AMLab seminar on **Thursday June 6th** at **16:00** in **C3.163**, where **Wouter Kool** will give a talk titled **“Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement”**. Afterwards there are the usual drinks and snacks!

**Abstract:** The well-known Gumbel-Max trick for sampling from a categorical distribution can be extended to sample k elements without replacement. We show how to implicitly apply this ‘Gumbel-Top-k’ trick on a factorized distribution over sequences, allowing us to draw exact samples without replacement using a Stochastic Beam Search. Even for exponentially large domains, the number of model evaluations grows only linearly in k and the maximum sampled sequence length. The algorithm creates a theoretical connection between sampling and (deterministic) beam search and can be used as a principled intermediate alternative. In a translation task, the proposed method compares favourably against alternatives for obtaining diverse yet good-quality translations. We show that sequences sampled without replacement can be used to construct low-variance estimators for the expected sentence-level BLEU score and model entropy.
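The categorical case of the trick is easy to sketch: perturb each log-probability with i.i.d. Gumbel(0, 1) noise and take the top-k perturbed values, which yields an exact sample of k elements without replacement. A minimal NumPy illustration (the function name and the toy distribution are ours, not from the talk):

```python
import numpy as np

def gumbel_top_k(log_probs, k, rng):
    """Sample k distinct indices without replacement from the
    categorical distribution given by log_probs."""
    # Perturb each log-probability with i.i.d. Gumbel(0, 1) noise;
    # the indices of the k largest perturbed values are an exact
    # sample without replacement (the Gumbel-Top-k trick).
    gumbels = rng.gumbel(size=len(log_probs))
    return np.argsort(log_probs + gumbels)[::-1][:k]

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.15, 0.05])
sample = gumbel_top_k(np.log(probs), k=3, rng=rng)  # three distinct indices
```

For k = 1 this reduces to the classic Gumbel-Max trick; the talk's contribution is applying it implicitly over exponentially many sequences via Stochastic Beam Search rather than enumerating the domain as above.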

# AMLab talks with De Dataloog (Dutch)

(Dutch only) In De Dataloog, the Dutch podcast on data science and machine learning, Wouter Kool talks about AMLab research on learning to solve Operations Research problems! Find the podcast here!

# Talk by Maximilian Ilse

You are all cordially invited to the AMLab seminar on **Thursday May 16th** at **16:00** in **C3.163**, where **Maximilian Ilse** will give a talk titled **“DIVA: Domain Invariant Variational Autoencoder”**. Afterwards there are the usual drinks and snacks!

**Abstract:** We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant VAE (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the class, one for the domain and one for the object itself. In addition, we highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and a malaria cell images dataset, where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task and (iii) incorporating unlabeled data can boost the performance even further.

# Talk by Shi Hu

You are all cordially invited to the AMLab seminar on **Thursday May 9th** at **16:00** in **C3.163**, where **Shi Hu** will give a talk titled **“Supervised Uncertainty Quantification for Segmentation with Multiple Annotations”**. Afterwards there are the usual drinks and snacks!

**Abstract:** The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung nodule segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used in practice. In this work we exploit multi-grader annotation variability as a source of ‘groundtruth’ aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and MICCAI2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates, as well as sample accuracy and sample diversity.
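One simple way to turn multi-grader variability into a supervised uncertainty target is the per-pixel entropy of the graders’ votes: zero where all graders agree, maximal where they split evenly. A toy NumPy sketch (our simplified construction for illustration; the talk’s exact target may differ):

```python
import numpy as np

def grader_entropy(annotations):
    """Per-pixel entropy of binary grader votes.

    annotations: (num_graders, H, W) binary masks from different graders.
    Returns an (H, W) map: 0 where graders agree, log(2) where they
    split evenly -- a 'groundtruth' aleatoric uncertainty target.
    """
    p = annotations.mean(axis=0)          # fraction voting foreground
    p = np.clip(p, 1e-7, 1 - 1e-7)        # avoid log(0)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Toy example: 4 graders annotating a 2x2 image.
masks = np.array([
    [[1, 0], [1, 0]],
    [[1, 0], [0, 1]],
    [[1, 0], [1, 1]],
    [[1, 0], [0, 0]],
])
u = grader_entropy(masks)  # top row: full agreement; bottom row: 2-2 split
```

A segmentation network can then regress this map alongside its usual prediction, which is what makes the uncertainty estimation supervised.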

# Talk by Rodolfo Corona

You are all cordially invited to the AMLab seminar on **Thursday May 2nd** at **16:00** in **C3.163**, where **Rodolfo Corona** will give a talk titled **“Perceptual Theory of Mind”**. Afterwards there are the usual drinks and snacks!

**Abstract:** In this talk I will present ongoing work on applying theory of mind, where an agent forms a mental model of another based on observed behavior, to an image reference game. In our setting, a learner is tasked with describing images using image attributes, and plays the game with a population of agents whose perceptual capabilities vary, which can cause them to guess differently for a given description. In each episode, the learner plays a series of games with an agent randomly sampled from the population. We show that it can improve its performance by forming a mental model of the agents it plays with, using embeddings generated from the gameplay history. We investigate how different policies perform in this task and begin to explore how explanations could be generated for the learner’s decisions.

# Talk by Anjan Dutta

You are all cordially invited to the AMLab seminar on **Thursday April 18th** at **16:00** in **C3.163**, where **Anjan Dutta** will give a talk titled **“Towards Practical Sketch-based Image Retrieval”**. Afterwards there are the usual drinks and snacks!

**Abstract:** Recently, matching natural images with free-hand sketches has received a lot of attention within the computer vision, multimedia and machine learning communities, resulting in the sketch-based image retrieval (SBIR) paradigm. Since sketches can efficiently and precisely express the shape and pose of the target images, SBIR offers a more applicable scenario than conventional text-image cross-modal image retrieval. In this seminar, I will talk about my recent works on SBIR and related topics; specifically, my talk will address the questions: (1) how to retrieve multi-labeled images with a combination of multi-modal queries, (2) how to generalize an SBIR model to cases with no visual training data, and (3) how to progress towards more practical SBIR in terms of data and model.

# Talk by Benjamin Bloem-Reddy

You are all cordially invited to the AMLab seminar on **Monday Mar 18th** at **15:00** (note the non-standard date/time) in **C3.163**, where **Benjamin Bloem-Reddy** will give a talk titled **“Probabilistic symmetry and invariant neural networks”**. Afterwards there are the usual drinks and snacks!

**Abstract:** In an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings, much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures. We treat the neural network input and output as random variables, and consider group invariance from the perspective of probabilistic symmetry. Drawing on tools from probability and statistics, we establish a link between functional and probabilistic symmetry, and obtain functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Those representations characterize the structure of neural networks that can be used to represent such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We develop the details of the general program for exchangeable sequences and arrays, recovering a number of recent examples as special cases.
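A concrete instance of such an invariant architecture for exchangeable sequences is a sum-decomposable network f(x) = ρ(Σᵢ φ(xᵢ)), whose symmetric pooling makes the output independent of the ordering of the inputs. A minimal NumPy sketch (the architecture and random weights are illustrative, not from the talk):

```python
import numpy as np

def invariant_net(x, w_phi, w_rho):
    """f(x) = rho(sum_i phi(x_i)): invariant to permuting the n inputs.

    x: (n, d) exchangeable sequence; w_phi: (d, h); w_rho: (h,).
    """
    phi = np.tanh(x @ w_phi)    # per-element embedding phi(x_i)
    pooled = phi.sum(axis=0)    # symmetric pooling => permutation invariance
    return pooled @ w_rho       # readout rho

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))
w_phi = rng.normal(size=(3, 8))
w_rho = rng.normal(size=8)

y = invariant_net(x, w_phi, w_rho)
y_perm = invariant_net(x[rng.permutation(5)], w_phi, w_rho)  # same output
```

The talk’s results characterize when networks of (essentially) this shape suffice to represent invariant or equivariant distributions, with the sum replaced by more general symmetric statistics.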

# Talk by Changyong Oh

You are all cordially invited to the AMLab seminar on **Thursday Mar 14th** at **16:00** in **C3.163**, where **Changyong Oh** will give a talk titled **“Combinatorial Bayesian Optimization using Graph Representations”**. Afterwards there are the usual drinks and snacks!

**Abstract:** This paper focuses on Bayesian Optimization – typically considered with continuous inputs – for discrete search input spaces, including integer, categorical or graph-structured input variables. In Gaussian process-based Bayesian Optimization a problem arises, as it is not straightforward to define a proper kernel on discrete input structures, where no natural notion of smoothness or similarity is available. We propose COMBO, a method that represents values of discrete variables as vertices of a graph and then uses the diffusion kernel on that graph. As the graph size explodes with the number of categorical variables and categories, we propose the graph Cartesian product to decompose the graph into smaller sub-graphs, enabling kernel computation in linear time with respect to the number of input variables. Moreover, in our formulation we learn a scale parameter per subgraph. In empirical studies on four discrete optimization problems we demonstrate that our method is on par with or outperforms the state-of-the-art in discrete Bayesian optimization.
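The decomposition rests on a standard identity: the Laplacian of a graph Cartesian product is L₁ ⊗ I + I ⊗ L₂, and since the two terms commute, the diffusion kernel factorizes as exp(−β(L₁ ⊗ I + I ⊗ L₂)) = exp(−βL₁) ⊗ exp(−βL₂). A small SciPy check of this identity (the factor graphs and β are our toy choices):

```python
import numpy as np
from scipy.linalg import expm

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

# Two small factor graphs: a path on 3 vertices and an edge on 2 vertices.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 1], [1, 0]], dtype=float)
L1, L2 = laplacian(A1), laplacian(A2)
beta = 0.7

# Diffusion kernel on the 6-vertex product graph, the expensive way ...
L_prod = np.kron(L1, np.eye(2)) + np.kron(np.eye(3), L2)
K_direct = expm(-beta * L_prod)

# ... and via the per-factor kernels, as in COMBO's decomposition.
K_factored = np.kron(expm(-beta * L1), expm(-beta * L2))
# K_direct and K_factored agree up to floating-point error.
```

Because each factor’s matrix exponential is computed on its own small sub-graph, the cost scales with the number of input variables rather than with the (exponentially large) product graph.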

# Talk by Christos Louizos

You are all cordially invited to the AMLab seminar on **Thursday Feb 28th** at **16:00** in **C3.163**, where **Christos Louizos** will give a talk titled **“Learning Exchangeable Distributions”**. Afterwards there are the usual drinks and snacks!

**Abstract:** We present a new family of models that directly parametrize exchangeable distributions; it is realized via the introduction of an explicit model for the dependency structure of the joint probability distribution over the data, while respecting the permutation invariance of an exchangeable distribution. This is achieved by combining two recent advances in variational inference and probabilistic modelling for graphs: normalizing flows and (di)graphons. We empirically demonstrate that such models are also approximately consistent, hence they can also provide epistemic uncertainty about their predictions without positing an explicit prior over global variables. We show how to train such models on data and evaluate their predictive capabilities as well as the quality of their uncertainty on various tasks.

# Talk by Thomas Kipf

You are all cordially invited to the AMLab seminar on **Thursday Feb 21st** at **16:00** in **C3.163**, where **Thomas Kipf** will give a talk titled **“Compositional Imitation Learning: Explaining and executing one task at a time”**. Afterwards there are the usual drinks and snacks!

**Abstract:** We introduce a framework for Compositional Imitation Learning and Execution (CompILE) of hierarchically-structured behavior. CompILE learns reusable, variable-length segments of behavior from demonstration data using a novel unsupervised, fully-differentiable sequence segmentation module. These learned behaviors can then be re-composed and executed to perform new tasks. At training time, CompILE auto-encodes observed behavior into a sequence of latent codes, each corresponding to a variable-length segment in the input sequence. Once trained, our model generalizes to sequences of longer length and from environment instances not seen during training. We evaluate our model in a challenging 2D multi-task environment and show that CompILE can find correct task boundaries and event encodings in an unsupervised manner without requiring annotated demonstration data. Latent codes and associated behavior policies discovered by CompILE can be used by a hierarchical agent, where the high-level policy selects actions in the latent code space, and the low-level, task-specific policies are simply the learned decoders. We found that our agent could learn given only sparse rewards, where agents without task-specific policies struggle.