Category Archives: Talk

Talk by Paul Baireuther

You are all cordially invited to the AMLab seminar on Tuesday March 20 at 16:00 in C3.163, where Paul Baireuther (Lorentz Institute of Leiden University) will give a talk titled “Quantum Error Correction with Recurrent Neural Networks”. Afterwards there are the usual drinks and snacks!

Abstract: In quantum computation one of the key challenges is to build fault-tolerant logical qubits. A logical qubit consists of several physical qubits. In stabilizer codes, a popular class of quantum error correction schemes, a part of the system of physical qubits is measured repeatedly, without measuring (and collapsing by the Born rule) the state of the encoded logical qubit. These repetitive measurements are called syndrome measurements, and must be interpreted by a classical decoder in order to determine what errors occurred on the underlying physical system. The decoding of these space- and time-correlated syndromes is a highly non-trivial task, and efficient decoding algorithms are known only for a few stabilizer codes. In this talk I will explain how we design and train decoders based on recurrent neural networks.
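To make the setup concrete, here is a minimal sketch (my own illustration, not the speaker's architecture; the layer sizes and the binary logical-error target are assumptions) of a recurrent decoder that reads a sequence of syndrome measurements and predicts whether a logical error occurred:

```python
# Minimal sketch of an RNN decoder for syndrome sequences. This is an
# illustration only, not the model from the talk; sizes are assumptions.
import torch
import torch.nn as nn

class SyndromeDecoder(nn.Module):
    def __init__(self, n_syndrome_bits=8, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_syndrome_bits, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, syndromes):
        # syndromes: (batch, n_rounds, n_syndrome_bits), entries in {0, 1}
        _, (h_n, _) = self.lstm(syndromes.float())
        # Probability that a logical error occurred over the measured rounds.
        return torch.sigmoid(self.head(h_n[-1]))
```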

Talk by Max Welling

You are all cordially invited to the AMLab seminar on Tuesday March 13 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where prof. Max Welling will give a talk titled “Stochastic Deep Learning”. Afterwards there are the usual drinks and snacks.

Abstract: Deep learning has been very successful in many applications, but there are a number of challenges that still need to be addressed:
1) DL does not provide reliable confidence intervals
2) DL is susceptible to small adversarial input perturbations
3) DL easily overfits
4) DL uses too much energy and memory
In this talk I will argue that we should be looking at stochastic DL models where the hidden units are noisy. We can train these models with variational methods.
A number of interesting connections emerge in such models:
1) The noisy hidden units form an information bottleneck
2) Through local reparameterization we can interpret these models as Bayesian (a sketch of this trick follows below)
3) The noise can be used to create privacy-preserving models
4) Stochastic quantization to low bit-width can make DL more power- and memory-efficient
This talk will not go in great depth in these topics but rather paint the larger picture.
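To make connection (2) concrete, here is a minimal sketch of the local reparameterization trick, assuming a factorized Gaussian posterior over the weights (dimensions and the log-variance parameterization are illustrative):

```python
# Minimal sketch of local reparameterization: with independent Gaussian
# weights, a pre-activation x @ w is itself Gaussian, so we can sample it
# directly with one noise draw per hidden unit instead of per weight.
import torch

def stochastic_linear(x, w_mu, w_logvar):
    # x: (batch, d_in); w_mu, w_logvar: (d_in, d_out)
    act_mu = x @ w_mu                    # mean of the pre-activations
    act_var = (x ** 2) @ w_logvar.exp()  # variance of the pre-activations
    eps = torch.randn_like(act_mu)       # noise on the hidden units
    return act_mu + act_var.sqrt() * eps
```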

Talk by Thijs van Ommen

You are all cordially invited to the AMLab seminar on Tuesday March 6 at 16:00 in C3.163, where Thijs van Ommen will give a talk titled “Accurate and efficient causal discovery”. Afterwards there are the usual drinks and snacks!

Abstract: Will administering a certain chemical cause a cancer cell to stop multiplying? To answer this and other scientific “what-if” questions, we need causal models, which describe the cause-effect relations within a system of interest. Because even domain experts may not know the right causal model, we want to learn it automatically from large-scale data. This problem is called causal discovery, and is very difficult: the signals in the data that allow us to distinguish different causal models are often weak, so we need to be careful when interpreting them. Also, the number of candidate models that must be considered makes this problem computationally challenging. I will present some of my recent results which are an important step towards developing a statistically accurate and computationally efficient algorithm for causal discovery.

Talk by Bas Veeling

You are all cordially invited to the AMLab seminar on Tuesday February 20 at 16:00 in C3.163, where Bas Veeling will give a talk titled “Uncertainty in Deep Neural Networks with Stochastic Quantized Activation Variational Inference”. Afterwards there are the usual drinks and snacks!

Abstract: The successful uptake of deep neural networks in high-risk domains is contingent on the capability to ensure minimal-risk guarantees. This requires that deep neural networks provide predictive uncertainty of high quality. Amortized variational inference provides a promising direction to achieve this, but demands a flexible yet tractable approximate posterior, which is an open area of research. We propose “SQUAVI”, a novel and flexible variational inference model that imposes a multinomial distribution on quantized latent variables. The proposed method is scalable, self-normalizing and sample-efficient, and we demonstrate that the model utilizes the flexible posterior to its full potential, learns interesting non-linearities, and provides predictive uncertainty of competitive quality.
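The abstract leaves the construction open, but as a generic illustration of a multinomial (categorical) distribution over quantized activation values, here is a sketch; the fixed grid of levels and the Gumbel-softmax relaxation are my assumptions, not necessarily the method from the talk:

```python
# Hypothetical sketch: sample activations from a categorical distribution
# over a fixed grid of quantization levels, relaxed with Gumbel-softmax
# so that the sampling step stays differentiable during training.
import torch
import torch.nn.functional as F

def stochastic_quantized_activation(logits, levels, tau=1.0):
    # logits: (batch, units, n_levels); levels: (n_levels,), e.g. [-1., 0., 1.]
    probs = F.gumbel_softmax(logits, tau=tau, hard=False)  # soft one-hot
    return probs @ levels  # (batch, units): convex combination of levels
```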

Talk by ChangYong Oh

You are all cordially invited to the AMLab seminar on Tuesday February 13 at 16:00 in C3.163, where ChangYong Oh will give a talk titled “BOCK: Bayesian Optimization with Cylindrical Kernels”. Afterwards there are the usual drinks and snacks!

Abstract: A major challenge in Bayesian Optimization is the boundary issue (Swersky, 2017) where an algorithm spends too many evaluations near the boundary of its search space. In this paper we propose BOCK, Bayesian Optimization with Cylindrical Kernels, whose basic idea is to transform the ball geometry of the search space using a cylindrical transformation. Because of the transformed geometry, the Gaussian Process-based surrogate model spends less budget searching near the boundary, while concentrating its efforts relatively more near the center of the search region, where we expect the solution to be located. We evaluate BOCK extensively, showing that it is not only more accurate and efficient, but it also scales successfully to problems with a dimensionality as high as 500. We show that the better accuracy and scalability of BOCK even allows optimizing modestly sized neural network layers, as well as neural network hyperparameters.
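As a rough geometric illustration (my own sketch, not the BOCK implementation): a point in the ball is rewritten as a radius plus a direction on the unit sphere, and the kernel factorizes into a radial part and an angular part:

```python
# Sketch of the cylindrical idea: represent x in the unit ball by
# (radius, direction) and multiply a radial kernel with an angular one.
# The kernel choices here are illustrative, not those of the paper.
import numpy as np

def to_cylindrical(x, eps=1e-12):
    r = np.linalg.norm(x)
    u = x / (r + eps)  # direction on the unit sphere
    return r, u

def cylindrical_kernel(x1, x2, lengthscale=0.5):
    r1, u1 = to_cylindrical(x1)
    r2, u2 = to_cylindrical(x2)
    k_radial = np.exp(-((r1 - r2) ** 2) / (2 * lengthscale ** 2))
    k_angular = (1.0 + u1 @ u2) / 2.0  # positive-definite on the sphere
    return k_radial * k_angular
```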

Talk by Jorn Peters

You are all cordially invited to the AMLab seminar on Tuesday January 30 at 16:00 in C3.163, where Jorn Peters will give a talk titled “Binary Neural Networks: an overview”. Afterwards there are the usual drinks and snacks!

Abstract: One limiting factor for deploying neural networks in real-world applications (e.g., self-driving cars or smart home appliances) is their requirement for memory, computation and power. As a consequence, it is often infeasible to employ many of today’s deep learning innovations in situations where resources are scarce. One way to combat these resource requirements is to reduce the floating-point bit-precision of the parameters and/or activations in the neural network, which effectively increases computational throughput and reduces memory requirements. Taking this to the extreme, one obtains binary neural networks, i.e., neural networks in which the parameters and/or activations are constrained to only two possible values (e.g., -1 or 1). In recent years, several methods for training binary neural networks using gradient descent have been developed. In this talk I will give an overview of a selection of these methods.
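One widely used ingredient in this line of work is the straight-through estimator: binarize with sign() in the forward pass and let the gradient pass through (clipped) in the backward pass. A minimal PyTorch sketch, assuming BinaryConnect-style training of real-valued shadow weights:

```python
# Minimal sketch of binarization with a straight-through estimator:
# sign() in the forward pass, identity gradient clipped to |x| <= 1
# in the backward pass, so the real-valued weights keep learning.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()

binarize = BinarizeSTE.apply  # usage: w_bin = binarize(w_real)
```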

Talk by Veronika Cheplygina

You are all cordially invited to this week’s AMLab seminar, on Friday January 26 at 15:30 in C3.165 (i.e., the day, time and location of the idea club). There, Veronika Cheplygina (Eindhoven) will give a talk titled “Challenges of multiple instance learning in medical image analysis”. Afterwards there are the usual drinks and snacks!

Abstract: Data is often only weakly annotated: for example, for a medical image, we might know the patient’s diagnosis, but not where the abnormalities are located. Multiple instance learning (MIL) is aimed at learning classifiers from such data. In this talk, I will share a number of lessons I have learnt about MIL so far: (1) researchers do not agree on what MIL is, (2) there is no “one size fits all” approach, and (3) we need more thorough evaluation methods. I will give examples from several applications, including computer-aided diagnosis in chest CT images. I will also briefly discuss my work on crowdsourcing medical image annotations, and why MIL might be useful in this case.

Veronika Cheplygina has been an assistant professor in the Medical Image Analysis group at Eindhoven University of Technology since February 2017. She received her Ph.D. from the Delft University of Technology in 2015 for her thesis “Dissimilarity-Based Multiple Instance Learning”. As part of her Ph.D., she was a visiting researcher at the Max Planck Institute for Intelligent Systems in Tuebingen, Germany. From 2015 to 2016 she was a postdoc at the Biomedical Imaging Group Rotterdam, Erasmus MC. Her research interests are centered around learning scenarios where few labels are available, such as multiple instance learning, transfer learning, and crowdsourcing. Next to research, Veronika blogs about academic life at

Talk by Jakub Tomczak

You are all cordially invited to the first AMLab seminar of 2018 on Tuesday January 16 at 16:00 in C3.163, where Jakub Tomczak will give a talk titled “Deep Multiple Instance Learning with the Attention-based Pooling Operator”. Afterwards there are the usual drinks and snacks!

Abstract: The computer-aided analysis of medical scans is a longstanding goal in the medical imaging field. Currently, deep learning has become the dominant methodology for supporting pathologists and radiologists. Deep learning algorithms have been successfully applied to digital pathology and radiology; nevertheless, there are still practical issues that prevent these tools from being widely used in practice. The main obstacles are the low number of available cases and the large size of images (a.k.a. the small n, large p problem in machine learning), together with very limited access to pixel-level annotations, which can lead to severe overfitting and large computational requirements. We propose to handle these issues by introducing a framework that processes a medical image as a collection of small patches using a single, shared neural network. The final diagnosis is provided by combining scores of individual patches using a permutation-invariant pooling operator. In the machine learning community, such an approach is called multiple instance learning (MIL).

During this presentation we will outline the definition of MIL and propose a learnable permutation-invariant operator using the attention mechanism. We will present our preliminary results on a toy problem and real-life histopathology data.
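For intuition, here is a minimal sketch of an attention-based pooling operator of this kind (sizes are illustrative): each patch embedding h_k receives a learned weight a_k, and the bag representation z = sum_k a_k h_k is invariant to the order of the patches:

```python
# Minimal sketch of attention-based MIL pooling: a small network scores
# each instance, the scores are softmax-normalized into weights, and the
# bag embedding is the weighted sum of the instance embeddings.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, embed_dim=128, attn_dim=64):
        super().__init__()
        self.V = nn.Linear(embed_dim, attn_dim)
        self.w = nn.Linear(attn_dim, 1)

    def forward(self, h):
        # h: (n_instances, embed_dim), one bag of patch embeddings
        a = torch.softmax(self.w(torch.tanh(self.V(h))), dim=0)  # (n, 1)
        return (a * h).sum(dim=0)  # (embed_dim,) bag representation
```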

Maximilian Ilse, Jakub Tomczak, Max Welling

Talk by Giorgio Patrini

You are all cordially invited to the AMLab seminar on Tuesday December 12 at 16:00 in C3.163, where Giorgio Patrini will give a talk titled “Federated learning on vertically partitioned data via entity resolution and homomorphic encryption”. Afterwards there are the usual drinks and snacks!

Abstract: Consider two data providers, each maintaining private records of different feature sets about common entities. They aim to learn a linear model jointly in a federated setting, namely, data is local and a shared model is trained from locally computed updates. In contrast with most work on distributed learning, in this scenario (i) data is split vertically, i.e. by features, (ii) only one data provider knows the target variable and (iii) entities are not linked across the data providers. Hence, to the challenge of private learning, we add the potentially negative consequences of mistakes in entity resolution.

Our contribution is twofold. First, we describe a three-party end-to-end solution in two phases (privacy-preserving entity resolution, followed by federated logistic regression over messages encrypted with an additively homomorphic scheme) that is secure against an honest-but-curious adversary. The system allows learning without either exposing data in the clear or sharing which entities the data providers have in common. Our implementation is as accurate as a naive non-private solution that brings all data in one place, and scales to problems with millions of entities and hundreds of features. Second, we provide a formal analysis of the impact of entity resolution on learning.
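As a toy illustration of the additively homomorphic ingredient, using the open-source python-paillier library (a generic sketch, not the authors' system): one party can sum and scale numbers encrypted by another party without ever seeing them in the clear.

```python
# Toy sketch of additively homomorphic encryption with python-paillier:
# provider B encrypts residuals, provider A aggregates them against its
# own features, and only the key holder can decrypt the result.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

residuals = [0.3, -1.2, 0.7]                       # provider B's values
enc_residuals = [public_key.encrypt(r) for r in residuals]

features = [1.0, 0.5, -2.0]                        # provider A's values
enc_grad = sum(e * x for e, x in zip(enc_residuals, features))

print(private_key.decrypt(enc_grad))               # -1.7, never seen by A
```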

Talk by Thomas Kipf

You are all cordially invited to the AMLab seminar on Tuesday November 14 at 16:00 in C3.163, where Thomas Kipf will give a talk titled “End-to-end learning on graphs with graph convolutional networks”. Afterwards there are the usual drinks and snacks!

Abstract: Neural networks on graphs have gained renewed interest in the machine learning community. Recent results have shown that end-to-end trainable neural network models that operate directly on graphs can challenge well-established classical approaches, such as kernel-based methods or methods that rely on graph embeddings (e.g. DeepWalk). In this talk, I will motivate such an approach from an analogy to traditional convolutional neural networks and introduce our recent variant of graph convolutional networks (GCNs) that achieves promising results on a number of semi-supervised node classification tasks. If time permits, I will further introduce two extensions of this basic framework, namely: graph auto-encoders and relational GCNs. While graph auto-encoders provide a novel way of approaching problems like link prediction or recommendation, relational GCNs allow for efficient modeling of directed relational graphs, such as knowledge bases.
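For reference, the basic GCN propagation rule is H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W): each node averages the degree-normalized features of its neighbourhood (including itself) and applies a shared linear map. A minimal numpy sketch of a single layer:

```python
# Minimal sketch of one graph convolutional layer (Kipf & Welling style):
# symmetrically normalize the adjacency with self-loops, propagate the
# node features, apply a linear map and a ReLU non-linearity.
import numpy as np

def gcn_layer(A, H, W):
    # A: (n, n) adjacency; H: (n, d_in) node features; W: (d_in, d_out)
    A_hat = A + np.eye(A.shape[0])                 # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D_hat^{-1/2}
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU
```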