Talk by Marco Federici

You are all cordially invited to the AMLab seminar on Thursday 12th September at 14:00 in C4.174, where Marco Federici will give a talk titled “Towards Robust Representations by Exploiting Multiple Data Views”. There are the usual drinks and snacks!

Abstract: The problem of creating data representations can be formulated as the definition of an encoding function that maps observations into a predefined code space. Whenever the encoding is used as an intermediate step for a predictive task, we are generally interested, among the possible encodings, in the ones that retain the desired target information. Furthermore, recent literature has shown that discarding irrelevant factors of variation in the data (minimality) yields robustness and invariance to nuisances of the task. Following these two general guidelines, in this work we introduce an information-theoretical method that exploits known properties of the predictive task to create robust data representations without requiring direct supervision signals. By exploiting pairs of joint observations, our model learns representations that are as discriminative as the original data for the predictive task while being more robust than the raw signal. The proposed theory builds upon well-known self-supervised algorithms (such as Contrastive Predictive Coding and the InfoMax principle), bridging the gap between the information bottleneck and probabilistic invariance. Empirical evidence shows the applicability of our model to both multi-view and single-view datasets.
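For readers unfamiliar with the contrastive objectives the abstract mentions, here is a minimal numpy sketch of a generic InfoNCE-style loss over paired views, as used in Contrastive Predictive Coding. This is only an illustration of the family of objectives, not the speaker's method; the embeddings, batch size, and temperature are all toy assumptions.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss for a batch of paired views.

    z_a, z_b: (batch, dim) embeddings; row i of each is a positive
    pair, and all other rows in the batch serve as negatives.
    """
    # Cosine similarities between every pair of views.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature                      # (batch, batch)
    # Cross-entropy with the diagonal (true pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # matching views
shuffled = info_nce(z, rng.normal(size=(8, 16)))            # unrelated views
print(aligned < shuffled)
```

Minimizing this loss pushes embeddings of joint observations together while keeping them discriminative against the rest of the batch, which is the sense in which such representations retain predictive information.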

Talk by Wouter van Amsterdam

You are all cordially invited to the AMLab seminar on Thursday September 5th at 16:00 in C3.163, where Wouter van Amsterdam will give a talk titled “Controlling for Biasing Signals in Images for Prognostic Models: Survival Predictions for Lung Cancer with Deep Learning”. Afterwards there are the usual drinks and snacks!

Abstract: Deep learning has shown remarkable results for image analysis and is expected to aid individual treatment decisions in health care. Treatment recommendations are predictions with an inherently causal interpretation. To use deep learning for these applications, deep learning methods must be promoted from the level of mere associations to causal questions. We present a scenario with real-world medical images (CT-scans of lung cancers) and simulated outcome data. Through the data simulation scheme, the images contain two distinct factors of variation that are associated with survival, but represent a collider (tumor size) and a prognostic factor (tumor heterogeneity) respectively. We show that when this collider can be quantified, unbiased individual prognosis predictions are attainable with deep learning. This is achieved by (1) setting a dual task for the network to predict both the outcome and the collider and (2) enforcing a form of independence of the activation distributions of the last layer. Our method provides an example of combining deep learning and structural causal models to achieve unbiased individual prognosis predictions. Extensions of machine learning methods for applications to causal questions are required to attain the long-standing goal of personalized medicine supported by artificial intelligence.

Talk by Karen Ullrich

You are all cordially invited to the AMLab seminar on Thursday June 20th at 16:00 in C3.163, where Karen Ullrich will give a talk titled “Differentiable probabilistic models of scientific imaging with the Fourier slice theorem”. Afterwards there are the usual drinks and snacks!

Abstract: Scientific imaging techniques such as optical and electron microscopy and computed tomography (CT) scanning are used to study the 3D structure of an object through 2D observations. These observations are related to the original 3D object through orthogonal integral projections. For common 3D reconstruction algorithms, computational efficiency requires the modeling of the 3D structures to take place in Fourier space by applying the Fourier slice theorem. At present, it is unclear how to differentiate through the projection operator, and hence current learning algorithms cannot rely on gradient-based methods to optimize 3D structure models. In this paper we show how back-propagation through the projection operator in Fourier space can be achieved. We demonstrate the validity of the approach with experiments on 3D reconstruction of proteins. We further extend our approach to learning probabilistic models of 3D objects. This allows us to predict regions of low sampling rates or estimate noise. A higher sample efficiency can be reached by utilizing the learned uncertainties of the 3D structure as an unsupervised estimate of the model fit. Finally, we demonstrate how the reconstruction algorithm can be extended with an amortized inference scheme on unknown attributes such as object pose. Through empirical studies we show that joint inference of the 3D structure and the object pose becomes more difficult when the ground-truth object contains more symmetries. Due to the presence of, for instance, (approximate) rotational symmetries, the pose estimation can easily get stuck in local optima, inhibiting a fine-grained, high-quality estimate of the 3D structure.
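The Fourier slice theorem at the heart of this work can be verified in a few lines of numpy. The sketch below checks only the axis-aligned 2D case (the talk concerns 3D objects, arbitrary orientations, and learned probabilistic models): the 1D FFT of an orthogonal projection equals the central slice of the 2D FFT. The toy image is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))   # toy 2D "object" f(y, x)

# Orthogonal integral projection: integrate (sum) the object along y.
projection = img.sum(axis=0)      # 1D signal over x

# Fourier slice theorem (axis-aligned case): the 1D FFT of the
# projection equals the k_y = 0 slice of the 2D FFT of the object.
lhs = np.fft.fft(projection)
rhs = np.fft.fft2(img)[0, :]

print(np.allclose(lhs, rhs))      # True
```

This identity is why reconstruction algorithms can model the object in Fourier space and compare slices against the Fourier transforms of the 2D observations, rather than computing projections directly.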

Talk by Wouter Kool

You are all cordially invited to the AMLab seminar on Thursday June 6th at 16:00 in C3.163, where Wouter Kool will give a talk titled “Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement”. Afterwards there are the usual drinks and snacks!

Abstract: The well-known Gumbel-Max trick for sampling from a categorical distribution can be extended to sample k elements without replacement. We show how to implicitly apply this ‘Gumbel-Top-k’ trick on a factorized distribution over sequences, allowing us to draw exact samples without replacement using a Stochastic Beam Search. Even for exponentially large domains, the number of model evaluations grows only linearly in k and the maximum sampled sequence length. The algorithm creates a theoretical connection between sampling and (deterministic) beam search and can be used as a principled intermediate alternative. In a translation task, the proposed method compares favourably against alternatives for obtaining diverse yet high-quality translations. We show that sequences sampled without replacement can be used to construct low-variance estimators for the expected sentence-level BLEU score and the model entropy.
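For a flat categorical distribution, the Gumbel-Top-k trick is a one-liner; the sketch below shows that base case (the paper's contribution is applying it implicitly over exponentially large sequence domains via Stochastic Beam Search, which this sketch does not do). The example logits are an arbitrary assumption.

```python
import numpy as np

def gumbel_top_k(logits, k, rng):
    """Sample k categories without replacement via the Gumbel-Top-k trick.

    Perturb each logit with independent Gumbel(0, 1) noise and take the
    indices of the k largest perturbed values; this is distributed as
    sampling k times from the softmax without replacement.
    """
    perturbed = logits + rng.gumbel(size=logits.shape)
    return np.argsort(-perturbed)[:k]   # indices of the top k

rng = np.random.default_rng(0)
logits = np.log(np.array([0.5, 0.3, 0.1, 0.05, 0.05]))
sample = gumbel_top_k(logits, k=3, rng=rng)
print(len(set(sample.tolist())))        # 3 distinct indices: no replacement
```

With k = 1 this reduces to the classic Gumbel-Max trick; taking the top k of a single set of perturbed logits avoids renormalizing the distribution after each draw.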

Talk by Maximilian Ilse

You are all cordially invited to the AMLab seminar on Thursday May 16th at 16:00 in C3.163, where Maximilian Ilse will give a talk titled “DIVA: Domain Invariant Variational Autoencoder”. Afterwards there are the usual drinks and snacks!

Abstract: We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant Variational Autoencoder (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the class, one for the domain and one for the object itself. In addition, we highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and a malaria cell images dataset where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task and (iii) incorporating unlabeled data can boost the performance even further.

Talk by Shi Hu

You are all cordially invited to the AMLab seminar on Thursday May 9th at 16:00 in C3.163, where Shi Hu will give a talk titled “Supervised Uncertainty Quantification for Segmentation with Multiple Annotations”. Afterwards there are the usual drinks and snacks!

Abstract: The accurate estimation of predictive uncertainty is important in medical scenarios such as lung nodule segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used in practice. In this work we exploit multi-grader annotation variability as a source of ‘groundtruth’ aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and the MICCAI 2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates. We also find that we can improve sample accuracy and sample diversity.

Talk by Rodolfo Corona

You are all cordially invited to the AMLab seminar on Thursday May 2nd at 16:00 in C3.163, where Rodolfo Corona will give a talk titled “Perceptual Theory of Mind”. Afterwards there are the usual drinks and snacks!

Abstract: In this talk I will present ongoing work on applying theory of mind, where an agent forms a mental model of another based on observed behavior, to an image reference game. In our setting, a learner is tasked with describing images using image attributes, and plays the game with a population of agents whose perceptual capabilities vary, which can cause them to guess differently for a given description. In each episode, the learner plays a series of games with an agent randomly sampled from the population. We show that it can improve its performance by forming a mental model of the agents it plays with, using embeddings generated from the gameplay history. We investigate how different policies perform in this task and begin to explore how explanations could be generated for the learner’s decisions.

Talk by Anjan Dutta

You are all cordially invited to the AMLab seminar on Thursday April 18th at 16:00 in C3.163, where Anjan Dutta will give a talk titled “Towards Practical Sketch-based Image Retrieval”. Afterwards there are the usual drinks and snacks!

Abstract: Recently, matching natural images with free-hand sketches has received a lot of attention within the computer vision, multimedia and machine learning communities, resulting in the sketch-based image retrieval (SBIR) paradigm. Since sketches can efficiently and precisely express the shape and pose of the target images, SBIR addresses a broader range of applicable scenarios than conventional text-image cross-modal image retrieval. In this seminar, I will talk about my recent works on SBIR and related topics; specifically, my talk will address the questions: (1) how to retrieve multi-labeled images with a combination of multi-modal queries, (2) how to generalize an SBIR model to cases with no visual training data, and (3) how to progress towards more practical SBIR in terms of data and model.

Talk by Benjamin Bloem-Reddy

You are all cordially invited to the AMLab seminar on **Monday Mar 18th at 15:00** (Note the non-standard date/time) in C3.163, where Benjamin Bloem-Reddy will give a talk titled “Probabilistic symmetry and invariant neural networks”. Afterwards there are the usual drinks and snacks!

Abstract: In an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings, much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures. We treat the neural network input and output as random variables, and consider group invariance from the perspective of probabilistic symmetry. Drawing on tools from probability and statistics, we establish a link between functional and probabilistic symmetry, and obtain functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Those representations characterize the structure of neural networks that can be used to represent such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We develop the details of the general program for exchangeable sequences and arrays, recovering a number of recent examples as special cases.
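The sum-decomposition architectures recovered as special cases of this program (for exchangeable sequences, Deep Sets-style networks) are easy to illustrate. Below is a toy numpy sketch, with arbitrary random weights standing in for trained parameters: applying a shared per-element map, pooling with a symmetric function, and reading out yields a function that is provably invariant to permutations of its input.

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(4, 8))   # shared per-element encoder weights (toy)
W_rho = rng.normal(size=(8, 3))   # post-pooling readout weights (toy)

def invariant_net(x):
    """Deep Sets-style f(x) = rho(sum_i phi(x_i)).

    The sum over elements is symmetric, so the output cannot depend
    on the order of the rows of x.
    """
    h = np.tanh(x @ W_phi)        # phi applied to each element: (n, 8)
    pooled = h.sum(axis=0)        # symmetric pooling over the set
    return np.tanh(pooled @ W_rho)

x = rng.normal(size=(6, 4))       # a "set" of 6 elements in R^4
perm = rng.permutation(6)
print(np.allclose(invariant_net(x), invariant_net(x[perm])))  # True
```

The talk's result runs in the other direction: starting from a distributional invariance assumption, it characterizes which functional forms (such as this one) suffice to represent all invariant or equivariant distributions.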