Talk by Peter Orbanz

You are all cordially invited to the AMLab seminar on Monday November 12 at 11:00 (note the unusual date and time!) in C3.163 (FNWI, Amsterdam Science Park), where Peter Orbanz (Columbia University) will give a talk titled “Statistical models of large graphs and networks”. Afterwards there are the usual drinks and snacks.

Abstract: Relational data is, roughly speaking, any form of data that can be represented as a graph: A social network, user preference data, protein-protein interactions, etc. A recent body of work, by myself and others, aims to develop a statistical theory of such data for problems where a single graph is observed (such as a small part of a large social network). Keywords include graphon, edge-exchangeable and sparse exchangeable graphs, and many latent variable models used in machine learning. I will summarize the main ideas and results of this theory: How and why the exchangeability assumptions implicit in commonly used models for such data may fail; what can be done about it; what we know about convergence; and implications of these results for methods popular in machine learning, such as graph embeddings and empirical risk minimization.
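As a concrete illustration of the exchangeability assumption behind graphon models (a standard textbook construction, not code from the talk): an exchangeable random graph is generated by drawing one uniform latent variable per node and connecting each pair independently with probability given by a graphon W. A minimal NumPy sketch:

```python
import numpy as np

def sample_graphon_graph(n, W, seed=None):
    """Sample an n-node exchangeable random graph from a graphon W.

    Each node i gets a latent uniform U_i; edge (i, j) is present
    independently with probability W(U_i, U_j).
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                    # latent node positions
    p = W(u[:, None], u[None, :])              # pairwise edge probabilities
    coins = rng.uniform(size=(n, n)) < p       # Bernoulli draws
    A = np.triu(coins, k=1)                    # keep i < j only (no self-loops)
    return (A | A.T).astype(int)               # symmetrize

# Example graphon W(u, v) = u * v: a dense graph with expected edge density 1/4
A = sample_graphon_graph(500, lambda u, v: u * v, seed=0)
```

Graphs sampled this way are dense (edge count grows like n^2), which is exactly the regime where the classical graphon theory applies and where the sparse-exchangeable extensions mentioned in the abstract become necessary.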

Bio: Peter Orbanz is associate professor of statistics at Columbia University. His research interests include network and relational data, Bayesian nonparametrics, symmetry principles in machine learning and statistics, and hierarchies of latent variables. He was an undergraduate student at the University of Bonn, a PhD student at ETH Zurich, and a postdoctoral fellow at the University of Cambridge.

Slides (pdf)

Talk by Wendy Shang

You are all cordially invited to the AMLab seminar on Thursday November 8 at 16:00 in C3.163, where Wendy Shang will give a talk titled “Channel-Recurrent Autoencoding”. Afterwards there are the usual drinks and snacks!

Abstract: Understanding the functionalities of high-level features from deep neural networks (DNNs) is a long-standing challenge. Towards this goal, we propose a channel-recurrent architecture in place of the vanilla fully-connected layers to construct more interpretable and expressive latent spaces. Building on Variational Autoencoders (VAEs), we integrate recurrent connections across channels into both the inference and generation steps, allowing high-level features to be captured in a global-to-local, coarse-to-fine manner. Combined with an adversarial loss and two novel regularizations, namely a KL objective weighting scheme over time steps and mutual information maximization between transformed latent variables and the outputs, our channel-recurrent VAE-GAN (crVAE-GAN) outperforms VAE-GAN in generating a diverse spectrum of high-resolution images while maintaining the same level of computational efficiency. Moreover, when applying crVAE-GAN in an attribute-conditioned generative setup, we further augment an attention mechanism over each attribute to indicate the specific latent subset responsible for its modulation, imposing additional semantic meaning on the latent space. We evaluate through both qualitative visual examination and quantitative metrics.

Talk by Stephan Alaniz

You are all cordially invited to the second AMLab seminar this week, on Thursday November 1 at 16:00 in C3.163, where Stephan Alaniz will give a talk titled “Iterative Binary Decision”. Afterwards there are the usual drinks and snacks!

Abstract: The complexity of the functions a neural network approximates makes it hard to explain what its classification decisions are based on. In this work, we present a framework that exposes more information about this decision-making process. Instead of producing a classification in a single step, our model iteratively makes binary sub-decisions which, combined as a whole, ultimately produce the same classification result while revealing a decision tree as its thought process. While there is generally a trade-off between interpretability and accuracy, the insights our model generates come at a negligible loss in accuracy. The decision tree resulting from the sequence of binary decisions of our model reveals a hierarchical clustering of the data and can be used as learned attributes in zero-shot learning.
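The control flow of such an iterative binary classifier is essentially a walk down a binary tree whose leaves are class labels. A hypothetical sketch (in the paper the routing decisions come from a learned network; here they are stubbed with hand-written threshold rules purely for illustration):

```python
def iterative_binary_classify(x, node):
    """Follow binary sub-decisions until a leaf (class label) is reached.

    Internal nodes are dicts with a 'decide' function and two children;
    leaves are plain class labels. Returns the label and the decision path.
    """
    path = []
    while isinstance(node, dict):         # internal node: make one sub-decision
        go_right = node["decide"](x)      # binary decision for input x
        path.append(int(go_right))
        node = node["right"] if go_right else node["left"]
    return node, path                     # leaf label and the sequence of decisions

# Tiny hand-built tree over 2D inputs (illustrative, not learned):
tree = {
    "decide": lambda x: x[0] > 0.5,
    "left":  {"decide": lambda x: x[1] > 0.5, "left": "A", "right": "B"},
    "right": "C",
}
label, path = iterative_binary_classify((0.2, 0.9), tree)
# label == "B", path == [0, 1]
```

The recorded path is what makes the prediction inspectable: the same sequence of sub-decisions, aggregated over a dataset, traces out the hierarchical clustering the abstract refers to.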

Talk by Giorgio Patrini

This week we’ll have two talks in the seminar: one by Stephan Alaniz in the regular Thursday slot (announcement will appear soon), and an extra one on Wednesday October 31 at 16:00 in C3.163, where Giorgio Patrini will give a talk titled “Sinkhorn AutoEncoders”. Afterwards there are the usual drinks and snacks!

Abstract: Optimal Transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show how this principle dictates the minimization of the Wasserstein distance between the encoder aggregated posterior and the prior, plus a reconstruction error. We prove that in the non-parametric limit the autoencoder generates the data distribution if and only if the two distributions match exactly, and that the optimum can be obtained by deterministic autoencoders. We then introduce the Sinkhorn AutoEncoder (SAE), which casts the problem into Optimal Transport on the latent space. The resulting Wasserstein distance is minimized by backpropagating through the Sinkhorn algorithm. SAE models the aggregated posterior as an implicit distribution and therefore does not need the reparameterization trick for gradient estimation. Moreover, it requires virtually no adaptation to different prior distributions. We demonstrate its flexibility by considering models with hyperspherical and Dirichlet priors, as well as a simple case of probabilistic programming. SAE matches or outperforms other autoencoding models in visual quality and FID scores.
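The core computational step, the Sinkhorn fixed-point iteration for entropy-regularized optimal transport, can be sketched in isolation. Below is a minimal NumPy illustration of computing a Sinkhorn cost between two equal-size point clouds (a stand-in for encoded latents and prior samples). This is plain NumPy, so no gradients actually flow; in SAE the same loop would run inside an autodiff framework so that it can be backpropagated through. Function names and parameter values are illustrative.

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=1.0, n_iter=200):
    """Entropy-regularized OT cost between two equal-size point clouds.

    X, Y: (n, d) arrays; uniform marginals are assumed on both sides.
    """
    n = len(X)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared-distance cost matrix
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = b = np.ones(n) / n                              # uniform marginals
    u = np.ones(n) / n
    for _ in range(n_iter):                             # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                     # (approximate) transport plan
    return (P * C).sum()                                # transport cost under the plan

rng = np.random.default_rng(0)
Z = rng.normal(size=(64, 2))       # stand-in for encoder outputs (aggregated posterior)
P0 = rng.normal(size=(64, 2))      # samples from the prior
cost = sinkhorn_cost(Z, P0)
```

Each update rescales the kernel's rows and columns so the plan's marginals converge to the target distributions; since the iterations are just matrix-vector products and divisions, they are differentiable end to end.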

Joint work with Marcello Carioni (KFU Graz), Patrick Forré, Samarth Bhargav, Max Welling, Rianne van den Berg, Tim Genewein (Bosch Centre for AI), and Frank Nielsen (École Polytechnique).

Talk by Kihyuk Sohn

You are all cordially invited to the AMLab seminar on Thursday October 25 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Kihyuk Sohn (NEC) will give a talk titled “Deep Domain Adaptation in the Wild”. Afterwards there are the usual drinks and snacks.

Abstract: Unsupervised domain adaptation is a promising avenue for enhancing the performance of deep neural networks on a target domain, using labels only from a source domain. However, it is not well studied at which levels of representation the adaptation should happen, nor what their complementary properties are. Furthermore, the theory of domain adaptation is limited to classification problems whose source and target domains share the same task. In this talk, we address these challenges in deep domain adaptation. Firstly, we argue that adaptation may happen at various levels of representation, such as input pixels, intermediate features, or output labels, with different insights injected at different levels. Secondly, we generalize the theory of domain adaptation to the case where source and target domains do not necessarily share the same classification task. We demonstrate the effectiveness of the proposed methods in several vision applications, namely car recognition in the surveillance domain, face recognition across various ethnicities, and semantic segmentation.

Bio: Kihyuk Sohn is a researcher in the Media Analytics group of NEC Laboratories America. His research interest lies in machine learning and computer vision, with a focus on deep representation learning from large-scale, structured and multimodal data for robust visual perception. He obtained his Ph.D. (2015) from the Department of Electrical Engineering and Computer Science, University of Michigan.

PhD student Noud de Kroon joined AMLab

Noud de Kroon joined the UvA in October 2018 as a PhD student at AMLab, under the joint supervision of dr. Joris Mooij and dr. Danielle Belgrave (Microsoft Research Cambridge). Previously, he obtained a bachelor’s degree in software science at Eindhoven University of Technology and a master’s degree in computer science at the University of Oxford. His research focuses on combining causality and reinforcement learning in order to make better decisions and improve data efficiency, with applications for example in the medical domain.

Talk by Shihan Wang

You are all cordially invited to the AMLab seminar on Thursday October 18 at 16:00 in C3.163, where Shihan Wang will give a talk titled “Apply Machine Learning and Data Mining to Promote Physical Activity”. Afterwards there are the usual drinks and snacks!

Abstract: In this talk, we will introduce our research in the “playful data-driven active urban living” (PAUL) project. Targeting the issue of physical inactivity in modern society, we aim to motivate less active people to participate in more physical activity. After an overview of the project, we will mainly present our recent papers.
Driven by a large-scale dataset of Dutch people’s running records (over 10K people over about 4 years), we start by characterizing runners based on their different temporal activity patterns. Then, accounting for the diversity of users, we study how environmental situations (time, weather, geographical and social information) at the start of a run affect the running distance. A rule-based machine learning method is applied to capture combined situations frequently associated with long-distance runs. These environmental situations will be used in a mobile system to identify the ‘right timing’ for motivating people to start longer runs via message interventions.

Talk by Patrick Forré

You are all cordially invited to the AMLab seminar on Thursday October 11 at 16:00 in C3.163, where Patrick Forré will give a talk titled “Non-linear structural causal models with cycles and latent confounders”. Afterwards there are the usual drinks and snacks!

Abstract: In this talk we will present the main results of two of our recent papers. We will introduce a flexible class of general structural causal models that allow for non-linear functional relations (like neural networks), arbitrary probability distributions (discrete, continuous, mixtures, etc.), causal cycles (like feedback) and latent variables (a.k.a. confounders). For such models we will demonstrate several desirable properties: how to do causal reasoning, the rules of do-calculus, and graphical criteria for conditional independence relations. We will also show how the latter can be exploited for causal discovery algorithms in this general setting.

Talk by Bela Mulder

You are all cordially invited to the AMLab seminar on Thursday October 4 at 16:00 in C3.163, where Bela Mulder (AMOLF) will give a talk titled “Pitting man against machine in the arena of bottom-up design of crystal structures”. Afterwards there are the usual drinks and snacks!

Abstract: In this highly informal seminar I would like to pitch the question “Can a machine learning system develop a theory?” One of the much-touted properties of deep learning networks is that their deeper layers develop higher-order, generalized representations of their inputs. This raises the question of whether they are able to hit upon the kind of hidden structures in physical problems that are the cornerstone of effective physical theories. I would like to propose to test this idea in a concrete setting related to the highly relevant question of inverse design of self-assembling matter. I have recently formulated a novel approach to inferring the specific short-range isotropic interactions between particles of multiple types on lattices of given geometry, such that they spontaneously form specified periodic states of essentially arbitrary complexity. This approach rests upon the subtle intertwining between the group of transformations that leave the lattice structure invariant and the group of permutations of the set of particle types induced by these same transformations on the target ordered structure. The upshot of this approach is that the number of independent coupling constants in the lattice can be systematically reduced from O(N^2), where N is the number of distinct species, to O(N). The idea would be to see whether a machine learning approach is able to “learn” these symmetry-based rules, in a way that also generalizes to similar patterns not included in the training set, given the space of possible patterns and their trivial transforms under symmetry operations as input, the set of possible coupling constants as output, and feedback based on the degree to which the target structure is realized with these coupling constants.

Talk by Jakub Tomczak

You are all cordially invited to the AMLab seminar on Thursday September 27 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Jakub Tomczak will give a talk titled “Deep Learning and Bayesian Inference for Medical Imaging”. Afterwards there are the usual drinks and snacks.

The aim of the DeeBMED project was to develop a powerful automatic medical imaging tool that can cope with the main problems associated with complex images like medical scans, namely: multimodality of the data distribution, a large number of dimensions combined with a small number of examples, a small amount of labeled data, multi-source learning, and robustness to transformations. In order to counteract these issues I have proposed to use a probabilistic framework, namely the Variational Auto-Encoder (VAE), which combines deep learning and Bayesian inference. Within the project I have followed two lines of research:

– Development of the VAE by:
* enriching the encoder (Householder flow, Sylvester flow, Hyperspherical VAE);
* enriching the prior (VampPrior);
* enriching the decoder (ongoing work with Rianne van den Berg & Christos Louizos);
* learning fair representations (Hierarchical VampPrior VFAE);
* learning disentangled representations (ongoing work with Maximilian Ilse).

– Development of deep neural networks by:
* learning from large images, i.e., ~10,000×10,000 pixels (Deep MIL, and ongoing work with Nathan Ing, Arkadiusz Gertych, Beatrice Knudsen);
* learning from multiple sources, e.g., different views (ongoing work with Henk van Voorst).
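As background for the “enriching the prior” item above: the VampPrior replaces the standard Gaussian prior with a mixture of the variational posteriors evaluated at K learned pseudo-inputs u_k,

```latex
p_\lambda(z) = \frac{1}{K} \sum_{k=1}^{K} q_\phi(z \mid u_k),
```

so that the prior is coupled to the encoder and adapts to the aggregated posterior during training.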

During the talk I will outline the assumptions of the DeeBMED project and its successes. At the end, a possible direction for future work will be presented.

During the project I had the great pleasure of publishing with the following people (in alphabetical order):
* the University of Amsterdam: Rianne van den Berg, Philip Botros, Nicola de Cao, Tim Davidson, Luca Falorsi, Shi Hu, Maximilian Ilse, Thomas Kipf, Max Welling;
* the University of Oxford: Leonard Hasenclever;
* the Academic Medical Center in Amsterdam: Onno de Boer, Sybren Meijer;
* the Cedars-Sinai Medical Center in Los Angeles: Arkadiusz Gertych, Nathan Ing, Beatrice Knudsen.

Last but not least, all former and current members of AMLab, QUVA Lab, Delta Lab and the Philips Lab made my project successful through many discussions, meetings and seminars.

Slides (pdf)