Talk by Tom Claassen

You are all cordially invited to the AMLab seminar on Thursday December 13 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Tom Claassen (Radboud/UvA) will give a talk titled “Causal discovery from real-world data: relaxing the faithfulness assumption”. Afterwards there are the usual drinks and snacks.

Abstract: The so-called causal Markov and causal faithfulness assumptions are well-established pillars behind causal discovery from observational data. The first is closely related to the memorylessness property of dynamical systems, and allows us to predict observable conditional independencies in the data from the underlying causal model. The second is the causal equivalent of Ockham’s razor, and enables us to reason backwards from data to the causal model of interest.
Though theoretically reasonable, in practice with limited data from real-world systems we often encounter violations of faithfulness. Some of these, like weak long-distance interactions, are handled surprisingly well by benchmark constraint-based algorithms such as FCI. Other violations may imply inconsistencies between observed (conditional) independence statements in the data that cannot currently be handled both effectively and efficiently by most constraint-based algorithms. A fundamental question is whether our output retains any validity when not all our assumptions are satisfied, or whether it is still possible to reliably rescue parts of the model.
In this talk we introduce a novel approach based on a relaxed form of the faithfulness assumption that is able to handle many of the detectable faithfulness violations efficiently while ensuring the output causal model remains valid. Essentially we obtain a principled and efficient form of error correction on observed in/dependencies, which can significantly improve both the accuracy and the reliability of the output causal models in practice. Admittedly, it cannot handle all possible violations, but the relaxed faithfulness assumption may be a promising step towards a more realistic, and so more effective, underpinning of the challenging task of causal discovery from real-world systems.
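
To make the faithfulness issue concrete, here is a minimal sketch (my own illustration, not from the talk) of a classic violation: in a linear Gaussian model, a direct effect and an indirect path can cancel exactly, so two causally connected variables look marginally independent. The coefficients below are hand-picked to cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Linear Gaussian SCM: X -> Z -> Y plus a direct edge X -> Y.
# The direct effect (-0.6) exactly cancels the indirect one (0.75 * 0.8),
# so X and Y appear marginally independent although X causes Y.
x = rng.normal(size=n)
z = 0.75 * x + rng.normal(size=n)
y = 0.8 * z - 0.6 * x + rng.normal(size=n)

print(np.corrcoef(x, y)[0, 1])      # ~0: an "unfaithful" marginal independence
resid = y - 0.8 * z                 # subtract Z's contribution (condition on Z)
print(np.corrcoef(x, resid)[0, 1])  # clearly nonzero: the dependence reappears
```

A constraint-based algorithm that trusts the first test at face value would wrongly drop the edge from X to Y; reconciling such inconsistent in/dependence statements is exactly the error-correction problem the talk addresses.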

Talk by Daniel Worrall

You are all cordially invited to the AMLab seminar on Thursday November 29 at 16:00 in C3.163, where Daniel Worrall will give a talk titled “Semigroup Convolutional Neural Networks: Merging Scale-space and Deep Learning”. Afterwards there are the usual drinks and snacks!

Abstract: Group convolutional neural networks (GCNNs) are symmetric under predefined, invertible transformations of the input, e.g. rotations, flips, and translations. Can we extend this framework in the absence of invertibility, for instance to pixelated image downscalings, or causal time-shifting of audio signals? To this end, I present Semigroup Convolutional Neural Networks (SCNNs), a generalisation of GCNNs based on the related theory of semigroups. I will showcase a specialisation of this framework, a scale-equivariant SCNN, where the activations of each layer of the network live on a classical scale-space, finally linking the classical field of scale-spaces and modern deep learning.
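
As a small numerical aside (my own sketch, not from the talk): the prototypical semigroup here is Gaussian blurring, the generator of classical scale-space. Blurs compose by adding variances, but no blur can be undone, which is exactly why a group-based framework does not apply.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))

s1, s2 = 1.5, 2.0
# Semigroup law: blurring with s1 then s2 equals one blur of combined variance.
twice = gaussian_filter(gaussian_filter(img, sigma=s1, mode='wrap'),
                        sigma=s2, mode='wrap')
once = gaussian_filter(img, sigma=np.sqrt(s1**2 + s2**2), mode='wrap')

print(np.abs(once - twice).max())  # ~0, up to kernel truncation error
# No sigma undoes a blur, so the scalings form a semigroup, not a group.
```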

Talk by Maurice Weiler

You are all cordially invited to the AMLab seminar on Thursday November 22 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Maurice Weiler will give a talk titled “3D Steerable CNNs”. Afterwards there are the usual drinks and snacks.

Abstract: We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.
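
The analytically derived steerable kernel basis is the technical heart of the paper; as a toy stand-in (my own, and far weaker than the full SE(3) construction), here is the simplest instance of an equivariant convolution: for scalar fields, an isotropic kernel commutes with rotations. The snippet checks this for exact 90-degree rotations of a 3D volume on a periodic domain.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
vol = rng.normal(size=(16, 16, 16))            # a discretized scalar field on R^3

# An isotropic kernel (a function of |x| only) is invariant under rotations,
# so convolving with it is rotation-equivariant: the simplest steerable case.
g = np.arange(-1, 2)
d2 = g[:, None, None]**2 + g[None, :, None]**2 + g[None, None, :]**2
kernel = np.exp(-d2 / 2.0)

rot = lambda v: np.rot90(v, k=1, axes=(0, 1))   # exact 90-degree rotation
lhs = convolve(rot(vol), kernel, mode='wrap')   # rotate, then convolve
rhs = rot(convolve(vol, kernel, mode='wrap'))   # convolve, then rotate

print(np.allclose(lhs, rhs))                    # True: equivariance holds
```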

Talk by Peter Orbanz

You are all cordially invited to the AMLab seminar on Monday November 12 at 11:00 (note the unusual date and time!) in C3.163 (FNWI, Amsterdam Science Park), where Peter Orbanz (Columbia University) will give a talk titled “Statistical models of large graphs and networks”. Afterwards there are the usual drinks and snacks.

Abstract: Relational data is, roughly speaking, any form of data that can be represented as a graph: A social network, user preference data, protein-protein interactions, etc. A recent body of work, by myself and others, aims to develop a statistical theory of such data for problems where a single graph is observed (such as a small part of a large social network). Keywords include graphon, edge-exchangeable and sparse exchangeable graphs, and many latent variable models used in machine learning. I will summarize the main ideas and results of this theory: How and why the exchangeability assumptions implicit in commonly used models for such data may fail; what can be done about it; what we know about convergence; and implications of these results for methods popular in machine learning, such as graph embeddings and empirical risk minimization.
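
For readers unfamiliar with graphons: a graphon W : [0,1]² → [0,1] defines an exchangeable random graph via the sampling scheme sketched below (the choice W(u,v) = uv is an arbitrary example of mine, not from the talk). Note that this scheme always produces dense graphs, which is one reason the sparse exchangeable generalisations mentioned in the abstract are needed.

```python
import numpy as np

def sample_graphon(W, n, rng):
    """Sample an n-node exchangeable random graph: draw U_i ~ Uniform(0,1)
    i.i.d., then connect i and j independently with probability W(U_i, U_j)."""
    u = rng.uniform(size=n)
    p = W(u[:, None], u[None, :])
    coin = rng.uniform(size=(n, n)) < p
    adj = np.triu(coin, k=1)             # one coin flip per pair, no self-loops
    return adj | adj.T                   # symmetrize

rng = np.random.default_rng(0)
A = sample_graphon(lambda u, v: u * v, n=500, rng=rng)
print(A.sum() // 2, "edges")  # dense regime: Theta(n^2) edges on average
```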

Bio: Peter Orbanz is associate professor of statistics at Columbia University. His research interests include network and relational data, Bayesian nonparametrics, symmetry principles in machine learning and statistics, and hierarchies of latent variables. He was an undergraduate student at the University of Bonn, a PhD student at ETH Zurich, and a postdoctoral fellow at the University of Cambridge.

Slides (pdf)

Talk by Wendy Shang

You are all cordially invited to the AMLab seminar on Thursday November 8 at 16:00 in C3.163, where Wendy Shang will give a talk titled “Channel-Recurrent Autoencoding”. Afterwards there are the usual drinks and snacks!

Abstract: Understanding the functionalities of high-level features from deep neural networks (DNNs) is a long-standing challenge. Towards this ultimate goal, we propose a channel-recurrent architecture in place of the vanilla fully-connected layers, to construct more interpretable and expressive latent spaces. Building on Variational Autoencoders (VAEs), we integrate recurrent connections across channels into both the inference and generation steps, allowing high-level features to be captured in a global-to-local, coarse-to-fine manner. Combined with an adversarial loss and two novel regularizations, namely a KL objective weighting scheme over time steps and mutual information maximization between transformed latent variables and the outputs, our channel-recurrent VAE-GAN (crVAE-GAN) outperforms VAE-GAN in generating a diverse spectrum of high-resolution images while maintaining the same level of computational efficiency. Moreover, when applying crVAE-GAN in an attribute-conditioned generative setup, we augment an attention mechanism over each attribute to indicate the specific latent subset responsible for its modulation, further imposing semantic meaning on the latent space. Evaluations are carried out through both qualitative visual examination and quantitative metrics.
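
A minimal PyTorch sketch of how I read the channel-recurrent idea (the dimensions and names are my own, not the paper's code): the latent channels are split into blocks and an LSTM runs across the blocks, so later blocks are computed conditioned on earlier, coarser ones.

```python
import torch
import torch.nn as nn

class ChannelRecurrent(nn.Module):
    """Illustrative channel-recurrent transform: split a feature vector into
    channel blocks and process them sequentially with an LSTM, producing
    features in a global-to-local order."""
    def __init__(self, dim, n_blocks, hidden):
        super().__init__()
        assert dim % n_blocks == 0
        self.block = dim // n_blocks
        self.rnn = nn.LSTM(self.block, hidden, batch_first=True)
        self.out = nn.Linear(hidden, self.block)

    def forward(self, h):                              # h: (batch, dim)
        blocks = h.view(h.size(0), -1, self.block)     # (batch, n_blocks, block)
        z, _ = self.rnn(blocks)                        # recur across channel blocks
        return self.out(z).reshape(h.size(0), -1)      # back to (batch, dim)

x = torch.randn(8, 256)
layer = ChannelRecurrent(dim=256, n_blocks=8, hidden=128)
print(layer(x).shape)  # torch.Size([8, 256])
```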

Talk by Stephan Alaniz

You are all cordially invited to the second AMLab seminar this week, on Thursday November 1 at 16:00 in C3.163, where Stephan Alaniz will give a talk titled “Iterative Binary Decision”. Afterwards there are the usual drinks and snacks!

Abstract: The complexity of the functions a neural network approximates makes it hard to explain what a classification decision is based on. In this work, we present a framework that exposes more information about this decision-making process. Instead of producing a classification in a single step, our model iteratively makes binary sub-decisions which, when combined as a whole, ultimately produce the same classification result while revealing a decision tree as its thought process. While there is generally a trade-off between interpretability and accuracy, the insights our model generates come at a negligible loss in accuracy. The decision tree resulting from the sequence of binary decisions of our model reveals a hierarchical clustering of the data and can be used as learned attributes in zero-shot learning.
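
To illustrate the mechanism (a toy sketch under my own assumptions, with a hand-coded stand-in for the learned binary network): each step answers one binary question that halves the remaining set of candidate classes, and the recorded answers trace a root-to-leaf path in a decision tree.

```python
import numpy as np

def classify_iteratively(x, classes, decide):
    """Reduce a multi-class decision to a sequence of binary sub-decisions.
    `decide(x, left, right)` stands in for a learned binary model returning
    True if x belongs with the `left` half; the answers form a root-to-leaf
    path in an implicit decision tree."""
    path = []
    while len(classes) > 1:
        mid = len(classes) // 2
        left, right = classes[:mid], classes[mid:]
        go_left = decide(x, left, right)
        path.append(go_left)
        classes = left if go_left else right
    return classes[0], path

# Toy stand-in: decide by proximity to the mean class index of each half.
decide = lambda x, l, r: abs(x - np.mean(l)) < abs(x - np.mean(r))
label, path = classify_iteratively(2.2, list(range(8)), decide)
print(label, path)   # 2 [True, False, True]
```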

Talk by Giorgio Patrini

This week we’ll have two talks in the seminar: one by Stephan Alaniz in the regular Thursday slot (announcement will appear soon), and an extra one on Wednesday October 31 at 16:00 in C3.163, where Giorgio Patrini will give a talk titled “Sinkhorn AutoEncoders”. Afterwards there are the usual drinks and snacks!

Abstract: Optimal Transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show how this principle dictates the minimization of the Wasserstein distance between the encoder aggregated posterior and the prior, plus a reconstruction error. We prove that in the non-parametric limit the autoencoder generates the data distribution if and only if the two distributions match exactly, and that the optimum can be obtained by deterministic autoencoders. We then introduce the Sinkhorn AutoEncoder (SAE), which casts the problem into Optimal Transport on the latent space. The resulting Wasserstein distance is minimized by backpropagating through the Sinkhorn algorithm. SAE models the aggregated posterior as an implicit distribution and therefore does not need a reparameterization trick for gradient estimation. Moreover, it requires virtually no adaptation to different prior distributions. We demonstrate its flexibility by considering models with hyperspherical and Dirichlet priors, as well as a simple case of probabilistic programming. SAE matches or outperforms other autoencoding models in visual quality and FID scores.

Joint work with Marcello Carioni (KFU Graz), Patrick Forré, Samarth Bhargav, Max Welling, Rianne van den Berg, Tim Genewein (Bosch Centre for AI), Frank Nielsen (École Polytechnique)
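
For reference, here is a plain-numpy sketch of the Sinkhorn iteration between two point clouds, the routine one would backpropagate through in SAE (this version is not differentiable, and the squared-Euclidean cost and ε are my own choices):

```python
import numpy as np

def sinkhorn(x, y, eps=1.0, iters=200):
    """Entropy-regularized optimal transport between two point clouds with
    uniform weights. Returns the transport plan and the transport cost."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(iters):                              # alternating scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return P, (P * C).sum()

rng = np.random.default_rng(0)
z_post = rng.normal(loc=1.0, size=(64, 2))   # "aggregated posterior" samples
z_prior = rng.normal(size=(64, 2))           # prior samples
P, cost = sinkhorn(z_post, z_prior)
print(cost)
```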

Talk by Kihyuk Sohn

You are all cordially invited to the AMLab seminar on Thursday October 25 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Kihyuk Sohn (NEC) will give a talk titled “Deep Domain Adaptation in the Wild”. Afterwards there are the usual drinks and snacks.

Abstract:
Unsupervised domain adaptation is a promising avenue for enhancing the performance of deep neural networks on a target domain, using labels only from a source domain. However, it is not well studied at which levels of representation the adaptation should happen, nor what the complementary properties of those levels are. Furthermore, the theory of domain adaptation is limited to classification problems whose source and target domains share the same task. In this talk, we address these challenges in deep domain adaptation. Firstly, we argue that the adaptation may happen at various levels of representation, such as input pixels, intermediate features, or output labels, with different insights injected at different levels. Secondly, we generalize the theory of domain adaptation to the case where source and target domains do not necessarily share the same classification task. We demonstrate the effectiveness of the proposed methods in several vision applications, namely car recognition in the surveillance domain, face recognition across ethnicities, and semantic segmentation.
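
As a concrete reference point for feature-level adaptation (a standard technique, the DANN-style gradient reversal of Ganin & Lempitsky, not necessarily what the speaker uses): a layer that is the identity on the forward pass but flips gradients on the backward pass, so the feature extractor learns to fool a domain classifier.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so features are trained to *confuse* the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: features -> grad_reverse -> domain classifier. Minimizing the domain
# loss then pushes source and target features toward indistinguishability.
feats = torch.randn(16, 64, requires_grad=True)
domain_logits = torch.nn.Linear(64, 2)(grad_reverse(feats))
```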

Bio:
Kihyuk Sohn is a researcher in the Media Analytics group of NEC Laboratories America. His research interests lie in machine learning and computer vision, with a focus on deep representation learning from large-scale, structured and multimodal data for robust visual perception. He obtained his Ph.D. (2015) from the Department of Electrical Engineering and Computer Science, University of Michigan.

Talk by Shihan Wang

You are all cordially invited to the AMLab seminar on Thursday October 18 at 16:00 in C3.163, where Shihan Wang will give a talk titled “Apply Machine Learning and Data Mining to Promote Physical Activity”. Afterwards there are the usual drinks and snacks!

Abstract: In this talk, we will introduce our research in the “playful data-driven active urban living” (PAUL) project. Targeting the issue of physical inactivity in modern society, we aim to motivate less active people to participate in more physical activity. After an overview of the project, we will mainly present recent papers.
Driven by a large-scale dataset of Dutch people’s running records (over 10K people in about 4 years), we start by characterizing runners based on their different temporal activity patterns. Then, to account for this diversity of users, we study how the environmental situation (time, weather, geographical and social information) at the start of a run affects the running distance. A rule-based machine learning method is applied to capture combined situations that are frequently associated with long-distance runs. These environmental situations will then be used in a mobile system to identify the ‘right timing’ for motivating people to start longer-distance runs via message interventions.
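
The abstract does not name the rule learner, so purely as an illustration of extracting situational rules, here is a sketch using a shallow decision tree on synthetic data with hypothetical features; each root-to-leaf path reads as a rule like “mild weekend morning → long run”.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
# Hypothetical situational features at the start of a run.
temp = rng.uniform(0, 30, n)            # temperature (degrees Celsius)
weekend = rng.integers(0, 2, n)         # 1 = weekend
hour = rng.integers(6, 22, n)           # start hour of the run
# Synthetic label: long runs more likely on mild weekend mornings.
long_run = ((temp < 18) & (weekend == 1) & (hour < 11)).astype(int)

X = np.column_stack([temp, weekend, hour])
tree = DecisionTreeClassifier(max_depth=3).fit(X, long_run)
# Each root-to-leaf path is a human-readable situational rule.
print(export_text(tree, feature_names=["temp", "weekend", "hour"]))
```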

Talk by Patrick Forré

You are all cordially invited to the AMLab seminar on Thursday October 11 at 16:00 in C3.163, where Patrick Forré will give a talk titled “Non-linear structural causal models with cycles and latent confounders”. Afterwards there are the usual drinks and snacks!

Abstract: In this talk we will present the main results of two of our recent papers.
We will introduce a flexible class of general structural causal models that allows for linear and non-linear functional relations (like neural networks, etc.), arbitrary probability distributions (discrete, continuous, mixtures, etc.), causal cycles (like feedback, etc.) and latent variables (a.k.a. confounders). For such models we will demonstrate several desirable properties, show how to do causal reasoning, and present the rules of do-calculus and graphical criteria for conditional independence relations. We will also show how the latter can be exploited by causal discovery algorithms in this general context.
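
To fix ideas (a toy construction of my own, not from the papers): the smallest cyclic SCM has two equations that refer to each other. If the maps are contractive, the system has a unique solution, which can be found by fixed-point iteration; an intervention cuts one equation and re-solves the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
e_x, e_y = rng.normal(size=n), rng.normal(size=n)

# Cyclic SCM: X = 0.5*tanh(Y) + E_X,  Y = 0.5*tanh(X) + E_Y.
# Both maps are contractions, so a unique solution exists; iterate to find it.
x = np.zeros(n)
y = np.zeros(n)
for _ in range(100):
    x = 0.5 * np.tanh(y) + e_x
    y = 0.5 * np.tanh(x) + e_y

# Intervention do(X = 2): replace X's equation by the constant 2, keep Y's.
y_do = 0.5 * np.tanh(2.0) + e_y
print(y.mean(), y_do.mean())   # observational vs. interventional mean of Y
```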