Author Archives: Thijs van Ommen

Talk by Max Welling

You are all cordially invited to the AMLab seminar on Tuesday January 31 at 16:00 in C3.163, where Max Welling will give a talk titled “AMLAB/QUVA’s progress in Deep Learning”. Afterwards there are the usual drinks and snacks!

Abstract: I will briefly describe the progress that has been made in the past year in AMLAB and QUVA in terms of deep learning. I will try to convey a coherent story of how some of these projects tie together into a bigger vision for the field. I will end with research questions that seem interesting for future projects.

Talk by Marco Loog (TUD)

You are all cordially invited to the AMLab seminar on Tuesday January 24 at 16:00 in C3.163, where Marco Loog will give a talk titled “Semi-Supervision, Surrogate Losses, and Safety Guarantees”. Afterwards there are the usual drinks and snacks!

Abstract: Users of classification tools tend to forget [or worse, might not even realize] that classifiers typically do not minimize the 0-1 loss, but a surrogate that upper-bounds the classification error on the training set. Here we argue that we should also study these losses as such, and we consider the problem of semi-supervised learning from this angle. In particular, we look at the basic setting of linear classifiers and convex margin-based losses, such as the hinge, logistic, and squared losses. We investigate to what extent semi-supervision can be safe at least on the training set, i.e., we want to construct semi-supervised classifiers whose empirical risk is never larger than the risk achieved by their supervised counterparts. [Based on work carried out together with Jesse Krijthe; see https://arxiv.org/abs/1612.08875 and https://arxiv.org/abs/1503.00269.]
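As a concrete illustration of the losses mentioned in the abstract (not part of the talk; function names and the toy margin values are ours), here is a minimal NumPy sketch. Writing the margin as m = y * f(x) with labels y in {-1, +1}, each convex surrogate upper-bounds the 0-1 loss 1[m <= 0]:

import numpy as np

# Margin m = y * f(x) for labels y in {-1, +1} and decision value f(x).
# Each convex surrogate below upper-bounds the 0-1 loss 1[m <= 0].
def zero_one(m): return (m <= 0).astype(float)
def hinge(m): return np.maximum(0.0, 1.0 - m)
def logistic(m): return np.log2(1.0 + np.exp(-m))  # base 2, so logistic(0) = 1
def squared(m): return (1.0 - m) ** 2

margins = np.linspace(-2.0, 2.0, 9)
for name, loss in [("0-1", zero_one), ("hinge", hinge),
                   ("logistic", logistic), ("squared", squared)]:
    print(f"{name:>8}: {np.round(loss(margins), 2)}")

The safety question in the abstract then asks for a semi-supervised classifier whose empirical value of such a surrogate on the training set never exceeds that of the supervised solution.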

Talk by Thomas Kipf

You are all cordially invited to the AMLab seminar on Tuesday December 13 at 16:00 in C3.163, where Thomas Kipf will give a talk titled “Deep Learning on Graphs with Graph Convolutional Networks”. Afterwards there are the usual drinks and snacks!

Abstract: Deep learning has recently enabled breakthroughs in the fields of computer vision and natural language processing. Little attention, however, has been devoted to the generalization of deep neural network-based models to datasets that come in the form of graphs or networks (e.g. social networks, knowledge graphs or protein-interaction networks). Generalizing convolutional neural networks, the workhorse of deep learning, to graph-structured data is not straightforward and a number of different approaches have been introduced (see [1] for an overview). I will review some of these models and introduce our own variant of graph convolutional networks [2] that achieves competitive performance on a number of semi-supervised node classification tasks. I will further talk about extensions to the basic graph convolutional framework, with special focus on our recently introduced variational graph auto-encoder [3]—a model for unsupervised learning and link prediction—and outline future research directions.

[1] T. N. Kipf, Graph Convolutional Networks, http://tkipf.github.io/graph-convolutional-networks/
[2] T. N. Kipf and M. Welling, Semi-Supervised Classification with Graph Convolutional Networks, arXiv:1609.02907, 2016
[3] T. N. Kipf and M. Welling, Variational Graph Auto-Encoders, NIPS Bayesian Deep Learning Workshop, 2016
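For readers unfamiliar with [2], the layer-wise propagation rule used there is H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W), where D is the degree matrix of A + I. A minimal NumPy sketch of a single such layer (toy graph and variable names are ours):

import numpy as np

def gcn_layer(A, H, W):
    # One layer of the model in [2]: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
    A_tilde = A + np.eye(A.shape[0])            # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))  # D^-1/2 as a vector
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_hat @ H @ W)       # ReLU nonlinearity

# Toy 4-node path graph with 3 input and 2 output features per node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2)

Stacking a few such layers and training the weight matrices by gradient descent on the labeled nodes gives the semi-supervised node classifier of [2].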

Talk by Sara Magliacane

You are all cordially invited to the AMLab seminar on Tuesday November 29 at 16:00 in C3.163, where Sara Magliacane will give a talk titled “Ancestral Causal Inference”. Afterwards there are the usual drinks and snacks!

Abstract: This is a practice talk for a ~12-minute general-audience talk at a NIPS workshop, so ideally it should require no previous knowledge of causality.

Discovering causal relations from data is at the foundation of the scientific method. Traditionally, cause-effect relations have been recovered from experimental data in which the variable of interest is perturbed, but seminal work such as the do-calculus and the PC/FCI algorithms demonstrates that, under certain assumptions, significant causal information can already be obtained from observational data alone.

Recently, there have been several proposals for combining observational and experimental data to discover causal relations. These causal discovery methods are usually divided into two categories: constraint-based and score-based methods. Score-based methods typically evaluate models using a penalized likelihood score, while constraint-based methods use statistical independences to express constraints over possible causal models. The advantages of constraint-based over score-based methods are the ability to handle latent confounders naturally, the absence of parametric modeling assumptions, and the easy integration of complex background knowledge, especially in logic-based methods.

We propose Ancestral Causal Inference (ACI), a logic-based method that provides accuracy comparable to the best state-of-the-art constraint-based methods, but improves on their scalability by using a more coarse-grained representation of causal information. Furthermore, we propose a method to score predictions according to their confidence. We provide theoretical guarantees for ACI, such as soundness and asymptotic consistency, and demonstrate that it can outperform the state of the art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it to a challenging protein data set that had so far only been addressed with score-based methods.
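To make the coarse-grained representation concrete: rather than whole causal graphs, ACI reasons over ancestral statements of the form "X causes Y, possibly indirectly". The following toy sketch (ours, not the ACI algorithm itself, which combines many such logical rules with weighted independence constraints) shows one property these statements satisfy, transitivity:

from itertools import product

# Toy illustration: ancestral ("causes, possibly indirectly") relations
# are transitive, so any accepted set of statements can be closed under
# the rule (X ~> Y) and (Y ~> Z) implies (X ~> Z).
def transitive_closure(ancestral):
    closure = set(ancestral)
    changed = True
    while changed:
        changed = False
        for (x, y), (u, z) in product(list(closure), repeat=2):
            if y == u and (x, z) not in closure:
                closure.add((x, z))
                changed = True
    return closure

print(sorted(transitive_closure({("A", "B"), ("B", "C")})))
# [('A', 'B'), ('A', 'C'), ('B', 'C')]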

Talk by Paul Rubenstein (Cambridge/Tübingen)

You are all cordially invited to the AMLab seminar on Tuesday November 22 at 16:00 in C3.163, where Paul Rubenstein (Cambridge/Tübingen) will give a talk titled “Structural Equation Models: Where do they come from?”. Afterwards there are the usual drinks and snacks!

Abstract:

Structural Equation Models (SEMs) are widely used in the causality community as a language to describe how the distribution of a system of random variables changes under intervention.

Much work has been done to study certain properties of SEMs, for instance identifying conditions under which they can be learned from observational data or from restricted classes of interventions. However, many questions remain:

Under what conditions can we use an SEM to describe a system of random variables? Is it still possible to use them when we can only ‘coarsely’ measure the system? (For instance, if the timescale of consecutive observations of a process is slow compared to the timescale of the dynamics of the process itself.) What are ‘causal features’, and how can we derive an SEM describing the relationships between them, given a description of the underlying system?

In this talk I will introduce a framework in which we can ask these questions in a precise way, a prerequisite for placing SEMs on a stronger theoretical footing.
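To fix intuitions ahead of the talk: an SEM assigns each variable a structural equation, and an intervention do(X = x0) replaces X’s equation by the constant x0 while leaving the other equations untouched. A minimal two-variable sketch (our own toy example, not from the talk):

import numpy as np

# Toy SEM:  X := N_X,  Y := 2*X + N_Y,  with N_X, N_Y ~ Normal(0, 1).
rng = np.random.default_rng(0)
n = 100_000

def sample(do_x=None):
    # do_x=None samples observationally; do_x=x0 performs do(X = x0),
    # i.e. replaces X's structural equation by the constant x0.
    x = rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 2 * x + rng.normal(size=n)
    return x, y

_, y_obs = sample()          # observational distribution of Y
_, y_do = sample(do_x=1.0)   # distribution of Y under do(X = 1)
print(round(y_obs.mean(), 2), round(y_do.mean(), 2))  # ~0.0 vs ~2.0

The two print-outs differ, which is exactly the change of distribution under intervention that SEMs are built to express.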

See you there!