Category Archives: Talk

Talk by Max Welling

You are all cordially invited to the AMLab seminar on Tuesday January 31 at 16:00 in C3.163, where Max Welling will give a talk titled “AMLAB/QUVA’s progress in Deep Learning”. Afterwards there are the usual drinks and snacks!

Abstract: I will briefly describe the progress that has been made in the past year in AMLAB and QUVA in terms of deep learning. I will try to convey a coherent story of how some of these projects tie together into a bigger vision for the field. I will end with research questions that seem interesting for future projects.

Talk by Marco Loog (TUD)

You are all cordially invited to the AMLab seminar on Tuesday January 24 at 16:00 in C3.163, where Marco Loog will give a talk titled “Semi-Supervision, Surrogate Losses, and Safety Guarantees”. Afterwards there are the usual drinks and snacks!

Abstract: Users of classification tools tend to forget [or worse, might not even realize] that classifiers typically do not minimize the 0-1 loss, but a surrogate that upper-bounds the classification error on the training set.  Here we argue that we should also study these losses as such and we consider the problem of semi-supervised learning from this angle.  In particular, we look at the basic setting of linear classifiers and convex margin-based losses, e.g. hinge, logistic, squared, etc.  We investigate to what extent semi-supervision can be safe at least on the training set, i.e., we want to construct semi-supervised classifiers whose empirical risk is never larger than the risk achieved by their supervised counterparts.  [Based on work carried out together with Jesse Krijthe; see https://arxiv.org/abs/1612.08875 and https://arxiv.org/abs/1503.00269].
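
As a rough illustration of the surrogate-loss viewpoint (our own sketch, not part of the talk), the following NumPy snippet evaluates the 0-1 loss and three common margin-based surrogates at a few margins m = y·f(x); the logistic loss is rescaled by log 2 so that all three surrogates upper-bound the 0-1 loss. Function names are ours, for illustration only.

    import numpy as np

    # Margin-based losses at margin m = y * f(x), with labels y in {-1, +1}.
    # Each surrogate upper-bounds the 0-1 loss: loss(m) >= 1[m <= 0] for all m.
    def zero_one(m):  return (m <= 0).astype(float)
    def hinge(m):     return np.maximum(0.0, 1.0 - m)
    def logistic(m):  return np.log2(1.0 + np.exp(-m))   # base 2 so logistic(0) = 1
    def squared(m):   return (1.0 - m) ** 2

    margins = np.linspace(-2, 2, 9)
    for name, loss in [("0-1", zero_one), ("hinge", hinge),
                       ("logistic", logistic), ("squared", squared)]:
        print(name, np.round(loss(margins), 2))

The "safety" question in the talk is then phrased in terms of these surrogate risks on the training set, rather than the 0-1 error itself.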

Talk by Thomas Kipf

You are all cordially invited to the AMLab seminar on Tuesday December 13 at 16:00 in C3.163, where Thomas Kipf will give a talk titled “Deep Learning on Graphs with Graph Convolutional Networks”. Afterwards there are the usual drinks and snacks!

Abstract: Deep learning has recently enabled breakthroughs in the fields of computer vision and natural language processing. Little attention, however, has been devoted to the generalization of deep neural network-based models to datasets that come in the form of graphs or networks (e.g. social networks, knowledge graphs or protein-interaction networks). Generalizing convolutional neural networks, the workhorse of deep learning, to graph-structured data is not straightforward and a number of different approaches have been introduced (see [1] for an overview). I will review some of these models and introduce our own variant of graph convolutional networks [2] that achieves competitive performance on a number of semi-supervised node classification tasks. I will further talk about extensions to the basic graph convolutional framework, with special focus on our recently introduced variational graph auto-encoder [3]—a model for unsupervised learning and link prediction—and outline future research directions.

[1] Graph Convolutional Networks, http://tkipf.github.io/graph-convolutional-networks/
[2] T. N. Kipf and M. Welling, Semi-Supervised Classification with Graph Convolutional Networks, arXiv:1609.02907, 2016
[3] T. N. Kipf and M. Welling, Variational Graph Auto-Encoders, NIPS Bayesian Deep Learning Workshop, 2016
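
For readers unfamiliar with the model in [2], here is a minimal NumPy sketch of a single graph convolutional layer on a toy graph (dense matrices, our own illustrative code, not the authors' implementation):

    import numpy as np

    # One GCN layer in the spirit of [2]: H' = relu(D^-1/2 (A + I) D^-1/2 H W).
    def gcn_layer(A, H, W):
        A_hat = A + np.eye(A.shape[0])            # add self-connections
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))  # symmetric degree normalisation
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(0.0, A_norm @ H @ W)    # propagate, transform, ReLU

    # Toy example: 4 nodes on a path graph, 3 input features, 2 output features.
    rng = np.random.default_rng(0)
    A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
    H = rng.normal(size=(4, 3))
    W = rng.normal(size=(3, 2))
    print(gcn_layer(A, H, W))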

Talk by Sara Magliacane

You are all cordially invited to the AMLab seminar on Tuesday November 29 at 16:00 in C3.163, where Sara Magliacane will give a talk titled “Ancestral Causal Inference”. Afterwards there are the usual drinks and snacks!

Abstract: This is a practice talk for a ~12-minute general-audience talk at a NIPS workshop, so ideally it should require no previous knowledge of causality.

Discovering causal relations from data is at the foundation of the scientific method. Traditionally, cause-effect relations have been recovered from experimental data in which the variable of interest is perturbed, but seminal results such as the do-calculus and the PC/FCI algorithms demonstrate that, under certain assumptions, it is already possible to obtain significant causal information by using only observational data.

Recently, there have been several proposals for combining observational and experimental data to discover causal relations. These causal discovery methods are usually divided into two categories: constraint-based and score-based methods. Score-based methods typically evaluate models using a penalized likelihood score, while constraint-based methods use statistical independences to express constraints over possible causal models. The advantages of constraint-based over score-based methods are the ability to handle latent confounders naturally, the absence of parametric modeling assumptions, and the easy integration of complex background knowledge, especially in logic-based methods.
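
As a toy illustration of the constraint-based idea (not of ACI itself), the sketch below tests a single conditional independence X independent of Y given Z via partial correlation in a linear-Gaussian setting; independences of this kind are the constraints such methods feed into their search over causal models. The helper partial_corr_pvalue is hypothetical.

    import numpy as np
    from scipy import stats

    def partial_corr_pvalue(x, y, z):
        rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residual of x after regressing on z
        ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residual of y after regressing on z
        return stats.pearsonr(rx, ry)[1]              # p-value of the residual correlation

    rng = np.random.default_rng(1)
    z = rng.normal(size=2000)
    x = z + 0.5 * rng.normal(size=2000)               # Z -> X
    y = z + 0.5 * rng.normal(size=2000)               # Z -> Y
    print(partial_corr_pvalue(x, y, z))               # large p-value: X indep. of Y given Z is plausible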

We propose Ancestral Causal Inference (ACI), a logic-based method that provides accuracy comparable to the best state-of-the-art constraint-based methods, but improves on their scalability by using a more coarse-grained representation of causal information. Furthermore, we propose a method to score predictions according to their confidence. We provide some theoretical guarantees for ACI, such as soundness and asymptotic consistency, and demonstrate that it can outperform the state of the art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it to a challenging protein data set that so far had only been addressed with score-based methods.

Talk by Paul Rubenstein (Cambridge/Tübingen)

You are all cordially invited to the AMLab seminar this Tuesday November 22 at 16:00 in C3.163, where Paul Rubenstein (Cambridge/Tübingen) will give a talk titled “Structural Equation Models: Where do they come from?”. Afterwards there are the usual drinks and snacks!

Abstract:

Structural Equation Models (SEMs) are widely used in the causality community as a language to describe how the distribution of a system of random variables changes under intervention.
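
To make the notion concrete, here is a minimal two-variable SEM sketch (our own toy example, not from the talk): X := N_X and Y := 2X + N_Y, where an intervention do(X = x0) replaces X's structural equation while leaving Y's mechanism intact.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n, do_x=None):
        n_x, n_y = rng.normal(size=n), rng.normal(size=n)
        x = np.full(n, do_x) if do_x is not None else n_x   # do(X = x0) overrides X's equation
        y = 2 * x + n_y                                      # Y's mechanism is unchanged
        return x, y

    _, y_obs = sample(10000)            # observational distribution of Y
    _, y_do  = sample(10000, do_x=1.0)  # distribution of Y under do(X = 1)
    print(y_obs.mean(), y_do.mean())    # ~0 vs ~2: the intervention shifts Y's distribution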

Much work has been done to study certain properties of SEMs, for instance identifying conditions under which they can be learned from observational data or from restricted classes of interventions. However, many questions remain:

Under what conditions can we use an SEM to describe a system of random variables? Is it still possible to use them when we can only ‘coarsely’ measure the system? (For instance, if the timescale of consecutive observations of a process is slow compared to the timescale of the dynamics of the process itself.) What are ‘causal features’ and how can we derive an SEM to describe the relationship between them, given a description of the underlying system?

In this talk I will introduce a framework in which we can ask these questions in a precise way, which is a necessary prerequisite to placing SEMs on a stronger theoretical footing.

See you there!

Talk by Joan Bruna (NYU)

You are all cordially invited to the AMLab seminar talk this Tuesday October 11 at 16:00 in C3.163, where Joan Bruna from the Courant Institute at New York University will give a talk titled “Addressing Computational and Statistical Gaps with Deep Neural Networks”. Afterwards there are the usual drinks and snacks!

Abstract: Many modern statistical questions are plagued with asymptotic regimes that separate our current theoretical understanding from what is possible given finite computational and sample resources. Important examples of such gaps appear in sparse inference, high-dimensional density estimation and non-convex optimization. In the first, proximal splitting algorithms efficiently solve the l1-relaxed sparse coding problem, but their performance is typically evaluated in terms of asymptotic convergence rates. In unsupervised high-dimensional learning, a major challenge is how to appropriately combine prior knowledge in order to beat the curse of dimensionality. Finally, the prevailing dichotomy between convex and non-convex optimization is not adapted to describe the diversity of optimization scenarios faced as soon as convexity fails.

In this talk we will illustrate how deep architectures can be used to attack such gaps. We will first see how a neural network sparse coding model (LISTA, Gregor & LeCun ’10) can be analyzed in terms of a particular matrix factorization of the dictionary, which leverages diagonalisation with invariance of the l1 ball, revealing a phase transition that is consistent with numerical experiments. We will then discuss image and texture generative modeling and super-resolution, a prime example of a high-dimensional inverse problem. In that setting, we will explain how multi-scale convolutional neural networks are equipped to beat the curse of dimensionality and provide stable estimation of high-frequency information. Finally, we will discuss recent research in which we explore to what extent the non-convexity of the loss surface arising in deep learning problems hurts gradient descent algorithms, by efficiently estimating the number of basins of attraction.
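
As background for the LISTA discussion (a sketch under our own assumptions, not the analysis presented in the talk): the classical ISTA iteration solves the l1-relaxed sparse coding problem min_z 0.5*||x - Dz||^2 + lam*||z||_1 by alternating a gradient step with soft-thresholding, and LISTA unrolls a fixed number of such iterations into a trainable network.

    import numpy as np

    def ista(x, D, lam, n_iter=200):
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            u = z - grad / L                   # gradient step
            z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft-thresholding (prox of l1)
        return z

    rng = np.random.default_rng(0)
    D = rng.normal(size=(20, 50)); D /= np.linalg.norm(D, axis=0)
    z_true = np.zeros(50); z_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
    x = D @ z_true + 0.01 * rng.normal(size=20)
    print(np.nonzero(np.abs(ista(x, D, lam=0.05)) > 1e-3)[0])   # should recover a sparse support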

Slides

Talk by Riaan Zoetmulder

You are all cordially invited to an AMLab seminar during the summer period on Tuesday August 23 at 16:00 in C3.163, where Riaan Zoetmulder will give a talk titled “Deep Causal Inference”. Afterwards there are the usual drinks and snacks!

Abstract: Determining causality is important for many fields of science. A variety of algorithms have been developed that are capable of discerning the direction of causality given the data. Recent developments in deep learning, however, have shown that deep artificial neural networks achieve excellent performance on a variety of classification problems. This paper therefore seeks to ascertain whether causality can be determined using a deep learning approach. We have found that this is possible in two different ways: one can hand-design features and train a deep neural network on them, or one can design the deep neural network to detect features itself and learn how to classify accordingly.

Talk by Christos Louizos

You are all cordially invited to the AMLab seminar on Tuesday July 12 at 16:00 in C3.163, where Christos Louizos will give a talk titled “Bayesian Deep Learning and Uncertainty”. Afterwards there are the usual drinks and snacks!

Abstract: In the first part of this talk we will show how we can extend recent advances in variational inference for Bayesian neural networks with a simple idea. Instead of the relatively limited fully factorized Gaussian assumption in the posterior for the parameters of each layer, we will instead assume that each weight matrix is distributed as a Matrix Gaussian. This parametrisation has several potential advantages: it introduces correlations among the weights, thereby increasing the flexibility of the posterior; it reduces the number of variational parameters; and it furthermore allows for a (finite-rank) Gaussian Process interpretation of each layer and a Deep Gaussian Process interpretation of the entire network. We will show that this model is more effective than other Bayesian approaches on a regression and a classification task.
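
As a rough sketch of the parametrisation (illustrative only, not the authors' code): a matrix Gaussian MN(M, U, V) over an n_in x n_out weight matrix can be sampled as W = M + A E B^T with U = A A^T and V = B B^T, so the posterior requires one row covariance and one column covariance instead of a full covariance over all n_in*n_out entries.

    import numpy as np

    # Draw one weight-matrix sample from MN(M, U, V) via the Kronecker-structured factorisation.
    def sample_matrix_gaussian(M, U, V, rng):
        A = np.linalg.cholesky(U)             # row covariance factor
        B = np.linalg.cholesky(V)             # column covariance factor
        E = rng.normal(size=M.shape)          # standard Gaussian noise
        return M + A @ E @ B.T

    rng = np.random.default_rng(0)
    n_in, n_out = 5, 3
    M = np.zeros((n_in, n_out))
    U = np.eye(n_in) + 0.1                    # row (input) covariance
    V = np.eye(n_out) + 0.1                   # column (output) covariance
    W = sample_matrix_gaussian(M, U, V, rng)  # one posterior sample of the layer's weights
    print(W.shape)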

In the second part of this talk we will explore the predictive uncertainties that various Bayesian neural network approaches provide in classification tasks. Surprisingly, we will see that none of the methods seems to perform well on inputs that are not from the data distribution, and as a result they provide erroneously certain predictions. Interestingly, this seems to be a problem with the model class, as even frequentist methods suffer from the same problem. We conclude with open questions and possible directions of research to tackle this intriguing problem.

Talk by Peter O’Connor

You are all cordially invited to the AMLab seminar on Tuesday July 5 at 16:00 in C3.163, where Peter O’Connor will give a talk titled “Deep Spiking Networks”. Afterwards there are the usual drinks and snacks!

Abstract: We introduce the Spiking Multi-Layer Perceptron (SMLP). The SMLP is a spiking version of a conventional Multi-Layer Perceptron with rectified-linear units. Our architecture is event-based, meaning that neurons in the network communicate by sending “events” to downstream neurons, and that the state of each neuron is only updated when it receives an event. We show that the SMLP behaves identically, during both prediction and training, to a conventional deep network of rectified-linear units in the limiting case where we run the spiking network for a long time. We apply this architecture to a conventional classification problem (MNIST) and achieve performance very close to that of a conventional MLP with the same architecture. Our network is a natural architecture for learning based on streaming event-based data, and has potential applications in robotic systems, which require low power and low response latency.
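
A rough sketch of the event-driven idea (our own toy integrate-and-fire illustration, not the SMLP implementation): units only update their state when an input event arrives, and emit an output event whenever their potential crosses a threshold, so that firing rates approximate a rectified-linear response.

    import numpy as np

    def run_layer(input_events, W, n_steps, thresh=1.0):
        potential = np.zeros(W.shape[1])
        spike_counts = np.zeros(W.shape[1])
        for t in range(n_steps):
            for unit in input_events.get(t, []):      # only update state on incoming events
                potential += W[unit]
            fired = potential >= thresh               # units crossing the threshold emit an event
            spike_counts += fired
            potential[fired] -= thresh                # subtract the threshold after firing
        return spike_counts / n_steps                 # firing rate over the run

    rng = np.random.default_rng(0)
    W = np.abs(rng.normal(size=(4, 3))) * 0.3
    events = {t: [rng.integers(4)] for t in range(0, 200, 2)}   # one input event every 2 steps
    print(run_layer(events, W, n_steps=200))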

Talk by Tameem Adel

You are all cordially invited to the AMLab seminar on Tuesday June 28 at 16:00 in C3.163, where Tameem Adel will give a talk titled “Collapsed Variational Inference for Sum-Product Networks”. Afterwards there are the usual drinks!

Abstract: Sum-Product Networks (SPNs) are probabilistic inference machines that admit exact inference in linear time in the size of the network. Existing parameter learning approaches for SPNs are largely based on the maximum likelihood principle and hence are subject to overfitting compared to more Bayesian approaches. Exact Bayesian posterior inference for SPNs is computationally intractable. We recently proposed a novel deterministic collapsed variational inference algorithm for SPNs that is computationally efficient, easy to implement and at the same time allows us to incorporate prior information into the optimization formulation. Experiments show a significant improvement in accuracy compared with a maximum likelihood based approach.
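
To illustrate why inference is linear in the network size (a toy example of ours, not from the paper): evaluating an SPN is a single bottom-up pass in which every sum node takes a weighted sum and every product node a product of its children's values, so each node is visited exactly once.

    # Tiny hand-built SPN over two binary variables X1, X2:
    # root = sum over two product nodes; leaves are Bernoulli distributions.
    def leaf(p, x):               # Bernoulli leaf: P(X = x) with success probability p
        return p if x == 1 else 1.0 - p

    def evaluate(x1, x2):
        prod1 = leaf(0.9, x1) * leaf(0.2, x2)         # product node: independent factors
        prod2 = leaf(0.3, x1) * leaf(0.8, x2)
        return 0.6 * prod1 + 0.4 * prod2              # sum node: mixture with weights 0.6, 0.4

    # The four joint probabilities sum to one, as required of a normalised SPN.
    print(sum(evaluate(a, b) for a in (0, 1) for b in (0, 1)))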