Monthly Archives: September 2017

Talk by Christos Louizos

You are all cordially invited to the AMLab seminar on Tuesday October 3 at 16:00 in C3.163, where Christos Louizos will give a talk titled “Bayesian Uncertainty and Compression for Deep Learning”. Afterwards there are the usual drinks and snacks!

Abstract:
Deep Learning has shown considerable success in a wide range of domains due to its rich parametric form and natural scalability to big datasets. Nevertheless, it has limitations that prevent its adoption in certain problems. Recent works have shown that deep networks suffer from over-parametrization: they can be significantly pruned without any loss in performance. This implies a lot of wasteful computation and resource use, and avoiding it can lead to large speedups. Furthermore, current neural networks produce unreliable uncertainty estimates, which prevents their usage in domains that involve critical decision making and safety.

In this talk we will show how these two relatively distinct problems can be addressed under a common framework based on Bayesian inference. In particular, we will show that by adopting a more elaborate version of Gaussian dropout we can obtain deep learning models that have robust uncertainty estimates on a variety of tasks and architectures, while simultaneously providing compressed networks in which most of the parameters and computation have been removed.
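As a rough illustration of the basic ingredient mentioned above, Gaussian dropout multiplies each activation (or weight) by noise drawn from N(1, α); in variational treatments, a large learned α for a weight signals that it can be pruned. The sketch below is an invented minimal example of the multiplicative-noise step only, not the talk's actual method; the function name, shapes, and α values are illustrative assumptions.

```python
import numpy as np

def gaussian_dropout(x, alpha, rng):
    """Multiply x elementwise by noise ~ N(1, alpha).

    E[output] = x and Var[output] = alpha * x**2, so the layer is
    unbiased in expectation while injecting multiplicative noise.
    """
    noise = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=x.shape)
    return x * noise

rng = np.random.default_rng(0)
x = np.ones((4, 3))          # toy activations
y = gaussian_dropout(x, alpha=0.25, rng=rng)
print(y.shape)               # same shape as the input
```

In the variational-dropout view, α is learned per weight rather than fixed; weights whose α grows large contribute mostly noise and can be removed, which is what links the uncertainty and compression stories.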

Talk by Stephan Bongers

You are all cordially invited to the AMLab seminar on Tuesday September 19 at 16:00 in C3.163, where Stephan Bongers will give a talk titled “Marginalization of Structural Causal Models with feedback”. Afterwards there are the usual drinks and snacks!

Abstract: Structural causal models (SCMs), also known as non-parametric structural equation models (NP-SEMs), are widely used for causal modeling purposes. This talk consists of two parts: part one gives a rigorous treatment of structural causal models, dealing with measure-theoretic complications that arise in the presence of feedback, and part two deals with the marginalization of SCMs.

In part one we deal with recursive models (those without feedback), models where the solutions to the structural equations are unique, and arbitrary non-recursive models, where the solutions may be non-existent or non-unique. We show how we can reason about causality in these models and how this differs from the recursive causal perspective.

In part two, we address the question of how to marginalize an SCM (possibly with feedback), consisting of endogenous and exogenous variables, onto a subset of the endogenous variables. Marginalizing an SCM projects it down to an SCM on a subset of the endogenous variables, yielding a more parsimonious but causally equivalent representation. We give an abstract definition of marginalization and propose two constructive approaches to marginalizing SCMs. Each constructive approach defines a marginalization operation that effectively removes a subset of the endogenous variables from the model and leads to an SCM with the same causal semantics as the original. We provide several conditions under which such marginalizations exist.
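To make the idea of marginalization concrete, here is a toy acyclic example: in an SCM with equations X1 = E1, X2 = 2·X1 + E2, X3 = X2 + E3, substituting X2's equation into X3's removes X2 while preserving the induced distribution over {X1, X3}. The equations are invented for illustration only; the talk's constructions also handle the far harder feedback case.

```python
# Toy SCM over endogenous X1, X2, X3 with exogenous noise E1, E2, E3.
def full_scm(e1, e2, e3):
    x1 = e1
    x2 = 2 * x1 + e2
    x3 = x2 + e3
    return x1, x3          # observe only X1 and X3

# Marginal SCM: X2 eliminated by substituting its equation into X3's.
def marginal_scm(e1, e2, e3):
    x1 = e1
    x3 = 2 * x1 + (e2 + e3)  # same mapping from noise to (X1, X3)
    return x1, x3

# Both models yield identical (X1, X3) for every noise setting.
print(full_scm(1.0, 0.5, -0.2) == marginal_scm(1.0, 0.5, -0.2))
```

The marginal model is more parsimonious (one fewer equation and variable) yet causally equivalent on the retained variables, which is the sense of "causally equivalent representation" in the abstract.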