Monthly Archives: March 2017

Talk by Frederick Eberhardt (Caltech)

You are all cordially invited to the AMLab seminar on Tuesday March 21 at 16:00 in C3.163, where Frederick Eberhardt (Caltech) will give a talk titled “Causal Macro Variables”. Afterwards there are the usual drinks and snacks!

Abstract: Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? Often we have sensor level data but assume that the relevant causal interactions occur at a higher scale of aggregation. Sometimes we only have aggregate measurements of causal interactions at a finer scale. I will present recent work on a framework and method for the construction and identification of causal macro-variables that ensures that the resulting causal variables have well-defined intervention distributions. We have applied this approach to large scale climate data, for which we were able to identify the macro-phenomenon of El Niño using an unsupervised method on micro-level sea surface temperature and wind measurements over the equatorial Pacific.
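
For readers unfamiliar with the setup, the following toy sketch (Python with scikit-learn) illustrates only the general idea of summarizing many micro-level measurements into a single discrete macro-variable via unsupervised clustering. It is not the construction presented in the talk; the data and names are placeholders.

# Toy sketch: aggregate micro-level measurements into a candidate macro-variable
# by unsupervised clustering. Illustrative only; the framework in the talk
# imposes additional conditions so that the macro-variable has a well-defined
# intervention distribution.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical micro-level data: 500 time steps, each a flattened grid of
# sea surface temperature and wind readings (random placeholders here).
micro_measurements = rng.normal(size=(500, 200))

# Cluster the micro-states; the cluster label plays the role of a discrete
# macro-variable (e.g. an "El Nino-like" regime vs. a "neutral" regime).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
macro_state = kmeans.fit_predict(micro_measurements)

print(macro_state[:20])  # one macro-label per time step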

Talk by Taco Cohen

You are all cordially invited to the AMLab seminar on Tuesday March 14 at 16:00 in C3.163, where Taco Cohen will give a talk titled “Group Equivariant & Steerable CNNs”. Afterwards there are the usual drinks and snacks!

Abstract: Deep learning can be very effective, but typically requires large amounts of labelled data, which can be costly to collect. This is not only a major practical limitation to the applicability of deep learning, but also a fundamental barrier to AI: rapid learning is an essential part of intelligence.

In this talk I will present group equivariant networks, a natural generalization of convolutional networks that achieves improved statistical efficiency by exploiting symmetries like rotation and reflection. Instead of using convolutions, these networks use group equivariant convolutions. Group equivariant convolutions are easy to use, fast, and can be converted to standard convolutions after training. We show that simply replacing translational convolutions with group equivariant convolutions can improve image classification results. In the second part of the talk I will show how group equivariant nets can be scaled up to very large symmetry groups using steerable filters.
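
As a rough illustration of what a group equivariant convolution looks like for the group of 90-degree rotations (p4), here is a minimal PyTorch-style sketch that convolves the input with four rotated copies of the same filter bank. The function name and shapes are illustrative assumptions, not the implementation discussed in the talk.

# Minimal sketch of a first-layer p4 group convolution: equivariance to
# 90-degree rotations, obtained by convolving the input with four rotated
# copies of one filter bank.
import torch
import torch.nn.functional as F

def p4_conv2d(x, weight, bias=None):
    """x: (N, C_in, H, W); weight: (C_out, C_in, k, k).
    Returns (N, C_out, 4, H', W'): one feature map per rotation."""
    outs = []
    for r in range(4):
        w_r = torch.rot90(weight, r, dims=(2, 3))  # rotate each filter by r*90 degrees
        outs.append(F.conv2d(x, w_r, bias=bias))
    return torch.stack(outs, dim=2)  # extra orientation axis

# Tiny usage example with random data.
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = p4_conv2d(x, w)
print(y.shape)  # torch.Size([1, 8, 4, 30, 30])

Rotating the input by 90 degrees rotates the output maps and cyclically shifts the orientation axis, rather than changing the features arbitrarily; that predictable behaviour is the equivariance property the talk refers to.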

Talk by Karen Ullrich

You are all cordially invited to the AMLab seminar on Tuesday March 7 at 16:00 in C3.163, where Karen Ullrich will give a talk titled “Soft Weight-Sharing for Neural Network Compression”. Afterwards there are the usual drinks and snacks!

Abstract: The success of deep learning in numerous application domains has created the desire to run and train these models on mobile devices. This, however, conflicts with their compute-, memory-, and energy-intensive nature, leading to growing interest in compression. Recent work by Han et al. (2015a) proposes a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of “soft weight-sharing” (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.
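
To make the soft weight-sharing idea concrete, here is a minimal PyTorch-style sketch of the underlying mechanism from Nowlan & Hinton (1992): an extra loss term equal to the negative log-likelihood of the network weights under a Gaussian mixture, which during retraining pulls the weights toward a few shared values. The hyperparameters and mixture setup below are illustrative assumptions, not the configuration from the paper.

# Minimal sketch of soft weight-sharing: penalize weights by their negative
# log-likelihood under a Gaussian mixture, so that retraining clusters the
# weights around a few shared values (quantization) and around zero (pruning).
import torch

def mixture_nll(weights, means, log_stds, logit_pis):
    """Negative log-likelihood of a flat weight vector under a Gaussian mixture."""
    w = weights.view(-1, 1)                    # (num_weights, 1)
    pis = torch.softmax(logit_pis, dim=0)      # mixing proportions
    stds = torch.exp(log_stds)
    log_probs = (
        torch.log(pis)
        - 0.5 * torch.log(2 * torch.pi * stds ** 2)
        - 0.5 * ((w - means) / stds) ** 2
    )                                          # (num_weights, num_components)
    return -torch.logsumexp(log_probs, dim=1).sum()

# A component centred at zero encourages pruning; the other component means
# become the shared weight values. Everything is learnable here for simplicity.
means = torch.tensor([0.0, -0.2, 0.2], requires_grad=True)
log_stds = torch.full((3,), -2.0, requires_grad=True)
logit_pis = torch.zeros(3, requires_grad=True)

w = torch.randn(1000, requires_grad=True)      # stand-in for network weights
tau = 1e-3                                     # strength of the complexity term
loss = tau * mixture_nll(w, means, log_stds, logit_pis)
loss.backward()                                # gradients flow to weights and mixture

In a real compression run this penalty would be added to the task loss during retraining; afterwards each weight can be snapped to its most likely mixture component, which yields the quantized, sparse network.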