You are all cordially invited to the AMLab seminar talk this Tuesday October 11 at 16:00 in C3.163, where Joan Bruna from the Courant Institute at New York University will give a talk titled “Addressing Computational and Statistical Gaps with Deep Neural Networks”. Afterwards there are the usual drinks and snacks!
Abstract: Many modern statistical questions are plagued by asymptotic regimes that separate our current theoretical understanding from what is possible given finite computational and sample resources. Important examples of such gaps appear in sparse inference, high-dimensional density estimation and non-convex optimization. In the first, proximal splitting algorithms efficiently solve the l1-relaxed sparse coding problem, but their performance is typically evaluated in terms of asymptotic convergence rates. In unsupervised high-dimensional learning, a major challenge is how to appropriately incorporate prior knowledge in order to beat the curse of dimensionality. Finally, the prevailing dichotomy between convex and non-convex optimization is not adequate to describe the diversity of optimization scenarios faced as soon as convexity fails.
In this talk we will illustrate how deep architectures can be used to attack such gaps. We will first see how a neural network sparse coding model (LISTA, Gregor & LeCun ’10) can be analyzed in terms of a particular matrix factorization of the dictionary, which leverages diagonalisation with invariance of the l1 ball, revealing a phase transition that is consistent with numerical experiments. We will then discuss image and texture generative modeling and super-resolution, a prime example of a high-dimensional inverse problem. In that setting, we will explain how multi-scale convolutional neural networks are equipped to beat the curse of dimensionality and provide stable estimation of high-frequency information. Finally, we will discuss recent research in which we explore to what extent the non-convexity of the loss surfaces arising in deep learning problems hurts gradient descent algorithms, by efficiently estimating the number of basins of attraction.
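For those unfamiliar with LISTA, here is a minimal sketch of its core idea, assuming the standard formulation from Gregor & LeCun ’10: the ISTA proximal iteration is unrolled into a fixed number of layers whose matrices and thresholds would then be learned by backpropagation (the ISTA-derived initialisation below, and the toy dictionary, are illustrative assumptions, not the factorization analyzed in the talk):

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista_forward(x, We, S, theta, n_layers=3):
    # Unrolled ISTA: z <- soft(We @ x + S @ z, theta). In LISTA the matrices
    # We (~ W^T / L) and S (~ I - W^T W / L) and the thresholds theta are
    # learned from data instead of being fixed by the dictionary W.
    b = We @ x
    z = soft_threshold(b, theta)
    for _ in range(n_layers - 1):
        z = soft_threshold(b + S @ z, theta)
    return z

# Toy usage with a random dictionary W (n_features x n_atoms):
rng = np.random.default_rng(0)
W = rng.normal(size=(20, 50))
L = np.linalg.norm(W, 2) ** 2                   # Lipschitz constant of the gradient
We, S = W.T / L, np.eye(50) - W.T @ W / L       # ISTA-equivalent initialisation
z = lista_forward(rng.normal(size=20), We, S, theta=0.1)
```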
You are all cordially invited to an AMLab seminar during the summer period, on Tuesday August 23 at 16:00 in C3.163, where Riaan Zoetmulder will give a talk titled “Deep Causal Inference”. Afterwards there are the usual drinks and snacks!
Abstract: Determining causality is important for many fields of science. A variety of algorithms have been developed that are capable of discerning the direction of causality from data. Recent developments in deep learning, however, have shown that artificial deep neural networks achieve excellent performance on a variety of classification problems. This work therefore seeks to ascertain whether causality can be determined using a deep learning approach. We have found that this is possible in two different ways: one can hand-design features and train a deep neural network on them, or one can design the deep neural network to detect features itself and learn how to classify accordingly.
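As a rough illustration of the first route (hand-designed features), the sketch below summarises each cause-effect pair by a few regression-residual statistics and trains a small neural network on them. The specific features, the cubic-fit choice, and the function names are hypothetical, chosen only to make the pipeline concrete; they are not the features used in the talk:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pair_features(x, y):
    # Hand-designed, illustrative features of one (x, y) sample: asymmetries in
    # the residuals of regressing y on x versus x on y carry information about
    # the causal direction.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    ry = y - np.polyval(np.polyfit(x, y, 3), x)   # residual of y | x
    rx = x - np.polyval(np.polyfit(y, x, 3), y)   # residual of x | y
    feats = []
    for r in (ry, rx):
        feats += [r.std(), ((r - r.mean()) ** 3).mean(), ((r - r.mean()) ** 4).mean()]
    return np.array(feats)

def train_direction_classifier(pairs, labels):
    # pairs: list of (x, y) sample arrays; labels: 1 if x causes y, else 0.
    X = np.stack([pair_features(x, y) for x, y in pairs])
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000)
    return clf.fit(X, labels)
```

The second route would instead feed the raw (or embedded) samples to the network and let it learn its own featurisation end to end.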
You are all cordially invited to the AMLab seminar on Tuesday June 21 at 16:00 in C3.163, where Matthias Reisser will give a talk titled “Distributed Bayesian Deep Learning”. Afterwards there are the usual drinks and snacks!
Abstract: I would like to give you an overview of what my PhD topic is going to be about, as well as present my first project along with initial results. Although deep learning is becoming more and more data efficient, it is still true that with more data, more complex models with better generalization capabilities can be trained. More data and bigger models require more computation, resulting in longer training times and slow experiment cycles. One valid approach to speeding up computation is to distribute it across machines. At the same time, in the truly huge data regime, as well as for privacy reasons, the data may not all be accessible from a single machine, which itself requires distributed computation. In a first project, we look at variational inference and a principled approach to distributed training of one joint model. I am looking forward to your opinions and will be grateful for any feedback. Although I am a QUVA member, every UvA employee is welcome to attend, regardless of whether you have signed the QUVA NDA.
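To make the general idea of distributing variational inference concrete, here is a minimal sketch under strong simplifying assumptions (a toy Gaussian model, simulated workers in one process, plain data parallelism rather than the talk’s method): the log-likelihood term of the ELBO decomposes over data shards, so each worker contributes a partial reparameterised gradient while the KL term against the prior is added once, centrally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(mu, 1), prior mu ~ N(0, 1), variational q(mu) = N(m, s^2).
data = rng.normal(2.0, 1.0, size=1000)
shards = np.array_split(data, 4)           # four simulated workers

m, log_s = 0.0, 0.0                        # variational parameters
lr = 1e-4
for step in range(5000):
    eps = rng.normal()                     # shared noise for this step
    mu = m + np.exp(log_s) * eps           # reparameterised posterior sample
    # Workers: each shard's gradient of sum_i log N(x_i; mu, 1) w.r.t. (m, log_s).
    g_m = sum((shard - mu).sum() for shard in shards)
    g_ls = g_m * np.exp(log_s) * eps
    # Server: subtract the gradient of KL(q || prior), then ascend the ELBO.
    g_m -= m
    g_ls -= np.exp(2 * log_s) - 1.0
    m += lr * g_m
    log_s += lr * g_ls

# Converges to the exact posterior: mean ~ x_bar, std ~ 1/sqrt(n + 1).
print(f"posterior mean ~ {m:.3f}, posterior std ~ {np.exp(log_s):.3f}")
```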
You are all cordially invited to the AMLab seminar on Tuesday May 31 at 16:00 in C3.163, where Matthijs Snel from Optiver will give a talk titled “An introduction to market making and data science at Optiver”. Afterwards there are the usual drinks and snacks!
Abstract: Optiver is an electronic market maker with significant presence on equity and derivatives exchanges around the world. Our automated trading strategies operate as semi-autonomous agents, processing information and making multiple decisions in the blink of an eye. In this talk, I will explain some basic market making concepts, supported by real-world examples of market microstructure. I will also provide an overview of what kind of data and challenges our strategies and machine learning applications deal with.
You are all cordially invited to the AMLab seminar on Tuesday May 24 at 16:00 in C4.174, where Ted Meeds will give a talk titled “Likelihood-free Inference by Controlling Simulator Noise”. Afterwards there are the usual drinks and snacks!
Abstract: Likelihood-free inference, or approximate Bayesian computation (ABC), is a general framework for performing Bayesian inference in simulation-based science. In this talk I will discuss two new approaches to likelihood-free inference that involve explicit control over a simulation’s randomness. By rewriting simulation code to take two sets of arguments, the simulation parameters and its random numbers, many algorithmic options open up. The first approach, called Optimisation Monte Carlo, is an algorithm that efficiently and independently samples parameters from the posterior by first sampling a set of random numbers from a prior distribution, then running an optimisation algorithm, with the random numbers held fixed, to match simulation statistics with observed statistics. The second approach is recent and ongoing research on a variational ABC algorithm that has been written in an auto-differentiation language, allowing the gradients of the variational parameters to be computed through the simulation code itself.
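A minimal sketch of the Optimisation Monte Carlo idea, under strong simplifying assumptions (a hypothetical one-parameter simulator, a squared-error match on a single summary statistic, and the importance weights omitted): drawing the random numbers u up front makes the simulator deterministic in theta, so an off-the-shelf optimiser can drive the simulated statistic onto the observed one:

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, u):
    # Hypothetical toy simulator, rewritten so that all randomness enters
    # through the explicit argument u instead of internal RNG calls.
    return theta + u

def omc_samples(y_obs, n_samples, rng):
    # For each posterior sample: fix the random numbers, then optimise theta
    # so the simulated statistic matches the observed statistic.
    samples = []
    for _ in range(n_samples):
        u = rng.normal()                          # random numbers drawn first
        loss = lambda th: float((simulate(th[0], u) - y_obs) ** 2)
        res = minimize(loss, x0=np.zeros(1))      # deterministic given u
        samples.append(res.x[0])                  # the full algorithm would also
                                                  # attach a Jacobian-based
                                                  # importance weight here
    return np.array(samples)

posterior = omc_samples(y_obs=1.5, n_samples=100, rng=np.random.default_rng(0))
```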
You are all cordially invited to the AMLab colloquium on Tuesday May 17 at 16:00 in C3.163, where Karen Ullrich will give a talk titled “Combining generative models and deep learning”. Afterwards there are the usual drinks and snacks!
Abstract: Deep learners have proven to perform well on very large datasets. For small datasets, however, one has to come up with new methods for modeling and training. My current project is in line with this thought: by combining a simple deep learner with a state space model, we hope to perform well on visual odometry.