Talk by Joan Bruna (NYU)

You are all cordially invited to the AMLab seminar talk this Tuesday, October 11, at 16:00 in C3.163, where Joan Bruna from the Courant Institute at New York University will give a talk titled “Addressing Computational and Statistical Gaps with Deep Neural Networks”. Afterwards there will be the usual drinks and snacks!

Abstract: Many modern statistical questions are plagued with asymptotic regimes that separate our current theoretical understanding from what is possible given finite computational and sample resources. Important examples of such gaps appear in sparse inference, high-dimensional density estimation and non-convex optimization. In the first, proximal splitting algorithms efficiently solve the l1-relaxed sparse coding problem, but their performance is typically evaluated in terms of asymptotic convergence rates. In unsupervised high-dimensional learning, a major challenge is how to appropriately incorporate prior knowledge in order to beat the curse of dimensionality. Finally, the prevailing dichotomy between convex and non-convex optimization is not adapted to describe the diversity of optimization scenarios faced as soon as convexity fails.
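
For readers unfamiliar with the setup, here is a minimal sketch (not from the talk) of a proximal splitting method for the l1-relaxed sparse coding problem: ISTA alternates a gradient step on the quadratic data term with the soft-thresholding proximal operator of the l1 norm. The dictionary D, observation y and penalty lam below are illustrative placeholders, not quantities from the abstract.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=100):
    # Solve min_z 0.5 * ||y - D z||^2 + lam * ||z||_1 by proximal gradient descent.
    # D: (m, n) dictionary, y: (m,) observation, lam: l1 penalty weight.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)           # gradient of the quadratic data-fit term
        z = soft_threshold(z - grad / L, lam / L)
    return z
```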

In this talk we will illustrate how deep architectures can be used to attack such gaps. We will first see how a neural network sparse coding model (LISTA, Gregor & LeCun ’10) can be analyzed in terms of a particular matrix factorization of the dictionary, which leverages diagonalisation together with invariance of the l1 ball, revealing a phase transition that is consistent with numerical experiments. We will then discuss image and texture generative modeling and super-resolution, a prime example of a high-dimensional inverse problem. In that setting, we will explain how multi-scale convolutional neural networks are equipped to beat the curse of dimensionality and provide stable estimation of high-frequency information. Finally, we will discuss recent research in which we explore to what extent the non-convexity of the loss surface arising in deep learning problems hurts gradient descent algorithms, by efficiently estimating the number of basins of attraction.
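
As a rough illustration of the LISTA architecture mentioned above (a sketch under our own assumptions, not material from the talk), the network unrolls a fixed number of ISTA-like iterations and replaces the matrices derived from the dictionary with learned parameters: an encoder We, a mutual-inhibition matrix S and thresholds theta. The parameter names are hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    # Same soft-thresholding nonlinearity as in the ISTA sketch above.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lista_forward(y, We, S, theta, n_layers=3):
    # Forward pass of a LISTA-style unrolled network (after Gregor & LeCun, 2010).
    # We: (n, m) learned encoder, S: (n, n) learned recurrence, theta: learned thresholds.
    z = soft_threshold(We @ y, theta)
    for _ in range(n_layers - 1):
        z = soft_threshold(We @ y + S @ z, theta)
    return z
```

In training, We, S and theta would be fit by backpropagation so that a few layers approximate the sparse codes that ISTA reaches only after many iterations; the talk's analysis of this model proceeds through a matrix factorization of the dictionary rather than through this code.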

Slides