Monthly Archives: May 2016

Talk by Matthijs Snel

You are all cordially invited to the AMLab seminar on Tuesday May 31 at 16:00 in C3.163, where Matthijs Snel from Optiver will give a talk titled “An introduction to market making and data science at Optiver”. Afterwards there are the usual drinks and snacks!

Abstract: Optiver is an electronic market maker with a significant presence on equity and derivatives exchanges around the world. Our automated trading strategies operate as semi-autonomous agents, processing information and making multiple decisions in the blink of an eye. In this talk, I will explain some basic market making concepts, supported by real-world examples of market microstructure. I will also give an overview of the kinds of data and challenges our strategies and machine learning applications deal with.

Talk by Ted Meeds

You are all cordially invited to the AMLab seminar on Tuesday May 24 at 16:00 in C4.174, where Ted Meeds will give a talk titled “Likelihood-free Inference by Controlling Simulator Noise”. Afterwards there are the usual drinks and snacks!

Abstract: Likelihood-free inference, or approximate Bayesian computation (ABC), is a general framework for performing Bayesian inference in simulation-based science. In this talk I will discuss two new approaches to likelihood-free inference that involve explicit control over a simulation’s randomness. By rewriting simulation code to take two sets of arguments, the simulation parameters and its random numbers, many algorithmic options open up. The first approach, called Optimisation Monte Carlo, is an algorithm that efficiently and independently samples parameters from the posterior by first sampling a set of random numbers from a prior distribution, then running an optimisation algorithm, with the random numbers held fixed, to match simulation statistics with observed statistics. The second approach is recent and ongoing research on a variational ABC algorithm written in an auto-differentiation language, allowing gradients with respect to the variational parameters to be computed through the simulation code itself.
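The first idea admits a compact sketch. Below is a minimal, hypothetical Python illustration of the Optimisation Monte Carlo loop; the toy simulator and summary statistic are my own, and the importance weights that the full algorithm attaches to each sample are omitted:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def simulator(theta, u):
        # toy simulator whose randomness u is an explicit argument, so
        # that for fixed u it is a deterministic function of theta
        return theta + u.mean()

    y_obs = 1.5                      # observed summary statistic
    samples = []
    for _ in range(1000):
        u = np.random.randn(10)      # sample the simulator's randomness
        # with u held fixed, optimise theta to match the observed statistic
        res = minimize_scalar(lambda th: (simulator(th, u) - y_obs) ** 2)
        samples.append(res.x)

The key design point is visible in the signature of simulator: because the random numbers are an argument rather than drawn internally, the inner optimisation runs over a deterministic function.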

Talk by Karen Ullrich

You are all cordially invited to the AMLab colloquium on Tuesday May 17 at 16:00 in C3.163, where Karen Ullrich will give a talk titled “Combining generative models and deep learning”. Afterwards there are the usual drinks and snacks!

Abstract: Deep learners have proven to perform well on very large datasets. For small datasets, however, one has to come up with new methods for modelling and training. My current project is in line with this thought: by combining a simple deep learner with a state space model, we hope to perform well on visual odometry.
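For concreteness, here is a very rough sketch of one way such a combination could look, with a learned observation model plugged into linear state-space dynamics; this is entirely my own illustration, as the abstract does not specify the model:

    import numpy as np

    def encode(image, W):
        # stand-in for a deep learner mapping an image to a
        # pseudo-observation of the state
        return np.tanh(W @ image.ravel())

    def filter_step(x, image, F, W, gain):
        x_pred = F @ x                       # predict with the linear dynamics
        z = encode(image, W)                 # observe through the network
        return x_pred + gain @ (z - x_pred)  # simple correction step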

Talk by Stephan Bongers

You are all cordially invited to the AMLab colloquium this coming Tuesday May 10 at 16:00 in C3.163, where Stephan Bongers will give a talk titled “Marginalization and Reduction of Structural Causal Models”. Afterwards there are drinks and snacks!

Abstract: Structural Causal Models (SCMs), also known as (Non-Parametric) Structural Equation Models (NP-SEMs), are widely used for causal modelling purposes. One of their advantages is that they allow for cycles, i.e., causal feedback loops. In this work, we give a rigorous treatment of Structural Causal Models. Two different types of variables play a role in SCMs: “endogenous” variables and “exogenous” variables (also known as “disturbance terms” or “noise” variables). We define a marginalization operation (“latent projection”) on SCMs that effectively removes a subset of the endogenous variables from the model. This operation can be seen as projecting the description of a full system onto the description of a subsystem. We show that this operation preserves the causal semantics. We also show that in the linear case, the number of exogenous variables can be reduced so that only a single one-dimensional disturbance term is needed for each endogenous variable. This “reduction” can lower the model complexity significantly and offers parsimonious representations for the linear case. We show that under some suitable conditions this reduction is not possible in general.
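For intuition, the linear case admits a short concrete sketch. Writing the SCM as x = B x + e and splitting the variables into an observed set O and a latent set L, substituting the solved latent equations back in gives the marginal model. A minimal numpy sketch, assuming I - B_LL is invertible (the notation here is mine, not the paper's):

    import numpy as np

    def marginalize_linear_scm(B, O, L):
        # split the coefficient matrix of x = B x + e into blocks
        B_OO, B_OL = B[np.ix_(O, O)], B[np.ix_(O, L)]
        B_LO, B_LL = B[np.ix_(L, O)], B[np.ix_(L, L)]
        M = np.linalg.inv(np.eye(len(L)) - B_LL)  # x_L = M (B_LO x_O + e_L)
        B_marg = B_OO + B_OL @ M @ B_LO           # coefficients on x_O
        E = B_OL @ M                              # new disturbance: e_O + E e_L
        return B_marg, E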

Slides

Talk by Patrick Putzky

You are all cordially invited to the AMLab colloquium this coming Tuesday May 3 at 16:00 in C3.163, where Patrick Putzky will give a talk titled “Neural Networks for estimation in inverse problems”. Afterwards there are drinks and snacks!

Abstract: Many statistical problems arising in the natural sciences can be treated as an inverse problem: measurements are transformed, subsampled, or noisy observations of a quantity of interest, and the main task is to infer the quantity of interest from the measurements.

Inverse problems are challenging because they are typically ill-posed. For example, if the number of variables in the quantity of interest exceeds the number of observed variables, there is no unique solution to the inverse problem. To constrain the solution space, the inverse problem is often phrased in terms of Bayes’ theorem, which makes it possible to inject prior knowledge about the quantity of interest into the inference procedure. In practice, however, priors are often chosen to be overly simple, both (1) with respect to the complexity of the data and (2) due to limitations in the inference procedure.
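As a concrete, textbook illustration (not the speaker’s method): for a linear forward model y = A x + noise with Gaussian noise and a Gaussian prior on x, Bayes’ theorem yields a closed-form MAP estimate, the familiar Tikhonov/ridge solution. A minimal numpy sketch, with sizes and variances chosen arbitrarily:

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_unknown = 50, 100            # fewer observations than unknowns
    A = rng.standard_normal((n_obs, n_unknown))
    x_true = rng.standard_normal(n_unknown)
    y = A @ x_true + 0.1 * rng.standard_normal(n_obs)

    sigma2, tau2 = 0.1 ** 2, 1.0          # noise and prior variances
    # MAP solves (A^T A / sigma2 + I / tau2) x = A^T y / sigma2; the
    # Gaussian prior makes the otherwise ill-posed problem well-posed
    x_map = np.linalg.solve(A.T @ A / sigma2 + np.eye(n_unknown) / tau2,
                            A.T @ y / sigma2)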

To overcome these limitations we propose an inference method that avoids the explicit notion of a prior. Instead, we suggest a neural network architecture that learns an inverse model for a given inference task. This approach has frequently been taken before to solve problems such as image denoising, image deconvolution, and image super-resolution. However, these approaches have mostly ignored the notion of the forward model.

Our approach builds on previous neural network approaches for learning inverse models while explicitly making use of the forward model. The result is an iterative model that draws inspiration from gradient-based inference methods. Our approach enables learning a task-specific inference model that, compared to the traditional approach, has the potential to (1) model complex data more reliably and (2) perform more efficiently in time-critical tasks.
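Schematically, one can picture the kind of update this leads to as follows; this is my own heavily simplified sketch of a gradient-inspired iterative inverse model, not the exact architecture from the talk:

    import numpy as np

    def learned_update(x, grad, params):
        # stand-in for a neural network; here just a learned step size
        return x - params["step"] * grad

    def iterative_inference(y, A, params, n_steps=10):
        x = A.T @ y                         # simple initialisation
        for _ in range(n_steps):
            grad = A.T @ (A @ x - y)        # gradient of the data fidelity:
            x = learned_update(x, grad, params)  # the forward model A enters
        return x                                 # every iteration explicitly

    # hypothetical usage: x_hat = iterative_inference(y, A, {"step": 1e-3})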

In the talk I will use the deconvolution problem in radio astronomy as a running example of an inverse problem, and I will demonstrate on simulated data how our approach compares to more traditional ones. As a second example I will show some results for image super-resolution.