Talk by Christos Louizos

You are all cordially invited to the AMLab seminar on Tuesday, July 12 at 16:00 in C3.163, where Christos Louizos will give a talk titled “Bayesian Deep Learning and Uncertainty”. Afterwards there are the usual drinks and snacks!

Abstract: In the first part of this talk we will show how we can build upon recent advances in variational inference for Bayesian neural networks with a simple idea. Instead of the relatively limited fully factorized Gaussian assumption for the posterior over the parameters of each layer, we assume that each weight matrix is distributed as a Matrix Gaussian. This parametrisation has several potential advantages: it introduces correlations among the weights and therefore increases the flexibility of the posterior, it reduces the number of variational parameters, and it furthermore allows for a (finite-rank) Gaussian Process interpretation of each layer and a Deep Gaussian Process interpretation of the entire network. We will show that this model is more effective than other Bayesian approaches on a regression and a classification task.
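To make the idea concrete, here is a minimal sketch (not the speaker's code; the function name and shapes are illustrative) of sampling a weight matrix from a Matrix Gaussian posterior MN(M, U, V) via the reparametrisation W = M + A E Bᵀ, with Cholesky factors U = AAᵀ and V = BBᵀ:

```python
import numpy as np

def sample_matrix_gaussian(M, U, V, rng):
    """Draw W ~ MN(M, U, V) as W = M + A @ E @ B.T, where
    U = A A^T is the row covariance and V = B B^T the column covariance."""
    A = np.linalg.cholesky(U)            # (n_in, n_in)
    B = np.linalg.cholesky(V)            # (n_out, n_out)
    E = rng.standard_normal(M.shape)     # (n_in, n_out), i.i.d. N(0, 1)
    return M + A @ E @ B.T

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
M = np.zeros((n_in, n_out))   # variational mean of the weight matrix
U = np.eye(n_in)              # row covariance: couples weights sharing an input
V = np.eye(n_out)             # column covariance: couples weights sharing an output
W = sample_matrix_gaussian(M, U, V, rng)   # one posterior sample of the layer

# With diagonal U and V the variance parameters number n_in + n_out, instead of
# the n_in * n_out of a fully factorised Gaussian -- one way to read the
# "fewer variational parameters" claim above.
```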

In the second part of this talk we will explore the predictive uncertainties that various Bayesian neural network approaches provide on classification tasks. Surprisingly, we will see that none of the methods seem to perform well on inputs that do not come from the data distribution, and as a result they provide erroneously certain predictions. Interestingly, this seems to be a problem with the model class, as even frequentist methods suffer from the same issue. We conclude with open questions and possible directions of research for tackling this intriguing problem.
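As a rough illustration of the kind of diagnostic this involves (a sketch under our own assumptions, not the speaker's experimental setup), one can draw Monte Carlo samples from an approximate posterior predictive and compare the predictive entropy on in-distribution versus out-of-distribution inputs. The Dirichlet draws below are dummy stand-ins for repeated stochastic forward passes of a network (e.g. MC dropout):

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution.

    prob_samples: array of shape (n_samples, n_inputs, n_classes) holding
    class probabilities from repeated stochastic forward passes.
    """
    p_mean = prob_samples.mean(axis=0)                  # average over MC samples
    return -(p_mean * np.log(p_mean + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(0)
# Dummy "predictions": peaked on in-distribution inputs, flat should be OOD.
in_dist = rng.dirichlet(alpha=[10, 1, 1], size=(50, 32))
ood     = rng.dirichlet(alpha=[1, 1, 1],  size=(50, 32))

print("mean entropy, in-distribution:", predictive_entropy(in_dist).mean())
print("mean entropy, OOD:            ", predictive_entropy(ood).mean())
# A well-calibrated model should give markedly higher entropy on OOD inputs;
# the talk's observation is that common approaches often do not.
```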

Talk by Peter O’Connor

You are all cordially invited to the AMLab seminar on Tuesday, July 5 at 16:00 in C3.163, where Peter O’Connor will give a talk titled “Deep Spiking Networks”. Afterwards there are the usual drinks and snacks!

Abstract: We introduce the Spiking Multi-Layer Perceptron (SMLP). The SMLP is a spiking version of a conventional Multi-Layer Perceptron with rectified-linear units. Our architecture is event-based, meaning that neurons in the network communicate by sending “events” to downstream neurons, and that the state of each neuron is only updated when it receives an event. We show that the SMLP behaves identically, during both prediction and training, to a conventional deep network of rectified-linear units in the limiting case where we run the spiking network for a long time. We apply this architecture to a conventional classification problem (MNIST) and achieve performance very close to that of a conventional MLP with the same architecture. Our network is a natural architecture for learning from streaming event-based data, and has potential applications in robotic systems, which require low power and low response latency.
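As a rough intuition for the event-based mechanism (a sketch under our own assumptions, not the paper's implementation), a spiking rectified-linear unit can be thought of as integrating its input into a potential and emitting a unit spike each time that potential crosses a threshold, so that its long-run firing rate approximates relu(x):

```python
def spiking_relu(x, n_steps=1000, threshold=1.0):
    """Approximate relu(x) by the average spike count per time step."""
    phi = 0.0          # membrane potential
    spikes = 0
    for _ in range(n_steps):
        phi += x                       # integrate the (constant) input
        while phi >= threshold:        # fire once per threshold crossing
            spikes += 1
            phi -= threshold
    return spikes / n_steps            # spike rate -> relu(x) as n_steps grows

for x in (-0.5, 0.0, 0.3, 1.7):
    print(f"x = {x:5.2f}  relu(x) = {max(x, 0.0):.2f}  "
          f"spike rate = {spiking_relu(x):.3f}")
```

Negative inputs never drive the potential over the threshold, giving the rectification for free; this matches the abstract's claim that the spiking network converges to the behaviour of a ReLU network when run for a long time.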