Monthly Archives: June 2016

Talk by Tameem Adel

You are all cordially invited to the AMLab seminar on Tuesday June 28 at 16:00 in C3.163, where Tameem Adel will give a talk titled “Collapsed Variational Inference for Sum-Product Networks”. Afterwards there are the usual drinks!

Abstract: Sum-Product Networks (SPNs) are probabilistic inference machines that admit exact inference in time linear in the size of the network. Existing parameter learning approaches for SPNs are largely based on the maximum likelihood principle and are hence more prone to overfitting than Bayesian approaches. Exact Bayesian posterior inference for SPNs is computationally intractable. We recently proposed a novel deterministic collapsed variational inference algorithm for SPNs that is computationally efficient and easy to implement, and at the same time allows us to incorporate prior information into the optimization formulation. Experiments show a significant improvement in accuracy compared with a maximum likelihood based approach.
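
To make the linear-time claim concrete, here is a minimal sketch of bottom-up SPN evaluation (hypothetical structure and weights chosen for illustration, not the model from the talk). Every node is visited exactly once, so computing the probability of a complete assignment costs time linear in the number of edges:

```python
import numpy as np

# Nodes are tagged tuples. Leaves are indicators over a single binary
# variable; sum nodes mix children with normalized weights; product
# nodes combine children over disjoint sets of variables.

def leaf(var, value):
    return ("leaf", var, value)

def product(*children):
    return ("product", children)

def sum_node(weights, children):
    assert abs(sum(weights) - 1.0) < 1e-9, "sum-node weights must be normalized"
    return ("sum", weights, children)

def evaluate(node, x):
    """Exact probability of the complete assignment x, in one bottom-up pass."""
    kind = node[0]
    if kind == "leaf":
        _, var, value = node
        return 1.0 if x[var] == value else 0.0
    if kind == "product":
        return float(np.prod([evaluate(c, x) for c in node[1]]))
    _, weights, children = node  # sum node
    return sum(w * evaluate(c, x) for w, c in zip(weights, children))

# Toy SPN over two binary variables: a two-component mixture of
# fully factorized Bernoulli distributions.
spn = sum_node([0.6, 0.4], [
    product(sum_node([0.8, 0.2], [leaf(0, 1), leaf(0, 0)]),
            sum_node([0.3, 0.7], [leaf(1, 1), leaf(1, 0)])),
    product(sum_node([0.1, 0.9], [leaf(0, 1), leaf(0, 0)]),
            sum_node([0.5, 0.5], [leaf(1, 1), leaf(1, 0)])),
])

print(evaluate(spn, {0: 1, 1: 0}))  # 0.6*0.8*0.7 + 0.4*0.1*0.5 = 0.356
```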

Talk by Matthias Reisser

You are all cordially invited to the AMLab seminar on Tuesday June 21 at 16:00 in C3.163, where Matthias Reisser will give a talk titled “Distributed Bayesian Deep Learning”. Afterwards there are the usual drinks and snacks!

Abstract: I would like to give you an overview of what my PhD topic is going to be about, as well as present my first project along with initial results. Although deep learning is becoming more and more data-efficient, it remains true that with more data, more complex models with better generalization capabilities can be trained. More data and bigger models require more computation, resulting in longer training times and slower experiment cycles. One valid approach to speeding up computation is to distribute it across machines. At the same time, in the truly huge data regime, as well as for privacy reasons, the data may not be accessible from a single machine, which also calls for distributed computation. In a first project, we look at variational inference and a principled approach to distributed training of one joint model. I am looking forward to your opinions and will be grateful for any feedback. Although I am a QUVA member, every UvA employee is welcome to attend, regardless of whether you have signed the QUVA NDA.
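
As a rough illustration of why distributed training of one joint Bayesian model can be principled (an illustration only, with a hypothetical conjugate model, not the approach from the talk): each machine summarizes its data shard with sufficient statistics, the server combines them with the prior, and the result equals the posterior computed on the pooled data. Variational methods aim for similar behaviour in models where no such closed form exists; note also that the raw data never has to leave a machine, which speaks to the privacy motivation:

```python
import numpy as np

# Illustration only: Gaussian mean with known noise variance,
# data split across four "machines".

rng = np.random.default_rng(0)
true_mu, noise_var = 2.0, 1.0
shards = [true_mu + np.sqrt(noise_var) * rng.standard_normal(500)
          for _ in range(4)]

def local_stats(x):
    # Each worker communicates only (sum, count), never the raw data.
    return x.sum(), len(x)

# Server: combine the prior N(mu0, var0) with the aggregated statistics.
mu0, var0 = 0.0, 10.0
total_sum, total_n = map(sum, zip(*(local_stats(s) for s in shards)))
post_precision = 1.0 / var0 + total_n / noise_var
post_mean = (mu0 / var0 + total_sum / noise_var) / post_precision
print(f"posterior: mean {post_mean:.3f}, variance {1.0 / post_precision:.3f}")
```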

Talk by Thijs van Ommen

You are all cordially invited to the AMLab seminar on Tuesday June 7 at 16:00 in C3.163, where Thijs van Ommen will give a talk titled “Robust probability updating”. Afterwards there are the usual drinks and snacks!

Abstract: In the well-known Monty Hall problem, a car is hidden behind one of three doors, and the contestant wants to compute the probabilities of where the prize is hidden given partial information (a ‘message’) from the quizmaster. Most analyses of this problem assume that the quizmaster uses a fair coin flip to decide what message to give, whenever he has a choice. We don’t make this assumption, but instead use game theory to find a strategy for the contestant that works well against any strategy the quizmaster might use. With this approach, we can also deal with a large generalization of the problem: to any finite number of doors, with any initial distribution of the winning door, and with an arbitrary set of messages (subsets of doors) from which the quizmaster can choose. In Bayesian terms, this translates to computing a posterior distribution without knowing the full joint distribution. It turns out that in general, the optimal strategies for both players in this game depend on the loss function used to evaluate the contestant’s posterior distribution. However, for certain classes of message sets, there is a single optimal posterior that does not depend on the loss function, so that we obtain an objective and general answer to how one should update probabilities in the light of new information.
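
To see why the quizmaster's strategy matters, here is a small worked example in the classic three-door setting (standard Monty Hall only, not the generalized setting of the talk). Suppose the contestant picks door 0 and the quizmaster opens door 2; when the car is behind door 0, the quizmaster may open door 2 with any probability q of his choosing. The contestant's posterior depends on the unknown q, which is precisely what motivates a robust, game-theoretic treatment:

```python
from fractions import Fraction

def posterior_car_behind_picked(q):
    """P(car behind the picked door 0 | quizmaster opened door 2).

    Likelihoods: P(open 2 | car at 0) = q, P(open 2 | car at 1) = 1,
    P(open 2 | car at 2) = 0, with a uniform 1/3 prior on the car.
    """
    prior = Fraction(1, 3)
    joint_car0 = prior * q  # car behind the contestant's door
    joint_car1 = prior * 1  # car behind the other closed door
    return joint_car0 / (joint_car0 + joint_car1)

for q in (Fraction(1, 2), Fraction(1), Fraction(0)):
    print(f"q = {q}: P(car behind picked door) = {posterior_car_behind_picked(q)}")
# The fair-coin assumption q = 1/2 gives the textbook 1/3;
# q = 1 gives 1/2, and q = 0 gives 0.
```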

Slides