You are all cordially invited to the UvA-Bosch Delta Lab seminar on Thursday October 17th at 15:00 in A2.11 on the Roeterseilandcampus, where David Blei, well known for his fantastic work on LDA, Bayesian nonparametrics, and variational inference, will give a talk on “The Blessings of Multiple Causes”.
Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.
How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the correlation among multiple causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We demonstrate the deconfounder on real-world data and simulation studies, and describe the theoretical requirements for the deconfounder to provide unbiased causal estimates.
This is joint work with Yixin Wang.
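The two-step recipe from the abstract, fit a factor model to the multiple causes and then adjust for the inferred substitute confounder, can be sketched in a few lines. This is an illustrative toy only: rank-1 PCA stands in for the probabilistic factor models the talk discusses, the equal loadings and all variable names are assumptions for the demo, and the adjustment is done per cause to avoid exact collinearity (the PCA score is a linear combination of all the causes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated study: an *unobserved* confounder z drives all the causes
# and the outcome, so a naive regression is biased.
n, m = 2000, 10                          # samples, number of causes
z = rng.normal(size=(n, 1))              # unobserved confounder
loadings = np.ones((1, m))               # equal loadings, for a clean demo
causes = z @ loadings + 0.3 * rng.normal(size=(n, m))
y = 2.0 * causes[:, 0] + 3.0 * z[:, 0] + 0.1 * rng.normal(size=n)

# Step 1 (unsupervised ML): fit a factor model to the causes; here a
# rank-1 PCA via the SVD plays that role.
X = causes - causes.mean(0)
U, S, _ = np.linalg.svd(X, full_matrices=False)
z_hat = U[:, :1] * S[:1]                 # substitute confounder

# Step 2: estimate the effect of cause 0, adjusting for z_hat.
design = np.column_stack([causes[:, 0], z_hat[:, 0], np.ones(n)])
deconf = np.linalg.lstsq(design, y, rcond=None)[0][0]
naive = np.linalg.lstsq(
    np.column_stack([causes[:, 0], np.ones(n)]), y, rcond=None)[0][0]

print(f"true effect 2.0 | naive {naive:.2f} | deconfounded {deconf:.2f}")
```

The naive estimate absorbs the confounder's effect, while adjusting for the substitute confounder brings the estimate back toward the true value; the talk's theoretical requirements spell out when this is guaranteed.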
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and application. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), a Guggenheim fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research. He is a fellow of the ACM and the IMS.
You are all cordially invited to the special AMLab seminar on Tuesday 15th October at 12:00 in C1.112, where Will Grathwohl, from David Duvenaud’s group in Toronto, will give a talk titled “The many virtues of Incorporating energy-based generative models into discriminative learning”.
Will is one of the authors behind many great recent papers.
Abstract: Generative models have long been promised to benefit downstream discriminative machine learning applications such as out-of-distribution detection, adversarial robustness, uncertainty quantification, semi-supervised learning and many others. Yet, with a few notable exceptions, methods for these tasks based on generative models are considerably outperformed by hand-tailored methods for each specific task. In this talk, I will advocate for the incorporation of energy-based generative models into the standard discriminative learning framework. Energy-Based Models (EBMs) can be much more easily incorporated into discriminative models than alternative generative modeling approaches and can benefit from network architectures designed for discriminative performance. I will present a novel method for jointly training EBMs alongside classifiers and demonstrate that this approach allows us to build models which rival the performance of state-of-the-art generative models and discriminative models within a single model. Further, we demonstrate that our joint model gains many desirable properties such as a built-in mechanism for out-of-distribution detection, improved calibration, and improved robustness to adversarial examples — rivaling or improving upon hand-designed methods for each task.
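The core move in this line of work is to reinterpret a classifier's logits f(x) as an energy-based model: setting E(x, y) = -f(x)[y] gives a joint density p(x, y) ∝ exp(f(x)[y]) whose conditional p(y|x) is exactly the usual softmax, while the marginal energy E(x) = -logsumexp_y f(x)[y] gives an unnormalized density over inputs for free. A minimal numpy sketch of just that reinterpretation (the toy linear "classifier" is an assumption for the demo; the actual training of the marginal, e.g. with SGLD sampling, is omitted):

```python
import numpy as np

def logsumexp(v):
    """Numerically stable log-sum-exp."""
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

# Toy "classifier": logits f(x) from a fixed random linear map.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))          # 5 input features, 3 classes

x = rng.normal(size=5)
f = x @ W                            # logits f(x)

# Discriminative reading: p(y|x) = softmax(f(x)).
p_y_given_x = np.exp(f - logsumexp(f))

# Generative reading: E(x, y) = -f(x)[y], so p(x, y) ∝ exp(f(x)[y]),
# and the marginal energy is E(x) = -logsumexp_y f(x)[y].
E_xy = -f
E_x = -logsumexp(f)

# The two readings agree: p(y|x) = exp(-E(x, y)) / exp(-E(x)).
recovered = np.exp(-E_xy) / np.exp(-E_x)
assert np.allclose(recovered, p_y_given_x)
print("p(y|x):", np.round(p_y_given_x, 3))
```

This is why EBMs slot into discriminative architectures so easily: no extra parameters are needed, only the extra training signal on E(x).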
You are all cordially invited to the AMLab seminar on Thursday 10th October at 14:00 in D1.113, where Andy Keller will give a talk titled “Approaches to Learning Approximate Equivariance”. There are the usual drinks and snacks!
Abstract: In this talk we will discuss a few proposed approaches to learning approximate equivariance directly from data. These approaches range from weakly supervised to fully unsupervised, relying either on mutual information bounds or on inductive biases, respectively. Critical discussion will be encouraged as much of the work is in early phases. Preliminary results will be shown to demonstrate the validity of the concepts.
You are all cordially invited to the AMLab seminar on Thursday 3rd October at 14:00 in B0.201, where Bhaskar Rao (visiting researcher; bio below) will give a talk titled “Scale Mixture Modeling of Priors for Sparse Signal Recovery”. There are the usual drinks and snacks!
Abstract: This talk will discuss Bayesian approaches to solving the sparse signal recovery problem. In particular, methods based on priors that admit a scale mixture representation will be discussed, with emphasis on Gaussian scale mixture modeling. In the context of MAP estimation, iterative reweighted approaches will be developed. The scale mixture modeling naturally leads to a hierarchical framework, and empirical Bayesian methods motivated by this hierarchy will be highlighted. The pros and cons of the two approaches, MAP versus empirical Bayes, will be a subject of discussion.
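One member of the iterative reweighted family the abstract mentions can be sketched concretely: MAP estimation under a Gaussian scale mixture prior alternates between re-estimating the per-coefficient scales and solving a weighted ridge regression. The sketch below uses the simple scale update gamma_i = |x_i| (a FOCUSS-style reweighted ℓ2 scheme for a Laplace-type prior); the problem sizes, the damping constant, and the update rule are illustrative assumptions, not the talk's specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse recovery: y = A x + noise, with x mostly zeros.
n, m, k = 20, 50, 3                  # measurements, dimension, nonzeros
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = np.array([3.0, -2.0, 1.5])
y = A @ x_true + 0.01 * rng.normal(size=n)

# MAP under a Gaussian scale mixture prior -> iterative reweighted l2:
# re-estimate each scale gamma_i from the current x_i, then re-solve a
# weighted ridge regression in closed form.
lam = 1e-4                           # noise-level regularizer
x = np.linalg.pinv(A) @ y            # initial (non-sparse) estimate
for _ in range(30):
    gamma = np.abs(x) + 1e-10        # scale update; small scales -> shrinkage
    G = np.diag(gamma)
    # Weighted ridge solution: x = G A^T (A G A^T + lam I)^{-1} y
    x = G @ A.T @ np.linalg.solve(A @ G @ A.T + lam * np.eye(n), y)

support = np.flatnonzero(np.abs(x) > 0.1)
print("true support:", np.sort(np.flatnonzero(x_true)))
print("recovered:   ", support)
```

Coefficients whose scales shrink toward zero are pruned automatically, which is exactly the sparsity-promoting behavior the hierarchical view explains.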
You are all cordially invited to the second AMLab seminar this week, on Thursday November 1 at 16:00 in C3.163, where Stephan Alaniz will give a talk titled “Iterative Binary Decision”. Afterwards there are the usual drinks and snacks!
Abstract: The complexity of the functions a neural network approximates makes it hard to explain what its classification decision is based on. In this work, we present a framework that exposes more information about this decision-making process. Instead of producing a classification in a single step, our model iteratively makes binary sub-decisions which, when combined as a whole, ultimately produce the same classification result while revealing a decision tree as its thought process. While there is generally a trade-off between interpretability and accuracy, the insights our model generates come at a negligible loss in accuracy. The decision tree resulting from the sequence of binary decisions of our model reveals a hierarchical clustering of the data and can be used as learned attributes in zero-shot learning.
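The idea of composing one classification out of a sequence of binary sub-decisions can be illustrated with a hand-rolled label bisection. This is purely illustrative scaffolding: the talk's model *learns* its splits from data, whereas here the splits are fixed halvings and the scorer is a hypothetical stand-in.

```python
# Classify into one of K classes via binary sub-decisions, each keeping
# one half of the remaining candidate label set; the recorded choices
# form one root-to-leaf path of a decision tree.
def iterative_binary_decision(score, labels):
    """score(halves) -> 0 or 1, the half of `labels` to keep."""
    path = []
    while len(labels) > 1:
        mid = len(labels) // 2
        halves = (labels[:mid], labels[mid:])
        choice = score(halves)
        path.append(choice)          # the exposed binary sub-decision
        labels = halves[choice]
    return labels[0], path           # final class + decision path

# Hypothetical scorer that "knows" the target, just to show the mechanics.
target = "cat"
def toy_score(halves):
    return 0 if target in halves[0] else 1

label, path = iterative_binary_decision(toy_score, ["dog", "cat", "fox", "owl"])
print(label, path)
```

Reading off the paths taken for many inputs reconstructs the tree, which is what yields the hierarchical clustering mentioned in the abstract.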
Noud de Kroon joined the UvA in October 2018 as a PhD student in AMLab, under the joint supervision of dr. Joris Mooij and dr. Danielle Belgrave (Microsoft Research Cambridge). Previously, he obtained a bachelor’s degree in software science at Eindhoven University of Technology and a master’s degree in computer science at the University of Oxford. His research focuses on combining causality and reinforcement learning in order to make better decisions and improve data efficiency, with applications for example in the medical domain.
For more information about this vacancy, please visit Vacancies
You are all cordially invited to the AMLab seminar on Tuesday June 21 at 16:00 in C3.163, where Matthias Reisser will give a talk titled “Distributed Bayesian Deep Learning”. Afterwards there are the usual drinks and snacks!
Abstract: I would like to give you an overview of what my PhD topic is going to be about, as well as present my first project along with initial results. Although deep learning is becoming more and more data efficient, it is still true that with more data, more complex models with better generalization capabilities can be trained. More data and bigger models require more computation, resulting in longer training times and slow experiment cycles. One valid approach to speeding up computations is to distribute them across machines. At the same time, in the truly huge data regime, as well as for privacy reasons, data may not be accessible from any single machine, requiring distributed computations. In a first project, we look at variational inference and a principled approach to distributed training of one joint model. I am looking forward to your opinions and will be grateful for any feedback. Although I am a QUVA member, every UvA employee is welcome to attend, independent of whether you have signed the QUVA NDA.
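A common skeleton for training one joint model on data that is partitioned across machines is synchronous gradient averaging: each worker computes a gradient on its local shard only, and a server averages the gradients and updates the shared parameters. The sketch below shows that skeleton on a least-squares toy problem; the talk concerns distributed *variational* inference, so this is only the data-parallel pattern, with all sizes and names chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One joint model, data sharded across "machines": each worker sees
# only its own shard; the server averages the workers' gradients.
n, d, n_workers = 1200, 5, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)
shards = np.array_split(np.arange(n), n_workers)   # data never leaves a shard

def local_grad(w, idx):
    """Gradient of the local least-squares loss on one shard."""
    Xi, yi = X[idx], y[idx]
    return 2 * Xi.T @ (Xi @ w - yi) / len(idx)

w = np.zeros(d)
lr = 0.05
for _ in range(500):
    grads = [local_grad(w, idx) for idx in shards]  # computed in parallel
    w -= lr * np.mean(grads, axis=0)                # server-side average

print("parameter error:", round(float(np.linalg.norm(w - w_true)), 4))
```

With equal-sized shards the averaged gradient equals the full-data gradient, so the joint model converges as if trained centrally while each shard's raw data stays local, which is the privacy motivation in the abstract.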