You are all cordially invited to the next AMLab colloquium on Tuesday, January 26 at 16:00 in C3.163, where **Mijung Park** will give a talk titled “**Bayesian methodologies for efficient data analysis**”.

**Abstract**: Machine learning and data science can greatly benefit from Bayesian methodologies, not only because they improve generalisation performance compared to point estimates, which are prone to overfitting, but also because they provide efficient and principled ways to solve a broad range of statistical problems. In this talk, I will describe several concrete examples where Bayesian approaches prove highly effective in tackling problems that arise across many areas of science. These examples include (a) designing priors that encode domain knowledge for structurally sparse high-dimensional parameters, with applications to functional neuroimaging data and neural spike data; (b) Bayesian manifold learning, which enables evaluating the quality of the estimated latent manifold as well as learning the latent dimensionality from statistical evidence; and (c) approximate Bayesian computation (ABC) for models with intractable likelihoods, where we employ kernel mean embeddings to measure data similarities, an essential step in ABC.
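For readers unfamiliar with item (c): the idea is that when a model's likelihood is intractable but simulation is cheap, one can weight candidate parameters by how similar their simulated data are to the observed data, with similarity measured between kernel mean embeddings (i.e. a maximum mean discrepancy). A minimal toy sketch of this pattern follows; the Gaussian model, uniform prior, RBF bandwidth, and tolerance `epsilon` are illustrative assumptions of mine, not details of the speaker's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, y, bandwidth=1.0):
    """RBF kernel matrix between two 1-D sample arrays."""
    d = x[:, None] - y[None, :]
    return np.exp(-d**2 / (2 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    """Squared MMD: distance between the kernel mean embeddings of two samples."""
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# Toy model with an "intractable" likelihood: we can only simulate from it.
def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n)

observed = simulate(2.0)   # pretend the true parameter is 2.0
epsilon = 0.05             # soft ABC tolerance (assumed value)

# ABC loop: draw parameters from the prior, simulate data, and weight each
# draw by the similarity of simulated and observed data in embedding space.
thetas = rng.uniform(-5, 5, size=1000)   # prior draws
weights = np.array([np.exp(-mmd2(observed, simulate(t)) / epsilon)
                    for t in thetas])
posterior_mean = np.sum(weights * thetas) / np.sum(weights)
print(posterior_mean)   # concentrates near the true value 2.0
```

The soft exponential weighting (rather than hard accept/reject on a summary-statistic distance) is what lets the kernel mean embedding compare whole empirical distributions without hand-crafted summary statistics.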