You are all cordially invited to the AMLab seminar on Tuesday **January 24 at 16:00 in C3.163**, where **Marco Loog** will give a talk titled “**Semi-Supervision, Surrogate Losses, and Safety Guarantees**”. Afterwards there are the usual drinks and snacks!

**Abstract**: Users of classification tools tend to forget [or worse, might not even realize] that classifiers typically do not minimize the 0-1 loss, but a surrogate that upper-bounds the classification error on the training set. Here we argue that we should also study these surrogate losses in their own right, and we consider the problem of semi-supervised learning from this angle. In particular, we look at the basic setting of linear classifiers and convex margin-based losses, such as the hinge, logistic, and squared losses. We investigate to what extent semi-supervision can be safe at least on the training set, i.e., we want to construct semi-supervised classifiers for which the empirical risk is never larger than the risk achieved by their supervised counterparts. [Based on work carried out together with Jesse Krijthe; see https://arxiv.org/abs/1612.08875 and https://arxiv.org/abs/1503.00269].
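As a small illustration of the upper-bounding property the abstract mentions, the sketch below checks numerically that three common convex margin-based losses dominate the 0-1 loss as functions of the margin m = y·f(x). This is not code from the talk or the papers; the particular scaling of the logistic loss (base-2 logarithm, so that it equals 1 at m = 0) is an assumption chosen to make the bound hold.

```python
import numpy as np

def zero_one(m):
    # 0-1 loss as a function of the margin m = y * f(x):
    # 1 for a misclassification (m <= 0), 0 otherwise.
    return (m <= 0).astype(float)

def hinge(m):
    # Hinge loss: max(0, 1 - m).
    return np.maximum(0.0, 1.0 - m)

def logistic(m):
    # Logistic loss, rescaled to base 2 (i.e. divided by ln 2)
    # so that it passes through 1 at m = 0 and bounds the 0-1 loss.
    return np.log2(1.0 + np.exp(-m))

def squared(m):
    # Squared loss written in margin form: (1 - m)^2.
    return (1.0 - m) ** 2

# Verify the upper bound on a grid of margins.
margins = np.linspace(-3.0, 3.0, 601)
for loss in (hinge, logistic, squared):
    assert np.all(loss(margins) >= zero_one(margins))
```

Minimizing any of these surrogates therefore also pushes down the training error, which is why the empirical risk under the surrogate is the quantity the talk proposes to compare between supervised and semi-supervised learners.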