Monthly Archives: May 2017

Talk by Maurice Weiler

You are all cordially invited to the AMLab seminar on Tuesday May 30 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Maurice Weiler will give a talk titled “Learning steerable filters for rotation-equivariant CNNs”. Afterwards there are the usual drinks and snacks.

Abstract: Besides translational invariance, a broad class of images, such as medical or astronomical data, exhibits rotational invariance. While such a priori knowledge has typically been exploited through data augmentation, recent research has shifted towards building rotational equivariance directly into model architectures. I will present Steerable Filter CNNs, which efficiently incorporate rotation equivariance by learning steerable filters. Two approaches, based on orientation pooling and group convolutions, are presented and discussed. A common weight initialization scheme is generalized to networks which learn filter banks as linear combinations of a fixed system of atomic filters.
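To give a flavour of the steerability idea from the abstract, here is a minimal NumPy sketch (not the paper's implementation; the circular-harmonic atoms, the Gaussian-ring radial profile and all function names are illustrative assumptions). Expressing a filter in a circular-harmonic basis means rotating it reduces to phase-shifting its coefficients, so rotated filter copies come almost for free:

    import numpy as np

    def atomic_filters(size, max_freq):
        # Fixed system of atomic filters: circular harmonics tau(r)*exp(i*k*phi)
        # sampled on a square grid. The Gaussian-ring radial profile tau is an
        # arbitrary illustrative choice.
        c = (size - 1) / 2.0
        y, x = np.mgrid[:size, :size] - c
        r, phi = np.hypot(x, y), np.arctan2(y, x)
        tau = np.exp(-0.5 * (r - c / 2.0) ** 2)
        return np.stack([tau * np.exp(1j * k * phi) for k in range(max_freq + 1)])

    def steer(coeffs, atoms, theta):
        # Rotating a harmonic of angular frequency k multiplies its coefficient
        # by exp(-i*k*theta), so a rotated filter is just a re-weighted sum.
        phases = np.exp(-1j * np.arange(len(atoms)) * theta)
        return np.real(np.tensordot(coeffs * phases, atoms, axes=1))

    atoms = atomic_filters(size=9, max_freq=3)
    coeffs = np.random.randn(4) + 1j * np.random.randn(4)  # learned in practice
    f0 = steer(coeffs, atoms, 0.0)          # canonical filter
    f90 = steer(coeffs, atoms, np.pi / 2)   # 90-degree copy, no grid resampling

In an actual Steerable Filter CNN the coefficients would be trained by backpropagation and shared across all sampled orientations.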

Talk by Joris Mooij

You are all cordially invited to the AMLab seminar on Tuesday May 23 at 16:00 in C3.163 (FNWI, Amsterdam Science Park), where Joris Mooij will give a talk titled “Causal Transfer Learning with Joint Causal Inference”. Afterwards there are the usual drinks and snacks.

Abstract: The gold standard for discovering causal relations is experimentation. Over the last few decades, an intriguing alternative has been proposed: constraint-based causal discovery methods can sometimes infer causal relations from certain statistical patterns in purely observational data. Even though this works nicely on paper, in practice the conclusions of such methods are often unreliable. We introduce Joint Causal Inference (JCI), a novel constraint-based method for causal discovery from multiple data sets that elegantly unifies both approaches. JCI aims to combine the best of both worlds: the reliability offered by experimentation, and the flexibility of not having to perform all theoretically possible experiments. We apply JCI to a causal transfer learning problem and use it to predict how a target variable is distributed (given observations of other variables) in new experiments. We illustrate this with examples where JCI makes the correct predictions, whereas standard feature selection methods make arbitrarily large prediction errors.
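As a rough illustration of the pooling idea behind JCI (a sketch under my own assumptions, not the actual JCI algorithm; all function names are hypothetical), one can stack the data sets, append context variables encoding the experimental regime, and feed the pooled data to any constraint-based discovery procedure built on conditional independence tests:

    import numpy as np
    from scipy import stats

    def pool_with_context(datasets):
        # Stack data sets from different experimental regimes and append one
        # binary context variable per non-baseline regime. Constraint-based
        # discovery then treats context variables as ordinary exogenous nodes.
        n_regimes = len(datasets)
        pooled = []
        for r, data in enumerate(datasets):
            context = np.zeros((len(data), n_regimes - 1))
            if r > 0:
                context[:, r - 1] = 1.0
            pooled.append(np.hstack([data, context]))
        return np.vstack(pooled)

    def gaussian_ci_test(data, i, j, cond=(), alpha=0.05):
        # Partial-correlation test with Fisher's z-transform: a standard
        # building block of constraint-based methods (JCI is test-agnostic).
        sub = data[:, [i, j] + list(cond)]
        prec = np.linalg.pinv(np.corrcoef(sub, rowvar=False))
        pc = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
        z = 0.5 * np.log((1 + pc) / (1 - pc)) * np.sqrt(len(data) - len(cond) - 3)
        return 2 * stats.norm.sf(abs(z)) > alpha  # True: independence not rejected

In this picture, the transfer step amounts to reading off how the target variable is distributed under a new setting of the context variables.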

Talk by Tim van Erven (Leiden University)

You are all cordially invited to the AMLab seminar on Tuesday May 16 at 16:00 in C3.163, where Tim van Erven (Leiden University) will give a talk titled “Multiple Learning Rates in Online Learning”. Afterwards there are the usual drinks and snacks!

Abstract:
In online convex optimization it is well known that certain subclasses of objective functions are much easier than arbitrary convex functions. We are interested in designing adaptive methods that can automatically get fast rates in as many such subclasses as possible, without any manual tuning. Previous adaptive methods are able to interpolate between strongly convex and general convex functions. We present a new method, MetaGrad, that adapts to a much broader class of functions, including exp-concave and strongly convex functions, but also various types of stochastic and non-stochastic functions without any curvature. For instance, MetaGrad can achieve logarithmic regret on the unregularized hinge loss, even though it has no curvature, if the data come from a favourable probability distribution. MetaGrad’s main feature is that it simultaneously considers multiple learning rates. Unlike all previous methods with provable regret guarantees, however, its learning rates are not monotonically decreasing over time and are not tuned based on a theoretically derived bound on the regret. Instead, they are weighted in direct proportion to their empirical performance on the data, using a tilted exponential weights master algorithm.

References:
T. van Erven and W. M. Koolen. MetaGrad: Multiple Learning Rates in Online Learning. NIPS 2016.
W. M. Koolen, P. Grünwald, and T. van Erven. Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning. NIPS 2016.
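To make the “tilted exponential weights master” concrete, here is a heavily simplified Python sketch of MetaGrad’s structure (my own reading, not the authors’ code): each expert runs its own learning rate, and the master both mixes the expert iterates and re-weights experts by their performance on a quadratic surrogate loss. The real algorithm uses a second-order, full-matrix expert update and a projection onto the domain, both omitted here for brevity:

    import numpy as np

    class MetaGradSketch:
        def __init__(self, dim, radius=1.0, n_experts=10):
            self.etas = 2.0 ** -np.arange(n_experts) / (2 * radius)  # learning-rate grid
            self.W = np.zeros((n_experts, dim))                      # one iterate per expert
            self.logp = np.zeros(n_experts)                          # log master weights

        def predict(self):
            # Tilted mixture: experts are weighted by p(eta) * eta, not p(eta) alone.
            p = np.exp(self.logp - self.logp.max())
            tilt = p * self.etas
            return tilt @ self.W / tilt.sum()

        def update(self, grad):
            w = self.predict()
            for i, eta in enumerate(self.etas):
                m = (w - self.W[i]) @ grad
                # Surrogate loss -eta*m + (eta*m)^2: experts whose iterates sit
                # further along the descent direction get negative loss, and
                # hence more master weight.
                self.logp[i] -= -eta * m + (eta * m) ** 2
                self.W[i] -= eta * grad  # simplified first-order expert step

At each round one would call predict(), observe the gradient of the loss at that point, and pass it to update(). Note that the learning rates themselves never shrink over time; only their weights change, matching the abstract’s description.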

Talk by Raghavendra Selvan (University of Copenhagen)

You are all cordially invited to the AMLab seminar on Tuesday May 9 at 16:00 in C3.163, where Raghavendra Selvan (University of Copenhagen) will give a talk titled “Segmenting Tree Structures with Probabilistic State-space Models and Bayesian Smoothing”. Afterwards there are the usual drinks and snacks!

Abstract: Segmenting tree structures is common in several image processing applications. In medical image analysis, reliable segmentations of airways, vessels, neurons and other tree structures can enable important clinical applications. We present a method for extracting tree structures comprising elongated branches by performing linear Bayesian smoothing in a probabilistic state space. We apply this method to segment airway trees, wherein airway states are estimated using the Rauch-Tung-Striebel (RTS) smoother, starting from several automatically detected seed points across the volume. The RTS smoother tracks airways from seed points, providing Gaussian density approximations of the state estimates. We use the covariance of the marginal smoothed density for each airway branch to discriminate between true and false positives. Preliminary evaluation shows that the presented method detects additional branches compared to baseline methods.
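For readers unfamiliar with RTS smoothing, here is a compact linear-Gaussian implementation in NumPy (a generic textbook version; the airway-specific state model and measurement process from the talk are not reproduced, and all names are illustrative). The smoothed covariances it returns are exactly the kind of quantity the abstract mentions for discriminating true from false branches:

    import numpy as np

    def rts_smooth(y, A, C, Q, R, mu0, P0):
        # Forward Kalman filter pass followed by the backward RTS correction,
        # for the model  x_t = A x_{t-1} + noise(Q),  y_t = C x_t + noise(R).
        T, d = len(y), len(mu0)
        mp, Pp = np.zeros((T, d)), np.zeros((T, d, d))  # predicted moments
        mf, Pf = np.zeros((T, d)), np.zeros((T, d, d))  # filtered moments
        m, P = mu0, P0
        for t in range(T):
            mp[t], Pp[t] = A @ m, A @ P @ A.T + Q        # predict
            S = C @ Pp[t] @ C.T + R
            K = Pp[t] @ C.T @ np.linalg.inv(S)           # Kalman gain
            m = mp[t] + K @ (y[t] - C @ mp[t])           # measurement update
            P = Pp[t] - K @ C @ Pp[t]
            mf[t], Pf[t] = m, P
        ms, Ps = mf.copy(), Pf.copy()
        for t in range(T - 2, -1, -1):                   # backward pass
            G = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])   # smoother gain
            ms[t] = mf[t] + G @ (ms[t + 1] - mp[t + 1])
            Ps[t] = Pf[t] + G @ (Ps[t + 1] - Pp[t + 1]) @ G.T
        return ms, Ps

Because the backward pass conditions every state on the whole observation sequence, the smoothed covariances Ps are tighter than the filtered ones, which is what makes them useful as a per-branch confidence measure.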