Monthly Archives: October 2015

Talk by Ninghang Hu

You are all cordially invited to the next AMLab colloquium on Tuesday, November 3 at 16:00 in C3.163, where Ninghang Hu will give a talk titled “Learning to Recognize Human Activities using Soft Labels”.

Abstract: Human activity recognition systems are of great importance in robot-care scenarios. Typically, training such a system requires activity labels to be both completely and accurately annotated. In this paper, we go beyond this restriction and propose a learning method that allows labels to be incomplete and uncertain. We introduce the idea of soft labels, which allows annotators to assign multiple, weighted labels to data segments. This is very useful in many situations, e.g., when the labels are uncertain, when some of the labels are missing, or when multiple annotators assign inconsistent labels. We formulate the activity recognition task as a sequential labeling problem. Latent variables are embedded in the model in order to exploit sub-level semantics for better estimation. We propose a max-margin framework which incorporates soft labels for learning the model parameters. The model is evaluated on two challenging datasets. To simulate the uncertainty in data annotation, we randomly change the labels for transition segments. The results show a significant improvement over the state-of-the-art approach.
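To give a flavor of what a soft label is: instead of one hard class per segment, each annotator supplies a weight distribution over classes, and the training loss is taken against that distribution. The sketch below is only an illustration of this idea using a simple soft-label cross-entropy; the talk's actual method is a max-margin framework with latent variables, and the function name and class layout here are made up for the example.

```python
import numpy as np

def soft_label_loss(scores, soft_labels):
    """Cross-entropy of model scores against a soft (weighted) label
    distribution for a single data segment.

    scores      -- (n_classes,) unnormalized model scores
    soft_labels -- (n_classes,) annotator weights, summing to 1
    """
    # numerically stable log-softmax over the scores
    z = scores - scores.max()
    log_probs = z - np.log(np.exp(z).sum())
    # weight each class's log-probability by the annotator's belief
    return -float(soft_labels @ log_probs)

# An annotator unsure between, say, "walking" (0.7) and "standing" (0.3):
soft = np.array([0.7, 0.3, 0.0])
scores = np.array([2.0, 1.0, -1.0])
loss = soft_label_loss(scores, soft)
```

A hard label is recovered as the special case where all the weight sits on one class, so this loss strictly generalizes standard training.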


Talk by Yash Satsangi

You are all cordially invited to the next AMLab colloquium on Tuesday, October 27 at 16:00 in C3.163, where Yash Satsangi will give a talk titled “Probably Approximately Correct Greedy Maximization”.

Abstract: Submodular function maximization finds application in a variety of real-world optimization problems. However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized. Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once. We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements. We show that, with high probability, our method returns an approximately optimal set. Furthermore, we propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates. Finally, results from a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
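The core pruning idea can be sketched as follows. This is a simplified, one-shot version for illustration, not the talk's algorithm (which uses anytime bounds that can be tightened adaptively, with a PAC guarantee); the function names and the `eps` slack parameter are assumptions made for this sketch.

```python
def pac_greedy_step(candidates, bounds, evaluate, eps=0.0):
    """One greedy step that uses cheap confidence bounds to prune
    candidates before any expensive exact evaluation.

    bounds(x)   -- cheap (lower, upper) bounds on the marginal gain of x
    evaluate(x) -- exact (expensive) marginal gain of x
    eps         -- slack allowed when pruning
    """
    bnds = {x: bounds(x) for x in candidates}
    # the best lower bound: no element whose upper bound falls below it
    # (minus the slack) can be the greedy choice
    best_lb = max(lb for lb, ub in bnds.values())
    survivors = [x for x in candidates if bnds[x][1] >= best_lb - eps]
    # expensive evaluation only on the surviving elements
    return max(survivors, key=evaluate)

# Toy usage: gains are the elements themselves, bounds are +/- 0.5 wide.
chosen = pac_greedy_step([1, 2, 3],
                         bounds=lambda x: (x - 0.5, x + 0.5),
                         evaluate=lambda x: x)
```

When the bounds are tight, most candidates are pruned and the expensive F is evaluated only a handful of times per greedy step, which is where the computational savings reported in the abstract come from.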


Talk by Sach Mukherjee

You are all cordially invited to the next AMLab colloquium on Tuesday, October 13 at 13:00 in C3.163, where Sach Mukherjee from the German Centre for Neurodegenerative Diseases (DZNE) will give a talk titled “Towards empirical assessment of causal inference”.

Abstract: In a growing number of applications, sophisticated computational and statistical methods are used to make inferences about graphs or networks encoding relationships between variables. Such networks are often intended to encode causal relationships, such that the object of inference is in effect a causal graph. However, strong assumptions are needed to justify causal inference. Causal inference can easily be led astray by factors such as unobserved confounders, and additional application- or context-specific factors may exacerbate these concerns. How then can we tell whether causal learning methods are really effective in a given setting? I will discuss our recent efforts to develop empirical approaches by which to assess causal network learning. These approaches were used in a recent computational biology challenge (the 2013 DREAM network inference challenge), and I will use data and results from the challenge to illustrate the key ideas.