Author Archives: Stephan Alaniz

Talk by Rodolfo Corona

You are all cordially invited to the AMLab seminar on Thursday May 2nd at 16:00 in C3.163, where Rodolfo Corona will give a talk titled “Perceptual Theory of Mind”. Afterwards there are the usual drinks and snacks!

Abstract: In this talk I will present ongoing work on applying theory of mind, where an agent forms a mental model of another based on observed behavior, to an image reference game. In our setting, a learner is tasked with describing images using image attributes, and plays the game with a population of agents whose perceptual capabilities vary, which can cause them to guess differently for a given description. In each episode, the learner plays a series of games with an agent randomly sampled from the population. We show that the learner can improve its performance by forming a mental model of the agents it plays with, using embeddings generated from the gameplay history. We investigate how different policies perform in this task and begin to explore how explanations could be generated for the learner’s decisions.

Talk by Anjan Dutta

You are all cordially invited to the AMLab seminar on Thursday April 18th at 16:00 in C3.163, where Anjan Dutta will give a talk titled “Towards Practical Sketch-based Image Retrieval”. Afterwards there are the usual drinks and snacks!

Abstract: Recently, matching natural images with free-hand sketches has received a lot of attention within the computer vision, multimedia and machine learning communities, resulting in the sketch-based image retrieval (SBIR) paradigm. Since sketches can efficiently and precisely express the shape and pose of the target images, SBIR offers a more practical retrieval scenario than conventional text-image cross-modal retrieval. In this seminar, I will talk about my recent work on SBIR and related topics. Specifically, my talk will address three questions: (1) how to retrieve multi-labeled images with a combination of multi-modal queries, (2) how to generalize an SBIR model to cases with no visual training data, and (3) how to progress towards more practical SBIR in terms of data and model.

Talk by Benjamin Bloem-Reddy

You are all cordially invited to the AMLab seminar on Monday Mar 18th at 15:00 (note the non-standard day and time) in C3.163, where Benjamin Bloem-Reddy will give a talk titled “Probabilistic symmetry and invariant neural networks”. Afterwards there are the usual drinks and snacks!

Abstract: In an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings, much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures. We treat the neural network input and output as random variables, and consider group invariance from the perspective of probabilistic symmetry. Drawing on tools from probability and statistics, we establish a link between functional and probabilistic symmetry, and obtain functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Those representations characterize the structure of neural networks that can be used to represent such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We develop the details of the general program for exchangeable sequences and arrays, recovering a number of recent examples as special cases.
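The functional representations described above characterize architectures such as sum-pooling (“Deep Sets”-style) networks for exchangeable sequences. As a rough sketch (random untrained weights, purely illustrative and not the paper’s construction), the following shows why sum pooling makes a network invariant to permutations of its input set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights for a tiny per-element encoder phi and a readout rho.
W_phi = rng.normal(size=(3, 8))
W_rho = rng.normal(size=(8, 1))

def invariant_net(X):
    """Sum-pool per-element features: f(X) = rho(sum_i phi(x_i)).

    Because summation is order-independent, the output is invariant
    to permuting the rows (elements) of X.
    """
    H = np.tanh(X @ W_phi)          # phi applied to each element
    pooled = H.sum(axis=0)          # permutation-invariant pooling
    return np.tanh(pooled @ W_rho)  # rho readout

X = rng.normal(size=(5, 3))   # a set of 5 elements in R^3
perm = rng.permutation(5)
assert np.allclose(invariant_net(X), invariant_net(X[perm]))
```

Swapping the sum for any other symmetric pooling (mean, max) preserves the invariance; equivariant layers follow the same pattern but keep per-element outputs.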

Talk by Changyong Oh

You are all cordially invited to the AMLab seminar on Thursday Mar 14th at 16:00 in C3.163, where Changyong Oh will give a talk titled “Combinatorial Bayesian Optimization using Graph Representations”. Afterwards there are the usual drinks and snacks!

Abstract: This paper focuses on Bayesian optimization – typically considered with continuous inputs – for discrete search spaces, including integer, categorical or graph-structured input variables. In Gaussian-process-based Bayesian optimization a problem arises, as it is not straightforward to define a proper kernel on discrete input structures, for which no natural notion of smoothness or similarity is available. We propose COMBO, a method that represents the values of discrete variables as vertices of a graph and then uses the diffusion kernel on that graph. As the graph size explodes with the number of categorical variables and categories, we propose the graph Cartesian product to decompose the graph into smaller subgraphs, enabling kernel computation in linear time with respect to the number of input variables. Moreover, in our formulation we learn a scale parameter per subgraph. In empirical studies on four discrete optimization problems we demonstrate that our method is on par with or outperforms the state of the art in discrete Bayesian optimization.
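The computational trick behind this decomposition is that the Laplacian of a graph Cartesian product is the Kronecker sum of the factor Laplacians, so the diffusion kernel factorizes into a Kronecker product of small per-subgraph kernels. A minimal sketch (using hypothetical path-graph factors for two ordinal variables; COMBO’s actual graph choices and per-subgraph scale learning are not reproduced here):

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian of a path graph with n vertices (one per value)."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def diffusion_kernel(L, beta):
    """Diffusion kernel exp(-beta * L) via eigendecomposition."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(-beta * w)) @ V.T

# Two discrete variables with 3 and 4 values -> product graph of 12 vertices.
L1, L2 = path_laplacian(3), path_laplacian(4)
beta = 0.5

# Kernel on each small factor graph separately.
K1, K2 = diffusion_kernel(L1, beta), diffusion_kernel(L2, beta)

# Laplacian of the Cartesian product: L1 (x) I + I (x) L2 (Kronecker sum),
# so the diffusion kernel on the big graph factorizes exactly.
L_prod = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)
K_direct = diffusion_kernel(L_prod, beta)
K_factored = np.kron(K1, K2)
assert np.allclose(K_direct, K_factored)
```

The factored form only ever eigendecomposes the small factor Laplacians, which is what makes the kernel computation scale linearly in the number of input variables.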

Talk by Christos Louizos

You are all cordially invited to the AMLab seminar on Thursday Feb 28th at 16:00 in C3.163, where Christos Louizos will give a talk titled “Learning Exchangeable Distributions”. Afterwards there are the usual drinks and snacks!

Abstract: We present a new family of models that directly parametrize exchangeable distributions; it is realized via the introduction of an explicit model for the dependency structure of the joint probability distribution over the data, while respecting the permutation invariance of an exchangeable distribution. This is achieved by combining two recent advances in variational inference and probabilistic modelling for graphs: normalizing flows and (di)graphons. We empirically demonstrate that such models are also approximately consistent, hence they can provide epistemic uncertainty about their predictions without positing an explicit prior over global variables. We show how to train such models on data and evaluate their predictive capabilities as well as the quality of their uncertainty on various tasks.

Talk by Thomas Kipf

You are all cordially invited to the AMLab seminar on Thursday Feb 21st at 16:00 in C3.163, where Thomas Kipf will give a talk titled “Compositional Imitation Learning: Explaining and executing one task at a time”. Afterwards there are the usual drinks and snacks!

Abstract: We introduce a framework for Compositional Imitation Learning and Execution (CompILE) of hierarchically-structured behavior. CompILE learns reusable, variable-length segments of behavior from demonstration data using a novel unsupervised, fully-differentiable sequence segmentation module. These learned behaviors can then be re-composed and executed to perform new tasks. At training time, CompILE auto-encodes observed behavior into a sequence of latent codes, each corresponding to a variable-length segment in the input sequence. Once trained, our model generalizes to sequences of longer length and from environment instances not seen during training. We evaluate our model in a challenging 2D multi-task environment and show that CompILE can find correct task boundaries and event encodings in an unsupervised manner without requiring annotated demonstration data. Latent codes and associated behavior policies discovered by CompILE can be used by a hierarchical agent, where the high-level policy selects actions in the latent code space, and the low-level, task-specific policies are simply the learned decoders. We found that our agent could learn given only sparse rewards, where agents without task-specific policies struggle.

Talk by Victor Garcia

You are all cordially invited to the AMLab seminar on Thursday Feb 14th at 16:00 in C3.163, where Victor Garcia will give a talk titled “GRIN: Graphical Recurrent Inference Networks”. Afterwards there are the usual drinks and snacks!

Abstract: A graphical model is a structured representation of the data generating process. The traditional method to reason over random variables is to perform inference in this graphical model. However, in many cases the generating process is only a poor approximation of the much more complex true data generation process, leading to poor posterior estimates. The subtleties of the generative process are however captured in the data itself and we can “learn to infer”, that is, learn a direct mapping from observations to explanatory latent variables. In this work we propose a hybrid model that combines graphical inference with a learned inverse model, which we structure as a graph neural network. The iterative algorithm is formulated as a recurrent neural network. By using cross-validation we can automatically balance the amount of work performed by graphical inference versus learned inference. We apply our ideas to the Kalman filter, a Gaussian hidden Markov model for time sequences. We apply our “Graphical Recurrent Inference” method to a number of path estimation tasks and show that it successfully outperforms either learned or graphical inference run in isolation.
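For reference, the graphical-inference half of such a hybrid, the classical Kalman filter, can be written in a few lines. A minimal 1-D sketch (illustrative parameters and data, not the GRIN model itself):

```python
import numpy as np

def kalman_filter(zs, A=1.0, C=1.0, Q=0.1, R=1.0, mu0=0.0, P0=1.0):
    """Classical Kalman filter for a 1-D linear-Gaussian state-space model.

    x_t = A x_{t-1} + process noise (variance Q)
    z_t = C x_t     + observation noise (variance R)
    """
    mu, P = mu0, P0
    means = []
    for z in zs:
        # Predict step: propagate mean and variance through the dynamics.
        mu_pred = A * mu
        P_pred = A * P * A + Q
        # Update step: correct the prediction using the Kalman gain.
        K = P_pred * C / (C * P_pred * C + R)
        mu = mu_pred + K * (z - C * mu_pred)
        P = (1 - K * C) * P_pred
        means.append(mu)
    return np.array(means)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(scale=0.3, size=50))  # latent random walk
obs = truth + rng.normal(scale=1.0, size=50)       # noisy observations
est = kalman_filter(obs, Q=0.09, R=1.0)
# Filtering should reduce error relative to the raw observations.
assert np.mean((est - truth) ** 2) < np.mean((obs - truth) ** 2)
```

When the assumed linear-Gaussian model is a poor fit, these updates give poor posteriors, which is exactly the gap the learned graph-neural-network component is meant to close.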


Talk by Emiel Hoogeboom

You are all cordially invited to the AMLab seminar on Thursday January 31 at 16:00 in C3.163, where Emiel Hoogeboom will give a talk titled “Emerging Convolutions for Generative Normalizing Flows”. Afterwards there are the usual drinks and snacks!

Abstract: Generative flows are attractive because they admit exact likelihood optimization and efficient image synthesis. Recently, Kingma & Dhariwal (2018) demonstrated with Glow that generative flows are capable of generating high-quality images. We generalize the 1 × 1 convolutions proposed in Glow to invertible d × d convolutions, which are more flexible since they operate on both channel and spatial axes. We propose two methods to produce invertible convolutions that have receptive fields identical to standard convolutions: emerging convolutions are obtained by chaining specific autoregressive convolutions, and periodic convolutions are decoupled in the frequency domain. Our experiments show that the flexibility of d × d convolutions significantly improves the performance of generative flow models on galaxy images, CIFAR10 and ImageNet.
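The starting point being generalized here, Glow’s invertible 1 × 1 convolution, amounts to multiplying every pixel’s channel vector by one shared invertible matrix, whose log-determinant enters the exact likelihood. A minimal NumPy sketch (illustrative shapes and initialization, not the authors’ code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1x1 convolution mixes channels with the same invertible c x c matrix
# at every spatial location.
c, h, w = 4, 8, 8
W = np.linalg.qr(rng.normal(size=(c, c)))[0]  # random orthogonal init

def conv1x1(x, W):
    """Apply W across the channel axis at each pixel; x has shape (c, h, w)."""
    return np.einsum('ij,jhw->ihw', W, x)

x = rng.normal(size=(c, h, w))
y = conv1x1(x, W)

# Exact inverse: apply W^{-1} channel-wise.
x_rec = conv1x1(y, np.linalg.inv(W))
assert np.allclose(x, x_rec)

# Log-determinant contribution to the exact likelihood: h * w * log|det W|
# (zero here because the random orthogonal W has |det W| = 1).
logdet = h * w * np.linalg.slogdet(W)[1]
```

An invertible d × d convolution plays the same role but mixes a spatial neighborhood as well as the channels, which is what makes its inverse and log-determinant nontrivial to construct.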


Talk by Herke van Hoof

You are all cordially invited to the AMLab seminar on Thursday January 17 at 16:00 in C3.163, where Herke van Hoof will give a talk titled “Learning Selective Coverage Strategies for Surveying and Search”. Afterwards there are the usual drinks and snacks!

Abstract: In this seminar, I’ll present a project I’ve been working on with Sandeep Manjanna and Gregory Dudek (Mobile Robotics Lab, McGill University). In this project, we investigated selective coverage strategies for a robot tasked with surveying or searching prioritised locations in a given area. This problem can be modelled as a Markov decision process and solved with reinforcement learning, but the state space is extremely large, requiring states to be aggregated. The proposed state aggregation method is shown to generalize well between different environments. In field tests over reefs at the Folkestone Marine Reserve, an autonomous surface vehicle using this method was able to increase the number of usable visual data samples it collected.
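To illustrate what state aggregation buys in a coverage problem like this (a hypothetical aggregation scheme for illustration only, not the one proposed in the talk): rather than treating every full priority map as a distinct state, one can summarise it into a handful of coarse features that a tabular or small function-approximation learner can handle:

```python
import numpy as np

def aggregate_state(priority_map, pos, grid=2):
    """Aggregate a full coverage state into a small feature vector.

    Instead of the raw priority map (one state per map configuration,
    which is intractably many), we summarise the remaining priority
    mass in each coarse grid block, plus the robot's coarse position.
    """
    h, w = priority_map.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = priority_map[i * h // grid:(i + 1) * h // grid,
                                 j * w // grid:(j + 1) * w // grid]
            feats.append(block.sum())
    # Coarse cell index of the robot's position.
    feats.append((pos[0] * grid // h) * grid + pos[1] * grid // w)
    return tuple(feats)

# A 10x10 survey area with all priority mass in the top-left corner.
pmap = np.zeros((10, 10))
pmap[:5, :5] = 1.0
s = aggregate_state(pmap, pos=(2, 3))
# 25 units of priority in the top-left block, robot in coarse cell 0.
assert s == (25.0, 0.0, 0.0, 0.0, 0)
```

Because the aggregated features are defined relative to the map rather than to absolute cells, a policy learned over them can transfer between environments of different sizes, which is the generalization property the abstract refers to.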