**Abstract**: Group equivariant and steerable convolutional neural networks (regular and steerable G-CNNs) have recently emerged as a very effective model class for learning from signal data such as 2D and 3D images, video, and other data where symmetries are present. In geometrical terms, regular G-CNNs represent data in terms of scalar fields (“feature channels”), whereas the steerable G-CNN can also use vector and tensor fields (“capsules”) to represent data. In this paper we present a general mathematical framework for G-CNNs on homogeneous spaces like Euclidean space or the sphere. We show that the layers of an equivariant network are convolutional if and only if the input and output feature spaces transform like a field. This result establishes G-CNNs as a universal class of equivariant network architectures. Furthermore, we study the space of equivariant filter kernels (or propagators), and show how an understanding of this space can be used to construct G-CNNs for general fields over homogeneous spaces. Finally, we discuss several applications of the theory, such as 3D model recognition, molecular energy regression, analysis of protein structure, omnidirectional vision, and others.
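To make the notion of a regular group convolution concrete, here is a minimal numpy sketch (not the paper's implementation; function names are mine) of the p4 "lifting" layer: a scalar image is correlated with all four 90-degree rotations of a filter, producing one orientation channel per rotation.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Plain 2D cross-correlation with 'valid' padding."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def p4_lifting_conv(image, kernel):
    """Lift a scalar image to a function on the group p4 by correlating
    with all four 90-degree rotations of the kernel; each rotation gives
    one 'orientation channel' of a regular G-CNN feature map."""
    return np.stack([correlate2d_valid(image, np.rot90(kernel, r))
                     for r in range(4)])
```

Rotating the input image rotates each output channel spatially and cyclically shifts the orientation axis — exactly the equivariance property the theory formalizes.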

The goal of this talk is to explain this new mathematical theory in a way that is accessible to the machine learning community.

**Abstract**: The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.

Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models.
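One way to see the re-use trick: in axial coordinates, the six neighbours of a hexagonal cell land inside a 3×3 square neighbourhood with two opposite corners unused, so a one-ring hexagonal filter is a masked square filter and any optimized 2D convolution routine applies unchanged. A minimal numpy sketch (the exact corner convention depends on the chosen coordinate system; names are mine):

```python
import numpy as np

# In axial coordinates the six hexagonal neighbour offsets are
# (+1,0), (-1,0), (0,+1), (0,-1), (+1,-1), (-1,+1): a 3x3 square
# neighbourhood minus two opposite corners.
HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=float)

def hex_conv2d(image, weights):
    """'Valid' cross-correlation with a hexagonal (masked 3x3) filter."""
    kernel = weights * HEX_MASK   # zero out the two non-hexagonal taps
    H, W = image.shape
    out = np.empty((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out
```

Because the two corner taps are masked out, their weight values never influence the output, which is what lets hexagonal filters ride on square-lattice convolution code.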

**Abstract**: Clearly explaining the rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image aspects that justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Artificial Intelligence, covering (1) how image classification models can be generalized to cases where no visual training data is available, (2) how images and image features can be generated from detailed visual descriptions, and (3) how our models focus on the discriminating properties of the visible object, jointly predict a correct and an incorrect class label, and explain why the predicted correct label is appropriate for the image and why the predicted incorrect label is not.

**Abstract**: Structural causal models (SCMs) are a popular tool for describing causal relations in systems across many fields, such as economics, the social sciences, and biology. Complex (cyclic) dynamical systems, such as chemical reaction networks, are often described by a set of ODEs. We show that SCMs are in general not flexible enough to give a complete causal representation of the equilibrium states of these dynamical systems. Since such systems form an important model class for real-world data, we extend the concept of an SCM to a generalized structural causal model. We show that this allows us to capture the essential causal semantics that characterize dynamical systems. We illustrate our approach on a basic enzymatic reaction.
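As an illustration of the kind of system the abstract refers to (not the paper's construction), here is a toy simulation of the basic enzymatic reaction E + S ⇌ ES → E + P under mass-action kinetics, integrated with forward Euler until it is essentially at steady state; all rate constants and initial concentrations are made up:

```python
def simulate_enzyme(k1=1.0, k2=0.5, k3=0.3, S0=1.0, E0=0.5,
                    dt=1e-3, steps=200_000):
    """Mass-action ODEs for E + S <-> ES -> E + P, forward-Euler
    integrated. Returns the final (E, S, ES, P) concentrations."""
    E, S, ES, P = E0, S0, 0.0, 0.0
    for _ in range(steps):
        v_bind = k1 * E * S      # E + S -> ES
        v_unbind = k2 * ES       # ES -> E + S
        v_cat = k3 * ES          # ES -> E + P
        E += dt * (-v_bind + v_unbind + v_cat)
        S += dt * (-v_bind + v_unbind)
        ES += dt * (v_bind - v_unbind - v_cat)
        P += dt * v_cat
    return E, S, ES, P
```

Note that total enzyme (E + ES) and total substrate (S + ES + P) are conserved at every step, and with an irreversible catalytic step the system settles into the equilibrium where essentially all substrate has become product.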

On **Monday April 9** at 16:00 in room **C1.112**, **Avital Oliver** (Google Brain) will give a talk titled “**Realistic Evaluation of Semi-Supervised Learning Algorithms**”;

On **Tuesday April 10** at 16:00 in room **F1.02**, **Petar Veličković** (University of Cambridge) will give a talk titled “**Keeping our graphs attentive**”.

Abstracts and bios are included below. Afterwards there will be the usual drinks and snacks. (Note that room F1.02, where Petar’s talk takes place, is a several-minute walk from the main entrance.)

**Avital Oliver: Realistic Evaluation of Semi-Supervised Learning Algorithms**

**Abstract**: Semi-supervised learning (SSL) leverages unlabeled data when labels are limited or expensive to obtain. Approaches based on neural networks have recently proven successful on standard benchmark tasks. In this talk, I will argue that these benchmarks fail to simulate many aspects of real-world applicability.

In order to better test these approaches, I will present a suite of experiments designed to address these issues. These experiments find that simple baselines which do not use unlabeled data can be competitive with the state-of-the-art, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples.

(Joint work with Augustus Odena, Colin Raffel, Ekin Dogus Cubuk and Ian Goodfellow)

**Bio**: Avital Oliver is a Google Brain Resident, currently working on semi-supervised learning. His research interests are in data-efficient learning, clustering with neural networks, neural network loss landscapes, and applications to education. He previously interned at OpenAI, and graduated summa cum laude with an M.Sc. degree in Mathematics from Bar-Ilan University, where he did research in group theory.

**Petar Veličković: Keeping our graphs attentive**

**Abstract**: A multitude of important real-world datasets (especially in biology) come together with some form of graph structure: social networks, citation networks, protein-protein interactions, brain connectome data, etc. Extending neural networks to deal properly with this kind of data is therefore a very important direction for machine learning research, but one that had received comparatively little attention until very recently.

Attentional mechanisms represent a very promising direction for extending the established convolutional operator on images to work on arbitrary graphs, as they satisfy many of the desirable properties of a convolutional operator. In this talk, I will focus on my work on Graph Attention Networks (GATs), whose theoretical properties have been validated by strong results on transductive as well as inductive node classification benchmarks. I will also outline some of the earlier efforts towards deploying attention-style operators on graph structures, as well as very exciting recent work that expands on GATs and deploys them in more general circumstances (such as EAGCN, DeepInf, and applications to solving the Travelling Salesman Problem). Time permitting, I will also present some of the relevant related graph-based work on computational biology, currently ongoing in my research group in Cambridge.

Finally, I will present the aims of my ongoing collaboration with Thomas Kipf, centered towards leveraging the intermediate information computed by a GAT layer as a proxy for more challenging tasks, such as graph classification.
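For readers unfamiliar with GATs, a single-head attention layer can be sketched in a few lines of numpy (a simplification of the published architecture; variable names are mine): node features are linearly transformed, an attention logit is computed per edge via a LeakyReLU of two dot products, normalized with a masked softmax over each node's neighbourhood, and used to aggregate neighbour features.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(X, A, W, a_src, a_dst):
    """Single-head graph attention layer.
    X: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, F') shared weights; a_src, a_dst: (F',) attention vectors."""
    H = X @ W                                  # shared linear transform
    s = H @ a_src                              # source contribution, (N,)
    d = H @ a_dst                              # target contribution, (N,)
    e = leaky_relu(s[:, None] + d[None, :])    # e_ij = LeakyReLU(a^T [h_i || h_j])
    e = np.where(A > 0, e, -np.inf)            # attend only over neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # masked softmax per node
    return alpha @ H                           # attention-weighted aggregation
```

With only self-loops in the adjacency, each node attends solely to itself and the layer reduces to the linear transform, which is a quick sanity check on the masking.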

**Bio**: Petar Veličković is currently a final-year PhD student in Machine Learning and Bioinformatics at the Department of Computer Science and Technology of the University of Cambridge. He also holds a BA degree in Computer Science from Cambridge, having completed the Computer Science Tripos in 2015. In addition, he has been involved in research placements at Nokia Bell Labs (working with Nicholas Lane) and the Montréal Institute of Learning Algorithms (working with Adriana Romero and Yoshua Bengio). His current research interests broadly involve devising neural network architectures that operate on nontrivially structured data (such as graphs), and their applications in bioinformatics and medicine. He has published his work in these areas at both machine learning venues (ICLR, NIPS ML4H) and biomedical venues and journals (Bioinformatics, PervasiveHealth).

**Abstract**: Reconstructing three-dimensional structures from noisy two-dimensional orthographic projections is a central task in many scientific domains, with examples ranging from medical tomography to single-particle electron microscopy.

We treat this problem from a Bayesian point of view. Specifically, we regard a specimen’s structure and its pose as latent factors which are marginalized over. This allows us to express uncertainty in pose and even local uncertainty in the sample’s structure; this information can serve to detect unstable sub-structures or multiple configurations of a specimen. In particular, we apply amortized deep neural networks to encode observations into latent factors, which has the advantage of being transferable across multiple structures. To this end, we propose to train the model alternately in observation space and latent space, resulting in a generalized version of the wake-sleep algorithm.

We focus our experiments on cryogenic electron microscopy (CryoEM) single particle analysis, a technique that enables deep understanding of structural biology and chemistry by inspecting single proteins. We show our model to be competitive while predicting reasonable uncertainties. Moreover, we empirically demonstrate that the model is more data-efficient than competing methods and that it is transferable between molecules.
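The alternating training scheme mentioned above generalizes the classic wake-sleep algorithm. Here is a toy numpy version of plain wake-sleep on a linear generative model (an illustration of the algorithm family, not the paper's model; sizes, learning rate, and names are made up): the wake phase encodes real observations and fits the generator, the sleep phase dreams latent/observation pairs from the generator and fits the encoder.

```python
import numpy as np

def wake_sleep_demo(epochs=500, lr=0.01, seed=0):
    """Toy wake-sleep for a linear generative model x = W z with a
    linear recognition model z = R x. Returns the reconstruction
    error on the data before and after training."""
    rng = np.random.default_rng(seed)
    d_x, d_z, n = 8, 2, 500
    W_true = rng.normal(size=(d_x, d_z))
    data = rng.normal(size=(n, d_z)) @ W_true.T      # observations

    W = 0.1 * rng.normal(size=(d_x, d_z))   # generative ("decoder") weights
    R = 0.1 * rng.normal(size=(d_z, d_x))   # recognition ("encoder") weights

    def recon_error():
        return np.mean((data - (data @ R.T) @ W.T) ** 2)

    err_before = recon_error()
    for _ in range(epochs):
        # Wake phase: encode real x as z = R x, gradient step on W
        z = data @ R.T
        W += lr * (data - z @ W.T).T @ z / n
        # Sleep phase: dream z from the prior, generate x = W z,
        # gradient step on R to recover the dreamed z
        z_dream = rng.normal(size=(n, d_z))
        x_dream = z_dream @ W.T
        R += lr * (z_dream - x_dream @ R.T).T @ x_dream / n
    return err_before, recon_error()
```

The key structural point survives even in this linear toy: the generator is trained on real data in observation space, while the (amortized) encoder is trained on the model's own samples in latent space.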

**Abstract**: We propose a framework for solving combinatorial optimization problems whose output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.
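The training signal — REINFORCE with a greedy-rollout baseline — can be illustrated on a deliberately tiny stand-in policy (this is a didactic sketch, not the paper's attention model; the one-parameter nearest-neighbour-style policy and all names are mine). The update is θ ← θ − lr · (L(sampled tour) − L(greedy tour)) · ∇<sub>θ</sub> log p(sampled tour):

```python
import numpy as np

def tour_length(points, tour):
    return sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rollout(points, theta, rng=None):
    """Construct a tour node by node. The toy policy scores each unvisited
    node j as -theta * dist(current, j); rng=None means a deterministic
    greedy (argmax) rollout, used as the REINFORCE baseline."""
    n = len(points)
    tour, grad = [0], 0.0
    unvisited = list(range(1, n))
    while unvisited:
        cur = points[tour[-1]]
        dists = np.array([np.linalg.norm(points[j] - cur) for j in unvisited])
        logits = -theta * dists
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        if rng is None:
            k = int(np.argmax(probs))
        else:
            k = rng.choice(len(unvisited), p=probs)
            grad += probs @ dists - dists[k]   # d/dtheta log pi(k)
        tour.append(unvisited.pop(k))
    return tour, grad

def train(n=6, iters=1000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    points = rng.random((n, 2))
    theta, lengths = 0.0, []
    for _ in range(iters):
        sample, grad = rollout(points, theta, rng)
        greedy, _ = rollout(points, theta)   # greedy rollout = baseline
        advantage = tour_length(points, sample) - tour_length(points, greedy)
        theta -= lr * advantage * grad       # REINFORCE with baseline
        lengths.append(tour_length(points, sample))
    return theta, lengths
```

Subtracting the greedy rollout's length instead of a learned value function keeps the baseline simple and low-variance, which is the design choice the abstract highlights.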

**Abstract**: In quantum computation, one of the key challenges is to build fault-tolerant logical qubits. A logical qubit consists of several physical qubits. In stabilizer codes, a popular class of quantum error correction schemes, part of the system of physical qubits is measured repeatedly, without measuring (and thereby collapsing, by the Born rule) the state of the encoded logical qubit. These repeated measurements are called syndrome measurements and must be interpreted by a classical decoder in order to determine what errors occurred on the underlying physical system. The decoding of these space- and time-correlated syndromes is a highly non-trivial task, and efficient decoding algorithms are known only for a few stabilizer codes. In this talk I will explain how we design and train decoders based on recurrent neural networks.
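To make "syndrome" and "decoder" concrete, here is the simplest possible stabilizer-code example: the 3-qubit bit-flip repetition code with stabilizers Z₁Z₂ and Z₂Z₃, decoded by table lookup (a textbook illustration, not the codes or neural decoders from the talk):

```python
def syndrome(errors):
    """errors: tuple of 3 bits, 1 = bit-flip on that physical qubit.
    Returns the outcomes of the two parity (stabilizer) measurements
    Z1Z2 and Z2Z3, which reveal errors without revealing the state."""
    e1, e2, e3 = errors
    return (e1 ^ e2, e2 ^ e3)

# Every single-qubit bit-flip produces a unique syndrome, so a lookup
# table mapping syndrome -> correction suffices for this tiny code.
DECODER = {(0, 0): (0, 0, 0),
           (1, 0): (1, 0, 0),
           (1, 1): (0, 1, 0),
           (0, 1): (0, 0, 1)}

def correct(errors):
    """Apply the lookup decoder's correction to the error pattern."""
    fix = DECODER[syndrome(errors)]
    return tuple(e ^ f for e, f in zip(errors, fix))
```

For realistic codes such as the surface code, syndromes are correlated across space and time and no small lookup table exists, which is exactly the regime where learned decoders become attractive.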