You are all cordially invited to the AMLab seminar on Tuesday **May 22** at 16:00 in C3.163, where **Taco Cohen** will give a talk titled “**The Quite General Theory of Equivariant Convolutional Networks**”. Afterwards, there will be the usual drinks and snacks!

**Abstract**: Group equivariant and steerable convolutional neural networks (regular and steerable G-CNNs) have recently emerged as a very effective model class for learning from signal data such as 2D and 3D images, video, and other data where symmetries are present. In geometrical terms, regular G-CNNs represent data in terms of scalar fields (“feature channels”), whereas steerable G-CNNs can also use vector and tensor fields (“capsules”) to represent data. In this paper we present a general mathematical framework for G-CNNs on homogeneous spaces like Euclidean space or the sphere. We show that the layers of an equivariant network are convolutional if and only if the input and output feature spaces transform like a field. This result establishes G-CNNs as a universal class of equivariant network architectures. Furthermore, we study the space of equivariant filter kernels (or propagators), and show how an understanding of this space can be used to construct G-CNNs for general fields over homogeneous spaces. Finally, we discuss several applications of the theory, such as 3D model recognition, molecular energy regression, analysis of protein structure, omnidirectional vision, and others.
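To make the equivariance property in the abstract concrete, here is a minimal numpy sketch (not from the talk or paper) of a regular G-CNN lifting layer for the cyclic group C4 of 90° rotations: the input is correlated with all four rotated copies of a filter, and rotating the input then corresponds to rotating each feature map spatially plus a cyclic shift along the group axis. The function names `corr2d` and `lifting_conv` are illustrative, not from any library.

```python
import numpy as np

def corr2d(x, w):
    # 'valid' 2D cross-correlation of image x with filter w
    H, W = x.shape
    k = w.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def lifting_conv(x, w):
    # Regular C4 lifting correlation: correlate with all four rotated
    # copies of the filter; the output gains a group axis of size 4.
    return np.stack([corr2d(x, np.rot90(w, r)) for r in range(4)])

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))

out = lifting_conv(x, w)                 # shape (4, 6, 6)
out_rot = lifting_conv(np.rot90(x), w)   # response to the rotated input

# Equivariance: rotating the input rotates each feature map spatially
# and cyclically shifts the group axis by one step.
expected = np.stack([np.rot90(out[(r - 1) % 4]) for r in range(4)])
assert np.allclose(out_rot, expected)
```

An ordinary CNN only satisfies this kind of relation for translations; lifting the feature maps to functions on the group is what extends it to rotations, and the paper's framework generalizes the same idea to arbitrary homogeneous spaces and field types.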

The goal of this talk is to explain this new mathematical theory in a way that is accessible to the machine learning community.