Two talks: Avital Oliver and Petar Veličković

On Monday and Tuesday next week, the AMLab seminar will host two talks at FNWI, Amsterdam Science Park:

On Monday April 9 at 16:00 in room C1.112, Avital Oliver (Google Brain) will give a talk titled “Realistic Evaluation of Semi-Supervised Learning Algorithms”;

On Tuesday April 10 at 16:00 in room F1.02, Petar Veličković (University of Cambridge) will give a talk titled “Keeping our graphs attentive”.

Abstracts and bios are included below. Afterwards there will be the usual drinks and snacks. (Note that room F1.02, where Petar’s talk takes place, is a several-minute walk from the main entrance.)

Avital Oliver: Realistic Evaluation of Semi-Supervised Learning Algorithms

Abstract: Semi-supervised learning (SSL) leverages unlabeled data when labels are limited or expensive to obtain. Approaches based on neural networks have recently proven successful on standard benchmark tasks. In this talk, I will argue that these benchmarks fail to simulate many aspects of real-world applicability.

In order to better test these approaches, I will present a suite of experiments designed to address these issues. These experiments find that simple baselines which do not use unlabeled data can be competitive with the state-of-the-art, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples.

(Joint work with Augustus Odena, Colin Raffel, Ekin Dogus Cubuk and Ian Goodfellow)

Bio: Avital Oliver is a Google Brain Resident, currently working on semi-supervised learning. His research interests are in data-efficient learning, clustering with neural networks, neural network loss landscapes, and applications to education. He previously interned at OpenAI, and graduated summa cum laude with an M.Sc. degree in Mathematics from Bar-Ilan University, where he did research in group theory.


Petar Veličković: Keeping our graphs attentive

Abstract: A multitude of important real-world datasets (especially in biology) come with some form of graph structure: social networks, citation networks, protein-protein interactions, brain connectome data, etc. Extending neural networks to properly deal with this kind of data is therefore a very important direction for machine learning research, but one that received comparatively little attention until very recently.

Attentional mechanisms represent a very promising direction for extending the established convolutional operator on images to arbitrary graphs, as they satisfy many of the properties desirable in a convolutional operator. In this talk, I will focus on my work on Graph Attention Networks (GATs), where these theoretical properties have been further validated by solid results on transductive as well as inductive node classification benchmarks. I will also outline some of the earlier efforts towards deploying attention-style operators on graph structures, as well as very exciting recent work that expands on GATs and deploys them in more general circumstances (such as EAGCN, DeepInf, and applications to solving the Travelling Salesman Problem). Time permitting, I will also present some of the relevant graph-based work on computational biology currently ongoing in my research group in Cambridge.
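To make the idea concrete, the core of a GAT layer can be sketched in a few lines of NumPy: each node attends over its neighbourhood, with attention coefficients computed from a shared linear map and a learnable attention vector. This is a minimal single-head sketch for illustration only (the function name and dense-matrix formulation are my own choices, not from the talk); practical implementations use sparse operations and multiple attention heads.

```python
import numpy as np

def gat_layer(X, A, W, a, slope=0.2):
    """Single-head graph attention layer (illustrative sketch).

    X: (N, F) node features
    A: (N, N) adjacency matrix, assumed to include self-loops
    W: (F, F2) shared weight matrix
    a: (2*F2,) attention vector
    """
    H = X @ W                      # (N, F2) transformed node features
    F2 = H.shape[1]
    # a^T [h_i || h_j] decomposes into a source term and a target term
    logits = (H @ a[:F2])[:, None] + (H @ a[F2:])[None, :]   # (N, N)
    logits = np.where(logits > 0, logits, slope * logits)    # LeakyReLU
    logits = np.where(A > 0, logits, -1e9)                   # mask non-edges
    # Softmax over each node's neighbourhood (numerically stabilised)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    att = e / e.sum(axis=1, keepdims=True)
    return att @ H                 # aggregate neighbours by attention weight
```

Because the attention weights depend only on pairs of node features, the same layer applies to graphs of any size or structure, which is what enables the inductive setting mentioned above.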

Finally, I will present the aims of my ongoing collaboration with Thomas Kipf, centered on leveraging the intermediate information computed by a GAT layer as a proxy for more challenging tasks, such as graph classification.

Bio: Petar Veličković is currently a final-year PhD student in Machine Learning and Bioinformatics at the Department of Computer Science and Technology of the University of Cambridge. He also holds a BA degree in Computer Science from Cambridge, having completed the Computer Science Tripos in 2015. In addition, he has held research placements at Nokia Bell Labs (working with Nicholas Lane) and the Montréal Institute for Learning Algorithms (working with Adriana Romero and Yoshua Bengio). His current research interests broadly involve devising neural network architectures that operate on nontrivially structured data (such as graphs), and their applications in bioinformatics and medicine. He has published his work in these areas at machine learning venues (ICLR, NIPS ML4H) as well as biomedical venues and journals (Bioinformatics, PervasiveHealth).