Talk by Jorn Peters

You are all cordially invited to the AMLab seminar on Tuesday, January 30, at 16:00 in C3.163, where Jorn Peters will give a talk titled “Binary Neural Networks: an overview”. Afterwards, there will be the usual drinks and snacks!

Abstract: One limiting factor for deploying neural networks in real-world applications (e.g., self-driving cars or smart home appliances) is their demand for memory, computation, and power. As a consequence, it is often infeasible to employ many of today’s deep learning innovations in situations where resources are scarce. One way to reduce these resource requirements is to lower the bit-precision of the parameters and/or activations of the neural network, which increases the effective throughput (operations per second) and reduces the memory footprint. Taking this to the extreme, one obtains binary neural networks, i.e., neural networks in which the parameters and/or activations are constrained to only two possible values (e.g., -1 or 1). In recent years, several methods for training binary neural networks with gradient descent have been developed. In this talk I will give an overview of a selection of these methods.
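
To give a flavour of the kind of method the talk surveys, below is a minimal PyTorch sketch (an illustration, not necessarily one of the methods presented) of the straight-through estimator, the standard trick behind BinaryConnect/BinaryNet-style training (Courbariaux et al., 2015/2016): weights are binarized to {-1, +1} in the forward pass, while gradients flow through to latent real-valued weights. The names BinarizeSTE and BinaryLinear are illustrative, not from any particular library.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BinarizeSTE(torch.autograd.Function):
        """Binarize to {-1, +1} forward; straight-through gradient backward."""

        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            # Deterministic binarization: map every entry onto exactly two values.
            return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Straight-through estimator: pass the gradient through unchanged,
            # but zero it where the latent weight has saturated beyond [-1, 1].
            return grad_output * (x.abs() <= 1).to(grad_output.dtype)

    class BinaryLinear(nn.Linear):
        """Linear layer that computes with binarized weights while the
        optimizer keeps updating the real-valued latent weights."""

        def forward(self, x):
            w_bin = BinarizeSTE.apply(self.weight)
            return F.linear(x, w_bin, self.bias)

    # Usage: train as usual; SGD/Adam update the latent full-precision weights.
    layer = BinaryLinear(8, 4)
    out = layer(torch.randn(2, 8))
    out.sum().backward()  # gradients reach layer.weight via the straight-through estimator

At deployment time only the binarized weights are needed, so each parameter can be stored in a single bit.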