Hi everyone, we have a remote visitor, Abubakar Abid, and you are all cordially invited to the AMLab Seminar on Thursday 17th September at 16:00 CEST on Zoom, where Abubakar will give a talk titled “Interactive UIs for Your Machine Learning Models”.
Title: Interactive UIs for Your Machine Learning Models
Abstract: Accessibility is a major challenge of machine learning (ML). Typical ML models are built by specialists and require specialized hardware/software as well as ML experience to validate. This makes it challenging for non-technical collaborators and endpoint users (e.g. physicians) to easily provide feedback on model development and to gain trust in ML. The accessibility challenge also makes collaboration more difficult and limits the ML researcher’s exposure to realistic data and scenarios that occur in the wild. To improve accessibility and facilitate collaboration, we developed an open-source Python package, Gradio, which allows researchers to rapidly generate a visual interface for their ML models. Gradio makes accessing any ML model as easy as opening a URL in your browser. Our development of Gradio is informed by interviews with a number of machine learning researchers who participate in interdisciplinary collaborations. We developed these features and carried out a case study to understand Gradio’s usefulness and usability in the setting of a machine learning collaboration between a researcher and a cardiologist.
To gain deeper insight into your machine learning models, feel free to join and discuss! See you there!
You are all cordially invited to the AMLab Seminar on Thursday 10th September at 16:00 CEST on Zoom, where Elise van der Pol will give a talk titled “MDP Homomorphic Networks for Deep Reinforcement Learning”.
Paper link: https://arxiv.org/pdf/2006.16908.pdf
Title: MDP Homomorphic Networks for Deep Reinforcement Learning
Abstract: This talk discusses MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong.
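The numerical construction of equivariant layers can be illustrated with a toy symmetrizer: averaging a weight matrix over the group projects it onto the equivariant subspace. This is a simplified sketch under our own assumptions (a reflection symmetry that negates the state and swaps two action logits); the paper's method additionally extracts a basis for that subspace rather than projecting a single matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # state dimension
W = rng.normal(size=(2, n))            # unconstrained policy head: 2 action logits

# Group {e, g}: g negates the state and swaps the two action logits
rho_in = -np.eye(n)                    # representation of g on the input
rho_out = np.array([[0., 1.],
                    [1., 0.]])         # representation of g on the output

# Symmetrizer: average W over the group to project onto the
# equivariant subspace (rho_out is its own inverse here)
W_eq = 0.5 * (W + rho_out.T @ W @ rho_in)

# Equivariance check: W_eq @ rho_in(g) == rho_out(g) @ W_eq
print(np.allclose(W_eq @ rho_in, rho_out @ W_eq))  # True
```

The same averaging works for any finite group, which is why the system designer need not solve the equivariance constraints by hand.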
To gain deeper insight into deep reinforcement learning, feel free to join and discuss!
You are all cordially invited to the AMLab Seminar on Thursday 3rd September at 16:00 CEST on Zoom, where Didrik Nielsen will give a talk titled “SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows”.
Paper link: https://arxiv.org/abs/2007.02731
Title: SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows
Abstract: Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions. However, they both impose constraints on the models: Normalizing flows use bijective transformations to model densities whereas VAEs learn stochastic transformations that are non-invertible and thus typically do not provide tractable estimates of the marginal likelihood. In this paper, we introduce SurVAE Flows: A modular framework of composable transformations that encompasses VAEs and normalizing flows. SurVAE Flows bridge the gap between normalizing flows and VAEs with surjective transformations, wherein the transformations are deterministic in one direction — thereby allowing exact likelihood computation, and stochastic in the reverse direction — hence providing a lower bound on the corresponding likelihood. We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows. Finally, we introduce common operations such as the max value, the absolute value, sorting and stochastic permutation as composable layers in SurVAE Flows.
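The absolute-value operation mentioned at the end is a handy concrete example of a surjective layer: inference z = |x| is deterministic, while generation x = s·z draws the sign stochastically. The toy numpy sketch below is our own illustration, not the paper's code: with a half-normal base density and a fair-coin sign, the composed model reproduces the standard normal likelihood exactly.

```python
import numpy as np

def normal_logpdf(x):
    # log density of the standard normal
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def abs_surjection_logp(x):
    """Exact log-likelihood through an absolute-value surjection:
    inference z = |x| is deterministic; generation x = s * z draws
    the sign s from a fair coin."""
    z = np.abs(x)
    log_p_z = np.log(2.0) + normal_logpdf(z)  # half-normal base density on z >= 0
    log_q_sign = np.log(0.5)                  # fair-coin sign likelihood
    return log_p_z + log_q_sign

x = np.linspace(-3.0, 3.0, 13)
# Half-normal base + fair sign recovers the standard normal exactly:
print(np.allclose(abs_surjection_logp(x), normal_logpdf(x)))  # True
```

Because the inference direction is deterministic, the likelihood here is exact rather than a lower bound; the stochasticity only appears when sampling.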
Hi everyone, you are all cordially invited to the AMLab Seminar on Thursday 30th July at 16:00 CEST on Zoom, where Pim de Haan will give a talk titled “Natural Graph Networks”.
Paper link: https://arxiv.org/abs/2007.08349
Title: Natural Graph Networks
Abstract: Conventional neural message passing algorithms are invariant under permutation of the messages and hence forget how the information flows through the network. Studying the local symmetries of graphs, we propose a more general algorithm that uses different kernels on different edges, making the network equivariant to local and global graph isomorphisms and hence more expressive. Using elementary category theory, we formalize many distinct equivariant neural networks as natural networks, and show that their kernels are ‘just’ a natural transformation between two functors. We give one practical instantiation of a natural network on graphs which uses an equivariant message network parameterization, yielding good performance on several benchmarks.
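The paper's formal treatment goes through category theory, but the core idea of using different kernels on different edges can be sketched in a few lines. In this hypothetical simplification of the paper's construction, each edge's kernel is keyed on an isomorphism-invariant local descriptor (here simply the degree pair), and global permutation equivariance then follows automatically.

```python
import numpy as np

def degree_kernel_mp(A, X, W_table):
    """One round of message passing where the kernel applied on edge
    (j -> i) depends on local structure, keyed here by the
    (sender degree, receiver degree) pair."""
    deg = A.sum(axis=1).astype(int)
    out = np.zeros_like(X)
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                out[i] += W_table[(deg[j], deg[i])] @ X[j]
    return out

rng = np.random.default_rng(0)
n, d = 5, 3
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected, no self-loops
X = rng.normal(size=(n, d))
W_table = {(a, b): rng.normal(size=(d, d)) for a in range(n) for b in range(n)}

# Because kernels are keyed on an isomorphism-invariant descriptor,
# relabeling the nodes simply relabels the outputs (permutation equivariance):
out = degree_kernel_mp(A, X, W_table)
p = rng.permutation(n)
out_p = degree_kernel_mp(A[np.ix_(p, p)], X[p], W_table)
print(np.allclose(out_p, out[p]))  # True
```

Unlike a single shared kernel, this lets structurally different edges use different weights while isomorphic edges still share them, which is what makes the network more expressive without breaking equivariance.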