Check out the videos of the workshop sessions on the schedule page, and all the accepted papers!

Overview

Many perception tasks can be cast as ‘inverse problems’: the input signal is the outcome of a causal process, and perceiving amounts to inverting that process. In visual object perception, for example, the image is caused by an object, and perception infers which object gave rise to that image. Following an analysis-by-synthesis approach, modeling the forward, causal direction of the data generation process is a natural way to capture the underlying scene structure, and typically leads to broader generalisation and better sample efficiency. Such a forward model can then be used to solve the inverse problem (inferring the scene structure from an input image), for example via Bayes’ rule. This workflow stands in contrast to common approaches in deep learning, where one typically first defines a task and then optimises a deep model end-to-end to solve it. In this workshop we propose to revisit ideas from the generative approach and to advocate for learning-based analysis-by-synthesis methods for perception and inference. We also pose the question of how ideas from these research areas can be combined with, and complement, modern deep learning practices.
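
To make the analysis-by-synthesis loop concrete, here is a minimal, self-contained sketch of inverting a toy forward model with Bayes’ rule. Everything in it (the 4-pixel templates, the noise level, the uniform prior) is an illustrative assumption, not workshop material:

```python
# Minimal analysis-by-synthesis sketch (toy assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Forward (causal) model: a latent "object" renders a fixed 4-pixel
# template; observations add Gaussian pixel noise.
templates = {"circle": np.array([0.0, 1.0, 1.0, 0.0]),
             "bar":    np.array([1.0, 1.0, 0.0, 0.0])}
noise_std = 0.3
prior = {"circle": 0.5, "bar": 0.5}

def render(obj):
    """Synthesis: generate a noisy image from a latent object."""
    return templates[obj] + noise_std * rng.standard_normal(4)

def log_likelihood(image, obj):
    """How well the forward model explains the image under obj."""
    resid = image - templates[obj]
    return -0.5 * float(resid @ resid) / noise_std**2

def posterior(image):
    """Analysis: invert the forward model with Bayes' rule."""
    log_post = {o: log_likelihood(image, o) + np.log(prior[o])
                for o in templates}
    m = max(log_post.values())            # subtract max for stability
    unnorm = {o: np.exp(lp - m) for o, lp in log_post.items()}
    z = sum(unnorm.values())
    return {o: p / z for o, p in unnorm.items()}

image = render("circle")   # data produced by the causal process
print(posterior(image))    # posterior mass should favour "circle"
```

Note that the same forward model serves both directions: `render` generates data, and `posterior` reuses it to explain data, which is exactly the reuse that task-specific discriminative predictors give up.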

The main questions we want to address in the workshop are:

  1. What are the advantages and challenges of the generative approach, i.e. modeling the forward, causal direction of the data rather than learning a task-specific predictor?

  2. Generative reasoning: how can learned generative models be used for inference and perception? Search, amortised vs. online inference, stochasticity and gradients, etc. (see the sketch after this list).

  3. Which important ideas from other (less learning-based) fields, such as graphics, are still missing from, and potentially useful for, modern learning methods for perception?

  4. How can we incorporate different structural assumptions, such as hierarchy and graph structure, into modern learning methods?
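
To ground question 2, the following toy sketch contrasts the two inference styles for a linear-Gaussian forward model x = Wz + noise. All names and numbers are illustrative assumptions: online inference searches for the latent of each observation by gradient descent, while amortised inference fits a reusable encoder once and then answers in a single forward pass:

```python
# Toy contrast of online vs. amortised inference (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 2))          # forward model parameters

def generate(n):
    """Forward model: latent z produces observation x = W z + noise."""
    z = rng.standard_normal((n, 2))
    x = z @ W.T + 0.1 * rng.standard_normal((n, 8))
    return z, x

def online_inference(x, steps=200, lr=0.05):
    """Per-observation search: gradient descent on the latent z,
    minimising the reconstruction error 0.5 * ||W z - x||^2."""
    z = np.zeros(2)
    for _ in range(steps):
        grad = (z @ W.T - x) @ W         # gradient w.r.t. z
        z -= lr * grad
    return z

# Amortised inference: fit a linear encoder E once on generated
# (z, x) pairs, then infer any new observation with one multiply.
z_train, x_train = generate(1000)
E, *_ = np.linalg.lstsq(x_train, z_train, rcond=None)

z_true, x_test = generate(1)
print("true     :", z_true[0])
print("online   :", online_inference(x_test[0]))
print("amortised:", x_test[0] @ E)
```

The trade-off this illustrates: online inference pays an optimisation cost per input but adapts to each observation, whereas the amortised encoder is fast at test time but only as accurate as its training distribution allows.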

The main goal of this workshop is to bring together researchers from a wide range of research areas and to consider how traditional ideas can be used to improve current (deep) machine learning (ML) models. As such, the ideal outcomes of this workshop would be: a) to familiarise the newer generation of ML researchers with some of the earlier but extremely promising research concepts that could be relevant to their current projects and, crucially, to provide intuition for how these older ideas fit into newer frameworks; b) to start a conversation between researchers from a variety of areas that all relate in some way to structure and causality; and c) thereby to inspire future ML approaches, ideally leading to new research directions and collaborations across research fields.

Invited speakers

Sanja Fidler - Assistant Professor at the University of Toronto and Director of AI at NVIDIA (Canada).

Josh Tenenbaum - Professor at the Department of Brain and Cognitive Sciences, MIT (USA).

Tatiana Lopez-Guevara - PhD Student at Edinburgh Centre for Robotics (UK).

Niloy Mitra - Professor at the Department of Computer Science, University College London (UK).

Danilo J. Rezende - Research Scientist, DeepMind (UK).

Call for papers

Submission deadline: 13 September 2019
Author notification: 1 October 2019

We invite authors to submit papers on topics related to the workshop via the CMT portal. Papers should be in the latest NeurIPS format with a maximum of 4 pages (excluding references and supplementary material). We ask authors to use the supplementary material only for minor details that do not fit in the main paper. The review process will be double-blind, so papers should be anonymised appropriately.

Organising committee

Dan Rosenbaum (DeepMind)
Marta Garnelo (DeepMind)
Peter Battaglia (DeepMind)
Kelsey Allen (MIT)
Ilker Yildirim (Yale)