Keynotes

 
Ilya Sutskever
OpenAI

Title: Recent Advances in Robotics and Generative Modeling

Abstract:

I will present two of the several threads of ongoing research at OpenAI. First, I will describe improved algorithms for training GANs, as well as algorithms for learning unexpectedly interpretable features with GANs. Next, I will present our recent work in robotics, where we develop a very simple method for adapting a controller from simulation to the real world.
 
Leon Bottou
Facebook

Title: Two big challenges in machine learning

Abstract: 

Machine learning technologies are increasingly used in complex software systems such as those underlying internet services today or self-driving vehicles tomorrow. Despite famous successes, there is mounting evidence that machine learning components tend to disrupt established software engineering practices. I will present examples and offer an explanation of this annoying and often very costly effect. Our first high-stakes challenge therefore consists in formulating sound and efficient engineering principles for machine learning applications.

Machine learning research can often be viewed as an empirical science. Unlike nearly all other empirical sciences, progress in machine learning has largely been driven by a single experimental paradigm: fitting a training set and reporting performance on a testing set. Three forces may terminate this convenient state of affairs: the first is the engineering challenge outlined above, the second arises from the statistics of large-scale datasets, and the third is our growing ambition to address more serious AI tasks. Our second high-stakes challenge therefore consists in enriching our experimental repertoire and redefining our scientific processes while still maintaining our pace of progress.

 
Pieter Abbeel
UC Berkeley

Title: Deep Reinforcement Learning for Robotics

Abstract: 

Deep learning has enabled significant advances in supervised learning problems such as speech recognition and visual recognition. Reinforcement learning provides only a weaker supervisory signal, posing additional challenges in the form of temporal credit assignment and exploration. Nevertheless, deep reinforcement learning has already enabled learning to play Atari games from raw pixels (without access to the underlying game state) and learning certain types of visuomotor manipulation primitives. I will discuss the major challenges in making deep reinforcement learning applicable to real robotic problems, as well as some promising preliminary results toward that goal.
 
Adam Coates
Baidu

Title: Bringing AI to 100 million people with deep learning

Abstract:

Large-scale deep learning has made it possible for small teams of researchers and engineers to tackle hard AI problems that previously entailed massive engineering efforts. I’ll share the story of Baidu’s Deep Speech engine: how a recurrent neural network has evolved into a state-of-the-art production speech recognition system in multiple languages, often exceeding the abilities of native speakers. I will cover the vision, the implementation, and some lessons learned to illustrate what it takes to build new AI technology that 100 million people will care about.