Submissions

Accepted Submissions [zip archive]

Congratulations to all authors. We have a total of 72 accepted submissions this year. Where permitted by the authors, the submissions are available for download.

ORAL PRESENTATIONS:

Title: PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings
Abstract: For autonomous vehicles (AVs) to behave appropriately on roads populated by human-driven vehicles, they must be able to reason about the uncertain intentions and decisions of other drivers from rich perceptual information. Towards these capabilities, we present a probabilistic forecasting model of future interactions of multiple agents. We perform both standard forecasting and conditional forecasting with respect to the AV's goals. Conditional forecasting reasons about how all agents will likely respond to specific decisions of a controlled agent. We train our model on real and simulated data to forecast vehicle trajectories given past positions and LIDAR. Our evaluation shows that our model is substantially more accurate in multi-agent driving scenarios compared to the existing state of the art. Beyond its general ability to perform conditional forecasting queries, we show that our model's predictions of all agents improve when conditioned on knowledge of the AV's intentions, further illustrating its capability to model agent interactions.
Authors: Nicholas Rhinehart (CMU)*; Rowan McAllister (UC Berkeley); Kris Kitani (CMU); Sergey Levine (UC Berkeley)

Title: A/B Testing in Dense Large-Scale Networks: Design and Inference
Abstract: Design of experiments and estimation of treatment effects in large-scale networks, in the presence of strong interference, is a challenging and important problem. Most existing methods' performance deteriorates as the density of the network increases. In this paper, we present a novel strategy for accurately estimating the causal effects of a class of treatments in a dense large-scale network. First, we design an approximate randomized controlled experiment, by solving an optimization problem to allocate treatments that mimic the competition effect. Then we apply an importance sampling adjustment to correct for the design bias in estimating treatment effects from experimental data. We provide theoretical guarantees, verify robustness in a simulation study, and validate the usefulness of our procedure in a real-world experiment.
Authors: Preetam Nandy (LinkedIn Corporation)*; Kinjal Basu (LinkedIn Corporation); Shaunak Chatterjee (LinkedIn); Ye Tu (LinkedIn Corporation)

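The abstract does not spell out the estimator, but as a generic illustration of the kind of importance-sampling adjustment it refers to, here is a minimal sketch in Python; the treatment-assignment densities p_target and p_design below are hypothetical stand-ins for whatever the actual design induces:

    import numpy as np

    def importance_weighted_effect(outcomes, treatments, p_target, p_design):
        # Reweight each unit's outcome by how likely its treatment is under
        # the target assignment versus the design that generated the data.
        weights = p_target(treatments) / p_design(treatments)
        return np.mean(weights * outcomes)
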
Title: Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
Abstract: Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains.
Authors: Kaidi Cao (Stanford University)*; Colin Wei (Stanford University); Adrien Gaidon (Toyota Research Institute); Nikos Arechiga (Toyota Research Institute); Tengyu Ma (Stanford)

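The abstract leaves the loss implicit; as a rough sketch of what a label-distribution-aware margin can look like, assuming the per-class margin scales as C / n_j^(1/4) as in the published version of this work (C is a tunable constant, and the logit-scaling factor used in practice is omitted):

    import torch
    import torch.nn.functional as F

    def ldam_loss(logits, targets, class_counts, C=0.5):
        # Rarer classes receive larger margins: delta_j = C / n_j^(1/4).
        margins = C / class_counts.float().pow(0.25)
        # Subtract each example's class margin from its true-class logit,
        # then apply ordinary cross-entropy to the shifted logits.
        shifted = logits.clone()
        rows = torch.arange(logits.size(0))
        shifted[rows, targets] -= margins[targets]
        return F.cross_entropy(shifted, targets)
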
Title: Gradient Boosted Decision Tree Neural Network
Abstract: In this paper we propose a method to build GBDT-equivalent models using neural networks. We first illustrate how to convert a learned ensemble of decision trees to a single neural network with one hidden unit and an input transformation. We then relax some properties of this network, such as thresholds and activation functions, to train an approximately equivalent decision tree ensemble. The final model, GBDT-NN, is surprisingly simple: it is a fully connected two-layer neural network whose input is quantized and one-hot encoded. Experiments on large and small datasets show that this simple method can achieve performance similar to GBDT models.
Authors: Mohammad Saberian (Netflix)*; Pablo Delgado (Netflix); Yves Raimond (Netflix)

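To make the final architecture concrete, here is a minimal sketch of the described input transformation, assuming simple per-feature quantization into fixed bins; the bin edges and the downstream two-layer MLP are illustrative, not the authors' exact construction:

    import numpy as np

    def quantize_one_hot(x, bin_edges):
        # x: (n_samples, n_features); bin_edges: one array of edges per feature.
        columns = []
        for j, edges in enumerate(bin_edges):
            idx = np.digitize(x[:, j], edges)            # bin index per sample
            columns.append(np.eye(len(edges) + 1)[idx])  # one-hot over bins
        # The concatenated one-hot codes feed a fully connected two-layer network.
        return np.concatenate(columns, axis=1)
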
Title: Mid-Level Visual Representations Improve Generalization and Sample Efficiency for Learning Visuomotor Policies
Abstract: Does knowing that the world is 3D help in delivering a package? More generally, how much does having visual priors about the world assist in learning to perform downstream motor tasks? We study this question by integrating a generic perceptual skill set (e.g. a distance estimator, an edge detector, etc.) within a reinforcement learning framework. This skill set (hereafter mid-level perception) provides the policy with a more processed state of the world compared to raw images. We find that using mid-level perception confers significant advantages over training end-to-end from scratch (i.e. not leveraging priors) in navigation-oriented tasks. Agents are able to generalize to situations where the from-scratch approach fails, and training becomes significantly more sample efficient. However, we show that realizing these gains requires careful selection of the mid-level perceptual skills. Therefore, we refine our findings into an efficient max-coverage feature set that can be adopted in lieu of raw images. We perform our study in completely separate buildings for training and testing and compare against state-of-the-art feature learning methods and visually blind baseline policies.
Authors: Alexander Sax (University of California, Berkeley)*; Bradley Emi (Stanford University); Amir Zamir (Stanford, UC Berkeley); Jitendra Malik (University of California at Berkeley); Leonidas Guibas (Stanford University); Silvio Savarese (Stanford University); Jeffrey O Zhang (University of California, Berkeley)

Title: State-of-the-art Speech Recognition Using Multi-Stream Self-Attention
Abstract: Self-attention has been a huge success for many downstream tasks in NLP, which has led to exploration of applying self-attention to speech problems as well. The efficacy of self-attention in speech applications, however, has not yet been fully realized, since it is challenging to handle highly correlated speech frames in the context of self-attention. In this paper we propose a new model architecture for self-attention, namely multi-stream self-attention, to address this issue and thus make the self-attention mechanism more effective for speech recognition.
Authors: Kyu Han (ASAPP, Inc.)*; Ramon Prieto (ASAPP Inc.); Tao Ma (ASAPP Inc.)

POSTERS:

Paper Title | Authors
Computer games or the trajectories of physics? Discovering the Carnot cycle using reinforcement learning | Stephen Whitelam (Lawrence Berkeley National Lab)*
Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning | Shariq Iqbal (University of Southern California)*; Fei Sha (Google Research)
Collaborative Evolutionary Reinforcement Learning | Shauharda Khadka (Intel AI Lab); Somdeb Majumdar (Intel AI Lab)*; Santiago Miret (Intel AI Lab); Evren Tumer (Intel Corporation)
Topic Augmented Generator for Abstractive Summarization | Melissa Ailem (University of Southern California)*; Bowen Zhang (University of Southern California); Fei Sha (Google Research)
Image Captioning: Transforming Objects into Words | Simao Herdade (Yahoo Research)*; Kofi A Boakye (Yahoo Research); Armin Kappeler (Yahoo Research); Joao V. B. Soares (Yahoo Research)
Cross-View Policy Learning for Street Navigation | Ang Li (DeepMind, Mountain View)*; Huiyi Hu (Google); Piotr Mirowski (DeepMind); Mehrdad Farajtabar (DeepMind)
Layout-induced Video Representation for Recognizing Agent-in-Place Actions | Ruichi Yu (Waymo LLC)*; Hongcheng Wang (Comcast); Ang Li (DeepMind, Mountain View); Jingxiao Zheng (University of Maryland, College Park); Vlad I Morariu (Adobe Research); Larry Davis (University of Maryland)
Uncertainty Modeling of Contextual-Connection between Tracklets for Unconstrained Video-based Face Recognition | Jingxiao Zheng (University of Maryland, College Park)*; Ruichi Yu (Waymo LLC); Jun-Cheng Chen (University of Maryland); Boyu Lu (University of Maryland, College Park); Carlos Castillo (University of Maryland); Rama Chellappa (University of Maryland)
Neural Assistant: Joint Action Prediction, Response Generation, and Latent Knowledge Reasoning | Arvind Neelakantan (Google Inc)*
BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional Encoder Representations from Transformer | Guan-Lin Chao (Carnegie Mellon University)*; Ian Lane (Carnegie Mellon University)
Improved Knowledge Distillation via Teacher Assistant: Bridging the Gap Between Student and Teacher | Seyed Iman Mirzadeh (Washington State University); Mehrdad Farajtabar (DeepMind)*; Ang Li (DeepMind, Mountain View); Hassan Ghasemzadeh (Washington State University)
Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment | Chen Huang (Apple)*; Shuangfei Zhai (Apple); Walter Talbott (Apple); Miguel Angel Bautista Martin (Apple Inc.); Shih-Yu Sun (Apple); Carlos Guestrin (Apple); Josh Susskind (Apple)
An Improved Triplet Loss Formulation | Walter Talbott (Apple)*; Chen Huang (Apple); Shih-Yu Sun (Apple); Josh Susskind (Apple)
Striving for Simplicity in Off-policy Deep Reinforcement Learning | Rishabh Agarwal (Google Research, Brain Team)*; Dale Schuurmans (Google / University of Alberta); Mohammad Norouzi (Google Brain)
Rapid gamma-ray burst localization aboard the All-Sky-Astrogam satellite using a 3D convolutional neural network | Ruoxi Shang (UC Berkeley)*; Andreas Zoglauer (University of California, Berkeley)
Causal Confusion in Imitation Learning | Pim de Haan (University of Amsterdam)*; Dinesh Jayaraman (UC Berkeley); Sergey Levine (UC Berkeley)
Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control | Nir Levine (DeepMind)*; Yinlam Chow (Google AI); Rui Shu (Stanford University); Mohammad Ghavamzadeh (Facebook AI Research); Ang Li (DeepMind, Mountain View); Hung H Bui (VinAI Research)
Robust Reinforcement Learning for Continuous Control with Model Misspecification | Nir Levine (DeepMind)*; Daniel Mankowitz (DeepMind); Rae Jeong (DeepMind); Abbas Abdolmaleki (Google DeepMind); Jost Tobias Springenberg (DeepMind); Timothy Arthur Mann; Todd Hester (DeepMind); Martin Riedmiller (DeepMind)
Multiagent Evolutionary Reinforcement Learning | Shauharda Khadka (Intel AI Lab); Somdeb Majumdar (Intel AI Lab)*
Bridging the Gap for Tokenizer-Free Language Models | Dokook Choe (Google)*; Rami Al-Rfou' (rmyeid@google.com); Mandy Guo (xyguo@google.com); Heeyoung Lee (hylee@google.com); Noah Constant (nconstant@google.com)
The Trajectron: Probabilistic Multi-Agent Trajectory Modeling with Dynamic Spatiotemporal Graphs | Boris Ivanovic (Stanford University)*; Marco Pavone (Stanford University)
A Fourier Perspective on Model Robustness in Computer Vision | Dong Yin (University of California, Berkeley)*; Raphael Gontijo Lopes (Google Brain); Jonathon Shlens (Google); Ekin D Cubuk (Google Brain); Justin Gilmer (Google Brain)
When to Trust Your Model: Model-Based Policy Optimization | Michael Janner (UC Berkeley)*; Justin Fu (UC Berkeley); Marvin Zhang (UC Berkeley); Sergey Levine (UC Berkeley)
Compressing Gradient Optimizers via Count-Sketches | Ryan D Spring (Rice University)*; Anshumali Shrivastava (Rice University); Anastasios Kyrillidis (Rice University); Vijai Mohan (www.amazon.com)
The invisible hand of fractals and scaling in recommendations | Francois W Belletti (Google)*; Minmin Chen (Google); Ed Chi (Google); Yi-fan Chen (Google); Nic Mayoraz (Google); Tayo Oguntebi (Google LLC); John Anderson (Google)
Hydroclimate and Snowpack Modeling with cGANs | Adrian Albert (MIT)*
Multitask Learning for Recommendation Systems | Zhe Zhao (Google Brain)*; Lichan Hong (Google); Jilin Chen (Google Brain); Xinyang Yi (Google); Maheswaran Sathiamoorthy (Google); Li Wei (Google); Ruoxi Wang (Google); Ji Yang (Google); Zhiyuan Cheng (Google); Ed Chi (Google)
Creating xBD: A Dataset for Assessing Building Damage from Satellite Imagery | Ritwik Gupta (Carnegie Mellon University Software Engineering Institute)*; Bryce Goodman (Defense Innovation Unit); Nirav Patel (Defense Innovation Unit); Richard Hosfelt (Carnegie Mellon University Software Engineering Institute); Sandra Sajeev (Carnegie Mellon University Software Engineering Institute); Eric Heim (Carnegie Mellon University Software Engineering Institute); Jigar Doshi (CrowdAI, Inc.); Keane Lucas (Joint Artificial Intelligence Center); Howie Choset (Carnegie Mellon University); Matthew Gaston (Carnegie Mellon University Software Engineering Institute)
REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning | Brian Yang (UC Berkeley); Jesse Zhang (UC Berkeley); Vitchyr H Pong (UC Berkeley); Sergey Levine (UC Berkeley); Dinesh Jayaraman (UC Berkeley)*
Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction | Justin Fu (UC Berkeley)*; Aviral Kumar (UC Berkeley)
Near Real-time Engagement Optimization of Mobile Notifications | Yiping Yuan (LinkedIn)*; Padmini Jaikumar (LinkedIn Corporation); Yan Gao (LinkedIn Corporation); Ajith Muralidharan (LinkedIn Corporation)
Distributed Learning of Latent Representation Social Network Graph Entities | Yiou Xiao (LinkedIn)*; Yafei Wang (LinkedIn); Matthew Walker (LinkedIn)
Understanding Posterior Collapse in Variational Autoencoders | James R Lucas (University of Toronto)*; Mohammad Norouzi (Google Brain); George Tucker (Google Brain); Roger B Grosse (University of Toronto)
Zero-Shot Transfer Learning for Query-Item Cold Start in Search Retrieval and Recommendations | Tao Wu (iotao@google.com)*; Ellie Ka-In Chio (echio@google.com); Heng-Tze Cheng (hengtze@google.com); Yu Du (cosmodu@google.com); Ritesh Agarwal (riteshag@google.com); Dima Kuzmin (dimakuzmin@google.com); Steffen Rendle (srendle@google.com); Li Zhang (liqzhang@google.com); John Anderson (janders@google.com); Sarvjeet Singh (sarvjeet@google.com); Tushar Chandra (tushar@google.com); Ed H. Chi (edchi@google.com); Wen Li (lwen@google.com); Ankit Kumar (ankitkr@google.com); Xiang Ma (xiangma@google.com); Alex Soares (alexsoares@google.com); Nitin Jindal (nitinjindal@google.com); Pei Cao (pei@google.com)
Neural Modeling for Large Corpus Item Recommendations | Xinyang Yi (Google)*; Ji Yang (Google); Zhiyuan Cheng (Google); Zhe Zhao (Google Brain); Lichan Hong (Google); Ed Chi (Google)
Collapsed amortized variational inference for switching nonlinear dynamical systems | Zhe Dong (Google)*; Bryan Seybold (Google); Kevin Murphy (Google); Hung H Bui (VinAI Research)
Semantic Coherence Analysis | Moussa Doumbouya (work done while at Apple Inc.)*; Skyler Seto (work done while at Apple Inc.); Enguerrand Horel (work done while at Apple Inc.); Luca Zappella (Apple Inc.); Xavier Suau Cuadros (Apple Inc.); Nicholas Apostoloff (Apple Inc.)
Evolving Losses for Video Representation Learning | AJ Piergiovanni (Indiana University)*; Anelia Angelova (Google); Michael S Ryoo (Google Brain; Indiana University)
Learning an Adaptive Learning Rate Schedule | Zhen Xu (Google)*; Andrew M Dai (Google Brain); Jonas Kemp (Google); Luke Metz (Google Brain)
Learning Differentiable Grammars for Videos | AJ Piergiovanni (Indiana University)*; Anelia Angelova (Google); Michael S Ryoo (Google Brain; Indiana University)
EvaNet: A Family of Diverse, Fast and Accurate Video Architectures | AJ Piergiovanni (Indiana University); Anelia Angelova (Google)*; Alexander Toshev (Google); Michael S Ryoo (Google Brain; Indiana University)
A Programming System and Automation Libraries for DNN Model Compression | Vinu Joseph (University of Utah)*; Saurav Muralidharan (NVIDIA); Animesh Garg (Stanford, Nvidia); Ganesh Gopalakrishnan (University of Utah); Michael Garland (NVIDIA)
Energy-Inspired Models | Dieterich Lawson (NYU); George Tucker (Google Brain)*; Bo Dai (Google Brain); Rajesh Ranganath (New York University)
Unlabeled Data Improves Adversarial Robustness | Yair Carmon (Stanford)*; Aditi Raghunathan (Stanford University); Ludwig Schmidt (UC Berkeley); Percy Liang (Stanford University); John Duchi (Stanford University)
Skew-Fit: State-Covering Self-Supervised Reinforcement Learning | Murtaza Dalal (UC Berkeley)*; Vitchyr H Pong (UC Berkeley); Steven Lin (UC Berkeley); Ashvin V Nair (UC Berkeley); Shikhar Bahl (UC Berkeley); Sergey Levine (UC Berkeley)
Randomized Bandit Exploration Revisited | Branislav Kveton (Google Research)*; Csaba Szepesvari (DeepMind/University of Alberta); Mohammad Ghavamzadeh (Facebook AI Research); Craig Boutilier (Google Research)
COTA: Improving the customer support experience using Deep Learning | Aditya V Guglani (Uber Technologies, Inc.)*; Huaixiu Zheng (Uber Technologies); Hugh Williams (Uber); Arun Bodapati (Uber Technologies, Inc.)
Personalization and Optimization of Decision Parameters via Heterogenous Causal Effects | Ye Tu (LinkedIn Corporation)*; Kinjal Basu (LinkedIn Corporation); Shaunak Chatterjee (LinkedIn); Jinyun Yan (LinkedIn); Birjodh Tiwana (LinkedIn)
Modeling Local Geometric Structure of 3D Point Clouds using Geo-CNN | Shiyi Lan (University of Maryland)*; Ruichi Yu (Waymo LLC); Gang Yu (Megvii Inc); Larry Davis (University of Maryland)
Variance Reduction for Matrix Games | Yair Carmon (Stanford)*; Yujia Jin (Stanford University); Aaron Sidford (Stanford); Kevin Tian (Stanford University)
Curvature: A Scalable Deep Learning Architecture for Real-Time Back-Scatter EM Sensing | Luca Rigazio (Totemic)*; Samuel Joseph (Totemic)
Online Meta-Reinforcement Learning via Gaussian Process Temporal Difference Learning | Roman Engeler (ETH Zurich); James Harrison (Stanford University)*; Apoorva Sharma (Stanford University); Emma Brunskill (Stanford University); Marco Pavone (Stanford University)
Stand-Alone Self-Attention for Vision Models | Prajit Ramachandran (Google)*; Niki Parmar (Google); Ashish Vaswani (Google Brain); Irwan Bello (Google); Anselm Levskaya (Google); Jonathon Shlens (Google)
Practical Thompson Sampling for Constrained Contextual Bandit Problems | Samuel Daulton (Facebook)*; Shaun Singh (Facebook); Drew Dimmery (Facebook); Eytan Bakshy (Facebook)
InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers | Tan Minh Nguyen (Rice University)*; Animesh Garg (Stanford, Nvidia); Richard Baraniuk (Rice University); Animashree Anandkumar (Caltech)
Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift | Yaniv Ovadia (Google Inc); Emily Fertig (Google); Jie Ren (Google Research); Zachary Nado (Google Inc.); D Sculley (Google); Sebastian Nowozin (Google Research); Joshua V Dillon (Google); Balaji Lakshminarayanan (Google DeepMind)*; Jasper Snoek (Google Brain)
Do Deep Generative Models Know What They Don't Know? | Eric Nalisnick (DeepMind); Akihiro Matsukawa (DeepMind); Yee Whye Teh (DeepMind); Dilan Gorur; Balaji Lakshminarayanan (Google DeepMind)*
Social Skill Validation at LinkedIn | Xiao Yan (LinkedIn)*; Jaewon Yang (LinkedIn Corporation); Qi He (LinkedIn)
Off-Policy Policy Gradient with Stationary Distribution Correction | Yao Liu (Stanford University)*; Alekh Agarwal (Microsoft); Adith Swaminathan (Microsoft Research); Emma Brunskill (Stanford University)
Better predictions with contextual pretraining over clinical notes | Jonas Kemp (Google)*; Alvin Rajkomar (Google); Andrew M Dai (Google Brain)
Ultra Fast Medoid Identification via Correlated Sequential Halving | Tavor Z Baharav (Stanford University)*; David Tse (Stanford University)
PEARL: Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables | Kate Rakelly (UC Berkeley)*; Aurick Zhou (UC Berkeley); Deirdre Quillen (UC Berkeley); Chelsea Finn (UC Berkeley); Sergey Levine (UC Berkeley)
The Optimal Model Design Problem | Pierre-Luc Bacon (Stanford University)*; Emma Brunskill (Stanford University)
Which Tasks Should Be Learned Together in Multi-task Learning? | Trevor S Standley (Stanford University)*; Amir Zamir (Stanford, UC Berkeley); Dawn Chen (Google); Leonidas Guibas (Stanford University); Jitendra Malik (University of California at Berkeley); Silvio Savarese (Stanford University)
Do deep neural networks train by learning shallow learnable examples first? | Karttikeya Mangalam (Stanford University)*
No More Mode Collapse | Ke Li (UC Berkeley)*; Jitendra Malik (University of California at Berkeley)


Call for Submissions


Please submit your proposals via CMT in the form of an abstract, as a 2-page PDF in the NeurIPS style, by 11:59:59 PM PDT on June 18, 2019. References may be included on a third page.
Note: submissions are not blind-reviewed, so please include authors' names and affiliations in your submission.

Acceptable material includes work that has already been submitted or published, preliminary results, and controversial findings.
We do not intend to publish proceedings; only abstracts will be shared through an online repository. Our primary goal is to foster discussion!

For examples of abstracts that have been selected in the past, please see the schedule of talks from BayLearn 2018. That page provides videos of the talks and links to the PDF abstracts for each selected talk.