Speakers

Sergey Levine / UC Berkeley

Reinforcement learning algorithms can learn effective policies automatically, making them appealing for a wide range of real-world decision-making problems, from logistics and e-commerce to autonomous driving. However, reinforcement learning is classically framed as an active learning process, where new data must be collected during training. This poses a serious constraint on practical real-world applications: in many domains where we wish to apply automated decision making via RL, it is expensive, dangerous, or otherwise impractical to actively collect data during training. Offline reinforcement learning methods offer an alternative: instead of actively collecting data during learning, they aim to learn the best possible policy from an existing, previously collected dataset. Analogously to widely used supervised methods, such algorithms can in principle use large datasets that are collected once and then reused to learn policies for a variety of tasks. In contrast to supervised learning, however, offline RL makes few assumptions about this data, using it to understand how the world works rather than how to solve one particular task. Unfortunately, offline RL introduces a range of new technical challenges that are not present in standard online RL settings, centered on the distributional shift between the learned policy and the (unknown) behavior policy that collected the data. In brief, offline RL is hard because we must learn a policy that behaves differently from (and better than) the policy that collected the data, yet it is difficult to evaluate such a policy, since we have no access to what it would have experienced if executed in the real world.
In this talk, I will discuss recent advances that make offline RL a practical tool for real-world problems, the theoretical foundations of offline RL methods, as well as several recent applications that illustrate the applicability of these methods to real-world tasks.
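As a concrete illustration of the distributional-shift problem described in the abstract, here is a minimal, self-contained sketch of offline RL on a toy tabular MDP. The MDP, hyperparameters, and the conservative penalty (in the spirit of pessimistic methods such as CQL) are all illustrative assumptions, not the specific algorithms from the talk: Bellman backups run only on logged transitions, while Q-values of state-action pairs the behavior policy never tried are pushed down so the learned policy stays within the support of the data.

```python
import numpy as np

n_states, n_actions = 4, 2
gamma, lr, penalty = 0.9, 0.5, 1.0

# Fixed, previously collected dataset of (s, a, r, s') tuples from some
# unknown behavior policy. No further environment interaction is allowed.
dataset = [
    (0, 0, 0.0, 1), (1, 0, 0.0, 2), (2, 0, 1.0, 3),
    (0, 0, 0.0, 1), (1, 0, 0.0, 2), (2, 0, 1.0, 3),
]
seen = {(s, a) for s, a, _, _ in dataset}

# Optimistic initialization: without a penalty, unseen actions would look
# attractive even though the data says nothing about them.
Q = np.ones((n_states, n_actions))
for _ in range(200):
    for s, a, r, s_next in dataset:
        # Standard Bellman backup, but only on in-dataset transitions.
        target = r + gamma * Q[s_next].max()
        Q[s, a] += lr * (target - Q[s, a])
    # Conservative regularizer: push down Q-values of (s, a) pairs the
    # behavior policy never tried, keeping the learned policy close to
    # the support of the dataset.
    for s in range(n_states):
        for a in range(n_actions):
            if (s, a) not in seen:
                Q[s, a] -= lr * penalty * max(Q[s, a], 0.0)

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the conservative Q
```

Without the penalty loop, the optimistically initialized Q-values of the never-observed actions would survive training and the greedy policy could select actions the dataset provides no evidence for; with it, the policy follows the (well-estimated) logged actions.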
Speaker Bio: Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a PhD in Computer Science from Stanford in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more. His work has been featured in many popular press outlets, including the New York Times, the BBC, MIT Technology Review, and Bloomberg Business.
Eric Horvitz / Microsoft

We are in the early days of developing principles and mechanisms for harnessing the complementary skills of people and AI systems. I will present promising directions for weaving together human and machine intellect, including methods for leveraging joint models of human and machine inferences, learning about the distinct strengths and weaknesses of people and machines, and guiding the volley of initiatives undertaken by people and AI systems.
Speaker Bio: Eric Horvitz is a technical fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. As chief scientist of the company, Dr. Horvitz provides cross-company leadership and perspectives on advances and trends in scientific matters, and on issues and opportunities arising at the intersection of technology, people, and society. He has pursued principles and applications of AI with contributions in machine learning, perception, natural language understanding, and decision making. His research centers on challenges with uses of AI amidst the complexities of the open world, including uses of probabilistic and decision-theoretic representations for reasoning and action, models of bounded rationality, and human-AI complementarity and coordination. His efforts and collaborations have led to fielded systems in healthcare, transportation, ecommerce, operating systems, and aerospace. He received the Feigenbaum Prize and the Allen Newell Prize for contributions to AI. He received the CHI Academy honor for his work at the intersection of AI and human-computer interaction. He has been elected a fellow of the National Academy of Engineering (NAE), the Association for Computing Machinery (ACM), the Association for the Advancement of AI (AAAI), the American Association for the Advancement of Science (AAAS), the American Academy of Arts and Sciences, and the American Philosophical Society. He has served as president of the AAAI, and on advisory committees for the National Science Foundation, National Institutes of Health, President's Council of Advisors on Science and Technology, DARPA, and the Allen Institute for AI. Beyond technical work, he has pursued efforts and studies on the influences of AI on people and society, including issues around ethics, law, and safety. He chairs Microsoft's Aether committee on AI, ethics, and effects in engineering and research.
He established the One Hundred Year Study on AI at Stanford University and co-founded the Partnership on AI. Eric received PhD and MD degrees at Stanford University. Previously, he served as director of Microsoft Research Labs, including research centers in Redmond, Washington; Cambridge, Massachusetts; New York, New York; Montreal, Canada; Cambridge, UK; and Bangalore, India. He also ran the Microsoft Research Lab in Redmond, Washington.
Jure Leskovec / Stanford University

Heterogeneous graphs are emerging as an abstraction to represent complex data, such as social networks, knowledge graphs, molecular graphs, and biomedical networks, as well as for modeling 3D objects, manifolds, and source code. Machine learning on graphs, especially deep representation learning, is an emerging field with a wide array of applications, from protein folding and fraud detection to drug discovery and recommender systems. In this talk I will discuss recent methodological advancements that automatically learn to encode graph structure into low-dimensional embeddings. I will also discuss industrial applications, software frameworks, benchmarks, and challenges with scaling up graph learning systems.
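To make the idea of encoding graph structure into low-dimensional embeddings concrete, here is a minimal sketch of one round of neural message passing, the core operation behind most graph neural networks. The tiny graph, random features, and untrained weights are illustrative assumptions, not the specific methods from the talk: each node aggregates its neighbors' features and passes the result through a learned transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a 4-node undirected path graph, with self-loops so
# each node also retains its own features during aggregation.
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

X = rng.normal(size=(4, 8))   # initial node features: 4 nodes, 8 dims
W = rng.normal(size=(8, 2))   # learned projection to 2-d embeddings

# One message-passing layer: mean-aggregate over neighbors
# (row-normalized adjacency), then apply a linear map and a ReLU.
A_hat = A / A.sum(axis=1, keepdims=True)
H = np.maximum(A_hat @ X @ W, 0.0)  # (4, 2) node embeddings
```

Stacking several such layers lets each node's embedding reflect a progressively larger neighborhood; the resulting low-dimensional vectors can then feed downstream tasks such as link prediction or node classification.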
Speaker Bio: Jure Leskovec is Associate Professor of Computer Science at Stanford University, Chief Scientist at Pinterest, and investigator at Chan Zuckerberg Biohub. Dr. Leskovec was a co-founder of the machine learning startup Kosei, which was later acquired by Pinterest. His research focuses on machine learning and data mining of large social, information, and biological networks. Computation over massive data is at the heart of his research and has applications in computer science, social sciences, marketing, and biomedicine. This research has won several awards, including the Lagrange Prize, a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Fellowship, and numerous best paper and test of time awards. It has also been featured in popular press outlets such as the New York Times and the Wall Street Journal. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, and his PhD in machine learning from Carnegie Mellon University, followed by postdoctoral training at Cornell University. You can follow him on Twitter at @jure.
Anima Anandkumar / Caltech, NVIDIA

AI holds immense promise in enabling scientific breakthroughs and discoveries in diverse areas. However, most of these scenarios do not fit the standard supervised learning framework. AI4science often requires zero-shot generalization to entirely new scenarios not seen during training. For instance, drug discovery requires predicting properties of new molecules that can differ vastly from the training data, and AI-based PDE solvers must handle any instance of a PDE family. Such zero-shot generalization requires infusing domain knowledge and structure. I will present recent success stories in using AI to obtain 1000x speedups in solving PDEs and quantum chemistry calculations.
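To give a flavor of what "solving any instance of a PDE family" means, here is a toy sketch that is not the method from the talk: for the linear 1D heat equation, the map from initial condition to solution at time T is itself a linear operator, so plain least squares on example (initial condition, solution) pairs can recover it from data. The learned operator then solves new, unseen initial conditions "zero-shot". All discretization choices and parameter names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu, dt, steps = 32, 0.1, 0.01, 50

# Finite-difference Laplacian on a 1D periodic grid.
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1.0

# Ground-truth solution operator: `steps` explicit Euler steps of
# the semi-discrete heat equation u' = nu * L u.
P = np.linalg.matrix_power(np.eye(n) + dt * nu * L, steps)

# Training data: random initial conditions and their time-T solutions.
U0 = rng.normal(size=(128, n))
UT = U0 @ P.T

# "Learn" the solution operator from data via least squares.
W, *_ = np.linalg.lstsq(U0, UT, rcond=None)

# Zero-shot evaluation: an initial condition never seen in training.
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0_new = np.sin(3 * x)
err = np.abs(u0_new @ W - u0_new @ P.T).max()  # learned vs. true solution
```

Real AI-based PDE solvers such as neural operators tackle the far harder nonlinear, resolution-varying case, but the principle is the same: learn the solution operator of the whole family rather than a solver for one instance.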
Speaker Bio: Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, such as an Alfred P. Sloan Fellowship, an NSF CAREER Award, young investigator awards from the DoD, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.