| SPEAKER | BIO | TITLE & ABSTRACT |
| --- | --- | --- |
| Dr. Timnit Gebru | Timnit Gebru co-leads the Ethical Artificial Intelligence research team at Google, working to reduce the potential negative impacts of AI. Timnit earned her doctorate under the supervision of Fei-Fei Li at Stanford University in 2017 and completed a postdoc at Microsoft Research NYC on the FATE team. She is also the cofounder of Black in AI, a place for sharing ideas, fostering collaborations, and discussing initiatives to increase the presence of Black people in the field of Artificial Intelligence. | **The hierarchy of knowledge in machine learning and related fields and its consequences.** Feminist and race and gender scholars have long critiqued "the view from nowhere" that assumes science is "objective" and studied from no particular standpoint. In this talk, I discuss how this view has resulted in a hierarchy of knowledge in machine learning and related fields, devaluing some types of work and knowledge (e.g., those related to data production, annotation, and collection practices) and mostly amplifying specific types of contributions. This hierarchy also results in valuing contributions from some disciplines (e.g., physics) more than others (e.g., race and gender studies). With examples from my own life, education, and current work, I discuss how this knowledge hierarchy limits the field and potential ways forward. |
| Prof. Susan Athey | Susan Athey is the Economics of Technology Professor at Stanford Graduate School of Business. She received her bachelor's degree from Duke University and her PhD from Stanford, and she holds an honorary doctorate from Duke University. She previously taught in the economics departments at MIT, Stanford, and Harvard. Her current research focuses on the economics of digitization, marketplace design, and the intersection of econometrics and machine learning. She has worked on several application areas, including timber auctions, internet search, online advertising, the news media, and the application of digital technology to social impact applications. As one of the first "tech economists," she served as consulting chief economist for Microsoft Corporation for six years, and now serves on the boards of Expedia, Lending Club, Rover, Turo, and Ripple, as well as the non-profit Innovations for Poverty Action. She also serves as a long-term advisor to the British Columbia Ministry of Forests, helping architect and implement their auction-based pricing system. She is the director of the Shared Prosperity and Innovation Initiative at Stanford GSB and associate director of the Stanford Institute for Human-Centered Artificial Intelligence. | **Causal Inference: Topics in the Design and Analysis of Experiments: Surrogates for Long-Term Outcomes and Staggered Rollouts.** This talk will review two papers motivated by challenges that commonly arise in the design and analysis of experiments in environments such as those encountered in tech firms. In the first paper, we look at the use of "surrogates" to evaluate experiments, for example when the experiment would ideally evaluate the impact of a treatment on a long-term outcome, or on one that is difficult to observe or expensive to gather. The question is how best to use short-term outcomes that are easier to observe to evaluate the intervention. The paper formalizes the assumptions required to build a surrogate index and develops an approach to bound the bias in the realistic case where some of the assumptions may not hold. We illustrate how one might decide how long to wait before gathering additional short-term data yields no further benefit, when the goal is to estimate treatment effects on long-term outcomes. (An illustrative sketch of the surrogate-index idea appears after this table.) The second paper is tailored to the case where there are a small number of units, such as cities. Even though firms can design experiments that apply at the individual level, there may be spillovers or equilibrium effects, so that it is necessary to run experiments at the level of a market or a city. Statistical power then becomes a major challenge, particularly since different markets are likely to have different time trends, e.g., different responses to holidays, weather, etc. The paper develops formal results for the optimal design of staggered rollout experiments in this context. |
| Prof. Sandrine Dudoit | Sandrine Dudoit is Professor and Chair of the Department of Statistics and Professor in the Division of Biostatistics, School of Public Health, at the University of California, Berkeley. Professor Dudoit's methodological research interests concern high-dimensional inference and include exploratory data analysis (EDA), visualization, loss-based estimation with cross-validation (e.g., density estimation, classification, regression, model selection), and multiple hypothesis testing. Much of her methodological work is motivated by statistical inference questions arising in biological research and, in particular, the design and analysis of high-throughput microarray and sequencing gene expression experiments, e.g., single-cell transcriptome sequencing (RNA-Seq) for discovering novel cell types and for the study of stem cell differentiation. Her contributions include: exploratory data analysis, normalization and expression quantitation, differential expression analysis, class discovery, prediction, inference of cell lineages, and integration of biological annotation metadata (e.g., Gene Ontology (GO) annotation). She is also interested in statistical computing and, in particular, reproducible research. She is a founding core developer of the Bioconductor Project (http://www.bioconductor.org), an open-source and open-development software project for the analysis of biomedical and genomic data. Professor Dudoit is a co-author of the book Multiple Testing Procedures with Applications to Genomics and a co-editor of the book Bioinformatics and Computational Biology Solutions Using R and Bioconductor. She is Associate Editor of three journals, including The Annals of Applied Statistics and IEEE/ACM Transactions on Computational Biology and Bioinformatics. Professor Dudoit was named Fellow of the American Statistical Association in 2010 and Elected Member of the International Statistical Institute in 2014. Professor Dudoit obtained a Bachelor's degree (1992) and a Master's degree (1994) in Mathematics from Carleton University, Ottawa, Canada. She first came to UC Berkeley as a graduate student and earned a PhD degree in 1999 from the Department of Statistics. Her doctoral research, under the supervision of Professor Terence P. Speed, concerned the linkage analysis of complex human traits. From 1999 to 2000, she was a postdoctoral fellow at the Mathematical Sciences Research Institute, Berkeley. Before joining the faculty at UC Berkeley in July 2001, she completed two years of postdoctoral training in genomics in the laboratory of Professor Patrick O. Brown, Department of Biochemistry, Stanford University. Her work in the Brown Lab involved the development and application of statistical methods and software for the analysis of microarray gene expression data. | **Learning from Data in Single-Cell Transcriptomics.** I will discuss statistical methods and software for the analysis of single-cell transcriptome sequencing (RNA-Seq) data to investigate the differentiation of olfactory stem cells. RNA-Seq studies provide a great example of the range of questions one encounters in a Data Science workflow. I will survey the methods and software my group has developed for exploratory data analysis (EDA), dimensionality reduction, normalization, expression quantitation, cluster analysis, and the inference of cellular lineages. Our methods are implemented in open-source R software packages released through the Bioconductor Project (https://www.bioconductor.org). (An illustrative workflow sketch covering these stages appears after this table.) |
| Prof. Chelsea Finn | Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Finn's research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has included deep learning algorithms for concurrently learning visual perception and control in robotic manipulation skills, inverse reinforcement learning methods for scalable acquisition of nonlinear reward functions, and meta-learning algorithms that can enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley. Her research has been recognized through the ACM Doctoral Dissertation Award, the Microsoft Research Faculty Fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students and a mentoring program for underrepresented undergraduates across four universities, and by leading efforts within the WiML and Berkeley WiCSE communities of women researchers. | **Meta-Learning for Robustness to the Changing World.** Machine learning systems are often designed under the assumption that they will be deployed as a static model in a single, static region of the world. However, the world is constantly changing, such that the future no longer looks exactly like the past, and even in relatively static settings the system may be deployed in new, unseen parts of its world. While such continual shifts in the data distribution pose major challenges for learned models, the model need not be static either: it can and should adapt. In this talk, I'll discuss how we can allow deep networks to be robust to such distribution shift via adaptation. I will focus on meta-learning algorithms that enable this adaptation to be fast, first introducing the concept of meta-learning, then briefly overviewing several successful applications of meta-learning ranging from robotics to drug design, and finally discussing several recent works at the frontier of meta-learning research. (A minimal meta-learning sketch appears after this table.) |
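
The surrogate-index idea from Prof. Athey's first paper can be illustrated with a short sketch: fit a model of the long-term outcome on the short-term surrogates using historical data where both are observed, then use that model to impute long-term outcomes for the experimental units and compare treatment and control means. The sketch below is a minimal illustration in Python with scikit-learn, not the paper's implementation; the dataframes, column names, and choice of regressor are hypothetical placeholders, and the estimate is only meaningful under the surrogacy and comparability assumptions the abstract says the paper formalizes.

```python
# A minimal sketch of the surrogate-index idea: learn
# E[long-term outcome | short-term surrogates] on historical data, then
# impute long-term outcomes for experimental units. All names below
# (historical_df, exp_df, column names) are hypothetical placeholders.
from sklearn.ensemble import GradientBoostingRegressor

def surrogate_index_effect(historical_df, exp_df, surrogate_cols,
                           long_term_col="y_long", treat_col="treated"):
    """Estimate a treatment effect on a long-term outcome using only
    short-term surrogates measured in the experiment.

    historical_df: past (non-experimental) data in which both the
        surrogates and the long-term outcome are observed.
    exp_df: experimental data with surrogates and a treatment indicator.
    """
    # 1. Fit the surrogate index: a prediction of the long-term outcome
    #    from the short-term surrogates.
    index_model = GradientBoostingRegressor()
    index_model.fit(historical_df[surrogate_cols],
                    historical_df[long_term_col])

    # 2. Impute the long-term outcome for every experimental unit.
    y_hat = index_model.predict(exp_df[surrogate_cols])

    # 3. The difference in mean imputed outcomes between treated and
    #    control units estimates the long-term treatment effect (only
    #    under the surrogacy/comparability assumptions of the talk).
    treated = exp_df[treat_col].to_numpy(dtype=bool)
    return y_hat[treated].mean() - y_hat[~treated].mean()
```

The paper's further contribution, bounding the bias when those assumptions partially fail, is not captured by this sketch.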
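
For Prof. Dudoit's talk, the stages of a single-cell RNA-Seq analysis named in the abstract (quality control/EDA, normalization, dimensionality reduction, clustering, lineage inference) can be illustrated with an analogous Python sketch using the scanpy package. This is purely a stand-in for exposition: the speaker's own methods are the R/Bioconductor packages cited above, and the input file and parameter values below are hypothetical.

```python
# An analogous single-cell RNA-Seq workflow sketched with the Python
# package scanpy, only to illustrate the pipeline stages named in the
# abstract; it is not the speaker's Bioconductor toolchain. The input
# file name and all parameter values are hypothetical.
import scanpy as sc

adata = sc.read_h5ad("olfactory_cells.h5ad")  # hypothetical AnnData input

# Quality control / EDA: drop low-quality cells and rarely detected genes.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalization: library-size scaling followed by a log transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction on highly variable genes.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata, n_comps=50)

# Cluster analysis on a nearest-neighbor graph of the reduced data.
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)

# A simple diffusion-pseudotime ordering as a stand-in for lineage inference.
adata.uns["iroot"] = 0  # index of a putative stem/progenitor cell
sc.tl.diffmap(adata)
sc.tl.dpt(adata)
```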
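
Prof. Finn's abstract centers on meta-learning algorithms that make adaptation fast. Below is a minimal sketch of a MAML-style inner/outer loop on a toy sine-regression task distribution, written in PyTorch; the tiny network, step sizes, and task family are illustrative assumptions rather than details from the talk.

```python
# A minimal MAML-style inner/outer loop on a toy sine-regression task
# distribution. Architecture, learning rates, and the task family are
# illustrative assumptions, not details from the talk.
import torch
import torch.nn.functional as F

def init_params():
    # Two-layer regression network kept as explicit tensors so that the
    # task-adapted ("inner") parameters can stay on the autograd graph.
    shapes = [(1, 40), (40,), (40, 1), (1,)]
    return [(0.1 * torch.randn(*s)).requires_grad_() for s in shapes]

def forward(params, x):
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def sample_task():
    # A task is a random sine wave y = a * sin(x + b).
    a = 0.1 + 4.9 * torch.rand(1)
    b = 3.1416 * torch.rand(1)
    def draw(n):
        x = 10 * torch.rand(n, 1) - 5
        return x, a * torch.sin(x + b)
    return draw

params = init_params()
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(2000):
    meta_loss = 0.0
    for _ in range(4):  # meta-batch of tasks
        draw = sample_task()
        (x_s, y_s), (x_q, y_q) = draw(10), draw(10)  # support / query sets
        # Inner loop: one gradient step adapted to this task's support set.
        grads = torch.autograd.grad(F.mse_loss(forward(params, x_s), y_s),
                                    params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: post-adaptation loss on held-out query data.
        meta_loss = meta_loss + F.mse_loss(forward(adapted, x_q), y_q)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The outer update differentiates through the inner gradient step (`create_graph=True`), so the meta-learned initialization is explicitly trained to adapt well from a handful of examples.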