Convex Optimization: From embedded real-time to large-scale distributed. Convex optimization has emerged as useful tool for applications that include data analysis and model fitting, resource allocation, engineering design, network design and optimization, finance, and control and signal processing. After an overview, the talk will focus on two extremes: real-time embedded convex optimization, and distributed convex optimization. Code generation can be used to generate extremely efficient and reliable solvers for small problems, that can execute in milliseconds or microseconds, and are ideal for embedding in real-time systems. At the other extreme, we describe methods for large-scale distributed optimization, which coordinate many solvers to solve enormous problems.
Machine learning and AI via large scale brain simulations.
Stanford University
By building large-scale simulations of cortical (brain) computations, can we enable revolutionary progress in AI and machine learning? Machine learning often works very well, but it can take a great deal of work to apply, because one must spend a long time engineering the input representation (or "features") for each specific problem. This is true for machine learning applications in vision, audio, text/NLP, and other problems. To address this, researchers have recently developed "unsupervised feature learning" and "deep learning" algorithms that can automatically learn feature representations from unlabeled data, thus bypassing much of this time-consuming engineering. Many of these algorithms are developed using simple simulations of cortical (brain) computations and build on ideas such as sparse coding and deep belief networks. By doing so, they exploit large amounts of unlabeled data (which is cheap and easy to obtain) to learn a good feature representation. These methods have also surpassed the previous state of the art on a number of problems in vision, audio, and text. In this talk, I will describe some of the key ideas behind unsupervised feature learning and deep learning, and present a few of these algorithms. I will also describe some of the open theoretical problems that pertain to unsupervised feature learning and deep learning, and speculate on how large-scale brain simulations may enable us to make significant progress in machine learning and AI, especially computer perception.
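To make one of these building blocks concrete, here is a minimal sparse coding sketch in Python: an input x is represented as a sparse combination of dictionary atoms by solving min_a 0.5*||x - D a||^2 + lam*||a||_1. The ISTA iteration is an assumed, standard solver choice, the dictionary and input below are synthetic, and dictionary learning itself is omitted for brevity.

```python
# Sparse coding via ISTA (iterative soft-thresholding), a standard
# solver for the lasso-type inference step; not from the talk itself.
import numpy as np

def sparse_code(x, D, lam=0.1, steps=200):
    """Infer a sparse feature vector a such that D @ a approximates x."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)           # gradient of the smooth term
        a = a - grad / L                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))         # overcomplete dictionary
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -2.0, 0.5])  # input built from 3 atoms
a = sparse_code(x, D)
print("active features:", np.count_nonzero(np.abs(a) > 1e-3))
```

The sparse vector a plays the role of the learned feature representation; in unsupervised feature learning, the dictionary D is itself fit to unlabeled data rather than fixed in advance.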
Modeling user interests.
Google Research
In applications ranging from collaborative filtering to advertising, it is important to estimate user interest. This can be achieved by modeling how users interact with a site, by capturing how they select from sets of recommended items, and by integrating their page-view and search history. In this talk I will discuss latent variable models that can be used to capture such user interest, thereby providing long-range context between otherwise disparate sets of activity.
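As a concrete illustration, below is a minimal sketch in Python of one simple latent variable model for user interest: matrix factorization for collaborative filtering, where each user and each item gets a low-dimensional latent vector and predicted interest is their inner product. This is an assumed stand-in for the models discussed in the talk, and all data and hyperparameters are synthetic.

```python
# Matrix-factorization sketch (an assumed stand-in, not the talk's
# model): latent user vectors U and latent item vectors V, fit by
# stochastic gradient descent on observed interactions.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 5
U = 0.1 * rng.standard_normal((n_users, k))   # latent user interests
V = 0.1 * rng.standard_normal((n_items, k))   # latent item traits

# Synthetic observed interactions: (user, item, affinity) triples.
true_U = rng.standard_normal((n_users, k))
true_V = rng.standard_normal((n_items, k))
obs = [(u, i, true_U[u] @ true_V[i])
       for u in range(n_users) for i in range(n_items)
       if rng.random() < 0.3]                 # ~30% of pairs observed

lr, reg = 0.02, 0.01
for _ in range(50):
    for u, i, r in obs:
        err = U[u] @ V[i] - r                 # prediction error
        gu = err * V[i] + reg * U[u]          # gradient for the user vector
        gi = err * U[u] + reg * V[i]          # gradient for the item vector
        U[u] -= lr * gu
        V[i] -= lr * gi

# Predict interest for a user-item pair that was never observed directly.
print("predicted interest of user 0 in item 1:", U[0] @ V[1])
```

The latent vectors pool evidence across all of a user's activity, which is how such models provide context between otherwise disparate interactions.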