2017 — 2022
Singer, Yoram; Hazan, Elad (co-PI); Arora, Sanjeev
AF: Large: Collaborative Research: Nonconvex Methods and Models For Learning: Toward Algorithms With Provable and Interpretable Guarantees
Artificial intelligence and machine learning are perhaps the dominant research themes of our time, with far-reaching implications for society and everyday life. While the possibilities are many, there are also doubts about how far these methods can go and what new theoretical foundations may be required to take them to the next level and overcome possible hurdles. Machine learning has recently undergone a paradigm shift, with increasing reliance on stochastic optimization to train highly nonconvex models, including but not limited to deep nets. Theoretical understanding has lagged behind, primarily because most of the problems in question are provably intractable on worst-case instances. Furthermore, traditional machine learning theory is mostly concerned with classification, whereas much practical success is driven by unsupervised learning and representation learning. Most past theory of representation learning focused on simple models such as k-means clustering and PCA, whereas practical work uses vastly more complicated models such as autoencoders, restricted Boltzmann machines, and deep generative models. The proposal presents an ambitious agenda for extending theory to embrace and support these practical trends, in the hope of influencing practice. Theoretical foundations will be provided for the next generation of machine learning methods and optimization algorithms.
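To make the training regime described above concrete, here is a minimal sketch, assuming a tiny one-hidden-layer network on synthetic data (the model, data, and hyperparameters are illustrative assumptions, not taken from the proposal): minibatch stochastic gradient descent on a highly nonconvex objective, with gradients backpropagated by hand.

# A minimal sketch of SGD on a nonconvex objective: a one-hidden-layer
# network fit to synthetic regression data. Everything here is an
# illustrative assumption, not the proposal's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y depends nonlinearly on x.
X = rng.normal(size=(512, 2))
y = np.tanh(X[:, 0]) * X[:, 1]

# One hidden layer of width 16; the loss surface in (W1, w2) is nonconvex.
W1 = rng.normal(scale=0.5, size=(2, 16))
w2 = rng.normal(scale=0.5, size=16)
lr, batch = 0.05, 32

for step in range(2000):
    idx = rng.integers(0, len(X), size=batch)   # sample a minibatch
    Xb, yb = X[idx], y[idx]
    H = np.tanh(Xb @ W1)                        # hidden activations
    pred = H @ w2
    err = pred - yb                             # residuals
    # Backpropagate the mean-squared-error loss by hand.
    grad_w2 = H.T @ err / batch
    grad_H = np.outer(err, w2) * (1 - H**2)     # tanh' = 1 - tanh^2
    grad_W1 = Xb.T @ grad_H / batch
    W1 -= lr * grad_W1
    w2 -= lr * grad_w2

print("final minibatch MSE:", float(np.mean(err**2)))

Even this toy objective is nonconvex in (W1, w2), yet plain stochastic first-order updates reliably drive the loss down; explaining why such methods succeed despite worst-case intractability is exactly the gap the abstract points to.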
The project may have significant impact on practical machine learning, and may even bring about a cultural change in the field, in theory as well as practice, with long-term ramifications. Given the ubiquity of machine learning today, along with its economic and scientific implications, such impact will extend into other disciplines, especially through ongoing collaborations with researchers in neuroscience. The project will train a new generation of machine learning researchers through an active teaching and mentoring plan at all levels, from undergraduate to postdoctoral. This new generation will be at ease combining cutting-edge theory with applications. There is a pressing need for such researchers today, and the senior PIs have played a role in training and mentoring several of the existing ones. Technical contributions will include new theoretical models of knowledge representation and semantics, as well as frameworks for proving convergence of nonconvex optimization routines. Theory will be developed to explain and exploit the interplay between representation learning and supervised learning that has proved so empirically successful in deep learning and seems to underlie new learning paradigms such as domain adaptation, transfer learning, and interactive learning. Attempts will be made to replace neural models with models that have more "interpretable" attributes and performance curves. All PIs have a track record of combining theory with practice. They are also devoted to a heterodox research approach that borrows from all the past phases of machine learning: interpretable representations from the earlier phases (which relied on logical representations or probabilistic models), provable guarantees from the middle phase (convex optimization, kernels, etc.), and an embrace of nonconvex methods from the latest deep-net phase. Such eclecticism is uncommon in machine learning, and may give rise to new paradigms and new kinds of science.
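The representation-learning/supervised-learning interplay mentioned above can be illustrated with a minimal sketch, assuming plain PCA as the unsupervised stage and logistic regression as the supervised stage; both are deliberately simple stand-ins (the proposal targets far richer models such as autoencoders), and all data and names below are hypothetical.

# A minimal sketch of the two-stage pattern: learn a representation
# without labels (PCA via SVD), then fit a supervised model on the
# learned features. Illustrative assumption, not the proposal's method.
import numpy as np

rng = np.random.default_rng(1)

# A large unlabeled pool and a smaller labeled set from the same distribution.
X_unlabeled = rng.normal(size=(1000, 20))
X_labeled = rng.normal(size=(100, 20))
y = (X_labeled[:, :5].sum(axis=1) > 0).astype(float)  # synthetic labels

# Representation learning: top-k principal directions of the unlabeled data.
k = 5
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)

def encode(A):
    return (A - mean) @ Vt[:k].T  # project onto the learned subspace

# Supervised learning on the learned features: logistic regression via GD.
Z = encode(X_labeled)
w = np.zeros(k)
for _ in range(500):
    p = 1 / (1 + np.exp(-Z @ w))        # predicted probabilities
    w -= 0.1 * Z.T @ (p - y) / len(y)   # gradient of the logistic loss

acc = np.mean(((1 / (1 + np.exp(-Z @ w))) > 0.5) == y)
print("training accuracy on learned features:", acc)

The same two-stage pattern underlies transfer learning: the encoder is fit once on plentiful unlabeled data and then reused across downstream labeled tasks.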