Tony Jebara - US grants
Affiliations: Columbia University, New York, NY
Area: Machine Learning, Vision
Website: http://www.cs.columbia.edu/~jebara

We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Tony Jebara is the likely recipient of the following grants.
2003 — 2006 | Jebara, Tony | Code: N/A (no activity code was retrieved; click on the grant title for more information) | Matching score: 1
Itr: Representation Learning: Transformations and Kernels For Collections of Tuples @ Columbia University
Statistical machine learning tools permit scientists and engineers to automatically estimate computational models directly from real-world data for making predictions, classifications and inferences. However, before learning can take place, the practitioner needs to know how to properly represent data in a consistent, invariant and well-behaved …

2004 — 2010 | Jebara, Tony | Code: N/A | Matching score: 1
@ Columbia University
This project aims to develop a tighter integration of generative (e.g., Bayesian networks) and discriminative (e.g., support vector machines) machine learning. These two types of learning are typically not integrated in current practice and are often seen as competing approaches; however, such integration is crucial in complex multi-disciplinary domains, such as vision, speech, and computational biology, where scientists have real expertise and exploitable knowledge about elaborate systems yet also need machine learning to achieve optimal performance for specific tasks. This project's integrated generative-discriminative framework will allow practitioners to flexibly design and structure a given learning problem using generative tools like Bayesian networks and then to maximize the performance of these models using discriminative methods like maximum entropy and probabilistic kernels. This framework will be used in computer vision tracking applications and in classification of gestures. In a laparoscopic robotic surgery platform, the methods will be used for classifying surgical drill movements and predicting surgeon dexterity level. The project's unified approach will be used to create a more comprehensive machine learning course experience for students, complete with online class materials, visual demonstrations and software toolkits.

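The generative-discriminative contrast described in this abstract can be made concrete with a toy example. The NumPy sketch below is illustrative only (the data, model choices, and training details are assumptions, not the project's methods): it fits a generative classifier — class-conditional Gaussians combined by Bayes' rule — and a discriminative logistic regression to the same one-dimensional data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: two Gaussian classes with means -1 and +1.
x0 = rng.normal(-1.0, 1.0, 200)                 # class 0 samples
x1 = rng.normal(+1.0, 1.0, 200)                 # class 1 samples
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Generative route: model p(x | y) with class-conditional Gaussians,
# then classify via Bayes' rule (equal class priors assumed).
mu0, v0 = x0.mean(), x0.var()
mu1, v1 = x1.mean(), x1.var()

def gaussian_loglik(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

def gen_predict(x):
    return (gaussian_loglik(x, mu1, v1) > gaussian_loglik(x, mu0, v0)).astype(float)

# Discriminative route: model p(y | x) directly with logistic regression,
# trained by gradient ascent on the conditional log-likelihood.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w += 0.1 * np.mean((y - p) * X)
    b += 0.1 * np.mean(y - p)

def disc_predict(x):
    return (w * x + b > 0).astype(float)

print("generative accuracy:    ", (gen_predict(X) == y).mean())
print("discriminative accuracy:", (disc_predict(X) == y).mean())
```

On data this simple both routes land near the Bayes-optimal boundary; the integration the project pursues matters precisely when the generative structure is rich and the discriminative objective is what the task rewards.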
2011 — 2014 | Chudnovsky, Maria (co-PI); Jebara, Tony | Code: N/A | Matching score: 1
Ri: Small: Learning and Inference With Perfect Graphs @ Columbia University
This proposal investigates new computational and combinatorial tools for learning from data and performing inference about data. As such, it serves as a bridge between the machine learning community and the combinatorics community. The latter field develops efficient algorithms or shows when an efficient solution to a particular problem exists. The project will support graduate students in the intersection of these two fields to combine recent theoretical results with practical data problems. The team will produce a website of tutorials, data sets, downloadable code, interactive visual examples and course materials to allow better integration across the two fields. The combinatorics community has identified a family of inputs called perfect graphs where otherwise hard problems are efficiently solvable. This proposal will investigate how to bridge this powerful theoretical result to the area of machine learning and formally characterize which learning and inference problems are efficiently solvable.

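One concrete instance of the tractability the abstract alludes to: bipartite graphs are a classic family of perfect graphs, and on them the maximum independent set problem — NP-hard on general graphs — reduces by König's theorem to maximum matching, |max independent set| = n − |maximum matching|. The sketch below is an illustration of that fact, not code from the project.

```python
def max_matching(left, right, edges):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.

    `edges` maps each left vertex to its list of right neighbours.
    Runs in polynomial time, O(V * E).
    """
    match = {v: None for v in right}    # right vertex -> matched left vertex

    def augment(u, seen):
        for v in edges.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its partner can be rematched elsewhere
            if match[v] is None or augment(match[v], seen):
                match[v] = u
                return True
        return False

    size = 0
    for u in left:
        if augment(u, set()):
            size += 1
    return size

# A 6-cycle: bipartite, hence perfect.
left, right = [0, 2, 4], [1, 3, 5]
edges = {0: [1, 5], 2: [1, 3], 4: [3, 5]}
m = max_matching(left, right, edges)
print(6 - m)  # size of a maximum independent set, e.g. {0, 2, 4}
```

For the 6-cycle the matching has size 3, so the maximum independent set has size 6 − 3 = 3, recoverable in polynomial time even though the same question is intractable on arbitrary graphs.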
2013 — 2016 | Kaiser, Gail (co-PI); Jebara, Tony; Sethumadhavan, L | Code: N/A | Matching score: 1
@ Columbia University
Workload characterization is central to the development of new computer architectures. The rise of the mobile-cloud paradigm has increased the diversity and rate at which applications are created, challenging computer architects' ability to build optimized systems for them. In the past, when releases were few and far between, architects could examine software codes of interest (often through slow, laborious manual inspection) to derive the intuition necessary to make architectural and microarchitectural discoveries. But this method does not scale to emerging applications that are turned out by the hundreds every day. Further, new languages and platforms have behaviors that are quite different from legacy codes, and there is an urgent need for intuition about these applications. Without new methods to characterize emerging workloads, computer architects risk running into an intuition wall. This risk could prove calamitous if unmitigated, given the added reliance on (micro)architects to develop more energy-efficient designs to compensate for the losses due to slowdowns in Dennard scaling.

2014 — 2016 | Jebara, Tony | Code: N/A | Matching score: 1
Eager: New Optimization Methods For Machine Learning @ Columbia University
This proposal explores the optimization of complicated nonlinear equations that underlie machine learning problems by reducing them to simpler easy-to-solve update rules. The learning problems include classification, regression, unsupervised learning and more. Through a method known as majorization, complicated optimization problems are handled by iteratively solving simpler problems like least-squares and traditional linear algebra operations. The proposal focuses on how to parallelize this method so that it can efficiently leverage many CPUs/GPUs simultaneously and in a distributed manner. Furthermore, by making the method stochastic, faster convergence on large or streaming datasets becomes possible. Other variations are explored, such as sparse learning, where the recovered solution is forced to be compact, which also leads to further efficiency.

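The majorization idea described above — iteratively replacing a hard objective with a simpler bound touched at the current iterate — can be sketched on a toy problem. The example below is an illustration under stated assumptions, not the proposal's algorithm: it minimizes the non-smooth objective Σ_i |x − a_i| by repeatedly minimizing a quadratic upper bound, so each step becomes a weighted least-squares update with a closed form.

```python
import numpy as np

def mm_median(a, iters=200, eps=1e-9):
    """Majorize-minimize for min_x sum_i |x - a_i| (solved by a median).

    At the current iterate x_t, each |x - a_i| is upper-bounded by the
    quadratic (x - a_i)**2 / (2|x_t - a_i|) + |x_t - a_i| / 2, which
    touches it at x_t.  Minimizing the sum of these bounds is a weighted
    least-squares problem whose closed-form solution is a weighted mean,
    so every iteration monotonically decreases the original objective.
    """
    a = np.asarray(a, dtype=float)
    x = a.mean()                                  # any starting point works
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - a), eps)  # bound curvatures
        x = np.sum(w * a) / np.sum(w)             # minimize the majorizer
    return x

print(mm_median([1.0, 2.0, 3.0, 4.0, 100.0]))    # converges near the median, 3
```

The same pattern — bound, solve the easy surrogate, repeat — is what makes majorization amenable to the parallel and stochastic variants the proposal targets: each bound-minimization is just linear algebra.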
2015 — 2018 | Jebara, Tony | Code: N/A | Matching score: 1
Iii: Small: Collaborative Research: Approximate Learning and Inference in Graphical Models @ Columbia University
This project is designing efficient methods for approximate learning and prediction in graphical models. In a typical setting, the parameters of an unknown graphical model are to be estimated from data observations. Once learned, the parameters are often used to make predictions about unseen data. The learning problem can be solved by estimating the parameters of the model that generate the observed data with the highest probability (a process known as maximum likelihood estimation), and the prediction task is typically performed by a statistical inference method. As exact learning and prediction are computationally intractable, in practice, we seek to replace the maximum likelihood estimation and prediction tasks with more tractable surrogates. This project is developing such surrogates that (a) can be leveraged for learning in large, real-world graphical models with hidden variables, (b) are orders of magnitude faster than the current state-of-the-art methods, and (c) provide a rigorous alternative to the more heuristic methods that are often employed at scale.

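The tension this abstract describes — exact maximum likelihood requires an intractable normalization, so tractable surrogates are substituted — can be seen on a tiny Ising model. The sketch below is illustrative only (the project's actual surrogates are not specified here): it computes the exact average log-likelihood by brute-force enumeration, which scales as 2^n, alongside pseudo-likelihood, a standard tractable surrogate built from node-wise conditionals.

```python
import itertools
import numpy as np

def exact_avg_loglik(W, X):
    """Exact average log-likelihood of an Ising model p(x) ∝ exp(x^T W x / 2)
    over ±1 vectors (W symmetric, zero diagonal).

    The partition function is computed by enumerating all 2^n states, so
    this is feasible only for tiny n — the reason surrogates are needed
    at scale.
    """
    n = W.shape[0]
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    log_weights = 0.5 * np.einsum('si,ij,sj->s', states, W, states)
    log_Z = np.log(np.exp(log_weights).sum())
    data_scores = 0.5 * np.einsum('si,ij,sj->s', X, W, X)
    return data_scores.mean() - log_Z

def pseudo_avg_loglik(W, X):
    """Pseudo-likelihood surrogate: average of sum_i log p(x_i | x_-i).

    Each conditional of an Ising model is a logistic function of the
    node's local field, so no sum over 2^n states is required.
    """
    fields = X @ W                          # local field at each node
    # p(x_i | x_-i) = sigmoid(2 * x_i * field_i) for ±1 variables
    return np.mean(np.sum(-np.log1p(np.exp(-2.0 * X * fields)), axis=1))

X = np.array([[1.0, 1.0, -1.0], [-1.0, 1.0, 1.0]])   # two toy observations
W = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
print(exact_avg_loglik(W, X), pseudo_avg_loglik(W, X))
```

With no interactions (W = 0) the two quantities coincide at −n log 2; as couplings grow they diverge, and the surrogate's appeal is that its cost stays polynomial while the exact computation blows up exponentially in n.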