Trevor Darrell, Ph.D. - US grants
Affiliations: Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, United States
Area: Computer vision
Website: http://www.eecs.berkeley.edu/~trevor/
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Trevor Darrell is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2004 — 2005 | Darrell, Trevor | N/A (no activity code was retrieved) |
@ Massachusetts Institute of Technology This is funding to support attendance by approximately 15 graduate students in a doctoral consortium (workshop) held in conjunction with the Sixth International Conference on Multimodal Interfaces (ICMI'04), October 13-15, 2004 in State College, PA, sponsored by the Association for Computing Machinery (ACM). The 3-day conference will bring together researchers from academia and industry from around the world to present and discuss the latest multi-disciplinary work on multimodal interfaces, systems, and applications. The conference represents the growing interest in next-generation perceptive, adaptive and multimodal user interfaces. These new interfaces are especially well suited for interpreting natural communication and activity patterns in real-world environments; their emergence represents a radical departure from previous computing, and is rapidly transforming the nature of human-computer interaction by creating more natural, expressively powerful, flexible and robust means of interacting with computers. Participants in the doctoral consortium will receive feedback from an invited committee of 3-5 senior personnel on posters that reflect work in progress not yet mature enough for a full paper at ICMI. |
0.901 |
2005 — 2006 | Darrell, Trevor | N/A |
Student Participant Support For International Conference On Multimodal Interfaces 2005 @ Massachusetts Institute of Technology This is funding to support attendance by approximately 15 graduate students in a doctoral consortium (workshop) to be held in conjunction with the Seventh International Conference on Multimodal Interfaces (ICMI'05), to be held in October 2005 in Trento, Italy. The 3-day conference will bring together researchers from academia and industry from around the world to present and discuss the latest multi-disciplinary work on multimodal interfaces, systems, and applications. The conference represents the growing interest in next-generation perceptive, adaptive and multimodal user interfaces. These new interfaces are especially well suited for interpreting natural communication and activity patterns in real-world environments; their emergence represents a radical departure from previous computing, and is rapidly transforming the nature of human-computer interaction by creating more natural, expressively powerful, flexible and robust means of interacting with computers. Participants in the doctoral consortium will receive feedback from an invited committee of 3-5 senior personnel on posters that reflect work in progress not yet mature enough for a full paper at ICMI. |
0.901 |
2006 — 2007 | Darrell, Trevor | N/A |
Student Participant Support For ICMI 2006 @ Massachusetts Institute of Technology This is funding to support attendance by approximately 15 graduate students in a doctoral consortium (workshop) to be held in conjunction with the Eighth International Conference on Multimodal Interfaces (ICMI'06), to be held November 2-4, 2006 in Banff, Canada, and sponsored by the Association for Computing Machinery (ACM). ICMI is the foremost conference representing the growing interest in next-generation perceptive, adaptive and multimodal user interfaces, systems, and applications, which are especially well-suited for interpreting natural communication and activity patterns in real-world environments; their emergence represents a radical departure from previous computing, and is rapidly transforming the nature of human-computer interaction by creating more natural, expressively powerful, flexible and robust means of interacting with computers. The 3-day conference will bring together researchers from academia and industry from around the world to present and discuss the latest multi-disciplinary work in the field. The theme of this year's conference is multimodal collaboration through different platforms and applications; the conference will focus on major trends and challenges in this area, including distilling a roadmap for future research and commercial success. Participants in the doctoral consortium will receive feedback from an invited committee of half a dozen senior personnel on posters that reflect work in progress not yet mature enough for a full paper at ICMI; they will also get to present their work orally to the conference as a whole during a doctoral spotlight session highlighting top student research. |
0.901 |
2007 — 2008 | Darrell, Trevor | N/A |
@ Massachusetts Institute of Technology This is funding to support attendance by approximately 10 graduate students in a doctoral consortium (workshop) to be held in conjunction with the Ninth International Conference on Multimodal Interfaces (ICMI07), which will take place November 12-15, 2007, in Nagoya, Japan, and is sponsored by the Association for Computing Machinery (ACM). ICMI is the foremost conference representing the growing interest in next-generation perceptive, adaptive and multimodal user interfaces, systems, and applications, which are especially well-suited for interpreting natural communication and activity patterns in real-world environments. The emergence of these new interfaces, systems and applications represents a radical departure from previous computing, and is rapidly transforming the nature of human-computer interaction by creating more natural, expressively powerful, flexible and robust means of interacting with computers. The theme of this year's conference is once again multimodal collaboration through different platforms and applications. The conference will focus on major trends and challenges in this area, including distilling the development of a roadmap for future research and commercial success. New topics of interest this year include multimodal applications in the vehicular environment, human-robot interfaces, and interfaces for music and amusements. The 4-day event will bring together researchers from academia and industry from around the world to present and discuss the latest multi-disciplinary work in the field. The invited talks, panels, single-track oral and poster presentations will facilitate interaction and discussion among researchers. 
Participants in the doctoral consortium will get to showcase their ongoing thesis work, either orally or via posters, in a special "spotlight session" during which they will receive feedback from an invited committee composed of approximately half a dozen senior personnel (including the conference General and Program Chairs). As in previous years, students funded under this award will all be U.S. residents enrolled at U.S. institutions of higher education. Additional information about the ICMI07 conference is available at http://www.acm.org/icmi/2007. |
0.901 |
2007 — 2012 | Peters, Stanley; Darrell, Trevor | N/A |
HRI: Perceptually Situated Human-Robot Dialog Models @ Massachusetts Institute of Technology Humans naturally use dialog and gestures to discuss complex phenomena and plans, especially when they refer to physical aspects of the environment while they communicate with each other. Existing robot vision systems can sense people and the environment, but are limited in their ability to detect the detailed conversational cues people often rely upon (such as head pose, eye gaze, and body gestures), and to exploit those cues in multimodal conversational dialog. Recent advances in computer vision have made it possible to track such detailed cues. Robots can use passive measures to sense the presence of people, estimate their focus of attention and body pose, and recognize human gestures and identify physical references. But they have had limited means of integrating such information into models of natural language; heretofore, they have used dialog models for specific domains and/or were limited to one-on-one interaction. Separately, recent advances in natural language processing have led to dialog models that can track relatively free-form conversation among multiple participants, and extract meaningful semantics about people's intentions and actions. These multi-party dialog models have been used in meeting environments and other domains. In this project, the PI and his team will fuse these two lines of research to achieve a perceptually situated, natural conversation model that robots can use to interact multimodally with people. They will develop a reasonably generic dialog model that allows a situated agent to track the dialog around it, know when it is being addressed, and take direction from a human operator regarding where it should find or place various objects, what it should look for in the environment, and which individuals it should attend to, follow, or obey. 
Project outcomes will extend existing dialog management techniques to a more general theory of interaction management, and will also extend current state-of-the-art vision research to be able to recognize the subtleties of nonverbal conversational cues, as well as methods for integrating those cues with ongoing dialog interpretation and interaction with the world. |
0.901 |
2011 — 2012 | Darrell, Trevor | N/A |
Support For Workshop On Advances in Language and Vision @ University of California-Berkeley This project supports travel expenses for participants at the workshop on advances in language and vision. In the past few years, great progress has been made in the fields of language and computer vision in developing technologies for extracting semantic content from text and imagery respectively. Each field seeks to adapt methods from the other, but often looks to past literature rather than the current state of the art. This workshop makes significant scientific progress in multimodal representations and methods by bringing together the top researchers in both fields. The well-organized brainstorming and discussion sessions contribute new ideas to this emerging area. The outcome of the workshop provides some guidelines for targeted research in this interdisciplinary area, including anticipated fundamental scientific advances, possible large-scale challenge problems, the needs and prospects for available datasets, and connections to significant applications and their associated long-term economic impact and other societal benefits. |
1 |
2011 — 2015 | Darrell, Trevor | N/A |
RI: Small: Hierarchical Probabilistic Layers For Visual Recognition of Complex Objects @ University of California-Berkeley Learning visual representations remains a challenge for systems which interact with the real world and/or analyze visual information available on the web. Significant progress has been made with simple visual features based on gradient histograms: these models work extremely well on objects that have highly textured and nearly planar patterns or parts. However, these systems suffer when faced with certain classes of real world objects that do not have discriminative locally-planar opaque texture patches, especially objects with complex photometric models. This project develops layered visual models for visual recognition, which can model these classes of phenomena. The research team grounds the methods in a probabilistic foundation, primarily exploiting a sparse Bayesian approach to factoring observed image features into a set of component layers corresponding to an additive image formation process. Considering both local descriptor and local feature detector variants of the model, the research team offers a new concept for interest point detection in the case of transparent objects: extrema detection in a latent-factor scale space. This model has the potential to find invariant local detections despite transparency, and could be useful in a range of vision applications beyond pure recognition for which sparse local feature detectors have proven valuable (e.g., registration, mosaicing, SLAM). Robotic vision systems can use this representation for enhanced recognition of everyday objects, supporting domestic and industrial applications. These representations also facilitate intelligent media processing and indexing. |
1 |
2012 — 2017 | Darrell, Trevor; Malik, Jitendra (co-PI) | N/A |
@ University of California-Berkeley This project is creating a novel paradigm for computer vision, termed "reconstructive recognition", that incorporates the strongest elements of previous machine learning-based recognition efforts and the strongest elements of previous reconstruction efforts based on radiometric reasoning. The goal is to provide a new foundation for machine perception, and the potential for a transformative advance in applications of computer vision. The project seeks novel physics-based methods for recognition as well as novel learning-based methods for interpreting pixel values in terms of the physics of a scene. The agenda is structured around four aims: Aim I develops generalized reconstructive processes that unify the recovery of shape, materials, motion and illumination. Aim II focuses on supervised visual learning methods that exploit such reconstructive image representations. Aim III pursues unsupervised discovery of reconstructive representations that converge to be similar to the engineered models of Aim I. Finally, Aim IV introduces well-defined challenge problems that focus the field and serve as measurable proxies for progress in computer vision applications that have high potential impact on society. |
1 |
2014 — 2017 | Darrell, Trevor | N/A |
NRI: Collaborative Research: Shall I Touch This?: Navigating the Look and Feel of Complex Surfaces @ University of California-Berkeley This project improves autonomous robotic perception so that future co-robots can glance around any scene and accurately estimate how it would feel to grasp or step on all of the visible surfaces. Just as people do, robots should use these physical predictions to guide their interactions with the world, for example avoiding dangerous ice patches on the ground when walking and driving, and adeptly anticipating the grasp force needed to pick up everything from ice cubes to stuffed animals. These research activities are accompanied by significant outreach efforts, including a new program on "Look and Touch Robotics" to get middle-school students, particularly those from underrepresented groups, excited about computer science, engineering, and robotics. This program uses simple experiments to highlight the dual importance of visual and haptic information during interactions with physical objects, along with demonstrations of a robot showing visuo-haptic intelligence. This project also integrates research and education by involving undergraduates in the research and via hands-on projects in the vision and robotics classes taught by the Principal Investigators. |
1 |
2015 — 2019 | Darrell, Trevor | N/A |
AITF: Full: Collaborative Research: PEARL: Perceptual Adaptive Representation Learning in the Wild @ University of California-Berkeley Vast amounts of digitized images and videos are now commonly available, and the advent of search engines has further facilitated their access. This has created an exceptional opportunity for the application of machine learning techniques to model human visual perception. However, the data often does not conform to the core assumption of machine learning that training and test images are drawn from exactly the same distribution, or "domain." In practice, the training and test distributions are often somewhat dissimilar, and distributions may even drift with time. For example, a "dog" detector trained on Flickr may be tested on images from a wearable camera, where dogs are seen in different viewpoints and lighting conditions. The problem of compensating for these changes--the domain adaptation problem--must therefore be addressed both in theory and in practice for algorithms to be effective. This problem is not just a second-order effect, and its solution does not constitute a small increase in performance. Ignoring it can lead to dramatically poor results for algorithms "in the field." |
1 |