
Joshua Tenenbaum - US grants
Affiliations: Massachusetts Institute of Technology, Cambridge, MA, United States
Area: Computation & Theory
Website: http://web.mit.edu/cocosci/josh.html

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Joshua Tenenbaum is the likely recipient of the following grants. Each entry lists the years, recipients, activity code, title/keywords, and matching score.
2009 — 2010 | Kanwisher, Nancy [⬀]; Tenenbaum, Joshua; Vul, Edward | Matching score: 1
Activity code: N/A (no activity code was retrieved; click on the grant title for more information)
Doctoral Dissertation Research in DRMS: Boundedly Optimal Sampling For Decisions Under Uncertainty
@ Massachusetts Institute of Technology

To model an individual's choices under uncertainty, theorists typically assume that the choices made maximize the individual's utility. While this is frequently a good description of observed behavior, there are instances where people instead choose alternatives in proportion to their associated probabilities of reward. This probability matching behavior is sub-optimal. If individuals base their choices on a sampling algorithm, however, both probability matching and optimal behavior would result, depending on the time available to make decisions (with more time producing more nearly optimal decisions). In this Doctoral Dissertation Improvement grant, the PI will test whether such an algorithm is responsible for observed choices and, furthermore, whether people are optimally suboptimal (i.e., optimal in their decisions about when to be more, or less, optimal).
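The sampling account described above can be illustrated with a toy simulation (an illustrative sketch, not the grant's actual model; all names and parameters here are assumptions): an agent that draws k mental samples from its belief about which option pays off and goes with the majority will probability-match when k = 1 and approach utility maximization as k grows, so that more samples, like more decision time, yield more nearly optimal choices.

```python
import random
from collections import Counter

def sample_based_choice(p_reward, k, rng=random):
    """Choose between options A and B by drawing k samples from the
    belief 'A pays off with probability p_reward' and picking the
    option favored by the majority of samples (use odd k to avoid ties)."""
    samples = ["A" if rng.random() < p_reward else "B" for _ in range(k)]
    return Counter(samples).most_common(1)[0][0]

def choice_rate(p_reward, k, trials=20000, seed=0):
    """Estimate how often the sampling agent picks the better option A."""
    rng = random.Random(seed)
    hits = sum(sample_based_choice(p_reward, k, rng) == "A" for _ in range(trials))
    return hits / trials

# One sample: the agent probability-matches, choosing A about 70% of
# the time. Many samples: it nearly always maximizes.
print(choice_rate(0.7, k=1))   # ≈ 0.70
print(choice_rate(0.7, k=15))  # ≈ 0.95
```

The single free parameter k interpolates between the two behaviors the abstract contrasts, which is the sense in which limited sampling can make probability matching "boundedly optimal."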
2012 — 2017 | Tenenbaum, Joshua | Matching score: 1
Activity code: N/A (no activity code was retrieved; click on the grant title for more information)
@ Massachusetts Institute of Technology

In order for robots to collaborate with humans, they need to be able to accurately forecast human intent and action. People act with purpose: that is, they make sequences of decisions to achieve long-term objectives. For instance, in driving from home to a store, people carefully plan a sequence of roads that will get them there efficiently. In predicting a person's next decision, algorithms must be developed that reflect these purposeful actions.
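The planning view of behavior sketched above can be made concrete with a toy model (the road network, travel times, and function names here are hypothetical illustrations, not the grant's actual system): if the driver is assumed to act efficiently, an observer can predict their next decision by computing the shortest path to the presumed goal and reading off its first step.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted road graph,
    where graph[node] = {neighbor: travel_time}."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nb, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb], prev[nb] = nd, node
                heapq.heappush(heap, (nd, nb))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def predict_next_road(graph, position, goal):
    """If the driver acts purposefully (efficiently), their next
    decision is the first step of the shortest path to the goal."""
    return shortest_path(graph, position, goal)[1]

# Hypothetical road network; edge weights are travel times in minutes.
roads = {
    "home":    {"main_st": 2, "back_rd": 5},
    "main_st": {"home": 2, "store": 4},
    "back_rd": {"home": 5, "store": 8},
    "store":   {"main_st": 4, "back_rd": 8},
}
print(predict_next_road(roads, "home", "store"))  # main_st (2+4 < 5+8)
```

Inverting this model — inferring the goal from observed steps, rather than the steps from a known goal — is the harder problem the abstract points toward.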
2021 — 2024 | Tenenbaum, Joshua; Smith, Kevin | Matching score: 1
Activity code: N/A (no activity code was retrieved; click on the grant title for more information)
Collaborative Research: CompCog: Adversarial Collaborative Research On Intuitive Physical Reasoning
@ Massachusetts Institute of Technology

People are able to reason about the world in amazingly complex ways, yet we consider these capacities part of simple "common sense," generally shared across individuals and cultures. We toss and catch balls, stack dishes in the sink, and pour a morning cup of coffee with almost no effort. Yet the cognitive systems that support these capabilities are not well understood; even our most advanced attempts to reverse engineer them in robots fall short of human-level efficiency or flexibility. This grant was designed as an "adversarial collaboration" to bring together scientists from two different sides of a critical debate about the nature of human physical reasoning abilities. One theory (championed by the MIT PIs) suggests that this physical reasoning is based on a cognitive system that allows people to simulate what might happen next, similar to how physics engines for video games are used to predict what will happen next in those scenes. While this theory has provided many successful explanations of human behavior, including making precise predictions about how people think Jenga towers will fall, or where they think balls flying through the air will land, another growing body of research (led by the NYU PIs) has demonstrated many instances where the simulation theory cannot adequately describe what people do, but where simpler and approximate "rules-of-thumb" (even inaccurate ones) can. Because human physical reasoning is unlikely to be purely simulation or purely based on simplified rules, a team of experts from both sides of this debate will be crucial for advancing our understanding of the cognitive processes that underlie these reasoning capabilities.

Towards reconciling these views, this grant advances the idea that consideration of known human limitations -- e.g., in memory or attention -- can explain the processes that people use when reasoning about the physical world. The goal is to integrate these constraints into a more complete theory of human reasoning that can account for both our failures and our successes in comprehending the physical world. True understanding of these processes will require "reverse engineering" human cognition and perception by designing computational models with similar limitations and capabilities to people. These scientific models may provide insight for researchers in AI and robotics who are interested in designing systems that interact with the world like people, including self-driving cars or the control of prosthetic limbs. Furthermore, exploring how people learn and reason about physics may provide new approaches for physics education. Finally, studying and modeling these facets of physical reasoning will require developing extensible tools, which will be released as open-source software to open up the research into human physical reasoning to a wider set of scientists.
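The two accounts in the debate above can be caricatured in a few lines of code (a toy 2D block stack, with the noise level and threshold chosen arbitrarily for illustration — not the PIs' actual models): a simulation account judges whether a tower falls by running a noisy stability check many times, producing graded probabilities, while a rule-of-thumb account applies a fixed cutoff and produces all-or-none answers.

```python
import random

def tower_falls(offsets):
    """Deterministic stability check for a 2D stack of unit-width,
    equal-mass blocks, where offsets[i] is how far block i+1 is
    shifted relative to the block below it. The stack falls if, at
    any interface, the center of mass of everything above overhangs
    the supporting block's edge (half a block width, 0.5)."""
    centers = [0.0]
    for off in offsets:
        centers.append(centers[-1] + off)
    for i in range(len(offsets)):
        above = centers[i + 1:]
        com = sum(above) / len(above)
        if abs(com - centers[i]) > 0.5:
            return True
    return False

def p_fall_simulation(offsets, noise=0.2, n_runs=2000, seed=0):
    """Simulation account: run the stability check many times under
    perceptual noise on the block positions; judgments are graded."""
    rng = random.Random(seed)
    falls = sum(
        tower_falls([o + rng.gauss(0, noise) for o in offsets])
        for _ in range(n_runs)
    )
    return falls / n_runs

def p_fall_heuristic(offsets, threshold=0.4):
    """Rule-of-thumb account: call the tower unstable if any single
    block overhangs its support by more than a fixed threshold."""
    return 1.0 if any(abs(o) > threshold for o in offsets) else 0.0

# A clearly stable stack vs. a precarious one:
print(p_fall_simulation([0.1, 0.1]))   # low, graded probability
print(p_fall_simulation([0.45, 0.3]))  # high, but still graded
print(p_fall_heuristic([0.45, 0.3]))   # all-or-none: 1.0
```

The contrast the abstract draws shows up directly: the simulation account's answers vary smoothly with the stack's geometry and noise, while the heuristic jumps between 0 and 1 at its threshold.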
2021 — 2024 | Kanwisher, Nancy [⬀]; Tenenbaum, Joshua; Dicarlo, James (co-PI) [⬀] | Matching score: 1
Activity code: N/A (no activity code was retrieved; click on the grant title for more information)
@ Massachusetts Institute of Technology

The last ten years have witnessed an astonishing revolution in AI, with deep neural networks suddenly approaching human-level performance on problems like recognizing objects in an image and words in an audio recording. But impressive as these feats are, they fall far short of human-like intelligence. The critical gap between current AI and human intelligence is that, beyond just classifying patterns of input, humans build mental models of the world. This project begins with the problem of physical scene understanding: how one extracts not just the identities and locations of objects in the visual world, but also the physical properties of those objects, their positions and velocities, their relationships to each other, the forces acting upon them, and the effects of forces that could be exerted on them. It is hypothesized that humans represent this information in a structured mental model of the physical world, and use that model to predict what will happen next, much as the physics engine in a video game generates physically plausible future states of virtual worlds. To test this idea, computational models of physical scene understanding will be built and tested for their ability to predict future states of the physical world in a variety of scenarios. Performance of these models will then be compared to humans and to more traditional deep network models, both in terms of their accuracy on each task, and their patterns of errors. Computational models that incorporate structured representations of the physical world will then be tested against standard convolutional neural networks in their ability to explain neural responses of the human brain (using fMRI) and the monkey brain (using direct neural recording). These computational models will provide the first explicit theories of how physical scene understanding might work in the human brain, at the same time advancing the ability of AI systems to solve the same problems.

Because the ability to understand and predict the physical world is essential for planning any action, this work is expected to help advance many technologies that require such planning, from robotics to self-driving cars to brain-machine interfaces. Each of the participating labs will also expand their established track records of recruiting, training, and mentoring women and under-represented minorities at the undergraduate, graduate, and postdoctoral levels. Finally, the collaborating laboratories will continue and increase their involvement in the dissemination of science to the general public, via public talks, web sites, and outreach activities.