Daniel L K Yamins, PhD - US grants
Affiliations: Stanford University, Palo Alto, CA
Area: computational neuroscience, artificial intelligence, psychology, vision, audition

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Daniel L K Yamins is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2017 — 2020 | Yamins, Daniel @ Stanford University | N/A | This project studies the effects of incorporating, into deep neural networks for visual processing, several heretofore unincorporated features of biological visual cortical circuits. Deep neural networks are artificial circuits loosely inspired by the brain's cerebral cortex. Their abilities to solve complex problems, such as recognizing objects in visual scenes, have revolutionized artificial intelligence and machine learning in recent years. The hierarchy of layers in a deep network trained for visual object recognition also provides the best existing models of the hierarchy of areas in the visual cortex implicated in object recognition (the "ventral stream"). This project seeks to understand whether and how incorporating additional features of brain circuits may (1) improve machine learning performance, particularly on tasks that are more challenging than those typically studied; and (2) yield improved models of visual cortex. Improving the performance of deep networks would yield great benefits across wide swaths of society and industry that are impacted by advances in artificial intelligence. Improved models of visual cortex will advance understanding of cortical function, which may lead to significant further benefits for understanding normal mental functioning and perception and their potential enhancement, as well as mental illness and perceptual and cognitive deficits. | 0.915 |
2021 — 2024 | Yamins, Daniel @ Stanford University | N/A | The last ten years have witnessed an astonishing revolution in AI, with deep neural networks suddenly approaching human-level performance on problems like recognizing objects in an image and words in an audio recording. But impressive as these feats are, they fall far short of human-like intelligence. The critical gap between current AI and human intelligence is that, beyond just classifying patterns of input, humans build mental models of the world. This project begins with the problem of physical scene understanding: how one extracts not just the identities and locations of objects in the visual world, but also the physical properties of those objects, their positions and velocities, their relationships to each other, the forces acting upon them, and the effects of forces that could be exerted on them. It is hypothesized that humans represent this information in a structured mental model of the physical world, and use that model to predict what will happen next, much as the physics engine in a video game generates physically plausible future states of virtual worlds. To test this idea, computational models of physical scene understanding will be built and tested for their ability to predict future states of the physical world in a variety of scenarios. Performance of these models will then be compared to humans and to more traditional deep network models, both in terms of their accuracy on each task, and their patterns of errors. Computational models that incorporate structured representations of the physical world will then be tested against standard convolutional neural networks in their ability to explain neural responses of the human brain (using fMRI) and the monkey brain (using direct neural recording). These computational models will provide the first explicit theories of how physical scene understanding might work in the human brain, at the same time advancing the ability of AI systems to solve the same problems. Because the ability to understand and predict the physical world is essential for planning any action, this work is expected to help advance many technologies that require such planning, from robotics to self-driving cars to brain-machine interfaces. Each of the participating labs will also expand their established track records of recruiting, training, and mentoring women and under-represented minorities at the undergraduate, graduate, and postdoctoral levels. Finally, the collaborating laboratories will continue and increase their involvement in the dissemination of science to the general public, via public talks, web sites, and outreach activities. | 0.915 |
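The 2017 — 2020 abstract describes a "hierarchy of layers" in a deep network, where each stage filters and spatially pools its input, loosely analogous to successive areas of the ventral stream. As a minimal illustrative sketch (not the project's actual model; the layer count, filter shapes, and pooling scheme are arbitrary assumptions):

```python
# Illustrative sketch only: a feedforward hierarchy in which each stage
# filters, rectifies, and spatially pools its input, so spatial resolution
# shrinks while feature channels grow across the hierarchy.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One hypothetical stage: linear filtering, ReLU, 2x2 average pooling."""
    h = np.maximum(0.0, x @ w)                                 # filter + rectify
    H, W, C = h.shape
    return h.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))  # pool

x = rng.random((16, 16, 3))                         # toy "image", 3 channels
for c_in, c_out in [(3, 8), (8, 16), (16, 32)]:     # three-stage hierarchy
    x = layer(x, rng.standard_normal((c_in, c_out)) * 0.1)

print(x.shape)  # (2, 2, 32): coarser space, richer features at the top
```

The point of the sketch is only the structural motif the abstract names: a stack of stages whose receptive fields grow and whose representations become more abstract with depth.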
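The 2021 — 2024 abstract compares the hypothesized mental model to a video-game physics engine that rolls physically plausible states forward in time. A minimal sketch of that idea, assuming one-dimensional bodies under constant gravity with Euler integration (all names and constants here are illustrative, not from the grant):

```python
# Illustrative sketch only: predicting future physical states by unrolling
# a simple simulator, in the spirit of a game physics engine.
from dataclasses import dataclass

@dataclass
class Body:
    x: float   # position (m), measured upward
    v: float   # velocity (m/s)

G = -9.81  # gravitational acceleration (m/s^2)

def step(bodies, dt):
    """Advance every body one Euler step under gravity."""
    return [Body(b.x + b.v * dt, b.v + G * dt) for b in bodies]

def rollout(bodies, dt, n):
    """Predict n future states of the world by repeated forward simulation."""
    states = [bodies]
    for _ in range(n):
        states.append(step(states[-1], dt))
    return states

ball = Body(x=10.0, v=0.0)            # a ball released from 10 m
future = rollout([ball], dt=0.1, n=5)  # 5 predicted future world-states
```

The project's models would operate over far richer structured scene representations, but the core loop is the same: infer a world state, then simulate it forward to predict what happens next.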