Area:
High-level vision (scene perception) & long-term memory
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to study how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
High-probability grants
According to our matching algorithm, Elissa Aminoff is the likely recipient of the following grants.
Years: 2014 — 2017
Recipients: Tarr, Michael; Aminoff, Elissa
Activity code: N/A (no activity code was retrieved)
Title: Compcog: Human Scene Processing Characterized by Computationally-Derived Scene Primitives @ Carnegie-Mellon University
How do our brains take the light entering our eyes and turn it into our experience of the world around us? Critically, this experience seems to involve a visual "vocabulary" that allows us to understand new scenes based on our prior knowledge. The investigators explore the nature of this visual language, examining the specific computations realized in the brain mechanisms used for scene perception. The work combines data from state-of-the-art computer vision systems with human neuroimaging, both to predict brain responses when viewing complex, real-world scenes and to analyze and understand the hidden structure embedded in real-world images. This effort is essential for building a theory of how we are able to see and for improving machine vision systems. More broadly, biologically inspired models of vision are essential for the effective deployment of intelligent technology in navigation systems, assistive devices, security verification, and visual information retrieval.
The artificial vision system adopted in this research is highly data-driven in that it learns about the visual world by continuously "looking at" real-world images on the World Wide Web. The model, known as "NEIL" (Never Ending Image Learner, http://www.neil-kb.com/), leverages cutting-edge big-data methods to extract a vocabulary of scene parts and relationships from hundreds of thousands of images. The relevance of this vocabulary to human vision will then be tested using both functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) neuroimaging. The hypothesis is that the application of prior knowledge about scenes expresses itself through learned associations between the specific parts and relations forming the vocabulary for scene perception. Moreover, different kinds of associations may be instantiated within distinct components of the functional brain network responsible for scene perception. Overall, this research will build on a recent, highly successful artificial vision system to provide a better-specified theory of the parts and relations underlying human scene perception. At the same time, the research will provide information about the functional relevance of computationally derived scene parts and relations to humans, thereby helping to refine and improve artificial vision systems.
Matching score: 0.957
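The matching algorithm behind these scores is not documented on this page. As a purely hypothetical illustration of how a grant-to-researcher matching score might be computed, the sketch below scores a researcher's name against a grant's listed investigator using fuzzy string similarity; the `match_score` helper and its normalization steps are assumptions, not the site's actual method.

```python
from difflib import SequenceMatcher

def match_score(researcher: str, grant_pi: str) -> float:
    """Return a similarity score in [0, 1] between a researcher's name
    and the investigator name listed on a grant record.

    This is an illustrative fuzzy-matching sketch, not the algorithm
    actually used by the site.
    """
    # Normalize case and surrounding whitespace before comparing.
    a = researcher.lower().strip()
    b = grant_pi.lower().strip()
    # SequenceMatcher.ratio() measures the overlap of matching
    # subsequences between the two strings.
    return SequenceMatcher(None, a, b).ratio()

# An exact match scores 1.0; a middle initial on the grant record
# lowers the score slightly but keeps it high.
exact = match_score("Aminoff, Elissa", "Aminoff, Elissa")
close = match_score("Aminoff, Elissa", "Aminoff, Elissa M")
```

A real system would likely also weigh institution, co-investigators, and research-area overlap before declaring someone a "likely recipient".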