Area:
Vision, ideal observer models, perceptual learning, visual search, population coding
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
High-probability grants
According to our matching algorithm, Melchi M. Michel is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2015 — 2018 | Michel, Melchi | N/A (no activity code was retrieved) | Visual Memory Mechanisms in Transsaccadic Integration and Overt Search @ Rutgers University New Brunswick | 0.915

A fundamental question in vision science concerns how people perceive a continuous visual environment when visual information enters the visual system as a series of brief glances interrupted by frequent rapid eye movements. Part of the answer may be that the visual system relies on a form of visual memory to provide continuity between glances. In the present work, the research team will investigate how this type of visual memory operates. A better understanding of the basic operation of visual memory across eye movements could lead to practical applications, such as optimized procedures for radiologists, baggage screeners, satellite image analysts, and others whose occupations require them to search for critical pieces of visual information. Another potential application is the optimized design of artificial vision systems.

The proposed work aims to answer two basic questions about the operation of visual memory between glances. First, the research will investigate the capacity of such visual memory: how much visual information can this type of memory hold? Second, it will investigate how the capacity limitations of the visual memory mechanism constrain human performance in visual search tasks. For example, when the capacity of visual memory is exceeded, how does this affect a person's ability to find a specific piece of visual information hidden among many other visual items? A set of rigorous computational and experimental techniques will be used to characterize the information contained within visual memory during visual search and to assess the impact of visual memory on visual search performance. These techniques will yield detailed, novel quantitative predictions about how visual sensitivity and visual memory capacity interact to determine human performance in visual search tasks. Model predictions will be tested through psychophysical experiments.
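The kind of quantitative prediction described in the abstract can be illustrated with a toy simulation. The sketch below is a hypothetical model, not the project's actual one: a signal-detection observer with sensitivity `d_prime` inspects every location in a display but can retain only the `capacity` most recent observations in memory, then picks the remembered location with the largest observation (a max rule). All names and parameters here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def search_accuracy(n_locations, d_prime, capacity, n_trials=20000):
    """Simulate a capacity-limited observer searching n_locations.

    Each location yields a noisy Gaussian observation (mean d_prime at
    the target location, 0 elsewhere). The observer inspects all
    locations in sequence but remembers only the `capacity` most recent
    observations; the target is reported at the remembered location
    with the largest observation (a max rule over the memory buffer).
    """
    correct = 0
    for _ in range(n_trials):
        target = rng.integers(n_locations)
        obs = rng.normal(0.0, 1.0, n_locations)
        obs[target] += d_prime
        # Only the last `capacity` inspected locations survive in memory.
        remembered = np.arange(max(0, n_locations - capacity), n_locations)
        choice = remembered[np.argmax(obs[remembered])]
        correct += (choice == target)
    return correct / n_trials
```

Running `search_accuracy(8, 2.0, capacity)` for increasing values of `capacity` shows accuracy rising as more inspected locations can be held in memory, which is the qualitative sensitivity-by-capacity interaction the abstract proposes to measure and model rigorously.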