We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Jamie K. Fitzgerald is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2009 — 2010 |
Fitzgerald, Jamie Kamel |
F31 (Activity Code Description: To provide predoctoral individuals with supervised research training in specified health and health-related areas leading toward the research degree, e.g., Ph.D.) |
Associative Representations in Parietal Cortex
DESCRIPTION (provided by applicant): The human brain is adept at rapidly making and breaking associations. Associative learning has traditionally been considered the domain of frontal and temporal brain areas; however, increasing evidence suggests that parietal cortex is also involved. The overarching hypothesis of this proposal is that the lateral intraparietal area (LIP) is involved in flexibly encoding visual stimuli, depending on behavioral demands. This proposal will test the hypothesis that visual selectivity in LIP is plastic and may be molded by learned associations between visual stimuli. In Aim 1, monkeys will be trained to perform a delayed paired-association task in which they signal whether a sample and test stimulus (arbitrary shapes) belong to a learned associated pair. Preliminary data suggest that single neurons in LIP show robust representations of paired associations between shapes: the activity evoked by a particular shape is most similar to the activity elicited by that shape's learned associate. LIP neurons appear to be strongly modulated by learned shape-shape associations, and we previously showed that LIP neurons also flexibly encode motion direction depending on behavioral demands. Anatomical studies reveal a segregation of inputs from the dorsal and ventral visual streams to LIP. As these streams are thought to be specialized for processing visual motion and form, respectively, in Aim 2a we will examine whether distinct populations may reflect learned associations for shape and motion stimuli. Preliminary data from single neurons as the monkey switches between tasks indicate that the same neurons are modulated by both types of associations, and the strength of the effects is positively correlated.
It is also well known that many LIP neurons signal the location of stimuli in visual space, either abstractly or as an oculomotor plan - but we have also shown that LIP neurons convey non-spatial signals about the identity or association of visual stimuli within the receptive field. How does the brain make sense of such disparate signals encoded in one area? In Aim 2b, we will test whether the same or separate neurons encode spatial and non-spatial information, by having animals rapidly switch between a memory delayed saccade task (to reveal spatial signals) and a shape-pair association paradigm (to reveal non-spatial signals). This proposal addresses basic questions about the neural basis of learning and memory. Understanding how healthy brains change as a result of learning will help us to understand how psychological and neurodegenerative disorders affect cognition. Understanding systems-level plasticity mechanisms may also aid in elucidating how the brain can regain normal cognitive function.
Matching score: 1