We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and to correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Eswen E. Fava is the likely recipient of the following grants.
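The page does not describe how the matching algorithm works, so as a purely hypothetical sketch, one simple way a score like the "Matching score" column could be produced is by normalizing and comparing the researcher's name to the grant's recipient string. The function below is an illustration only, not the site's actual method:

```python
from difflib import SequenceMatcher


def match_score(researcher_name: str, recipient_name: str) -> float:
    """Hypothetical name-similarity score in [0, 1].

    Normalizes both names (lowercase, drop commas, sort word order so
    "Fava, Eswen" and "Eswen Fava" compare equal) and returns the
    similarity ratio of the normalized strings.
    """
    def normalize(name: str) -> str:
        return " ".join(sorted(name.lower().replace(",", "").split()))

    return SequenceMatcher(None, normalize(researcher_name),
                           normalize(recipient_name)).ratio()


# Word-order and punctuation differences do not lower the score:
print(match_score("Fava, Eswen", "Eswen Fava"))            # → 1.0
# A middle initial vs. a full middle name still scores high:
print(match_score("Eswen E. Fava", "Fava, Eswen Elizabeth"))
```

A real linkage system would likely combine several such signals (name, institution, years active, research topic) rather than a single string comparison.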
Years | Recipients | Code | Title / Keywords | Matching score
2009 — 2011 | Fava, Eswen Elizabeth | F31 | Neural Contribution of Visual Speech to Language Development in Preverbal Infants | 1

F31 Activity Code Description: To provide predoctoral individuals with supervised research training in specified health and health-related areas leading toward the research degree (e.g., Ph.D.).

DESCRIPTION (provided by applicant): Although human language requires multi-sensory processing of information, much of the research on children's language development focuses on the auditory-only speech signal. This is in spite of the fact that the speaking face provides surprising amounts of information that observers use when processing language. Although behavioral evidence is beginning to emerge about the degree to which preverbal infants can coordinate complex visual and auditory information, there is relatively little neurophysiological data to inform our understanding of how this coordination process develops and how it influences the neural underpinnings of language processing at different developmental time points and in different infant populations.

The proposed experiments will test the hypothesis that, while infants may be predisposed to process auditory speech in the left temporal region, this processing is influenced by environmental experience, such as that provided by increasingly extensive exposure to visual speech or to more than one language. Our investigation will examine the role of visual and auditory speech both separately and in coordination (audiovisual speech) in order to understand how these distinct sources of perceptual information facilitate the development of language processing abilities in preverbal infants.

First, we propose to use near-infrared spectroscopy to test the influence of isolated visual and auditory speech on patterns of neural activity in the bilateral temporal cortices of 9-month-old infants, and to compare that to the activity observed in response to coordinated audiovisual speech (Aim 1). We will then compare the neural activity elicited by these three speech conditions across three age groups (6-, 9-, and 12-month-olds) to track the developmental trajectory the coordination process follows (Aim 2). Finally, we will compare the bilateral processing patterns of monolingual (English-exposed) infants with age-matched bilingual (Spanish/English-exposed) infants (Aim 3).

This would be the first study to demonstrate the privileged nature of audiovisual speech in early language processing, as reflected by a more robust neurovascular response in the left temporal region relative to the right when auditory or visual speech is presented in isolation. We expect to find that this effect is experientially based, such that there is a measurable tuning process that infants go through that is specific to their amount of prior exposure to coordinated speech. Findings from the studies outlined here will help us better understand how the auditory and visual systems interact to influence early language development, as well as the normal time course of perceptual tuning to coordinated speech in one's native language(s).