We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches. Signing in also lets you see low-probability grants and correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Lee M. Miller is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2002 — 2003 | Miller, Lee M | F32
Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas.
Network Interactions in Crossmodal Speech Perception @ University of California Berkeley
DESCRIPTION (provided by applicant): The goal of the proposed project is to characterize the neural network interactions that mediate auditory-visual integration of speech in a noisy environment. Understanding speech in degraded and reverberant conditions is perhaps the most frequent audiological complaint of the hearing impaired, now including an estimated 28 million Americans. Depression, loneliness, and social anxiety are common conditions afflicting those who suffer this reduced ability to communicate with their friends, family, and co-workers (Knutson, 1990). In addition to its practical implications, crossmodal speech perception also serves as a paradigm for our brains' ability to combine diverse sources of information into a unified percept. Integration across sensory modalities allows us to detect and discriminate stimuli faster and more accurately than with one system alone, especially when the stimuli are degraded. Our brains therefore use expectations from prior experience along with complementary information from different senses to form coherent perceptual objects. The prior knowledge, known as top-down influence, must be combined with the raw stimuli, or bottom-up influences, just as auditory information is combined with visual information. The negotiation of these top-down/bottom-up and crossmodal interactions may be mediated by networks of areas in the superior temporal sulcus, intraparietal sulcus, and prefrontal cortex. A whole-brain technique such as functional magnetic resonance imaging is required to simultaneously assess neural activity in many widespread regions. A network analytic approach, including structural equation modeling and partial least squares, is essential to address crossmodal and top-down/bottom-up interactions among these regions during speech perception in a noisy environment.
Matching score: 0.984
2006 — 2010 | Miller, Lee M | R01
Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Neural Networks For Speech Perception in Noise @ University of California Davis
DESCRIPTION (provided by applicant): This research addresses the neural bases of speech perception in noisy environments. With its singular role in communication, speech is perhaps the most important everyday stimulus for human beings. Yet rarely does speech occur under pristine conditions; competing voices, reverberations, and other environmental sounds typically corrupt the signal. This poses a continual challenge for normal listeners and especially for those with hearing loss. Among the 30 million Americans with hearing loss, many suffer depression and social isolation because of their difficulty communicating. In the half-century since the original formulation of the "cocktail party effect", scientists have established three key perceptual/cognitive factors that improve speech intelligibility in a competing background: acoustic cues, audiovisual integration (voice + mouth movements), and linguistic context. However, little is known about how these mechanisms are implemented in the brain, particularly at the level of large-scale functional neural networks. The proposed research uses functional magnetic resonance imaging (fMRI) integrated with psychophysics to address the three factors that determine intelligibility. Innovative neural network analyses test how interactions among brain regions accommodate degraded speech and improve comprehension. Our specific AIMS are to identify the neural networks mediating speech perception in noise, when intelligibility depends on: 1) Acoustic Cues, 2) Audiovisual Integration, and 3) Linguistic Context. This research program comprises a multipronged and highly cohesive body of work that will help anchor our understanding of speech perception in its neurobiological foundations. Relevance to public health: We study how our brains understand speech in a noisy background, such as at a restaurant, ballgame, or office. Research like this may someday help to design better hearing aids and similar devices. It may also result in more effective listening strategies, both for those with healthy hearing and especially for those with hearing loss.
Matching score: 1