We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches. Signing in also lets you see low-probability grants and correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Yi Zhou is the likely recipient of the following grants.
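The matching algorithm itself is not documented on this page, so the following is a purely hypothetical sketch of how a grant-to-researcher score might be computed: it blends fuzzy name similarity with an institution check. The function names, weights, and logic are all assumptions for illustration, not the system's actual method.

```python
# Hypothetical sketch only: the real matching algorithm is not documented
# here. This shows one plausible way to score a grant-researcher pair by
# blending name evidence with institution evidence.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Case-insensitive fuzzy similarity between two names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(grant_pi: str, grant_inst: str,
                researcher: str, researcher_inst: str) -> float:
    """Weighted blend of name and institution evidence (weights assumed)."""
    name = name_similarity(grant_pi, researcher)
    inst = 1.0 if grant_inst.lower() == researcher_inst.lower() else 0.0
    return 0.75 * name + 0.25 * inst

# A PI listed as "Zhou, Yi" at the researcher's own institution scores 1.0,
# which a scheme like this would treat as a high-probability match.
print(match_score("Zhou, Yi", "Arizona State University",
                  "Zhou, Yi", "Arizona State University"))  # 1.0
```

In a toy scheme like this, a perfect score of 1 requires exact agreement on both name and institution, which is consistent with the scores shown for the grants below.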
Years | Recipients | Code | Title / Keywords | Matching score
2015 — 2018 | Zhou, Yi | N/A (no activity code was retrieved)
Collaborative Research: Neural-Cognitive Analysis of Spatial Scenes With Competing, Dynamic Sound Sources @ Arizona State University
This project investigates neurocognitive mechanisms that extract important information from a mixture of sound sources. Imagine a day when you could no longer distinguish the honking horn of a car coming right at you from other street sounds. The cognitive ability to attend to one sound source while ignoring others poses an everyday challenge for people with hearing impairments. While the basic neural mechanisms for detecting and localizing single sounds are known, we do not know how the brain accomplishes auditory scene analysis with multiple sound sources. So far, studies have focused on lower brain centers in rodents and carnivores, whereas the neural mechanisms for source segregation are expected to reside at higher levels, in the auditory cortex. This study will record the responses of single cortical neurons and conduct human-subject experiments using the same acoustic scenarios. Based on the integration of these results, a functional auditory model will be developed, providing new scientific insights and enabling intelligent algorithms for hearing aids, social robotics, and surveillance systems. The project will provide research opportunities for graduate and undergraduate students and include outreach activities and online learning resources for high-school and college students to increase public awareness of neuroscience. The research results and the model will be shared with the academic community.
This proposal will use an interdisciplinary approach to gain understanding of the central mechanisms of auditory scene analysis by integrating psychoacoustical experiments with single-unit electrophysiology. The study will investigate how the auditory system localizes a target sound temporally embedded in a spatially separated masker. Single-unit recording will target the caudal region of the auditory cortex, the putative "where" pathway for complex sound analysis. We hypothesize that cortical activity represents both the old and new sounds, so that the internal representation of the "old" masking source can be subtracted from the overall mixture. This facilitates a clearer perception of the "new" target element, demonstrating a fundamental psychophysical phenomenon within auditory scene analysis. To test this hypothesis, we will identify the neural signals for individual sound sources separately and in combination. We will then interpret these signals based on the perceptual data gained from sound localization tests with multiple moving and stationary sound sources. Discovering the fundamental brain mechanisms for auditory scene analysis will provide new neurophysiological insight into a well-established psychophysical field and offer potential technical solutions for sound-source segregation.
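The "old-plus-new" hypothesis above can be made concrete with a small numerical illustration. The toy Python sketch below is a caricature with invented numbers, not the proposal's neural model: if the system holds an internal estimate of the ongoing ("old") masker, subtracting it from the mixture exposes the newly added target.

```python
# Toy illustration of the "old-plus-new" heuristic described above;
# a simple spectral subtraction, not the proposal's neural model.
import numpy as np

rng = np.random.default_rng(0)
n_bins = 8  # coarse frequency bins, purely for illustration

masker = rng.uniform(0.5, 1.0, n_bins)  # "old" source, heard alone first
target = np.zeros(n_bins)
target[2] = 1.0                         # "new" source appears in one band
mixture = masker + target               # overall mixture at target onset

# Subtracting the internal estimate of the ongoing masker leaves a
# residual that highlights the newly added target element.
residual = mixture - masker
print(np.argmax(residual))  # 2 -> the band where the new target appeared
```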
Matching score: 1
2020 — 2021 | Zhou, Yi | R01 (Activity Code Description: to support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies)
CRCNS: Visual Modulation of Panoramic Auditory Spatial Processing @ Arizona State University-Tempe Campus
In everyday activity, our sense of space is guided by coordinated multisensory analyses of sensory information from the surrounding environment. Multisensory spatial information is critical for identifying and attending to a target sound in a noisy environment (e.g., bars, restaurants). It is also crucial for detecting heard but unseen dangers, because the spaces encoded by sounds and sights do not always align. For foveal species like humans and monkeys, the visual field is restricted to the frontal space, whereas the auditory field is panoramic, covering the entire frontal and rear space. The rear space, however, has been largely overlooked in multisensory research, and it remains largely unknown where and how vision directly influences auditory spatial processing in the brain. The long-term objective of this study is to understand the fundamental strategies of multisensory spatial perception and the cortical neural mechanisms that implement these strategies. This proposal will investigate how visual information modulates auditory encoding of 360-degree, panoramic space in the auditory cortex using an integrated approach based on neurophysiology, mechanistic computational modeling, and predictive statistical modeling. We hypothesize that visuo-spatial information increases the auditory representation of the frontal space by changing the directional preference of neural network dynamics. Neurophysiological experiments will provide a comprehensive assessment of changes in the 360-degree spatial tuning of auditory cortex neurons after frontal visual stimulation. Computational models will aid in identifying putative cell types and reveal how heterogeneous recorded extracellular spiking waveforms depend on stimulus conditions and cell type. Predictive statistical modeling will determine the sources of variance in cortical neuron spiking data and predict the spiking output of different cell types under different conditions, all with laminar specificity. This integrated approach will provide an understanding of visual modulation of auditory spatial processing, with a focus on the layer-specific interactions between local rhythm generators and single-unit activity. The impact of this work will be maximized through sharing of data in standardized formats, rigorous and transparent model validation, and use of model description standards, which allow code to be generated for many different programming languages and simulation platforms, supporting model re-use.

RELEVANCE: The ability of the nervous system to integrate multisensory inputs is essential to communication in complex sensory and social environments. Impairment of this ability is the most noticeable outcome of hearing loss. Identifying how the auditory cortex encodes sound features in a visual environment will improve our understanding of how multisensory perception might be implemented in neural circuits, thereby revealing potential sources of perceptual impairment in real-world conditions.
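As a rough quantitative companion to the phrase "360-degree spatial tuning," the generic Python sketch below summarizes a neuron's preferred azimuth and tuning strength with a rate-weighted vector average over tested directions. This is a textbook-style illustration with invented firing rates, not the proposal's analysis pipeline.

```python
# Generic sketch: summarizing a neuron's 360-degree spatial tuning with a
# rate-weighted vector average; all numbers are invented for illustration.
import numpy as np

azimuths_deg = np.arange(0, 360, 45)           # 8 tested directions
rates = np.array([5, 8, 20, 35, 22, 9, 4, 3])  # toy firing rates (spikes/s)

theta = np.deg2rad(azimuths_deg)
x = np.sum(rates * np.cos(theta))  # resultant vector components
y = np.sum(rates * np.sin(theta))

preferred_deg = np.rad2deg(np.arctan2(y, x)) % 360
vector_strength = np.hypot(x, y) / np.sum(rates)  # 0 = untuned, 1 = sharp

# Prints a preferred azimuth near the 135-degree response peak.
print(f"preferred azimuth ~ {preferred_deg:.0f} deg, "
      f"strength = {vector_strength:.2f}")
```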
Matching score: 1