2005 — 2010
Groh, Jennifer
N/A. Activity Code Description: No activity code was retrieved.
Eye Position and the Neural Basis of Sound Localization
What we see helps us understand what we hear. Knowing the locations of objects seems effortless, regardless of whether we see the object, hear it, or both. Each sensory system determines stimulus location differently. For example, the retina provides the brain with a snapshot-like map of where objects are located with respect to where the eyes are looking. This is an eye-centered frame of reference. In contrast, the perception of sound location involves assessing subtle differences in sound arrival time and volume between the two ears. These differences allow our brains to determine where sounds are located in space with respect to the head and ears - a head-centered frame of reference.

This project investigates the neural mechanisms that underlie the communication and coordination between our visual and auditory systems. In particular, the experiments focus on the spatial information gleaned from our eyes and ears. When an object can be both seen and heard, the fact that the eyes can move poses a major problem for relating the visual and auditory components of the object. An object making a sound might lie to the left of your nose, but if your eyes are gazing even farther to the left, the visual image of that object will land on the right side of the retina's map of space. To coordinate and integrate the information gathered by our eyes and ears, the brain must use information about the angle of gaze to translate visual and auditory information into a common frame of reference. How does the brain do this? Previous work has shown that the auditory pathway possesses considerable information about the angle of gaze, but the specific features of this eye position influence are poorly understood and will be investigated in this project. Specifically, the project will address the role of this eye position signal by recording electrical activity from neurons in the auditory pathways of awake monkeys trained to perform behavioral tasks. The relationship between the brain's eye position signal and the ability to look to the location of a sound will be investigated, and the computational features of the eye position signal and the nature of its interaction with neural responses to sounds will be determined. The results of these experiments will be used to build a detailed model of how visual and auditory information are combined in the brain of an animal whose visual, auditory, and oculomotor capacities are similar to those of humans.

In tandem with these research objectives, this project will foster advanced computer training of graduate and undergraduate students with psychology backgrounds, bring undergraduate and graduate students into laboratory research, and disseminate scientific advances to the public through articles on these and other scientific findings for newspapers and magazines that reach a broad non-scientific audience.
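As a minimal numerical sketch of the coordinate problem described above (the function and values are illustrative, not part of the project), translating a head-centered sound location into the eye-centered frame used by vision amounts to subtracting the current angle of gaze:

    import numpy as np

    def head_to_eye_centered(sound_azimuth_deg, eye_position_deg):
        """Convert a head-centered sound azimuth into eye-centered coordinates
        by subtracting the angle of gaze (illustrative sketch)."""
        return np.asarray(sound_azimuth_deg) - np.asarray(eye_position_deg)

    # The abstract's example: a sound 10 deg left of the nose (-10) while the
    # eyes gaze 20 deg farther left (-20) falls 10 deg to the right of fixation.
    print(head_to_eye_centered(-10.0, -20.0))  # -> 10.0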
0.915 |
2006 — 2010
Groh, Jennifer M
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Visual Signals in Auditory Midbrain
DESCRIPTION (provided by applicant): What we see helps us interpret what we hear. Integrating visual and auditory information is a fundamental aspect of human behavior. Impairments in visual-auditory integration impact numerous aspects of human health, ranging from communication and language to attention and movement in complicated environments. However, the neural basis of this ability is not well understood. Previous work in barn owls has focused on how visual information is used to calibrate the representation of auditory space, and has implicated the inferior colliculus (IC) in this process. However, little is known about this area in primates. Rhesus monkeys have visual and auditory systems that are quite similar to those of humans, and they serve as an excellent animal model for studying the neural mechanisms underlying visual-auditory integration. Visual-auditory integration in humans and monkeys differs from that in barn owls because primates can move their eyes whereas barn owls cannot. Thus, primate visual-auditory integration requires three signals: visual, auditory, and eye position signals. It has long been known that the primate IC contains auditory information, and we have recently shown that it contains eye position information (Groh et al., Neuron, 2001). Here, we turn to visual information in this structure. We have recently discovered robust visual responses in the IC of awake monkeys. In this proposal, we seek to study the properties of these visual responses and how they relate to auditory and eye position information in this brain region. Specifically, we aim to characterize the visual response properties (Aim 1), determine the frame of reference of these visual responses (Aim 2), and determine how IC neurons respond to combined visual and auditory stimuli, especially when a spatial disparity between these stimuli produces a ventriloquism effect/aftereffect (Aim 3). The mechanisms of visual influence over auditory processing investigated here bear on a variety of disorders involving both visual and auditory components, such as dyslexia and autism. These experiments will yield a greater understanding of subcortical mechanisms for visual-auditory integration within the auditory pathway and may ultimately lead to treatment regimens that help ameliorate the deficits experienced by patients with these disorders.
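One way to address a question like Aim 2 is to measure a neuron's spatial tuning at several fixation positions and ask whether the curves align better when plotted against head-centered or eye-centered target location. The sketch below only illustrates that logic; the function name, inputs, and correlation-based score are assumptions, not the proposal's actual analysis.

    import numpy as np

    def alignment_scores(targets_deg, rates_by_fixation):
        """Compare tuning-curve alignment in head- vs eye-centered frames.

        targets_deg: increasing array of head-centered target azimuths (deg).
        rates_by_fixation: dict mapping fixation angle (deg) to the firing
            rates measured at those targets while the animal fixated there.
        Returns the mean pairwise correlation of the tuning curves expressed
        against (a) head-centered and (b) eye-centered (target minus fixation)
        location; the larger value indicates the better-aligned frame.
        """
        targets = np.asarray(targets_deg, dtype=float)
        fixes = sorted(rates_by_fixation)
        head_corr, eye_corr = [], []
        for i, f1 in enumerate(fixes):
            for f2 in fixes[i + 1:]:
                r1 = np.asarray(rates_by_fixation[f1], dtype=float)
                r2 = np.asarray(rates_by_fixation[f2], dtype=float)
                # Head-centered prediction: curves superimpose as measured.
                head_corr.append(np.corrcoef(r1, r2)[0, 1])
                # Eye-centered prediction: curves superimpose after re-expressing
                # curve 2 on curve 1's eye-centered axis via interpolation.
                r2_shifted = np.interp(targets - f1, targets - f2, r2)
                eye_corr.append(np.corrcoef(r1, r2_shifted)[0, 1])
        return float(np.mean(head_corr)), float(np.mean(eye_corr))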
1 |
2009 — 2015
Groh, Jennifer
N/A. Activity Code Description: No activity code was retrieved.
Neural Basis of the Perception of Sound Location
In our daily lives, we are confronted with many sounds. Distinguishing among them--identifying them, and determining where they are coming from--is essential to many aspects of our existence, from carrying on a conversation in a room full of people to avoiding cars when crossing the street. With funding from the National Science Foundation, Dr. Jennifer Groh and colleagues at Duke University are investigating how the locations of sounds are represented in the brain. Unlike the visual system, the auditory system does not appear to use maps for encoding stimulus location. Rather, emerging evidence suggests that at least the horizontal dimension of auditory space is encoded via populations of neurons that fire at monotonically increasing or decreasing rates the farther a sound is located to the right or left. In theory, neurons with this kind of response pattern encode sound location via their firing rates, which exhibit a roughly one-to-one correspondence with a particular sound location. A critical limitation of this kind of coding scheme, however, is that the representation of multiple simultaneous sound locations would appear to be impossible because neurons cannot discharge at more than one firing rate at a time. And yet, humans can perceive multiple sound locations. So how does this happen? This project tests several competing hypotheses for how neurons might respond when more than one sound is present.
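The coding problem described above can be made concrete with a toy monotonic rate code; the sigmoid parameters and the assumption that the response to two sounds is the average of the single-sound responses are purely illustrative, not findings of the project.

    import numpy as np

    def rate_code(azimuth_deg, slope=0.05, max_rate=100.0):
        """Toy monotonic rate code: firing increases sigmoidally the farther
        a sound is located toward one side of space."""
        return max_rate / (1.0 + np.exp(-slope * np.asarray(azimuth_deg)))

    def decode_azimuth(rate, slope=0.05, max_rate=100.0):
        """Invert the (roughly one-to-one) rate-to-location mapping."""
        return np.log(rate / (max_rate - rate)) / slope

    # One sound: the firing rate pins down the location.
    print(decode_azimuth(rate_code(20.0)))   # ~20 deg

    # Two simultaneous sounds at -30 and +30 deg: if the neuron can emit only a
    # single rate (here, the average of its two single-sound responses), the
    # decoded location is a phantom source near 0 deg, and the two true
    # locations cannot both be recovered from that one number.
    blended = 0.5 * (rate_code(-30.0) + rate_code(+30.0))
    print(decode_azimuth(blended))           # ~0 deg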
The answers to these questions will yield important new insights into how the neural representation of sound location subserves this critical component of auditory perception. The findings generated by this research will be informative not only in the auditory domain, but will also help clarify the kinds of computational strategies the brain may employ more generally to compress information in the face of limited neural resources. The funding from this project will be used to support a research group at Duke, providing training opportunities for undergraduate, graduate, and post-doctoral trainees in cognitive and computational neuroscience.
0.915 |
2015 — 2019
Groh, Jennifer M
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Information in Limited-Capacity Neural Codes
DESCRIPTION (provided by applicant): Many things happen at once - there are always abundant stimuli to be perceived, items to be remembered, and courses of action to be planned. Considerable research on attention has explored how our limited-capacity brains screen out the onslaught. The flip side of this problem is that sometimes information about multiple stimuli must be preserved despite limitations in processing capacity. This proposal explores potential brain mechanisms capable of preserving multiple auditory items of information simultaneously. We seek to understand how different types of neural codes accomplish this feat. We focus on sound location, which is critical for communicating and avoiding hazards in complex scenes. Localizing multiple sounds involves two types of codes in auditory brain areas: a firing-rate code for horizontal sound location, which is superimposed on a map of sound frequency. A critical limitation of the firing-rate code for location, however, is that the representation of multiple items would appear to be impossible because neurons cannot discharge at more than one firing rate at a time. We will therefore investigate two possibilities: that the code preserves multiple items via segregation into different hills of activity as afforded by the associated frequency map (Aim 1), and that the code preserves multiple items by some form of switching behavior, in which individual neurons encode only one stimulus at a time but multiple sounds are represented at the population level and/or across time (Aim 2). These experiments will be carried out in the inferior colliculus (IC), an essential processing stage of the auditory pathway: nearly all information reaching higher auditory areas must pass through this structure. Thus, what the IC can encode places boundary conditions on the information that is available to subsequent stages of processing. We will conduct single- and multiple-electrode recordings of spiking and local field potential activity during performance of a task involving eye movements to multiple sounds. These experiments will yield important insights into many aspects of normal perception and cognition as well as related disorders. The ability to keep multiple items in mind is central to communication, working memory, attention, and sensory-motor skills, and it may be adversely affected in disorders such as attention-deficit disorder, autism, central auditory processing disorder, and age-related hearing loss.
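As an illustration of the first possibility (segregation of the two sounds into different hills of activity along the frequency map), the toy model below gives each frequency channel a monotonic rate code for azimuth scaled by its distance from the sound's frequency. The channel frequencies, tuning widths, slopes, and the assumption that dual-sound responses simply sum are hypothetical choices for the sketch, not measurements.

    import numpy as np

    # Hypothetical best frequencies (kHz) tiling the IC's frequency map.
    channels_khz = np.array([0.5, 1.0, 2.0, 4.0, 8.0])

    def channel_rates(sound_khz, azimuth_deg, slope=0.05, max_rate=100.0):
        """Response of each frequency channel to a single sound: a monotonic
        rate code for azimuth, scaled by Gaussian tuning on log frequency."""
        freq_gain = np.exp(-0.5 * (np.log2(channels_khz / sound_khz) / 0.5) ** 2)
        azim_gain = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))
        return max_rate * freq_gain * azim_gain

    # Two simultaneous sounds that differ in frequency and location.
    low_left   = channel_rates(0.5, -30.0)   # 0.5 kHz sound, 30 deg left
    high_right = channel_rates(8.0, +30.0)   # 8 kHz sound, 30 deg right
    combined   = low_left + high_right       # assume simple summation

    # Because the sounds occupy different hills of activity, channels near
    # 0.5 kHz still carry a rate consistent with -30 deg and channels near
    # 8 kHz a rate consistent with +30 deg, so both locations remain readable.
    print(np.round(combined, 1))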
1 |
2017 — 2021
Groh, Jennifer M; Tokdar, Surya T (co-PI)
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Spatial Information Codes
SUMMARY: This project concerns computational challenges in the integration of visual and auditory signals in the brain. The primate brain uses different coding formats to encode the locations of visual and auditory stimuli, with maps for visual space and meters (rate codes) for auditory space. The disparity in coding format poses challenges for connecting the visual and auditory aspects of common underlying objects or events. The proposed experiments investigate the possibility that visual stimuli cause fundamental changes in auditory representations, by changing the overall structure of auditory spatial sensitivity (Aim 1) and/or by capturing temporal fluctuations in auditory-evoked activity (Aim 2). These experiments will shed light on the neural mechanisms that can support visual-auditory perceptual phenomena such as the use of lip-reading cues to facilitate speech comprehension, and the use of visual information to help focus on a particular sound in a crowded auditory scene such as a cocktail party. These abilities are important in normal, aging, and sensory-impaired populations. The experiments take a combined approach involving behavior, neurophysiology, and advanced statistical analyses for evaluating fluctuating patterns of neural activity. The work is spearheaded by a collaborative team led by Profs. Surya Tokdar (Statistical Science, Duke University) and Jennifer Groh (Neurobiology, Duke University), and builds on analogies from technological systems such as the use of time division multiplexing to intersperse signals from different sounds into a fluctuating neural firing pattern. This computational approach will yield insights into how sensory systems overcome their differences to work in synergy with each other, particularly when there is competition among stimuli for a slot in neural representations.
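To make the time division multiplexing analogy concrete, the sketch below simulates a single neuron that transmits one of two sounds in each time bin rather than a fixed blend of both. The rates, bin size, and random alternation rule are assumptions for illustration only, not the proposal's model or data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Firing rates (spikes/s) the neuron would give to each sound alone.
    rate_a, rate_b = 20.0, 80.0
    bin_s, n_bins = 0.05, 40              # 50 ms bins, 2 s of dual-sound response

    # Multiplexing: in each bin the neuron carries one signal or the other.
    which = rng.integers(0, 2, n_bins)    # 0 -> sound A, 1 -> sound B
    rates = np.where(which == 0, rate_a, rate_b)
    counts = rng.poisson(rates * bin_s)

    # The time-averaged rate looks like a single intermediate stimulus...
    print(counts.sum() / (n_bins * bin_s))   # roughly 50 spikes/s

    # ...but the bin-by-bin counts are bimodal (clusters near 1 and 4 spikes
    # per bin), so the fluctuations preserve information about both sounds.
    print(np.bincount(counts))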
1 |
2019 — 2020
Groh, Jennifer M
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Multisensory Processes in the Mechanics of Hearing
PROJECT SUMMARY: Vision helps us hear. For example, lip reading aids speech comprehension, and visual cues are thought to be critical for learning how to localize sounds. Visual-auditory interactions are usually considered to be a cortical phenomenon, but mounting evidence suggests that cross-talk between sensory systems begins well before the cortical stage. In this project, we investigate visual influences over auditory processing at the earliest point at which they occur: in the ear. The auditory pathway has a robust system for modulating the processing of incoming auditory signals via the motoric actions of outer hair cells and the middle ear musculature. These structures are under descending control from the brain, providing potential routes for visual-related signals to influence auditory processing. This system is affected by movements of the eyes: we have recently reported a previously unknown oscillation of the eardrum that is time-locked to the occurrence of saccades and occurs in the absence of a delivered sound. We hypothesize that the processes underlying this eye movement-related effect on peripheral auditory processing play a role in reconciling the disparities between visual and auditory spatial signals that accompany eye movements. We will test this hypothesis by determining which eye movement properties shape the characteristics of the eardrum oscillation; where in the oculomotor system the commands that contribute to this effect originate; and how eye movement-related peripheral processing changes in conjunction with changes in the mapping between visual and auditory space. Together, these experiments will shed light on how the brain tunes its auditory input to coordinate with the visual system. The discovery of multisensory signals at the very gateway of the auditory pathway represents a fundamental shift in our understanding of the scope and mechanisms employed not only in the service of multisensory integration but also in what was previously supposed to be unimodal processing. The findings will be relevant to disorders involving hearing as well as other conditions that involve visual and auditory deficits, such as dyslexia, auditory processing disorder, Meniere's disease, autism, schizophrenia, and age-related cognitive impairments.
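A minimal sketch of the kind of event-triggered averaging that can expose an eardrum signal time-locked to saccades is shown below; the function, window, and variable names are illustrative assumptions rather than the project's actual recording or analysis pipeline.

    import numpy as np

    def saccade_triggered_average(mic_signal, saccade_onsets_s, fs_hz,
                                  window_s=(-0.1, 0.3)):
        """Average an ear-canal microphone recording around saccade onsets.

        mic_signal: 1-D array of microphone samples recorded with no sound
            delivered; saccade_onsets_s: saccade onset times in seconds;
            fs_hz: sampling rate. Activity time-locked to the saccade (such as
            an eardrum oscillation) survives the averaging, while unrelated
            noise averages toward zero.
        """
        pre = int(window_s[0] * fs_hz)
        post = int(window_s[1] * fs_hz)
        snippets = []
        for t in saccade_onsets_s:
            i = int(round(t * fs_hz))
            if i + pre >= 0 and i + post <= len(mic_signal):
                snippets.append(mic_signal[i + pre:i + post])
        t_axis = np.arange(pre, post) / fs_hz
        return t_axis, np.mean(snippets, axis=0)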
1 |