1992 — 2021 |
Carney, Laurel H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. R29 Activity Code Description: Undocumented code. |
Auditory Processing of Complex Sounds @ University of Rochester
DESCRIPTION (provided by applicant): The neural mechanisms of auditory perception cannot be understood without detailed knowledge of physiological responses to sounds for which psychophysical responses are well described. This proposal presents a comprehensive approach to this important problem and focuses on the question of how information carried by amplitude modulations of complex sounds is encoded and processed by the brain. Despite recent advances in digital hearing-aid technology, our limited understanding of the neural mechanisms involved in processing complex sounds remains a significant limitation in our ability to aid listeners with hearing loss. New strategies to assist listeners with processing sounds in complex acoustic environments will emerge from our investigations of how the healthy auditory system handles this challenge. The three Aims of this proposal feature a novel combination of behavioral, physiological, and computational modeling approaches to address the problem of encoding and processing amplitude-modulated (AM) sounds. The 1st Aim will test three hypotheses concerning behavioral and psychophysical thresholds. The first hypothesis focuses on defining AM detection and discrimination thresholds for rabbits and humans and uses rigorously matched test procedures that are compatible with physiological approaches (Aim 2). The second hypothesis probes a long-standing puzzle: behavioral AM-detection thresholds improve as sound level increases, whereas single-unit physiological coding (based on current theories) degrades as level increases. The third hypothesis concerns the identification of detailed cues for masked AM detection. These cues will be identified with a novel application of reproducible maskers in the modulation domain. The 2nd Aim will test hypotheses of physiological AM coding at the level of the inferior colliculus (IC) in awake rabbit. These studies will include stimuli selected on the basis of behavioral results from Aim 1. 
We have developed new physiological methods for temporally precise recordings from populations of IC neurons using tetrodes. These recordings enable rigorous tests of the relative reliance of neural encoding on average discharge rates and temporal response patterns, including statistical analyses of spike rates and patterns across ensembles of neurons. The 3rd Aim uses computational techniques to test competing theories for AM-rate tuning in the IC. Recent models have proposed various neural mechanisms to explain AM responses in the midbrain. We will rigorously test these models and will include tests with stimuli other than those for which the models were designed. The results will explicitly determine which models are most consistent with our physiological data. These studies will advance our understanding of the mechanisms underlying AM coding and processing in the auditory system. This information will instruct efforts to enhance and restore critical aspects of complex sounds for listeners with hearing loss by improving hearing-aid signal-processing algorithms. The Public Health Relevance of this project is to determine how the healthy auditory system encodes complex sounds. We will use a novel combination of behavioral, physiological, and computer modeling approaches to identify how the brain encodes and extracts amplitude fluctuations in complex sounds. Because hearing loss in humans typically involves difficulty understanding complex sounds, knowledge of how the brain codes these ubiquitous sounds will provide new and important insights for aiding listeners with hearing loss.
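The sinusoidally amplitude-modulated (SAM) tones at the heart of these Aims have a simple closed form, s(t) = [1 + m sin(2πf_m t)] sin(2πf_c t). A minimal sketch follows, with all parameter values chosen for illustration rather than drawn from the experimental protocol:

```python
import numpy as np

def am_tone(fc, fm, m, dur, fs=48000):
    """Sinusoidally amplitude-modulated (SAM) tone.

    fc  : carrier frequency (Hz)
    fm  : modulation frequency (Hz)
    m   : modulation depth, 0 (unmodulated) to 1 (fully modulated);
          AM-detection thresholds are commonly reported as 20*log10(m) dB
    dur : duration (s); fs : sampling rate (Hz)
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# Example: 4-kHz carrier, 64-Hz modulation, depth m = 0.25 (about -12 dB)
s = am_tone(fc=4000.0, fm=64.0, m=0.25, dur=0.5)
```

An AM-detection threshold is then the smallest m at which the modulated tone can be distinguished from the unmodulated carrier (m = 0).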
|
2003 — 2004 |
Carney, Laurel H. |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Physiologically-Based Signal Processing Schemes
DESCRIPTION (provided by applicant): Despite significant advances in hearing-aid technology, several problems for hearing-impaired listeners have not been solved by current signal-processing schemes. This project will develop novel, physiologically-based signal processing strategies to address two major problems faced by hearing-aid users: difficulty listening in noisy environments and loudness distortion, which limits hearing-impaired listeners' dynamic range of comfortable listening levels. Our recent physiological studies have suggested neural encoding and processing mechanisms for masked detection of signals in background noise and for level coding. These studies have resulted in quantitative models that successfully predict the performance of human listeners on psychophysical tasks related to masked detection and level discrimination. In this project, we will take advantage of the basic concepts behind these models and convert them into signal-processing algorithms. This effort not only provides additional tests for our models, but also provides an opportunity to apply ideas suggested by our physiological and modeling studies to real problems for hearing-impaired listeners. Two strategies will be explored in this project. (1) Noise reduction based on a neural model for masking. We will use our masked-detection model to identify signals in the presence of background noise. Frequency bands that are dominated by signal energy will be amplified, and other channels will be attenuated. The confirmed success of our model in detecting signals in fluctuating noises is an important aspect of this approach. (2) Compensation of perceived loudness based on a neural model for level coding. We will introduce into the signal the level-dependent cross-frequency phase differences that are created in the healthy cochlea, taking advantage of nonlinear filters that simulate auditory-nerve tuning. 
The goal is to increase the comfortable range of levels and to improve speech recognition by providing these nonlinear cues to the impaired ear.
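Strategy (1) — amplify frequency bands dominated by signal energy, attenuate the rest — can be sketched as frame-by-frame spectral gating. Note that this is a minimal stand-in: the per-bin comparison against a fixed noise estimate below replaces the proposal's physiological masked-detection model, and the frame length, noise threshold, and gain values are illustrative assumptions:

```python
import numpy as np

def band_select(x, noise_psd, nfft=256, boost_db=6.0, cut_db=-12.0):
    """Frame-by-frame spectral gating: FFT bins whose energy exceeds the
    noise estimate (signal-dominated) are boosted; all others are cut.
    noise_psd is a per-bin (or scalar) noise-power estimate."""
    boost = 10 ** (boost_db / 20)
    cut = 10 ** (cut_db / 20)
    out = np.zeros_like(x, dtype=float)
    for start in range(0, len(x) - nfft + 1, nfft):
        X = np.fft.rfft(x[start:start + nfft])
        gains = np.where(np.abs(X) ** 2 > noise_psd, boost, cut)
        out[start:start + nfft] = np.fft.irfft(gains * X, n=nfft)
    return out

# One frame containing a strong tone (signal-dominated bin) and a weak
# tone standing in for background noise:
fs, nfft = 16000, 256
t = np.arange(nfft) / fs
x = np.sin(2 * np.pi * (20 * fs / nfft) * t) \
    + 0.1 * np.sin(2 * np.pi * (40 * fs / nfft) * t)
y = band_select(x, noise_psd=1000.0, nfft=nfft)
```

After processing, the strong component is 6 dB up and the weak one 12 dB down; a hearing-aid implementation would additionally smooth the gains across time and frequency to avoid musical-noise artifacts.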
|
2011 |
Carney, Laurel H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Developing and Testing Models of the Auditory System With & Without Hearing Loss @ University of Rochester
DESCRIPTION (provided by applicant): The most common problem reported by people with sensorineural hearing loss is listening in the presence of background noise. The new efforts presented in this proposal will focus on the development, testing, and application of a composite computational model for physiological and psychophysical responses to complex sounds, especially sounds in the presence of background noise. A model that explains both the impressive ability of normal-hearing listeners, and the difficulty of listeners with hearing loss, to hear sounds in noisy environments will be an invaluable tool to better understand and predict listeners' performance in difficult auditory situations. This information can then be used to design new and improved hearing-aid signal-processing strategies that are successful in noisy situations. Previous computational models have successfully described auditory processing at several levels of the auditory system with phenomenological models for the auditory periphery that include cochlear tuning, transduction, and discharge times of individual auditory-nerve fibers. More recent models describe single neurons and neural circuits in the brain stem and midbrain, including binaural interactions and neural amplitude-modulation processing. Computational models of neural population responses have also been developed to predict the performance of listeners with and without hearing loss in basic psychophysical tasks. In the proposed project, experience with these models will be leveraged to develop a novel, composite model that ties together these different levels of processing, providing a tool for studying the interactions of stimulus cues and neural mechanisms along the auditory pathway. This computational model for monaural and binaural processing of complex sounds will be tested and refined using physiological recordings from the midbrain (inferior colliculus) of awake rabbit and psychophysical tests in human listeners. 
The model will be used to predict existing psychophysical data for masked detection, both with and without binaural cues, by listeners with normal hearing. These psychophysical studies will be extended to include listeners with sensorineural hearing loss. Finally, the new model will predict performance of listeners with and without hearing loss on a masked amplitude-modulation (AM) detection task using reproducible modulation maskers. Physiological tuning for amplitude-modulation frequency first emerges at the level of the midbrain, the highest level of the proposed model. Thus, this task will allow direct comparison between physiological aspects of AM processing at the midbrain and psychophysical performance. This proposal provides a systematic transition from modeling basic physiological responses to predicting performance of listeners with and without hearing loss in psychophysical detection tasks in the audio- and modulation-frequency domains. The long-term goal of this research program is to develop a robust tool for the development and testing of novel signal-processing strategies for listeners with hearing loss. PUBLIC HEALTH RELEVANCE: The Public Health Relevance of this project is to develop a better understanding of the difficulties that listeners with hearing loss face in noisy situations. We will build a computational model for the auditory system of listeners with and without sensorineural hearing loss. This model will be used to predict listeners' performance in detection tasks in noisy situations. Because hearing loss typically involves difficulty understanding complex sounds, especially in noise, knowledge of how the healthy brain copes with difficult listening environments will provide new and important insights for aiding listeners with hearing loss.
|
2012 — 2015 |
Carney, Laurel H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Developing and Testing Models of the Auditory System With & Without Hearing Loss @ University of Rochester
|
2016 — 2021 |
Carney, Laurel H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Developing and Testing Models of the Auditory System With and Without Hearing Loss @ University of Rochester
This proposal presents plans to develop and test a new model for the processing of acoustic cues in both psychophysical tasks and real-world hearing. Masking paradigms are typically interpreted in the context of two models: The power-spectrum model is based on energy in the responses of one or more band-pass filters that represent peripheral tuning. The envelope-power-spectrum model is based on the responses of a bank of modulation filters. These popular models, however, fail to explain robust performance in a number of psychophysical tasks, especially roving- or equalized-level, and roving- or equalized-envelope-energy tasks. The continued use of these models is largely due to a lack of viable alternatives. Here, we propose a new, alternative model for masked detection and spectral coding that provides a mechanistic explanation for a number of psychophysical results, for listeners with or without hearing loss. Building upon our recent studies of envelope-related cues in masked detection, our proposal focuses on the role of neural-fluctuation cues in the responses of auditory-nerve fibers, and ultimately on how these cues are represented by modulation-tuned neurons in the midbrain. These cues are robust in the healthy ear but, because they are strongly dependent upon peripheral nonlinearities, they are substantially degraded in most common types of hearing loss. We will make detailed measurements on the use of envelope vs. energy cues by individual listeners as a function of frequency and hearing thresholds. These results will provide individualized models that will be used to predict thresholds in specific masking and discrimination tasks. We will use computational, physiological and psychophysical tools to test a diotic model of masked detection, focusing on two classic paradigms: notched-noise and forward-masking tasks. 
These psychophysical tools have been used extensively to characterize tuning bandwidth, compression, and temporal processing in listeners with and without hearing loss. We will re-examine these tasks with neural-fluctuation-based representations. Our preliminary results show that the contrast in fluctuations across peripheral channels establishes a representation of stimulus features at the level of the midbrain that is robust in noise across a wide range of levels, thus addressing the primary challenges of roving-parameter paradigms. These cues are particularly strong near spectral slopes, and thus warrant consideration for other stimulus features with sharp spectral slopes, such as fricative consonants and pinna cues. We therefore also propose to extend our dichotic model based on interaural differences in neural fluctuations to the spectral slopes of pinna cues, which code sound location and externalization. Our preliminary work indicates that neural-fluctuation cues associated with the diotic and dichotic stimuli occur in the modulation frequency range where the majority of midbrain neurons are tuned. Consideration of these tasks and stimuli in the framework of neural-fluctuation cues provides a novel and general understanding for coding stimulus spectra by the normal and impaired ear.
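For contrast with the neural-fluctuation account, the envelope-power-spectrum model being argued against reduces to a short pipeline: band-pass the stimulus into a peripheral channel, take the Hilbert envelope, apply a modulation filter, and measure the residual envelope power. The sketch below uses brick-wall FFT filters as a stand-in for the more realistic gammatone and modulation-filter shapes in the literature; the band edges and stimulus parameters are illustrative:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Ideal (brick-wall) band-pass filter via the FFT."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, n=len(x))

def envelope_power(x, fs, audio_band, mod_band):
    """Envelope power in one peripheral-channel / modulation-filter pair:
    peripheral band-pass -> Hilbert envelope -> modulation band-pass -> power."""
    channel = bandpass_fft(x, fs, *audio_band)
    # Analytic-signal envelope (FFT construction, as in scipy.signal.hilbert)
    n = len(channel)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(np.fft.fft(channel) * h))
    mod = bandpass_fft(env - env.mean(), fs, *mod_band)
    return np.mean(mod ** 2)

# A SAM tone carries envelope power in the modulation band; a pure tone does not.
fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 2000 * t)
am = (1 + 0.5 * np.sin(2 * np.pi * 32 * t)) * carrier
p_am = envelope_power(am, fs, (1500, 2500), (16, 64))
p_pure = envelope_power(carrier, fs, (1500, 2500), (16, 64))
```

The model predicts detection whenever this statistic for signal-plus-masker exceeds that for the masker alone; the roving-level and equalized-envelope-energy tasks mentioned above are precisely the cases where the statistic fails while listeners still succeed.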
|