1985 — 1993
Sinex, Donal G
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Auditory Nerve Fiber Responses to Speech @ University of Calif-Los Alamos Nat Lab
In order to add to the understanding of the neural processing of complex sounds, population studies of the responses of chinchilla auditory-nerve fibers to speech and speechlike sounds are proposed. Much information about the encoding of synthesized vowels with varying degrees of complexity and of selected consonants has recently become available. There is less information about the representation of other consonants, particularly those differing in voice-onset time. In one series of experiments, sounds in which acoustic correlates of the voicing feature are systematically varied will be studied. To aid in the interpretation of the data, the speech stimuli will be selected from those for which behavioral discrimination has previously been measured for the chinchilla. Comparison of neural and psychophysical responses to the same sounds and in the same species may provide insight into the kind of neural code that could transmit adequate spectral information to the central nervous system (CNS). This comparison is informative because the results of psychophysical tests define the limits of the chinchilla's ability to process human speech sounds and therefore provide an estimate of the amount of spectral information that should be observable in neural responses. In a second series of experiments, responses of auditory-nerve fibers to synthesized speechlike sounds in which important acoustic properties are parametrically varied will be obtained. Previous studies of speech encoding in the auditory nerve have emphasized the contribution of nonlinear auditory processing to the observed response patterns. However, it has also been noted that responses to certain sounds or spectral regions are less affected by nonlinear mechanisms. A systematic exploration of the stimulus parameters that result in nonlinear transformations is lacking.
Studies of auditory nerve fiber responses to harmonic complexes, in which the amplitudes of selected components can be independently varied, will be conducted. The results of these studies will provide additional specific information about the effects of nonlinear transformation on the extraction of these important spectral features.
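The harmonic-complex manipulation described above, independently varying the amplitudes of selected components, can be sketched in a few lines. This is an illustrative synthesis only; the fundamental frequency, number of harmonics, and attenuation value are assumptions, not the stimulus parameters of the proposed experiments:

```python
import numpy as np

def harmonic_complex(f0, amplitudes, fs=44100, dur=0.2):
    """Sum of harmonics of f0; amplitudes[k] scales harmonic k+1.

    Changing one entry in `amplitudes` independently varies that
    spectral component while leaving the others untouched.
    """
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k, a in enumerate(amplitudes, start=1):
        x += a * np.sin(2 * np.pi * k * f0 * t)
    return x

# 100-Hz complex of 10 harmonics with the 4th harmonic attenuated by 20 dB
amps = np.ones(10)
amps[3] = 10 ** (-20 / 20)
x = harmonic_complex(100.0, amps)
```

Because the 0.2-s duration contains an integer number of cycles of every harmonic, the attenuation is directly visible in the spectrum of `x`.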
1995 — 1999
Sinex, Donal G
R01
Auditory Processing of Speech @ Arizona State University-Tempe Campus
The long-term goal is to understand the processing of human speech sounds in the auditory nervous system. Studies of the processing of both speech and nonspeech stimuli in the chinchilla inferior colliculus (IC) are proposed. Responses of single neurons will be measured to determine the distributed response to consonant-vowel syllables that differ in voice onset time (VOT). Both humans and chinchillas hear syllables differing in VOT as belonging to qualitatively different categories separated by a perceptual boundary. Both species also exhibit increased acuity for VOT syllables adjacent to the category boundary, and decreased acuity for syllables within each category. The experiments will examine neural mechanisms that may contribute to the formation of VOT categories; this aspect of speech sound processing is important but is poorly understood. The results of these experiments could lead to better speech processors for use with hearing aids, or with cochlear or brainstem implants intended to provide speech information to profoundly deaf patients. Specific Aim 1 will test three hypotheses about the representation of VOT in the IC: first, that the temporal pattern of the population response to a given syllable is related to characteristic frequency; second, that the temporal pattern covaries with other response properties such as the shape of the histogram elicited by a pure tone; and third, that the information that can be observed in the neural representation varies nonmonotonically across the continuum of VOT syllables in agreement with the pattern of psychophysical acuity. Measurements will be made with a limited stimulus set, from as many single neurons as possible, and from all parts of the IC. This approach is analogous to the "population study" approach that has been essential for understanding the coding of speech in the auditory periphery.
Specific Aim 2 will test the hypothesis that the representation of VOT in the IC remains nonmonotonic despite changes in interaural time or intensity. Specific Aim 3 will test the hypothesis that the region of the VOT continuum that is represented with the greatest precision shifts, in agreement with shifts in the location of the psychophysical boundary for VOT. Specific Aim 4 will test the hypothesis that responses to VOT syllables can be predicted from pure tone response properties.
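The relationship between categorical identification and acuity that motivates these Aims can be illustrated with a toy model. The boundary location and slope below are hypothetical values chosen for illustration, not measured chinchilla or human data:

```python
import numpy as np

def p_voiceless(vot_ms, boundary=30.0, slope=0.4):
    """Probability of a 'voiceless' response at a given VOT (ms).

    A logistic identification function: syllables on either side of
    the boundary are heard as belonging to different categories.
    """
    return 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary)))

vot = np.arange(0.0, 61.0, 5.0)
p = p_voiceless(vot)

# Acuity for a fixed VOT step is proportional to the slope of the
# identification function, p * (1 - p), which peaks at the boundary:
# the nonmonotonic pattern the neural representation is predicted to share.
acuity = p * (1.0 - p)
best = vot[np.argmax(acuity)]  # VOT at which acuity is greatest
```

In this toy model acuity is maximal at the 30-ms boundary and falls off within each category, mirroring the psychophysical pattern described above.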
2001 — 2008
Sinex, Donal G
R01
Auditory Processing of Temporally-Complex Sounds @ Arizona State University-Tempe Campus
DESCRIPTION: The long-term goal is to understand and quantify the processing of temporally-complex sounds in the mammalian auditory system. The Specific Aims for this cycle address neural mechanisms of "sound source identification" (Yost, 1991). These mechanisms are likely to make use of temporal information in the stimulus. Responses of single neurons in the inferior colliculus (IC) of the chinchilla will be measured. Studies of the representation of complex tones that are heard as if they consist of two separate sound sources are proposed. These studies take advantage of a novel technique for evaluating the processing "weight" that IC neurons give to particular spectral components. Other studies will examine neural correlates of the masked threshold, for stimulus configurations that promote "across-channel" processing. The psychoacoustic phenomena called "Comodulation Masking Release" (CMR) and "Modulation Detection Interference" (MDI) will be used as a framework to study integrative neural mechanisms that contribute to the identification of sound sources. In CMR, across-channel processing makes the detection of a signal in noise easier. In MDI, across-channel processing makes the detection of modulation more difficult. The neural mechanisms that underlie CMR and MDI are not known, but they must involve integration of information across frequency channels. The IC is likely to participate in that integration. Experiments to quantify the ability of IC neurons to represent temporal envelopes will also be conducted, to aid in the interpretation of other experiments. Specific Aim 1 tests the hypothesis that IC neurons respond differentially to components in a complex sound that are perceived as belonging to separate sound sources because they differ in harmonicity.
Specific Aim 2 tests the hypothesis that the thresholds of IC neurons to tones in comodulated noise bands are lower than their thresholds in noise bands with deviant envelopes; that is, that correlates of CMR will be found in the IC. Specific Aim 3 tests the hypothesis that responses of IC neurons to modulated tones are resistant to interference from other modulated tones; that is, that correlates of MDI will not be found in the IC. Specific Aim 4 is to measure the perception of mistuning and CMR in the chinchilla with psychophysical techniques.
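The comodulated versus deviant-envelope masker conditions central to Specific Aim 2 can be sketched as follows. This sketch uses sinusoidal amplitude modulation of tonal carriers as a stand-in for modulated noise bands; the band centers and the 10-Hz modulation rate are illustrative assumptions, not the proposed stimulus parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs

# Common low-frequency envelope shared by every band in the
# comodulated condition.
env = 1.0 + np.sin(2 * np.pi * 10.0 * t)

centers = [600.0, 1000.0, 1400.0]  # masker band center frequencies (Hz)

# Comodulated: all bands fluctuate together (identical envelope).
comod = [env * np.sin(2 * np.pi * f * t) for f in centers]

# Deviant: each band carries an envelope with an independent random
# phase, so the bands no longer fluctuate together.
deviant = [(1.0 + np.sin(2 * np.pi * 10.0 * t + rng.uniform(0.0, 2 * np.pi)))
           * np.sin(2 * np.pi * f * t) for f in centers]
```

The psychophysical prediction being tested is that a tone added to one band is easier to detect in the `comod` configuration than in the `deviant` one.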
2009 — 2010
Sinex, Donal G
RC1 Activity Code Description: NIH Challenge Grants in Health and Science Research
Neurophysiologically-Based Sound Separation For Auditory Prostheses
DESCRIPTION (provided by applicant): The goal is to improve the performance of hearing aids in noisy environments. This will be done by developing a computational method for processing sound mixtures to enhance a target signal in a mixture that includes that signal and additional noise or competing speech sounds. The result of the processing will be a waveform whose physical signal-to-noise ratio (SNR) has been increased. It is expected that when this processed signal is used as the input to a hearing aid or implantable prosthesis, listeners with hearing loss will have less difficulty identifying the signal. Engineering approaches have been applied to this problem in the past, but with limited success. The innovative approach to be used here is to design a processor that duplicates some of the processing carried out by neurons in the auditory pathway. The PI has measured the responses of neurons to sounds that are perceived by normal listeners as a single sound or as mixtures of two sounds ("double sounds"). Waveforms that are perceived as double elicit stereotypical complex temporal discharge patterns in central auditory neurons. A computational model devised by the PI can duplicate the fine temporal details of those discharge patterns. Studies based on the computational model have led to specific hypotheses about the encoding of double sounds across populations of neurons. The proposed experiments will generate computational methods for decoding the responses of populations of neurons. After the decoding step, the original sound mixture will have been separated into two parts. The part that corresponds to the target signal can then be used to synthesize a new sound. It is hypothesized that in this re-synthesized sound, the important properties of the target signal will have been retained, while the properties of the competing background will have been rejected or diminished. The hypothesis will be tested with psychophysical speech-identification experiments.
The identification of signals in the presence of noise will be measured. The sounds will be processed, and identification will be re-measured. Studies with normal-hearing listeners will be conducted first, and parameters of the model will be refined based on those results. Subsequent experiments will measure the identification of speech in noise, with and without processing, in listeners with hearing loss. It is projected that by the end of the two-year period, the processing algorithm will have been improved to the point at which it could be incorporated into commercial hearing devices. PUBLIC HEALTH RELEVANCE: Hearing-impaired listeners have great difficulty understanding speech in noisy environments. That is largely because they cannot segregate simultaneous sounds as effectively as listeners with no hearing loss can. Hearing aids and cochlear implants do not restore the ability to process speech in noise. The proposed project will lead to the development of a signal processing strategy for hearing aids or implants that extracts speech from noise before it is delivered to the prosthetic device. This will at least partially restore the ability to segregate simultaneous sounds.
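The claim that processing increases the waveform's physical SNR can be made concrete. The sketch below shows only how an SNR improvement would be quantified when the clean target and the noise are known separately; the fixed 10-dB noise attenuation is a placeholder for the proposed neural decoding and re-synthesis, which it does not implement:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from separate signal and noise waveforms."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)       # stand-in "speech" component
noise = 0.5 * rng.standard_normal(t.size)  # competing background

before = snr_db(target, noise)

# Placeholder separation step: attenuate the estimated noise by 10 dB
# while leaving the target untouched.
gain = 10 ** (-10 / 20)
after = snr_db(target, gain * noise)
```

Attenuating the noise by 10 dB raises the physical SNR by exactly 10 dB; in the proposed experiments the corresponding benefit would be measured behaviorally, as an improvement in speech identification.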