1998 — 2000
Auer, Edward T
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.
Spoken and Printed Word Recognition in Deaf Adults
Research is proposed to investigate the individual differences observed in spoken and printed word recognition by people with profound hearing impairments. The long-term aims of the research are to discover how the psycholinguistic processing system is affected by perceptual experience and to apply this knowledge to therapies involving sensory aids (i.e., cochlear implants, hearing aids, tactile aids) and communication strategies. Two studies are proposed that examine processing-channel (i.e., speech versus orthography) relationships during spoken and printed word recognition. Study I tests the hypothesis that use of phonological codes in reading is a function of speechreading ability in individuals with prelingual profound hearing impairment. Experiments will employ lexical and semantic decision techniques to probe use of phonology. Study II tests the hypothesis that individuals who make use of phonological codes during printed word recognition can readily transfer their knowledge of printed words to recognition of spoken versions. Study II will employ a transfer task in which printed pseudowords are learned in the first part of the procedure, followed by spoken pseudoword learning. Transfer from printed to spoken learning will be assessed. It is predicted that a distinct pattern of learning will be observed across participant groups. Participants in Studies I and II will be adults with normal hearing and adults with prelingual-onset profound hearing impairments. Direct comparisons will be made among three groups of adults with prelingual-onset profound hearing impairments: those with high printed word vocabulary and high speechreading ability (HPHS); high printed vocabulary and low speechreading ability (HPLS); and low printed word vocabulary and high speechreading ability (LPHS). Knowledge about the causes of individual differences can be employed to guide therapies involving sensory aids and communication strategies.
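One standard way to quantify the predicted transfer in such a two-phase paired-associate design is a "savings" score: how much less learning the spoken phase requires after the printed phase. The sketch below is hypothetical, not the proposal's specified analysis; the function names, criterion, and window size are illustrative assumptions.

```python
# Hypothetical sketch of a "savings" transfer score for a two-phase
# paired-associate design: if printed-word knowledge transfers, the spoken
# phase should reach criterion in fewer trials.

def trials_to_criterion(correct_by_trial, criterion=8, window=10):
    """First trial at which at least `criterion` of the last `window`
    responses were correct (1 = correct, 0 = incorrect)."""
    for t in range(window, len(correct_by_trial) + 1):
        if sum(correct_by_trial[t - window:t]) >= criterion:
            return t
    return len(correct_by_trial)  # criterion never reached

def savings(printed_trials, spoken_trials):
    """Proportional reduction in learning effort from the printed phase to
    the spoken phase; positive values indicate transfer."""
    return (printed_trials - spoken_trials) / printed_trials
```

On this kind of scoring, the predicted distinct patterns of learning across participant groups would appear as group differences in mean savings.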
2001 — 2005
Auer, Edward T
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Perceptual Experience and Spoken Word Recognition @ University of Kansas Lawrence
The goal of the proposed research is to discover how perceptual experience affects the spoken word recognition processing system. This research capitalizes on wide individual differences in linguistic experience among prelingually deaf adults with English as a first language and on differences in access to perceptual information for normal-hearing adults versus the same deaf adults. The aim of Project I is to provide a detailed understanding of the relationship between how words are learned or used in terms of input channel (spoken, orthographic, fingerspelled) and how words are processed. The hypothesis to be tested in terms of individual differences is that having more lexical experience in a communication channel facilitates word recognition via that same channel. The hypothesis will be investigated using behavioral and functional neuroanatomical (functional magnetic resonance imaging, fMRI) methods. Behavioral methods will include perceptual word identification, lexical decision, and subjective estimates of how words were learned. The fMRI task will be lipreading of words by deaf adults, and analyses will focus on individual differences in the magnitude and extent of activation. Behavioral and fMRI studies will expand on our previous research with the same methods and subject groups. The aim of Project II is to understand the effects of perceiving words from phonetically impoverished (lipread) stimuli. The hypothesis to be tested is that phonetic information activates experientially derived word-form representations that are isomorphic with the chronically available perceptual (phonetic) information, and not linguistically derived word forms isomorphic with the phonemically distinct words in the language. This hypothesis predicts that spoken word-form representations based primarily on either optical (lipread by deaf adults) or acoustic (heard by hearing adults) signals during development will differ in the extent to which they approximate the phonemically different words in the language. Behavioral methods will include semantic priming and discrimination. fMRI experiments will investigate brain activation as a function of word-form similarity and phonotactics. The two projects will investigate fundamental scientific issues with direct clinical implications. For example, findings showing that how words are experienced affects how they are perceived would suggest that clinicians and educators of deaf children need to pay attention to the channel by which deaf children acquire language.
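To make the Project II contrast concrete, the toy sketch below models an optically based lexicon by collapsing phonemes that look alike on the face, so that phonemically distinct words merge into shared visual word forms. This is an illustration of the idea only, not the project's method; the phoneme groupings and function names are simplified assumptions, not perceptual data.

```python
# Illustrative toy: collapse visually confusable phonemes into classes and
# group words that share one visual word form. Groupings are assumptions.
from collections import defaultdict

VISUAL_CLASS = {
    'p': 'B', 'b': 'B', 'm': 'B',          # bilabials are hard to tell apart
    'f': 'F', 'v': 'F',                    # labiodentals
    't': 'T', 'd': 'T', 'n': 'T', 's': 'T', 'z': 'T',
    'k': 'K', 'g': 'K',
}

def visual_form(phonemes):
    """Map a phonemic transcription onto its visually distinctive form."""
    return tuple(VISUAL_CLASS.get(p, p) for p in phonemes)

def equivalence_classes(lexicon):
    """Group phonemically distinct words that share one visual word form."""
    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[visual_form(phonemes)].append(word)
    return dict(classes)

lexicon = {'pat': ('p', 'a', 't'), 'bat': ('b', 'a', 't'),
           'mat': ('m', 'a', 't'), 'fat': ('f', 'a', 't')}
print(equivalence_classes(lexicon))
# 'pat', 'bat', and 'mat' share one optical form; 'fat' remains distinct.
```

The larger such equivalence classes are, the less an optically derived lexicon approximates the phonemically distinct words of the language, which is exactly the dimension on which the hypothesized optical and acoustic word-form representations should differ.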
2012 — 2013
Auer, Edward T; Bernstein, Lynne Esther
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Visual Form-Based Spoken Word Processing @ George Washington University
DESCRIPTION (provided by applicant): Speech can be learned and perceived on the basis of vision, as demonstrated by prelingually deaf adults who rely on seeing spoken language. Visual speech perception (lipreading/speechreading) is not limited to deaf individuals, as almost everyone demonstrates some visual spoken word recognition; but ability varies widely from individual to individual. A mechanistic account at the neural level is needed to help explain lipreading ability, its individual variation, its potential for plasticity, and its role in the spoken language processing system. Based on extensive perceptual research and initial neural evidence, this development project will test a novel hypothesis that visual spoken word representations are stored in the high-level vision cortical pathway. Neuroimaging, computational, and behavioral methods will be used to test this hypothesis in individuals with normal hearing and vision. Testing this hypothesis requires overcoming deficiencies in the resolution of conventional functional magnetic resonance imaging (fMRI) data and achieving control over visible spoken stimuli. We will apply advanced neuroimaging techniques (rapid adaptation fMRI and connectivity analyses) and computational modeling of speech dissimilarity to localize cerebral cortical activity in response to visible spoken nonsense syllable and word stimuli. Localizers will be used to identify in individuals an area in high-level visual cortex that has previously been shown to be selective for visible speech, an area selective for non-speech face motion, areas selective for visual orthographic word forms and for semantic processing, the lateral occipital complex, the fusiform face area, and the human visual motion area. A long-term goal of this project is to determine whether the visual speech pathway is organized the same way as the auditory speech pathway, with multiple levels of representation from speech features to words. The visual speech pathway might instead be hierarchically shallow, with visual features that are not specific to speech projecting directly to speech representations in high-level vision cortical areas. This project will test specific hypotheses about the selectivity of localized brain areas for visible spoken words and syllables. It will also investigate adaptation, connectivity, and psychophysiological effects in whole-brain analyses focused on relationships among brain areas that might contribute to visual perception of speech. The project involves a highly innovative collaboration between speech science researchers and vision scientists using a multidisciplinary approach that can benefit both areas of research. Detailed understanding of how the complex and dynamic visual speech stimulus is processed in the visual pathway will contribute to vision science. Clinical relevance: If evidence is obtained for high-level visual system involvement in speech recognition, innovative clinical methods for training lipreading in those with hearing loss become available from vision science. Individuals with hearing loss frequently depend on visible speech in face-to-face communication, and enhanced ability to use visual information could improve the quality of their lives. Explanations for individual differences in lipreading and methods to improve lipreading have been sought for over a century. In future studies, we can use the proposed methods to isolate causes of lipreading differences. Studies can test the hypothesis that poor lipreading corresponds to poorly tuned neuronal representations of visual spoken word forms, with excessive adaptation for dissimilar and poorly discriminated stimuli, and that good lipreading corresponds to minimal neuronal adaptation beyond a certain level of stimulus dissimilarity.
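As one concrete reading of the computational modeling step (a hypothetical sketch; the abstract names speech dissimilarity modeling but not a specific metric, so the edit-distance choice and all names below are assumptions), pairwise stimulus dissimilarity can be summarized as a matrix that is then related to release-from-adaptation effects.

```python
# Hypothetical sketch: pairwise stimulus dissimilarity as normalized edit
# distance over transcriptions, yielding a matrix that could be regressed
# against fMRI adaptation effects. Not the project's specified model.
import itertools

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i
    for j in range(cols):
        d[0][j] = j
    for i in range(1, rows):
        for j in range(1, cols):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

def dissimilarity_matrix(stimuli):
    """Pairwise normalized distances: 0 = identical, 1 = maximally different."""
    n = len(stimuli)
    m = [[0.0] * n for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        a, b = stimuli[i], stimuli[j]
        m[i][j] = m[j][i] = edit_distance(a, b) / max(len(a), len(b))
    return m
```

Under the standard adaptation logic, stimulus pairs with higher modeled dissimilarity should produce greater release from adaptation in areas whose representations are tuned along that dimension.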
2014
Auer, Edward T; Bernstein, Lynne Esther
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Multisensory Training For Unisensory Perceptual Learning @ George Washington University
When auditory speech stimuli are degraded due to external factors such as noise or internal factors such as hearing loss, being able to see the talker typically improves speech perception. This effect is usually explained as the result of auditory and visual speech information combining, so that more speech information is available to the perceiver. In this project, we will examine the novel hypothesis that the visual information can also guide perceptual learning of the information in the auditory speech stimulus. Auditory speech perception is altered as a result of experience or training with audiovisual (AV) speech stimuli. We hypothesize that the basis for this perceptual learning effect is the ability to exploit correlations or contingencies between auditory and visual speech cues in the input stimuli. These relationships exist because the biomechanics of speech produce both sights and sounds, and perceivers gain implicit knowledge of these audiovisual relationships in the speech they encounter in daily life. Experiments on auditory speech perceptual learning will be carried out with normal-hearing, sighted adults. The stimuli will derive from natural recordings, and the acoustic speech will be degraded by vocoding. The video will be either natural or synthetic. Training will use a paired-associates task in which participants will learn associations between each two-syllable nonsense word and its assigned nonsense picture. The measure of learning will be scores during training and scores in an auditory-only test that follows paired-associates training. The measure of generalization will be consonant identification in new two-syllable nonsense words, before any training and at the conclusion of the experiment. In Exp. 1, the necessity of synchronized AV speech for optimal auditory learning will be tested with synchronized and desynchronized speech, printed words, and auditory-only control conditions. In Exp. 2, the ability of visual speech to distort auditory speech perceptual representations will be tested by mismatching the auditory and visual stimuli during training. In Exp. 3, AV conditions with natural or synthesized video stimuli will be compared to test whether the quality of the visual speech information affects auditory perceptual learning. In Exp. 4, attention to auditory speech cues during AV training will be challenged by training with a different talker for each paired associate. Clinical relevance: Perceptual learning is critical to successful use of sensory prostheses, for example, cochlear implants and hearing aids. Perceptual learning under unisensory conditions can be limited by the necessity of accessing new stimulus information based only on the information delivered through the unisensory prosthesis. Multisensory stimuli with natural correlations or contingencies delivered to the senses in real time could guide unisensory perceptual learning more effectively and efficiently. Research is needed to understand how multisensory learning achieves these goals, and how to develop practical training regimes.
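To make the degradation manipulation concrete, the sketch below implements a minimal noise vocoder of the kind commonly used to degrade acoustic speech: split the signal into frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. The abstract names vocoding but not an implementation, so the band count, filter order, and frequency edges here are illustrative assumptions.

```python
# Minimal noise-vocoder sketch; parameter choices are illustrative.
# Assumes the sampling rate fs is well above 2 * hi.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_bands + 1)   # log-spaced band edges
    out = np.zeros(len(speech))
    noise = np.random.randn(len(speech))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))        # band amplitude envelope
        carrier = sosfiltfilt(sos, noise)       # band-limited noise carrier
        out += envelope * carrier
    return out / np.max(np.abs(out))            # normalize to avoid clipping
```

Fewer bands yield more degraded speech, so the channel count gives a single knob for setting the difficulty of the auditory learning task.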
2014 — 2015
Auer, Edward T; Bernstein, Lynne Esther
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Speech Perception Impairments in Healthy Normal-Hearing Adults: Neural Mechanisms @ George Washington University
DESCRIPTION (provided by applicant): The goals of this project are to develop measures for studying speech perception impairments in healthy younger adults with normal hearing and without developmental disorders, and to obtain a database of individual differences measures to analyze in relation to experimental measures. Healthy adults with speech perception impairments may present at an audiology clinic complaining of difficulty perceiving speech under degraded listening conditions, yet upon examination they are found to be audiologically normal. But speech-in-noise testing confirms their complaint. They may then be considered for a clinical diagnosis of central auditory processing disorder (CAPD), but there is no agreement on how to diagnose CAPD; there are no standard tests for CAPD; and it is not even known whether its cause is a disorder of auditory processing. No neural bases have been confirmed. Cognitive neuroscience offers a framework for investigating the neural bases of speech perception impairment (SPI) in otherwise healthy young adults. We hypothesize that individuals who complain of speech perception difficulties exist on a continuum of individual differences. We hypothesize that the neural mechanisms responsible for SPI could arise singly or in interaction at the levels of stimulus representation, attention, and working memory. A delayed recognition memory paradigm will be developed to investigate these mechanisms. The paradigm is designed to obtain behavioral and electrophysiological (EEG) measures in response to speech. Healthy adults with normal hearing and with SPI will be recruited for study and compared to similar-age normal adults without speech perception impairments. All participants will undergo a battery of audiological tests and other screening, including tests for verbal intelligence, mental status, and phonological working memory. Then they will be tested in the proposed delayed recognition memory paradigm to obtain behavioral and EEG results related to working memory, attention, and speech stimulus encoding. Stimuli will be nonsense syllables or real words that are either natural recordings or vocoded. Extensive individual differences and group statistical analyses will be carried out using the EEG, behavioral, and clinical measures. Individual differences scores will be sought across normal and SPI groups and evaluated as reliable predictors of group membership. With better understanding of the neural mechanisms responsible for differences in speech perception across groups and individuals, clinical evaluations can become more accurate and treatments can be developed. We envision carrying this project forward to develop clinical tests for adults and also children.
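As a sketch of how "reliable predictors of group membership" might be evaluated (an assumption about the analysis; the feature names are hypothetical, e.g., ERP amplitudes, speech-in-noise scores, working-memory span), cross-validated classification accuracy gives a held-out estimate of predictive reliability rather than an in-sample fit.

```python
# Hypothetical analysis sketch: do individual-differences measures predict
# SPI vs. control group membership? Evaluated by cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def group_prediction_accuracy(X, y, folds=5):
    """X: participants x measures; y: 0 = control, 1 = SPI.
    Returns mean held-out accuracy; ~0.5 is chance for balanced groups."""
    model = make_pipeline(StandardScaler(), LogisticRegression())
    return cross_val_score(model, np.asarray(X), np.asarray(y),
                           cv=folds, scoring='accuracy').mean()
```

Held-out accuracy reliably above chance would support treating the measures as genuine predictors of group membership rather than post hoc group descriptors.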