2013–2014
Werker, Janet F.
R21. Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Multisensory Foundations of Speech Perception in Infancy @ University of British Columbia
DESCRIPTION (provided by applicant): Infants are born with a preference for listening to speech over non-speech, and with a set of perceptual sensitivities that enable them to discriminate most of the speech sound differences used in the world's languages, thus preparing them to acquire any language. By 10 months of age, infants have become experts at perceiving their native language. This involves improvements in the discrimination of native consonant contrasts and, more importantly for this grant, a decline in the discrimination of non-native consonant distinctions.

In the adult, speech perception is richly multimodal: what we hear is influenced by visual information in talking faces, by self-produced articulations, and even by external tactile stimulation. While speech perception is also multisensory in young infants, the genesis of this capacity is debated. According to one view, multisensory perception is established through learned integration: seeing and hearing a particular speech sound allows learning of the commonalities across the two modalities. This grant proposes and tests the hypothesis that infant speech perception is multisensory without specific prior learning experience.

Debates regarding the ontogeny of human language have centered on whether the perceptual building blocks of language are acquired through experience or are innate. Yet this nature-versus-nurture controversy is rapidly being replaced by a much more nuanced framework. Here, it is proposed that the earliest-developing sensory system (likely somatosensory in the case of speech, including somatosensory feedback from the oral-motor movements first manifest in the fetus) provides an organization on which auditory speech can build once the peripheral auditory system comes online by 22 weeks of gestation. Heard speech, both the maternal voice via bone conduction and external (filtered) speech through the uterus, is organized in part by this somatosensory/motor foundation. At birth, when vision becomes available, seen speech maps onto this already established foundation. These interconnected perceptual systems thus provide a set of parameters for matching heard, seen, and felt speech at birth. Importantly, it is argued that these multisensory perceptual foundations are established for language-general perception: they set in place an organization that provides redundancy among the oral-motor gesture, the visible oral-motor movements, and the auditory percept of any speech sound. Hence, specific learning of individual cross-modal matches is not required.

Our thesis, then, is that while multisensory speech perception has a developmental history (and hence is not akin to an 'innate' starting point), multisensory sensitivities should be in place without experience of specific speech sounds. Thus, multisensory processing should be as evident for non-native, never-before-experienced speech sounds as it is for native and hence familiar ones. To test this hypothesis against the alternative hypothesis of learned integration, English-learning infants will be tested on non-native (unfamiliar) speech sound contrasts and compared to Hindi-learning infants, for whom these contrasts are native. Four sets of experiments, each using a multi-modal Distributional Learning paradigm, are proposed. Infants will be tested at 6 months, an age at which they can still discriminate non-native speech sounds, and at 10 months, an age by which this discrimination has begun to fail.
It is proposed that if speech perception is multisensory without specific experience, the addition of matching visual, tactile, or motor information should facilitate discrimination of a non-native speech sound contrast at 10 months, while the addition of mismatching information should disrupt discrimination at 6 months. If multisensory speech perception is instead learned, this pattern should be seen only in the Hindi-learning infants, for whom the contrasts are familiar and hence already intersensory. The Specific Aims are to test the influence of: 1) Visual information on Auditory speech perception (Experimental Set 1); 2) Oral-Motor gestures on Auditory speech perception (Experimental Set 2); 3) Oral-Motor gestures on Auditory-Visual speech perception (Experimental Set 3); and 4) Tactile information on Auditory speech perception (Experimental Set 4). This work is of theoretical import for characterizing speech perception development in typically developing infants, and it provides a framework for understanding the roots of possible delay in infants born with a sensory or oral-motor impairment. The opportunities provided by, and constraints imposed by, an initial multisensory speech percept allow infants to rapidly acquire knowledge from their language-learning environment, while a deficit in one of the contributing modalities could compromise optimal speech and language development.
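For context on the paradigm named above: in distributional learning (Maye, Werker & Gerken, 2002), infants track the frequency distribution of tokens along an acoustic continuum, with bimodal exposure supporting a two-category percept and unimodal exposure a one-category percept; the proposed work adds matching or mismatching visual, tactile, or oral-motor information to such auditory distributions. The sketch below is a minimal illustration of that statistical logic only, not the grant's stimuli or analysis; the 8-step continuum, token frequencies, jitter values, and mixture-model comparison are all illustrative assumptions.

```python
# Minimal sketch of the statistical logic behind distributional learning.
# All numbers are illustrative assumptions, not the grant's actual stimuli:
# an 8-step acoustic continuum sampled under bimodal vs. unimodal exposure,
# with the number of phonetic categories chosen by BIC over Gaussian mixtures.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(seed=1)
steps = np.arange(1, 9, dtype=float)                  # 8-step continuum
bimodal_freq = np.array([1, 4, 3, 1, 1, 3, 4, 1])     # peaks near the endpoints
unimodal_freq = np.array([1, 1, 3, 4, 4, 3, 1, 1])    # single central peak

def sample_exposure(freqs, n_tokens=400, jitter=0.3):
    """Draw exposure tokens from the continuum with small acoustic jitter."""
    p = freqs / freqs.sum()
    tokens = rng.choice(steps, size=n_tokens, p=p)
    return (tokens + rng.normal(0.0, jitter, n_tokens)).reshape(-1, 1)

def preferred_categories(tokens):
    """Choose between 1 and 2 categories by BIC (lower is better)."""
    bic = [GaussianMixture(n_components=k, random_state=0).fit(tokens).bic(tokens)
           for k in (1, 2)]
    return int(np.argmin(bic)) + 1

print("bimodal exposure  ->", preferred_categories(sample_exposure(bimodal_freq)))
print("unimodal exposure ->", preferred_categories(sample_exposure(unimodal_freq)))
```

With these illustrative numbers, the bimodal condition typically yields a two-category solution and the unimodal condition a one-category solution, mirroring the behavioral finding that exposure distributions can maintain or diminish discrimination of a contrast.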