1997
Nygaard, Lynne C
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.
Emotional Tone of Voice and Spoken Language
The purpose of the proposed studies is to evaluate how emotional tone of voice affects the perception of word and sentence meaning. Traditionally, the study of emotional tone of voice has been considered separately from the study of the formal linguistic content of spoken language. On the one hand, researchers have focused on how listeners detect anger, sadness, happiness, etc., in an individual's voice. On the other hand, considerable research has focused on how listeners perceive the formal linguistic aspects of spoken language: its syllables, words, and phrases. Little research has been done, however, on how these two types of information interact during spoken language communication. The proposed studies address this gap by investigating how linguistic information and emotional tone of voice are integrated and used by the perceiver during spoken language communication. Five experiments are proposed that will concentrate on two specific stages in language processing: 1) lexical access and spoken word recognition, and 2) sentence comprehension. The proposed research will first test whether an emotional tone of voice that is congruent or incongruent with word meaning can affect the nature and time course of word recognition, using transcription of emotional homophones, lexical decision, and naming paradigms. Second, the proposed research will examine whether tone of voice can influence the perception of larger units of speech. Listeners will rate the affective meaning of sentences in which emotional tone of voice is either consistent or inconsistent with the affective meaning of the sentence. The goal is to determine at what stage of analysis emotional voice information is integrated into a listener's interpretation of an utterance. Most theories assume that the evaluation of the talker's emotional state is carried out independently of linguistic analysis and may be taken into consideration only after linguistic processing is complete. Language comprehension is assumed to consist of a series of processing stages in which abstract, context-free units are extracted from the speech stream and used to access abstract, symbolic linguistic representations; surface characteristics of an utterance are assumed to be discarded. The results of the proposed studies will directly address this common theoretical assumption by uncovering the interplay of linguistic information and emotional tone of voice during perception.
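The congruency logic behind these paradigms can be sketched in code. The following is a minimal, illustrative sketch of a lexical decision style design crossing word valence with spoken tone of voice; the stimulus words, valence labels, and console-based response collection are hypothetical stand-ins, not materials or methods taken from the proposal.

```python
# Minimal sketch of the congruency manipulation in a lexical decision
# task (illustrative only; stimuli and timing backend are hypothetical).
import itertools
import random
import time

# Emotional words crossed with tone of voice: congruent trials pair
# word valence with a matching tone, incongruent trials mismatch.
WORDS = {"victory": "happy", "funeral": "sad"}   # word -> valence
TONES = ["happy", "sad"]

def build_trials():
    trials = []
    for (word, valence), tone in itertools.product(WORDS.items(), TONES):
        trials.append({
            "word": word,
            "tone": tone,
            "congruent": valence == tone,
        })
    random.shuffle(trials)
    return trials

def run_trial(trial):
    # A real experiment would play audio and collect a timed keypress;
    # here we simply time a placeholder console response.
    start = time.perf_counter()
    response = input(f"Heard '{trial['word']}' ({trial['tone']} tone). Word? [y/n] ")
    rt = time.perf_counter() - start
    return {"rt": rt, "response": response, **trial}

if __name__ == "__main__":
    results = [run_trial(t) for t in build_trials()]
    for cond in (True, False):
        rts = [r["rt"] for r in results if r["congruent"] == cond]
        label = "congruent" if cond else "incongruent"
        print(f"{label}: mean RT = {sum(rts) / len(rts):.3f} s")
```

In the actual studies, audio presentation and millisecond-accurate response registration would replace the console placeholder; the sketch shows only how congruent and incongruent trials are constructed and compared.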
2007 — 2011 |
Nygaard, Lynne C |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Perceptual Learning in Spoken Language Comprehension
DESCRIPTION (provided by applicant): Although talker-specific properties of speech are extremely informative to the listener, differences in the way speech is produced also create enormous variability in spoken language. During the perception of speech, listeners somehow contend with this variability, extracting the same linguistic content even when it occurs in a variety of different forms. Most models of spoken language processing assume that variation due to differences among talkers is discarded by the listener during speech perception. The end product of this normalization process is assumed to be a series of abstract, context-free linguistic units. In contrast, alternative accounts propose that listeners contend with variability by retaining specific aspects of each talker's voice. Retaining rather than discarding these perceptual properties of speech enables listeners to customize their perceptual processing for each individual talker. The purpose of the proposed research is to investigate how listeners perceptually adapt over time to specific non-linguistic characteristics of talkers' voices. Studies are proposed that examine how listeners adapt to variation introduced by individual talkers' voices and to systematic variation introduced by accentedness. A voice-learning paradigm, in which listeners are familiarized with non-linguistic properties of speech over several days of training, will be used to compare and contrast the processes involved in talker-specific and accent-general perceptual compensation. The experiments will address the general hypothesis that perceptual learning of "nonlinguistic" dimensions of spoken language can change the nature of linguistic representation and processing. Given the diversity of conversational partners typically encountered in today's society, research designed to evaluate how listeners cope with differences in speaking style and accent is imperative. Investigating the process by which listeners accommodate perceptually to differences among speakers, as well as to synthetic and pathological speech, will have important implications for maximizing effective spoken communication in work and learning environments.
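The contrast between talker-specific and accent-general compensation can be made concrete with a small sketch of the test conditions such a voice-learning design implies; the condition names and predictions below are illustrative assumptions, not the proposal's actual design.

```python
# Minimal sketch of the talker-specific vs. accent-general contrast
# (condition labels and predictions are hypothetical).
from dataclasses import dataclass

@dataclass
class TestCondition:
    talker_trained: bool   # was this exact voice heard during training?
    accent_trained: bool   # was this accent heard during training?

# After multi-day familiarization, intelligibility gains are compared
# across held-out test conditions:
CONDITIONS = {
    "trained talker":               TestCondition(True,  True),
    "novel talker, trained accent": TestCondition(False, True),
    "novel talker, novel accent":   TestCondition(False, False),
}

def predicted_benefit(cond: TestCondition) -> str:
    # Talker-specific learning predicts gains only for trained voices;
    # accent-general learning predicts gains for any trained-accent voice.
    if cond.talker_trained:
        return "gain under both accounts"
    if cond.accent_trained:
        return "gain only if learning generalizes across the accent"
    return "baseline (no trained exposure)"

for name, cond in CONDITIONS.items():
    print(f"{name}: {predicted_benefit(cond)}")
```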
2015 — 2017 |
Deshpande, Gopikrishna (co-PI); Nygaard, Lynne C; Sathian, Krishnankutty
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Crossmodal Correspondences Between Visual and Auditory Features
DESCRIPTION (provided by applicant): We live in a multisensory world, in which stimuli of various types constantly compete for our attention. Information about objects or events typically appears on more than one sensory channel, so that integrating inputs across sensory systems (e.g. vision and hearing) can enhance the signal-to-noise ratio and lead to more efficient perception and action. There is increasing interest in studying how stimulus properties in one sensory modality (e.g. vision) correspond to those in another modality (e.g. hearing). For instance, sounds of high pitch are linked to small visual objects whereas sounds of low pitch are linked with large objects; sounds of high/low pitch are associated with, respectively, visual stimuli of high/low elevation; and even aspects of linguistic stimuli, such as vowel quality, are associated with visual properties such as object size. Such crossmodal correspondences are important factors in multisensory binding. Although knowledge of the kinds of stimulus features that human observers reliably associate across modalities has grown rapidly, there is currently little neural evidence to support a mechanistic account of how crossmodal correspondences arise, or of how they relate to synesthesia, a phenomenon in which some individuals experience unusual percepts (e.g. colors) triggered by particular stimuli (e.g. letters). Our goal is to address these important gaps in knowledge by using functional magnetic resonance imaging (fMRI) in humans to investigate the neural mechanisms underlying crossmodal and synesthetic correspondences, and thus to distinguish between alternative explanations that have been offered. A number of possible mechanisms have been entertained for crossmodal correspondences. These include: Hypothesis A, learned associations due to statistical co-occurrences, which would predict that the correspondences are based in multisensory or even classic unisensory regions; Hypothesis B, semantic mediation (e.g. the common word "high" may mediate the link between high pitch and high elevation); and Hypothesis C, conceptual linking via a high-level property such as magnitude. In a series of eight experiments comprising three Specific Aims, we propose to examine these competing accounts, recognizing that some or all of them may be operative and that the mechanisms may vary between different types of crossmodal correspondences.
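The signal-to-noise benefit of integration mentioned above can be illustrated with the standard maximum-likelihood cue-combination result (a textbook sketch, not a method specified in this proposal): if vision and hearing provide independent, unbiased estimates of the same property with variances $\sigma_V^2$ and $\sigma_A^2$, the optimally combined estimate has variance

$$\sigma_{VA}^{2} = \frac{\sigma_V^{2}\,\sigma_A^{2}}{\sigma_V^{2} + \sigma_A^{2}} \leq \min\left(\sigma_V^{2}, \sigma_A^{2}\right),$$

so the integrated percept is never less reliable than the better single modality, and with equal reliabilities its variance is halved.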