1981 — 1984 |
Massaro, Dominic |
N/A (no activity code retrieved) |
Instructional Laboratory in Experimental Psychology @ University of California-Santa Cruz |
0.915 |
1982 — 1986 |
Massaro, Dominic |
N/A |
Computer-Based Instruction in Experimental Psychology @ University of California-Santa Cruz |
0.915 |
1984 — 1987 |
Massaro, Dominic |
N/A |
Developmental Changes in Speech Perception @ University of California-Santa Cruz |
0.915 |
1985 — 1988 |
Massaro, Dominic W |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Perceptual Processing and Memory of Auditory Stimuli @ University of California Santa Cruz
The main objective of the proposed research is to describe how humans process auditory stimuli. The research is carried out in the framework of an information processing model of the sequence of stages between the sound stimulus and meaning in the mind of the observer. The research paradigms include speech and nonspeech perception experiments. Factorial designs and functional measurement techniques are used to determine the acoustic features of speech sounds and how the features are combined and integrated in speech perception. Listeners are asked for continuous rather than discrete judgments to provide a more direct assessment of the nature and utilization of featural information. Phonological context will be varied simultaneously with the variation of acoustic featural information to assess how higher-order context is integrated with featural information in speech perception. This experimental framework will be extended to studies of the similarities and differences in the perception of speech and nonspeech signals.
|
1 |
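The factorial-design and functional-measurement approach described in the abstract above can be sketched in a few lines. The multiplicative integration rule and the feature values below are illustrative assumptions for the sketch, not details taken from the grant itself:

```python
# Illustrative sketch: multiplicative integration of two acoustic
# feature values, each expressed as a truth value in [0, 1]
# supporting one of two response alternatives.

def integrate(a: float, b: float) -> float:
    """Combine two feature truth values into the predicted probability
    of identifying the stimulus as the first alternative."""
    support = a * b                  # joint support for alternative 1
    against = (1 - a) * (1 - b)      # joint support for alternative 2
    return support / (support + against)

# A two-factor factorial design: cross every level of one feature with
# every level of the other and predict a continuous judgment per cell.
levels = [0.1, 0.3, 0.5, 0.7, 0.9]
predictions = [[integrate(a, b) for b in levels] for a in levels]
```

Continuous (rather than discrete) judgments can then be compared against the predicted cell values to assess how the features are evaluated and combined.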
1988 — 1991 |
Massaro, Dominic |
N/A |
Constancy and Change in Speech Perception Across Adulthood @ University of California-Santa Cruz
This is a laboratory study involving experiments to understand changes in modes of speech perception and language understanding as a function of aging. The experiments compare the performance of adults across the lifespan in speech perception when faced with multiple sources of information. Young adults are compared with middle-aged adults and elderly persons in their identification of samples of speech composed of single or multiple sources of information. Their performance will be used to test how aging influences the relative value of each source of information about speech and the processing of each type of information. There are three lines of inquiry. The first involves the contribution of visible speech to speech perception. As acuity of hearing diminishes, individuals may compensate by paying close attention to the configuration and movement of the lips. The second assesses the evaluation and integration of a variety of bottom-up sources of information in speech perception (audible and visible characteristics of speech). The third focuses on top-down sources such as phonological, lexical, and semantic constraints. Older adults have less information about some sources, but not about others. The experiments will determine to what extent elderly individuals have less information in communication and to what extent they process the information they have more or less efficiently than young adults. These measurements of how speech perception and language communication change with aging are necessary before it can be determined how any deficits might be compensated for in day-to-day communication.
|
0.915 |
1988 |
Massaro, Dominic W |
R01 |
Perceptual Processing & Recognition of Speech @ University of California Santa Cruz
The long-term goal of the research endeavor focuses on a theoretical account of bimodal speech perception, or speech perception by eye and ear. The research is carried out within the framework of the falsification and strong-inference paradigm to eliminate alternative interpretations and to provide constraints on proposed theoretical explanations. The experiments utilize the methodology of information processing, information integration, and the testing of mathematical models. A wide variety of experimental tasks, perceptual judgments, and dependent variables are studied to provide converging operations on the phenomena of interest. The proposed research is aimed at understanding the evaluation and integration of auditory and visual information in speech perception. The experimental studies address 1) the ability to learn to attend selectively to one modality or the other in bimodal speech perception, 2) the degree to which preschool and adolescent children can be taught lipreading and the consequences of learning on bimodal speech perception, and 3) the psychophysical study of audible and visible speech, involving the extension of forms of degradation of the auditory speech and the assessment of various forms of degradation of the auditory and visual signals.
|
1 |
1989 — 1990 |
Massaro, Dominic W |
R01 |
Perceptual Processing and Recognition of Speech @ University of California Santa Cruz
The long-term goal of the research endeavor focuses on a theoretical account of bimodal speech perception, or speech perception by eye and ear. The research is carried out within the framework of the falsification and strong-inference paradigm to eliminate alternative interpretations and to provide constraints on proposed theoretical explanations. The experiments utilize the methodology of information processing, information integration, and the testing of mathematical models. A wide variety of experimental tasks, perceptual judgments, and dependent variables are studied to provide converging operations on the phenomena of interest. The proposed research is aimed at understanding the evaluation and integration of auditory and visual information in speech perception. The experimental studies address 1) the ability to learn to attend selectively to one modality or the other in bimodal speech perception, 2) the degree to which preschool and adolescent children can be taught lipreading and the consequences of learning on bimodal speech perception, and 3) the psychophysical study of audible and visible speech, involving the extension of forms of degradation of the auditory speech and the assessment of various forms of degradation of the auditory and visual signals.
|
1 |
1991 — 1998 |
Massaro, Dominic W |
R01 |
Synthesis, Analysis, and Perception of Visible Speech @ University of California Santa Cruz
Watching a speaker's face and lips provides powerful information in speech perception and language understanding. Visible speech is particularly effective when the auditory speech is degraded, because of noise, bandwidth filtering, or hearing impairment. The proposed research involves three main areas of inquiry on the use of visible information in speech perception. The first area involves research and development of computer animated facial displays. Synthetic visible speech has a great potential for advancing our knowledge about the visible information in speech perception, how it is utilized by human perceivers, and combined with auditory speech. But a better model of speech articulation is needed, incorporating physical measurements from real speech and rules describing coarticulation between segments. Further work is proposed to increase the available information and to improve the realism of the face. Standard tests of intelligibility will be used to assess the quality of the facial synthesis. The second area of inquiry is the measurement of movements of the face and tongue during speech production, and analysis of features used by human observers in visual-auditory speech perception. Systematic measurements of visible speech will be made using a computer controlled video motion analyzer. These measurements will be used for control of synthetic visual speech and also will be correlated with perceptual measures to identify which physical characteristics are actually used by human observers. The third area evaluates the contribution of facial information in general (and various visual features in particular) to speech perception. Experimental studies with human observers will be carried out to assess the quality of the synthetic facial display and to better understand speech perception by eye and ear. Synthetic visible speech will allow the visual signal to be manipulated directly, an experimental feature central to the study of psychophysics and perception.
Although these three areas of inquiry address different problem domains in cognitive science and engineering, their simultaneous study affords potential developments not feasible in separate investigations. The general hypotheses examined in this research are that 1) animated visual speech from synthetic talkers is a valuable communication medium, 2) research with this medium will contribute to our understanding of speech perception by ear and by eye, and 3) the research will have valuable applications for improving communication for deaf and hearing-impaired individuals, people in noisy environments, people in difficult language situations such as second language learning, and human-machine interactions.
|
1 |
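Rule-based coarticulation of the kind the abstract above describes is often modeled by blending per-segment facial targets with time-varying weights. The dominance function, rate constant, and target values below are illustrative assumptions for this sketch, not the project's actual synthesis rules:

```python
import math

# Illustrative sketch of dominance-style coarticulation blending:
# each speech segment pulls a facial parameter toward its target,
# with influence that falls off away from the segment's center time.

def dominance(t: float, center: float, rate: float = 4.0) -> float:
    """Negative-exponential dominance of a segment at time t."""
    return math.exp(-rate * abs(t - center))

def blend(targets, t: float) -> float:
    """Dominance-weighted average of per-segment facial targets at
    time t. `targets` is a list of (center_time, target_value) pairs."""
    weights = [dominance(t, c) for c, _ in targets]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, targets)) / total

# Two segments with different lip-opening targets: the blend moves
# smoothly between them, producing coarticulated transitions rather
# than abrupt jumps between segment targets.
segments = [(0.0, 0.2), (0.5, 0.8)]
trajectory = [blend(segments, t / 10) for t in range(11)]
```

Halfway between the two segment centers the weights are equal, so the blended parameter sits midway between the two targets.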
1993 — 1997 |
Massaro, Dominic; Friedman, Daniel |
N/A |
Optimal and Adaptive Learning Models For Nondeterministic Tasks @ University of California-Santa Cruz
9310347 (Daniel Friedman). This research examines how people learn in nondeterministic tasks (such as medical diagnosis). Learning is difficult in such tasks because occasionally a decision maker can be right for the wrong reason or (like an expert doctor with an inconclusive chart) wrong for the right reason. In experimental settings we will vary the complexity of the task, the type of decision required, and the learning environment in order to see when people are able to learn effectively and when they are not. An example of the type of task would be medical decision making. A doctor examining a medical chart mentally combines the information from each separate symptom and makes a diagnosis. Doctors' diagnostic abilities increase over time as they learn more about the informativeness of each symptom and how best to combine the information. Often the chart information is inconclusive, so even the best doctor sometimes makes an incorrect diagnosis. This kind of task appears frequently in other decision domains. Economists and some other social scientists use equilibrium models that in effect assume instantaneous learning, and results of this study should be useful in showing where these models are likely to be accurate or inaccurate. The results also should be directly useful to cognitive scientists and decision makers (including medical diagnosticians) who wish to improve the learning process in nondeterministic tasks.
|
0.915 |
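A minimal simulation of such a nondeterministic task can make the difficulty concrete. Here a binary disease state generates a noisy symptom, and a learner estimates the symptom's informativeness from repeated feedback; all probabilities are made-up illustration values, not parameters from the project:

```python
import random

# Illustrative nondeterministic diagnosis task: the symptom is
# informative but noisy, so feedback sometimes rewards the wrong
# conclusion, and the learner's estimates converge only gradually.

random.seed(0)
P_DISEASE = 0.5
P_SYMPTOM_GIVEN_DISEASE = 0.8
P_SYMPTOM_GIVEN_HEALTHY = 0.3

def trial():
    """One case: draw a disease state, then a noisy symptom."""
    disease = random.random() < P_DISEASE
    p = P_SYMPTOM_GIVEN_DISEASE if disease else P_SYMPTOM_GIVEN_HEALTHY
    return disease, random.random() < p

# Count-based (Laplace-smoothed) estimates of P(symptom | state).
hits = {True: 1, False: 1}
seen = {True: 2, False: 2}
for _ in range(5000):
    disease, symptom = trial()
    seen[disease] += 1
    hits[disease] += symptom

est_d = hits[True] / seen[True]    # converges toward 0.8
est_h = hits[False] / seen[False]  # converges toward 0.3
```

After many trials the estimates approach the true conditional probabilities, but on any short run they can be badly off, which is exactly what makes learning in such tasks slow.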
2000 — 2004 |
Massaro, Dominic W |
R01 |
Perception of Visible and Bimodal Speech @ University of California Santa Cruz
The proposed research involves the contribution of visible information in face-to-face communication and how it is combined with auditory information in bimodal speech perception. The experimental research methodology utilizes a strong-inference strategy of hypothesis testing, independent manipulations of multiple sources of information, and the testing of mathematical models against the results of individual participants. Synthetic speech will allow the auditory and visual signals to be manipulated directly, an experimental feature central to the study of psychophysics and perception. In addition, expanded factorial designs are used to provide the most powerful test of quantitative models of perceptual recognition. Expanded factorial designs are used to study how auditory speech and visual speech are processed alone and in combination, and under different degrees of ambiguity. Experiments are proposed to clarify the classic McGurk effect, to assess the contribution of segment frequency in the language and the psychophysical properties of the auditory and visual speech, and to contrast the influence of visible speech with written text in terms of how it is integrated with auditory speech. Experiments are also proposed to test whether previous results and theoretical conclusions based on syllable perception extend to meaningful items, such as words and sentences. Experiments will evaluate the integration of paralinguistic information in bimodal speech perception and the relative influence of dynamic and static sources of visible information in speechreading and bimodal speech perception. To further substantiate the model testing, Bayesian selection as well as RMSD goodness-of-fit criteria will be used in the evaluation of extant models. Many communication environments involve a noisy auditory channel, which degrades speech perception and recognition. Having available speech from the talker's face improves intelligibility in these situations.
Visible speech also supplements other (degraded) sources of information for persons with hearing loss. The use of visible speech and its combination with auditory speech is therefore critical for improving universal access to spoken language. It has the potential to 1) improve the quality of speech of persons with perception and production deficits, 2) enhance learning and communication, 3) provide remedial training for poor readers, and 4) facilitate human-machine interactions.
|
1 |
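The expanded factorial design and RMSD model testing mentioned above can be sketched as follows. The multiplicative integration rule and the stimulus values are illustrative assumptions, not the grant's actual model or data:

```python
# Illustrative expanded factorial design: auditory and visual cues
# (each a truth value in [0, 1]) are presented in all bimodal
# combinations plus each cue alone, and model predictions are
# compared with observed identification proportions via RMSD.

def predict(a=None, v=None):
    """Multiplicative integration; unimodal trials use one cue only."""
    if a is None:
        return v
    if v is None:
        return a
    return (a * v) / (a * v + (1 - a) * (1 - v))

auditory = [0.2, 0.5, 0.8]
visual = [0.1, 0.9]

# Expanded factorial: bimodal cells plus auditory-alone and
# visual-alone conditions (3*2 + 3 + 2 = 11 conditions here).
conditions = ([(a, v) for a in auditory for v in visual]
              + [(a, None) for a in auditory]
              + [(None, v) for v in visual])
predicted = [predict(a, v) for a, v in conditions]

def rmsd(pred, obs):
    """Root-mean-square deviation between predictions and data."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)) ** 0.5
```

Fitting a model amounts to choosing cue values that minimize the RMSD between `predicted` and each participant's observed proportions; a perfectly matching data set gives an RMSD of zero.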
2000 — 2004 |
Massaro, Dominic |
N/A |
Synthesis and Evaluation of Visible Speech @ University of California-Santa Cruz
This research involves development, implementation, and evaluation of a computer animated talking head, which generates realistic visible speech coordinated with auditory speech. Synthetic visible speech has an obvious potential for advancing knowledge about speech production, the visible information in face-to-face speech perception, how it is utilized by human perceivers, and how it can best be used in communicating with individuals with hearing loss and in language learning situations. Realistic speech is obtained by animating the appropriate facial targets for each segment of speech along with the appropriate coarticulation, paralinguistic content, and emotion. The research goal is to achieve true photo-realism (visual speech indistinguishable from a real talker) or even superrealism (visual speech easier to read than a real talker). To achieve this goal, it is necessary to obtain additional physical measurements from real speech, to refine the control of the talking head, and to evaluate these changes using intelligibility testing with human observers and by comparisons to natural speech.
The working hypotheses of this research are 1) a synthetic talker is an important challenge to speech research and computer animation and offers a potentially valuable medium for communication among both normal and disabled individuals, human-computer interaction, and virtual worlds, 2) synthetic visual speech will provide a valuable experimental tool for better understanding of speech perception by ear and by eye, 3) visual speech information offers an additional source of information for both normal and hearing-impaired individuals, and 4) the research will have immediate and direct application to improving the communication alternatives in noisy environments and for individuals with hearing loss.
|
0.915 |