Area:
Speech Communication, Audiology, Neuroscience, Biology
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Peter Assmann is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
1991 — 1995 | Assmann, Peter | R29
Perception of Speech in the Presence of Competing Voices @ University of Texas at Dallas
The proposed research has three overall objectives: first, to investigate the acoustic properties which enable listeners with normal hearing to separate speech from competing sounds, including other voices; second, to study the perceptual strategies used by listeners to extract these properties from the composite waveform; and third, to develop and evaluate a model of speech-source segregation. Individuals suffering from hearing impairment of cochlear origin often report difficulties in understanding speech when competing voices are present. Research on the perceptual processes involved in separating speech from other sounds may provide insights into the difficulties faced by hearing-impaired listeners, and may suggest forms of signal processing to enhance the intelligibility of speech signals corrupted by background noise and thereby improve the design of future signal-processing hearing aids.

The experimental component of the project will investigate the role of voice fundamental frequency (f0) and formant frequencies in the perceptual segregation of competing voices. In one set of experiments, listeners will attend to a mixture of two synthesized "voices" and identify what each of them is saying. The contribution to voice separation of f0 differences, f0 changes over time, and formant frequency changes over time will be assessed in terms of the accuracy of identification performance. A second set of experiments will use a matching paradigm to examine the link between pitch perception and voice segregation. A third set will use a vowel-matching task to examine the perceptual consequences of formant overlap, which frequently exists when voices compete.

The modeling component of the project involves the further development and evaluation of a computational model of the auditory and perceptual processes underlying the performance of listeners in tasks involving two competing voices (Assmann and Summerfield, 1990). New developments will include the introduction of a sliding time analysis window to accommodate changing f0s and changing formant patterns, and an elaborated segregation procedure which reflects the findings of the perceptual experiments.
Matching score: 1
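The double-vowel paradigm and the f0 cue described in the abstract above can be made concrete with a short sketch. Everything below — the synthesis method, the formant values, and the autocorrelation-based f0 estimate — is an illustrative assumption, not a detail of the funded project or of the Assmann and Summerfield (1990) model.

```python
# Minimal sketch of a "double vowel" stimulus: two synthetic vowels
# (harmonic source shaped by formant resonances) with different f0s are
# mixed, and the two f0s are recovered from the mixture's autocorrelation,
# the cue the abstract identifies as central to voice segregation.
import numpy as np

FS = 16000          # sample rate (Hz)
DUR = 0.2           # stimulus duration (s)

def synth_vowel(f0, formants, fs=FS, dur=DUR):
    """Harmonic complex with amplitudes shaped by simple formant peaks."""
    t = np.arange(int(fs * dur)) / fs
    signal = np.zeros_like(t)
    for k in range(1, int((fs / 2) // f0)):
        freq = k * f0
        # crude spectral envelope: sum of resonance peaks at the formants
        amp = sum(1.0 / (1.0 + ((freq - fc) / bw) ** 2)
                  for fc, bw in formants)
        signal += amp * np.sin(2 * np.pi * freq * t)
    return signal / np.max(np.abs(signal))

# /a/-like and /i/-like formant patterns (center frequency, bandwidth, Hz)
vowel_a = synth_vowel(100, [(700, 80), (1100, 90), (2500, 120)])   # f0 = 100 Hz
vowel_i = synth_vowel(126, [(300, 60), (2200, 100), (3000, 150)])  # 4 semitones up
mixture = vowel_a + vowel_i

# Estimate candidate f0s from autocorrelation peaks of the mixture
ac = np.correlate(mixture, mixture, mode="full")[len(mixture) - 1:]
lags = np.arange(len(ac))
valid = (lags > FS // 400) & (lags < FS // 80)   # search 80-400 Hz
peaks = [l for l in lags[valid][1:-1]
         if ac[l] > ac[l - 1] and ac[l] > ac[l + 1]]
best = sorted(peaks, key=lambda l: ac[l], reverse=True)[:2]
print("estimated f0s:", sorted(FS / l for l in best))  # ~100 and ~126 Hz
```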
2003 — 2008 | Assmann, Peter | N/A
Perception of Frequency-Shifted Speech @ University of Texas at Dallas
With National Science Foundation support, Dr. Peter Assmann will conduct three years of research on how listeners adapt to speech that is shifted up or down along the frequency scale. Such shifts affect everyday speech communication, as listeners adjust to men, women, and children of varying ages. Recent studies have shown that intelligibility drops sharply when the spectrum envelope of speech is shifted upward by a factor of 1.5 or more, or downward by 0.7 or less. The detrimental effects of such shifts can be counteracted, to some degree, by incorporating talker-matched changes in fundamental frequency. This finding, together with predictions from a pattern recognition model, suggests that listeners are sensitive to statistical regularities in natural speech and that they may adapt to frequency-shifted speech through long-term exposure. To test these hypotheses, a speech vocoder will manipulate the spectrum envelope and fundamental frequency of natural speech. The first set of experiments will investigate the conditions that preserve the intelligibility of frequency-shifted speech and test the predictions of pattern recognition models. The second set of experiments will investigate perceptual accommodations to frequency-shifted speech following extended listening experience to determine how well these adjustments generalize across talkers, speech materials, and shift factors.
Models of speech perception must explain the ability to understand frequency-shifted speech. Research on this topic may provide insights into two problems faced by hearing-impaired listeners. First, present-day cochlear implant electrode arrays cannot be inserted completely into the cochlea; they provide electrical stimulation only to the basal portion. Implant users need to accommodate to the re-mapping of the frequency spectrum provided by the device. Second, frequency shifts are used in frequency-transposing hearing aids that attempt to restore speech intelligibility for impaired listeners by shifting the spectrum into the region of better hearing. Frequency lowering provides improved speech recognition for some hearing-impaired listeners, especially after extended exposure. But the limited extent of its benefit warrants further study. Studies of the perception of frequency-shifted speech by listeners with normal hearing may provide a better understanding of the perceptual adaptations to the altered frequency mapping provided by cochlear implant processors and frequency-transposing hearing aids. Studies of the perceptual tolerance for frequency shifts may also provide a basis for improving voice quality in speech synthesis, and suggest ways to achieve a greater degree of talker independence in automatic speech recognition systems.
Matching score: 1
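The core manipulation in the abstract above — shifting speech along the frequency scale by a factor such as 1.5 or 0.7 — can be sketched roughly as follows. This is a much-simplified stand-in for the speech vocoder the abstract mentions: it warps each short-time magnitude spectrum by a constant factor, which moves the envelope and the harmonics together, whereas the actual research manipulates the spectrum envelope and the fundamental frequency independently. All parameter values are illustrative.

```python
# Crude frequency shifting via short-time Fourier analysis: sample each
# frame's magnitude spectrum on a rescaled frequency axis, reuse the
# original phase, and resynthesize by weighted overlap-add.
import numpy as np

def shift_spectrum(x, factor, n_fft=512, hop=128):
    """Scale the frequency axis of x's short-time spectra by `factor`."""
    win = np.hanning(n_fft)
    freqs = np.arange(n_fft // 2 + 1)
    y = np.zeros(len(x) + n_fft)
    norm = np.zeros_like(y)
    for start in range(0, len(x) - n_fft, hop):
        frame = x[start:start + n_fft] * win
        spec = np.fft.rfft(frame)
        # read the magnitude at f / factor: content at f0 lands at factor*f0
        mag = np.interp(freqs / factor, freqs, np.abs(spec), right=0.0)
        shifted = mag * np.exp(1j * np.angle(spec))   # reuse original phase
        y[start:start + n_fft] += np.fft.irfft(shifted) * win
        norm[start:start + n_fft] += win ** 2
    return y[:len(x)] / np.maximum(norm[:len(x)], 1e-8)

# toy input: a 120-Hz harmonic complex standing in for a voiced speech frame
fs = 16000
t = np.arange(fs) / fs
x = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 30))
up = shift_spectrum(x, 1.5)    # shifted upward by 1.5 (the abstract's boundary)
down = shift_spectrum(x, 0.7)  # shifted downward by 0.7
```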
2011 — 2017 | Assmann, Peter | N/A
Acoustic Variability and Perception of Children's Speech @ University of Texas at Dallas
No two voices are exactly alike, and speech sounds can vary dramatically when produced by different individuals. A substantial component of this variability stems from anatomical differences between speakers that reflect their age and sex. Although these factors complicate the relationship between acoustic cues and phonetic properties of speech, they provide information from which the listener can determine the age, sex and size of the speaker, referred to as indexical properties.
The aim of the proposed research is to investigate the relationship between indexical and phonetic properties in children's speech through four linked projects. Project 1 involves the construction and acoustic analysis of a vowel database from children ranging in age from 5 through 18 years. The database will provide the materials for experiments investigating the perceptual consequences of age-related changes in speech. In Project 2, natural and modified versions of the recordings will be used to examine the cues that distinguish male from female voices at different ages. Project 3 will investigate the perception of speaker age in children's voices and evaluate the effectiveness of vocal age conversion using synthesis techniques based on models of vocal tract scaling. Project 4 will investigate the link between vowel identification and indexical properties, requiring listeners to provide vowel identification responses together with judgments of the perceived sex and age of the speaker. Pattern recognition models will be implemented using acoustic measurements from the database to model the statistical relationships between the acoustic properties of children's speech as a function of age and sex, and to predict listeners' responses in the perceptual experiments.
This research will provide valuable information on speech development and the processes by which listeners extract linguistic and indexical information from children's speech. The findings could provide useful information for automatic speech recognition systems applied to children's speech, reveal effective strategies for synthesizing children's voices, and serve as normative data in clinical studies of disordered speech.
Matching score: 1
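The pattern-recognition component described in the abstract above can be illustrated with a toy classifier. The (f0, F1, F2) class means, the diagonal-Gaussian model, and the synthetic training draws below are all assumptions made for illustration; the actual project would fit such models to measurements from the children's vowel database it describes.

```python
# Toy pattern-recognition model: fit one diagonal Gaussian per class to
# (f0, F1, F2) measurements and classify a voice by maximum log-likelihood.
import numpy as np

rng = np.random.default_rng(0)

# hypothetical class means for (f0, F1, F2) in Hz; values are illustrative
means = {"male": np.array([120.0, 650.0, 1200.0]),
         "female": np.array([210.0, 750.0, 1400.0])}
scale = np.array([20.0, 60.0, 120.0])   # within-class standard deviations

# synthetic "measurements": 200 draws around each class mean
train = {label: mu + scale * rng.standard_normal((200, 3))
         for label, mu in means.items()}

# fit a diagonal Gaussian per class (sample mean and variance per dimension)
params = {label: (x.mean(axis=0), x.var(axis=0)) for label, x in train.items()}

def log_likelihood(sample, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (sample - mu) ** 2 / var)

def classify(sample):
    return max(params, key=lambda lbl: log_likelihood(sample, *params[lbl]))

print(classify(np.array([130.0, 640.0, 1250.0])))   # -> "male"
print(classify(np.array([220.0, 780.0, 1380.0])))   # -> "female"
```

The same scheme extends to the age dimension by adding age-indexed classes or regressing the class parameters on age, which is in the spirit of the statistical modeling the abstract describes.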