2003 |
Mcmurray, Bob |
F31 Activity Code Description: To provide predoctoral individuals with supervised research training in specified health and health-related areas leading toward the research degree (e.g., Ph.D.). |
Temporal Integration of Acoustic Detail in Speech @ University of Rochester
DESCRIPTION (provided by applicant): Recent research from our lab and others has challenged the traditional notion that the goal of the speech perception system is to discard unnecessary variability in the signal in favor of discrete lexical or sublexical units. Rather, it appears that the perceptual system is sensitive to this information and is able to retain it long enough for it to be of use in resolving temporal ambiguities and predicting upcoming phonetic material. This proposal will extend these basic results by examining the phonetic environments in which the perceptual system is sensitive to fine-grained detail and the consequences of this sensitivity for lexical neighborhoods. It will further extend these findings by examining situations in which knowledge of this acoustic detail may predict upcoming phonemes or words using knowledge of phonological assimilation, and also help resolve prior ambiguities created by speaking rate, lexical status, and sentential meaning.
|
0.976 |
2007 — 2009 |
Mcmurray, Bob |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Lexical Integration of Continuous Acoustic Detail: Normal and Impaired Listeners
DESCRIPTION (provided by applicant): Speech perception is a process of mapping continuous detail in the acoustic signal onto discrete units of meaning like words. Given the variability in the signal and the speed at which it arrives, the system must cope with a great deal of variation in a small amount of time. The long term objective of this research is to understand how this process works. This proposal tests an implication, for current models of spoken word recognition, of existing work showing that lexical activation is sensitive to continuous detail in the signal: that online lexical-activation processes (which are fast and work in parallel to build stable representations) can actively integrate continuous detail over time to anticipate upcoming material, resolve ambiguity in the past, and organize perceptual processes. This will be tested in four series of behavioral experiments based on the visual world paradigm. In this paradigm, subjects hear carefully controlled spoken language and manipulate objects in a visual environment while eye-movements are monitored. The probability of fixating each object yields a moment-by-moment estimate of the activation for that word (how much the system is considering that word) as it unfolds over time. The first two projects examine these temporal integration processes in unimpaired listeners for the perception of phonologically modified speech and compensation for speaking rate. In each we will show that the system can actively anticipate upcoming material and resolve ambiguous material in the past, and that these processes are modulated by lexical factors. In the third project we explicitly test this framework by examining situations in which continuous detail could facilitate ambiguity resolution, but only if it can be retained longer than echoic short-term memory stores are known to operate. This would suggest that lexical processes play a unique role in this maintenance. 
The fourth project applies this framework to language impairments, testing the hypothesis that perceptual deficits associated with specific language impairment (SLI) originate in lexical, not perceptual, processes. Ultimately this project will contribute to basic knowledge of speech perception and its relationship to language disorders. Since perceptual and lexical abilities typically develop before higher-level language, diagnostics and therapies based on them may be applied earlier (and as a result, more successfully) than other techniques. Thus, the basic knowledge acquired here may contribute to earlier detection and treatment of SLI.
|
1 |
2012 — 2016 |
Mcmurray, Bob |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Speech Perception, Acoustic Variability and Time: Normal and Impaired Listeners
DESCRIPTION (provided by applicant): Speech perception poses two difficult problems for listeners. First, the acoustic signal is variable and context dependent, making phoneme identification difficult. Second, it unfolds over time, and at early points in a word there may not be sufficient information to identify it. This research aims to understand how listeners solve both problems, how these problems relate to each other, and to use this to understand two groups of impaired listeners: listeners with Language Impairment (LI) and listeners who use Cochlear Implants (CIs). Project 1 asks how listeners compensate for variation due to talker and phonetic context, and how compensation interacts with unfolding competition between candidate words that listeners momentarily consider during word recognition. It employs event-related potentials to assess whether compensation occurs at the level of auditory encoding or during later categorical processes. It also uses eye-tracking to examine moment-by-moment activation of lexical competitors (how strongly listeners consider multiple words in parallel), asking when acoustic cues and compensation processes impact lexical processing. Finally, it examines CI users, whose difficulty identifying talkers may inhibit their compensation abilities. This may lead to better processing strategies, device configurations, and therapies. Project 2 examines how listeners represent the order of information in a word (e.g., how they distinguish anadromes like cat and tack). Most models use the serial order of the phonemes to exclude anadrome competitors. However, recent data indicate that listeners do not completely rule out anadromes, suggesting that order is not explicitly represented. 
Project 2 uses eye-tracking in the visual world paradigm with known words and small artificial languages to determine whether listeners use fine-grained acoustic detail (differences in how a phoneme is pronounced in syllable-initial and syllable-final positions) as a proxy for order. It also examines listeners with LI, who may have deficits with both fine-grained auditory detail and serial order, and CI users, who lack access to fine-grained spectral detail. This will assess theories of language impairment that emphasize auditory or sequencing deficits as the source of LI. It will also help us understand the variability in outcomes among CI users and further refine our understanding of what acoustic information must be transmitted by the CI. Project 3 asks how long lexical competitors remain active during word recognition. The prior grant discovered that listeners with LI do not fully suppress lexical competitors during word recognition. Project 3 develops an eye-tracking paradigm to assess how long competitors are active and to ask what mechanisms maintain this activation, examining inhibition between words, echoic memory, and phonological short-term memory. It examines listeners with LI and CI users to determine the consequences of this heightened competition, how it relates to other language processes, and the locus of the impairment. Across all three projects, this proposal aims to better characterize the underlying mechanisms of speech perception in normal listeners, with the goal of using this characterization to better understand the unique problems faced by impaired listeners.
|
1 |
2018 — 2021 |
Mcmurray, Bob |
P50 Activity Code Description: To support any part of the full range of research and development from very basic to clinical; may involve ancillary supportive activities such as protracted patient care necessary to the primary research or R&D effort. The spectrum of activities comprises a multidisciplinary attack on a specific disease entity or biomedical problem area. These grants differ from program project grants in that they are usually developed in response to an announcement of the programmatic needs of an Institute or Division and subsequently receive continuous attention from its staff. Centers may also serve as regional or national resources for special research purposes. |
Cognitive Mechanisms of Language Processing
ABSTRACT: PROJECT 4 A major issue in hearing loss is variability. Hearing impaired (HI) listeners with similar profiles often show different outcomes. Correlational studies show that signal quality (audibility, frequency separation) is related to outcomes. However, equally important are factors like device experience, cognition, and brain function. It is unclear how these adaptations, cognitive resources, or brain areas improve perception. This project tackles this question by leveraging mechanisms and measures from cognitive science that describe how sound is mapped to meaning, focusing on the issue of time. Since speech unfolds over time, there are ambiguous periods when the input is compatible with many words. For example, at the onset of butter, the signal could match bump, but, and buck. Normal hearing (NH) listeners manage this ambiguity by immediately activating multiple words, which compete dynamically over time. For HI listeners, this natural ambiguity may be more problematic and managed differently. We assess the dynamics of word recognition with an eye-tracking paradigm that traces how this competition unfolds over several hundred milliseconds. Prior work suggests cochlear implant (CI) users tune these dynamics differently than NH listeners; these differences are correlated with outcomes and may help them cope with poor input. This project asks why these competition processes differ in HI listeners. Are such differences a poor version of typical language processing imposed by degraded input? Or are they a compensatory adaptation for coping with uncertainty? To answer this question in a way that translates to the real world, Aim 1 moves beyond isolated words to examine sentences, where factors like semantics constrain this competition. 
Aim 2 uses a longitudinal study to link differences in competition to peripheral auditory function (Project 2), listening effort (Project 1), and cortical processing (Project 3); Aim 3 complements this with laboratory studies of adaptation. Aim 4 examines how HI listeners fuse information from different types of input, for example, from aided acoustic hearing and a CI. All aims leverage natural variation in multiple types of HI listeners (standard CIs, acoustic+electric CI configurations, and hearing aids) to investigate how differences in the peripheral input impact the mechanisms of language processing.
|
1 |
2019 — 2021 |
Mcmurray, Bob |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Development of Real Time Spoken and Written Word Recognition: Cognitive Bases of Language and Educational Outcomes
PROJECT SUMMARY Language and reading impairments affect 16%-20% of US children and are stable, persisting through adolescence and adulthood. Deficits in even low-level skills like phonological processing and spoken word recognition persist through adulthood, and half of middle-school struggling readers show deficits in decoding and written word recognition. This proposal examines the development of spoken and written word recognition during late childhood. While words are a low-level language skill at these ages, they are central to language, linking phonology, orthography, and meaning. At a cognitive level, word recognition is seen as a competition process. As the input (e.g., wizard) is heard (or read), people consider multiple partially matching words (whistle, lizard), which compete over time. Prior work assessed this in children using a paradigm in which listeners match words to pictures while eye-movements are monitored. As listeners begin to hear a word, their eyes move between candidates. These fixations reveal momentary consideration of alternative words and trace the dynamics of competition over milliseconds. We applied this to children, showing that competition is resolved more automatically between 9 and 16 years of age. Adolescents with language impairment showed a different pattern: they were similarly automatic, but did not fully resolve competition by the end of processing. This research documents that real-time processing develops, but it is unclear how. In older children, it is likely due to multiple causes such as vocabulary growth, the organization of phonological systems, the onset of reading instruction, and changes in executive function. This project examines development of, and disorders in, the automaticity and degree of competition resolution during lexical processing. 
It examines both spoken and written word processing to unpack the relationship between language and reading, and to identify outcomes (good and poor) linked to differences in real-time processing. The first aim is to determine the cognitive and developmental factors that shape real-time word recognition, and the consequences of these factors for language and reading outcomes. We conduct an accelerated longitudinal study of 400 children (normal and impaired) between ages 7 and 12, combining eye-tracking measures of word recognition with tests of phonological processing, reading, vocabulary, and executive function. The second aim uses cross-sectional laboratory studies to examine the consequences of differences in real-time processing for learning and for related processes like semantic processing (meaning) or orthographic decoding (mapping sound to print). The third aim uses laboratory training procedures to understand plasticity in real-time lexical processing; this may pave the way for potential interventions targeting lexical processing. Finally, the fourth aim develops computational models of normal and disordered lexical processing to attain a deeper understanding of which mechanisms of language processing change with development or differ in disordered language users.
|
1 |
2020 |
Mcmurray, Bob |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Development of Real Time Spoken and Written Word Recognition: Neural Bases
PROJECT SUMMARY Language and reading impairments affect 16%-20% of children. Deficits in even low-level skills like word recognition persist through adulthood, and half of middle-school struggling readers show deficits in reading single words. While words are a low-level language skill at these ages, they link phonology, orthography, and meaning. Thus, understanding how word recognition develops is crucial for remediating language and reading disorders. This proposal is a revision to the funded Growing Words Project, which examines the development of spoken and written word recognition during childhood. Growing Words uses eye-tracking to ask how word recognition unfolds in real time, from the moment a word is heard (or seen) to the moment it is recognized. Our work suggests these real-time processes develop through adolescence, and children with language disorders show a distinct profile of real-time processing. Growing Words asks how real-time processing in language and reading develops from 1st to 6th grade. This study is testing 400 children longitudinally across two laboratories (including an off-campus lab to enhance diversity). Children undergo measures of language, reading, and cognition, as well as eye-tracking measures of real-time language processing. Growing Words asks what causes a child to become more automatic (e.g., vocabulary growth, executive function, reading skills) and what the consequences of better real-time word recognition are for language and reading outcomes. This revision builds on Growing Words to examine structural properties of the brain as both a cause and a consequence of differences in real-time processing, language, reading, and environmental factors. Prior grants offered a serendipitous opportunity to collect structural MRI and eye-tracking data on the same children. This led to new analyses that discovered that static properties of the brain are correlated with processing at different points in time and under different conditions. 
We thus leverage the infrastructure of Growing Words to investigate the neural origins of real-time processing. A subset of Growing Words participants will undergo structural MRI (measuring gray matter surface area and thickness) and diffusion-weighted imaging (measuring white matter coherence) at two time points, two years apart. We combine these data with a multidimensional characterization of language and reading, with eye-tracking data, and with measures of language/literacy input to address three aims. First, we ask how the development of structural brain properties leads to the development of reading and language skills. Second, we identify structural properties of the brain in which development or individual differences are associated with differences in real-time lexical processing. Finally, we ask how structural properties of the brain are shaped by prior language and reading development, real-time processing skills, and language and reading input in the environment. These aims provide insight into the neural basis of language and reading development and disorders, and a new understanding of how brain structure contributes to efficient real-time processing in many domains.
|
1 |