2011 — 2013
Niziolek, Caroline
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas.
Phonetic Influences On Auditory Feedback Control @ University of California, San Francisco
DESCRIPTION (provided by applicant): A fundamental issue in speech research is the interaction between production and perception. The nature of this interaction has profound implications for understanding and modeling speech development, production deficits, and rehabilitation strategies. Our goal is to characterize how perception of others' speech, particularly at the phoneme boundary, influences the auditory-motor feedback processes that guide self-produced speech. On the one hand, a speaker may perceive his own speech in the categorical manner in which listeners perceive it, allowing for rapid, robust auditory processing. On the other hand, a speaker may monitor his output at a sub-categorical level, before high-level auditory cortex imposes phonetic structure on the acoustic signal. We aim to distinguish these hypotheses by examining speech under conditions of auditory change and probing the neural signal for an increased response to that change when it is phonetically relevant. Our project combines psychophysical and magnetoencephalography (MEG) experiments to investigate the neural dynamics elicited by a sudden modification of speakers' auditory feedback.

The proposed experiments were designed to achieve two specific aims. First, we aim to measure the neural responses to real-time phonetic category changes. Drawing on the results of past studies, we hypothesize that a cross-category or "phonetic" shift causes a greater neural response than a within-category or "non-phonetic" shift of the same magnitude, but the dynamics of this response are still unknown. The goal is to use MEG to examine the time-varying neural response to unexpected feedback perturbation, contrasting that response under conditions of phonetic and non-phonetic change. Second, we aim to assess the effects of speech training on auditory feedback control. Learning a novel vowel target in formant space has the effect of adding new category boundaries between the novel vowel and the well-learned native vowels. The proposed experiments evaluate the degree to which these newly learned categories affect the responses to perturbation of an existing vowel.

The proposed research adds to the existing feedback literature by introducing the distinction between meaningful linguistic changes and mere acoustic variations imposed in feedback. We aim to improve models of speech motor control by determining whether auditory feedback control is influenced by categorical perception, and therefore whether it occurs at a high or low level in auditory cortex. This research is directly applicable to stuttering, a motor control disorder thought to reflect abnormalities in feedback processing. These studies will also ultimately contribute to improved diagnosis and treatment of communication disorders such as Parkinson's disease or spasmodic dysphonia, since neuroimaging of feedback control can be used diagnostically to probe the specific abnormalities in brain networks involved in perception and production. Finally, the training studies proposed here could potentially be useful in developing feedback-related training strategies for a variety of speech disorders.

PUBLIC HEALTH RELEVANCE: This research investigates how motor cortical areas and feedback-related auditory cortical areas interact to control speech output, and will afford a better understanding of the neural basis of speech motor control.
This research is directly applicable to stuttering, a motor control disorder that is thought to reflect an abnormality in the processing of auditory feedback. Auditory feedback control is also beginning to be used as a diagnostic measure for spasmodic dysphonia and Parkinson's disease; once these feedback processes are better understood, the knowledge gleaned through these studies will ultimately lead to improved diagnosis and treatment of disorders that manifest as speech impairments.
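As a concrete illustration of the equal-magnitude contrast described above, the sketch below constructs a cross-category ("phonetic") and a within-category ("non-phonetic") first-formant shift of identical size for a single hypothetical speaker. The boundary location, starting formant, shift size, and the choice to keep the within-category shift on the near side of the boundary are illustrative assumptions, not values or design details from the proposal.

```python
# Hypothetical example: two F1 perturbations of equal magnitude, one crossing an
# assumed /eh/-/ae/ category boundary ("phonetic") and one staying within the
# /eh/ category ("non-phonetic"). All values are illustrative.
F1_BOUNDARY_HZ = 650.0   # assumed category boundary for this speaker

def make_shift(f1_start_hz: float, magnitude_hz: float, cross_category: bool) -> float:
    """Return a perturbed F1 value a fixed distance from the starting value."""
    toward_boundary = 1.0 if F1_BOUNDARY_HZ > f1_start_hz else -1.0
    direction = toward_boundary if cross_category else -toward_boundary
    return f1_start_hz + direction * magnitude_hz

f1_produced = 580.0  # habitual /eh/ F1 (illustrative)
phonetic_shift = make_shift(f1_produced, 100.0, cross_category=True)       # 680 Hz, past the boundary
non_phonetic_shift = make_shift(f1_produced, 100.0, cross_category=False)  # 480 Hz, same magnitude
print(phonetic_shift, non_phonetic_shift)
```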
2015 — 2019
Niziolek, Caroline
K99 Activity Code Description: To support the initial phase of a Career/Research Transition award program that provides 1-2 years of mentored support for highly motivated, advanced postdoctoral research scientists. R00 Activity Code Description: To support the second phase of a Career/Research Transition award program that provides 1-3 years of independent research support (R00) contingent on securing an independent research position. Award recipients will be expected to compete successfully for independent R01 support from the NIH during the R00 research transition award period.
Neural Markers of Speech Error Detection and Correction Abilities in Aphasia @ Boston University (Charles River Campus)
DESCRIPTION (provided by applicant): Individuals with aphasia, a disorder caused by damage to language-related brain regions, are often afflicted with speech production difficulties that greatly impair communication. The goal of this K99/R00 Pathway to Independence Award is to provide the candidate, Dr. Caroline Niziolek, with a strong grounding in patient-based research and cutting-edge neural connectivity analysis, enabling her to apply her expertise in speech motor control to investigate the functional abnormalities at the root of this communication disorder. In this proposal, Dr. Niziolek aims to identify the neurophysiological causes of speech production deficits in aphasia and to assess whether feedback-based speech training can ameliorate them. Her recent neuroimaging research in healthy speakers suggests that the auditory system constantly monitors its own speech for small deviations from intended speech sounds, and that successful monitoring may drive an unconscious motor correction of these deviations before they are realized as errors.

The central hypothesis of this project is that in speakers with aphasia, production deficits may be due to a failure of detection: that is, the auditory system is not sensitive to an aphasic speaker's own deviations until after they become full-blown speech errors. Importantly, in testing this hypothesis, Dr. Niziolek will look across aphasic patients, regardless of their lesion location or clinically-defined subtype, using an objective neural marker she developed to assess detection ability. Her immediate goal for the K99 phase is to use this neural marker, along with behavioral metrics, to characterize each individual's deficit as either perceptual (difficulty detecting one's own errors) or motor (preserved ability to detect errors but difficulty in carrying out corrective commands). She will then relate these objectively-measured deficits to patterns of lesions and magnetoencephalographic (MEG) connectivity to determine the structural and functional network abnormalities that cause each type of deficit.

With this understanding, she plans to carry out a novel speech production training study in the R00 phase of the award. The proposed speech training game uses a visual cursor that is mapped to acoustic input so that participants can use their voice to move the cursor to a visual target. The target will correspond to the production of a given speech sound, such as the "e" in "bed". This training provides a secondary source of sensory feedback for the detection of deviations from a target. By training aphasic patients to learn to hit vocal targets using visual feedback, Dr. Niziolek aims to use an intact system (vision) to retrain the damaged one (auditory detection). Her long-term goal is to use this paradigm in conjunction with the neural marker assessment to develop personalized treatments that are tailored to each patient's specific functional deficit (auditory detection or motor correction). This research career development plan will be carried out at Boston University with an impressive co-mentor team from whom she will gain invaluable clinical and technical training, with the ultimate aim of developing a translational research program that can be extended to other speech disorders.
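A minimal sketch of the visual-feedback mapping described above, assuming formant estimates (F1, F2) are already available from some acoustic analysis: the voice drives a 2-D cursor, and a trial counts as a hit when the cursor lands inside a target region around a vowel such as the "e" in "bed". All function names, ranges, and target values here are hypothetical, not part of the proposed training game.

```python
# Hypothetical mapping from formant frequencies to a visual cursor and a target
# "hit" test. Ranges and target values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VowelTarget:
    f1_hz: float      # target first formant
    f2_hz: float      # target second formant
    radius: float     # tolerance in normalized screen units

def formants_to_cursor(f1_hz, f2_hz,
                       f1_range=(300.0, 900.0),
                       f2_range=(900.0, 2600.0)):
    """Map formant frequencies to normalized (x, y) screen coordinates in [0, 1]."""
    x = (f2_hz - f2_range[0]) / (f2_range[1] - f2_range[0])
    y = (f1_hz - f1_range[0]) / (f1_range[1] - f1_range[0])
    return x, y

def is_hit(f1_hz, f2_hz, target: VowelTarget) -> bool:
    """True if the voice-driven cursor falls within the target circle."""
    cx, cy = formants_to_cursor(f1_hz, f2_hz)
    tx, ty = formants_to_cursor(target.f1_hz, target.f2_hz)
    return ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5 <= target.radius

# Illustrative target roughly in /eh/ ("bed") territory for one speaker
bed_target = VowelTarget(f1_hz=580.0, f2_hz=1800.0, radius=0.08)
print(is_hit(600.0, 1780.0, bed_target))   # a production near the target -> True
```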
2021
Niziolek, Caroline; Parrell, Benjamin (co-PI)
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Establishing the Clinical Utility of Sensorimotor Adaptation For Speech Rehabilitation @ University of Wisconsin-Madison
Project Summary & Abstract

Individuals with brain injuries or disorders that affect movement (such as Parkinson's disease, cerebral palsy, amyotrophic lateral sclerosis, and many others) often have difficulties in being understood when they speak. While treatments exist, they often require substantial conscious attention to the way speech is produced, or require increased breath support to speak louder. Many individuals with speech disorders have cognitive or respiratory difficulty that renders these treatments ineffective. These individuals will benefit from alternative strategies that promote motor learning: the ability to alter motor actions through practice. One type of motor learning, sensorimotor adaptation, is a particularly promising pathway for alternative rehabilitation. In this paradigm, the auditory feedback people receive while speaking is externally perturbed, causing them to quickly change their speech to oppose these perturbations. Because of its ability to rapidly induce changes in speech production without conscious control, sensorimotor adaptation holds unique promise for rehabilitation. However, its potential clinical applicability is limited by poor understanding of key clinically-relevant features.

First, existing sensorimotor adaptation paradigms do not affect speech in a way that facilitates communication. To improve rehabilitation outcomes, sensorimotor learning must target clinically-relevant speech parameters such as intelligibility. We address this barrier through a novel auditory perturbation that artificially decreases the perceived space between vowels, causing speakers to produce more vowel contrast. Critically, reduced vowel contrast is a hallmark of motor speech disorders and significantly contributes to decreased intelligibility. We determine the effectiveness of this paradigm to increase intelligibility and test how these increases are retained across multiple training sessions, how they generalize to untrained words, and how they can be elicited in complex sentences, characteristics which are key for potential clinical applications.

Second, while sensorimotor adaptation is a robust effect on average, not all individuals learn to the same degree. This variability limits the potential impact to only those who show a large degree of learning. This proposal uses behavioral interventions and brain stimulation that target the hypothesized causes of this variability. By directly manipulating these factors, we can determine, for the first time, the mechanisms that underlie speech motor learning. Additionally, establishing how these factors can be modulated to increase learning would allow treatment to benefit a wider range of individuals.

Although sensorimotor adaptation can quickly induce changes in speech, its current clinical applicability is limited by substantial gaps in our understanding of its mechanisms. By establishing the capacity of sensorimotor adaptation to increase speech intelligibility, characterizing retention and transfer of learning, and identifying the mechanisms underlying variability between individuals, this work lays a critical foundation for future treatments that optimize the clinical impact of motor learning.
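A minimal sketch of the vowel-space-compression idea described above: formant feedback is pulled toward an assumed center of the speaker's vowel space before playback, so perceived vowel contrast shrinks and speakers are expected to compensate by increasing produced contrast. The center location and compression factor are illustrative assumptions, not parameters from the proposal.

```python
# Hypothetical vowel-space compression applied to auditory feedback: each
# produced (F1, F2) pair is pulled toward an assumed vowel-space center before
# it is played back to the speaker. All values are illustrative.
VOWEL_SPACE_CENTER = (500.0, 1500.0)   # assumed (F1, F2) centroid of the speaker's vowels
COMPRESSION = 0.7                      # playback keeps only 70% of the distance from center

def compress_feedback(f1_hz: float, f2_hz: float,
                      center=VOWEL_SPACE_CENTER, scale=COMPRESSION):
    """Return perturbed (F1, F2) for auditory playback, pulled toward the center."""
    c1, c2 = center
    return (c1 + scale * (f1_hz - c1),
            c2 + scale * (f2_hz - c2))

# A produced /ae/ ("bad") at (700, 1700) would be heard closer to the center:
print(compress_feedback(700.0, 1700.0))   # -> (640.0, 1640.0)
```

Because the heard vowels sound less distinct than the produced ones, a speaker who opposes the perturbation ends up producing vowels that are farther apart, which is the clinically desirable direction of change.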
2021 — 2024
Parrell, Benjamin (co-PI); Niziolek, Caroline
N/A Activity Code Description: No activity code was retrieved.
Sensorimotor Adaptation as a Window to Speech Movement Planning @ University of Wisconsin-Madison
Speaking is normally thought of as a two-part process. First, a language planning system transforms ideas, concepts, and/or words into a series of smaller units, such as syllables or speech sounds. These basic linguistic units are then thought to be “read out” into articulatory movements by a separate motor planning system. However, this divided planning process cannot explain speech production in the real world, because it is possible to learn changes to speech movements that are dependent on language context. For example, if you move from California to Wisconsin, you may learn a new accent: you may unconsciously start saying “bag” with the vowel sound in “say” instead of the vowel sound in “sat”. If the same change occurred in all words with this sound (“snag”, “dragon”, “agriculture”), it would suggest you learned a change to the smaller unit (“ag”). However, you might change the way you say “bag” while keeping your old pronunciation of “bagpipe”, suggesting learning that is dependent on word context. The purpose of this research is to understand when linguistic context influences learning and when it does not and to use those results to determine the span of speech motor planning in different contexts. A more accurate characterization of speech motor planning is critical for understanding how we typically speak and how this process breaks down in neurological disorders that impair the way we plan speech movements, such as aphasia and apraxia of speech.

The investigators are involved with Frontiers for Young Minds, a journal that aims to engage the next generation of scientists (kids ages 8-15) by involving them in the peer review process. They will host live demonstrations at the yearly Wisconsin Science Festival, and will provide hands-on science experiences as a part of summer and afterschool programming for Madison-area kids.
To investigate how speech movements are planned, the investigators will use a speech learning task that causes an unconscious change to pronunciation. Participants will talk into a microphone while hearing playback of their voice with certain frequencies shifted, making one vowel sound like another (for example, “bed” could be shifted to sound like “bad”). Over time, this causes participants to unknowingly shift their vowel pronunciation in the opposite direction. Importantly, participants can simultaneously learn to shift a single vowel sound in different ways based on the word in which it appears (for example, pronouncing “bed” and “head” differently, even though they share the same vowel sound). This shows that the word context can differentiate planning of this vowel sound. The investigators will test other linguistic contexts that might allow participants to learn different pronunciations for the “same” speech sound, contexts such as word meaning, phrase structure, pitch or intonation, and gesturing while speaking (for example, pointing). If this learning is possible, it suggests that these contexts are part of the movement plans for speaking, together forming a cohesive unit that can scaffold learning. The work will establish how speech motor planning integrates linguistic representation and other communicative movements such as pitch and gesture.
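To make the adaptation effect described above concrete, the sketch below simulates trial-by-trial compensation with a standard single-rate state-space learning rule, which is a common textbook model of sensorimotor adaptation rather than a model from this project: on each trial, the produced F1 is adjusted opposite to the error heard under the feedback shift. All parameter values are arbitrary assumptions for illustration.

```python
# Illustrative single-rate state-space simulation of formant adaptation.
# Not the investigators' model; parameters are arbitrary assumptions.
BASELINE_F1 = 580.0   # habitual F1 for "bed" (illustrative)
SHIFT_HZ = 100.0      # feedback perturbation pushing "bed" toward "bad"
RETENTION = 0.99      # how much of the learned change carries to the next trial
LEARNING_RATE = 0.1   # fraction of each trial's heard error corrected

def simulate_adaptation(n_trials: int = 60):
    adjustment = 0.0                               # learned change relative to baseline
    produced = []
    for _ in range(n_trials):
        produced_f1 = BASELINE_F1 + adjustment
        heard_f1 = produced_f1 + SHIFT_HZ          # perturbed auditory feedback
        error = heard_f1 - BASELINE_F1             # deviation from the intended sound
        adjustment = RETENTION * adjustment - LEARNING_RATE * error
        produced.append(produced_f1)
    return produced

trace = simulate_adaptation()
print(round(trace[0], 1), round(trace[-1], 1))     # produced F1 drifts downward, opposing the shift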
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.