2002 — 2003
Houde, John Francis
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
The Neural Substrates of Impaired Feedback Control in P* @ University of California San Francisco
DESCRIPTION (provided by applicant): Patients with Parkinson's disease (PD) have a variety of difficulties with speech production. These deficits may result from a reduced ability of PD patients (PDs) to use sensory feedback to control movement, a hypothesis consistent with studies showing that PDs have impaired vocal responses to changes in the pitch (frequency) or loudness (amplitude) of auditory speech feedback. The present proposal seeks the neural substrates of these impairments in speech feedback control. Auditory feedback control of speech will be examined in PD and normal subjects vocalizing while listening to perturbations of the pitch or loudness of the auditory feedback of their speech. These perturbations cause pitch-perturbation responses (PPRs): compensatory changes in pitch or amplitude that are well characterized in normal subjects but only minimally characterized in PDs. The study's first aim is to describe more fully how the PPRs of PDs and normals differ, and which pitch perturbation parameters produce the clearest differences. This will be accomplished through extensive psychophysical testing of PPRs in both PDs and normals. Having determined which pitch perturbations most clearly diagnose the PPR deficits in PDs, the study's second aim is to identify the neural systems involved in PPRs in PDs and normals. This will be done using fMRI scanning of PDs and normals producing or passively listening to perturbed speech. We will look for CNS regions that are more active after perturbed than after unaltered vocalizations, but show no such activity when the subject passively listens to normal and pitch-perturbed speech. After identifying CNS areas that appear to be involved in PPRs, the study's third aim is to determine the temporal order of the neural activity that produces a PPR. To examine this, we will induce PPRs in vocalizing PD and normal subjects while using magnetoencephalography and electroencephalography to record the sequential activation of different brain regions after the perturbation but before the PPR. A better understanding of the neural substrates of impaired speech feedback control in PDs will elucidate the pathophysiology of speech disorders associated with PD and promote refined targeting of treatments for those disorders.
2004 — 2006
Nagarajan, Srikantan (co-PI); Houde, John
N/A Activity Code Description: No activity code was retrieved.
The Role of Auditory Cortex in Speech Motor Control @ University of California-San Francisco
How do we hear ourselves while we're speaking? Understanding how speech perception interacts with speech production is a longstanding issue that has classically been investigated by looking at how altering auditory feedback affects speech. Recently, however, the advent of functional neuroimaging methods has allowed a new approach to the issue: examining how producing speech affects the neural processes serving auditory perception. Several studies have shown that the act of speaking suppresses the normal response to speech sounds in auditory cortex and associated regions. Previous studies by Dr. John F. Houde suggest that this suppression reflects a comparison between actual auditory input and a prediction of that auditory input. Based on these initial studies, Dr. Houde and colleagues developed a model, derived from modern control theory, for how auditory feedback is processed during speech production. With the support of NSF, Dr. Houde is testing this model by using whole-head magnetic source imaging (MSI) to monitor activity in auditory cortex as speakers respond to brief perturbations of their auditory feedback. Prior studies have shown that such speech perturbations cause compensatory responses in speech motor output. In this project, researchers are first determining whether auditory cortex is part of the neural circuitry mediating these compensatory responses by examining how variations in speakers' perturbation responses correlate with activity in auditory cortex. Their next step is determining whether their model of feedback processing explains the responses previously observed in auditory cortex. A key concept of the model is that auditory feedback does not directly affect speech motor output. Instead, incoming feedback is compared with an internally generated prediction of the expected feedback, with the resulting feedback prediction error used to control speech output.
The research tests whether the responses of auditory cortex to feedback perturbations are consistent with this model.
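The prediction-error idea described above can be sketched in a few lines of code. This is a hypothetical toy illustration, not the model being tested in the grant: the function names, gain, and pitch values are all invented for the example.

```python
# Toy sketch of the feedback-comparison idea: actual auditory feedback is
# compared against an internally generated prediction, and only the
# mismatch (the prediction error) drives corrective motor output.

def prediction_error(actual_feedback, predicted_feedback):
    """Mismatch between heard and expected auditory feedback (e.g., pitch in Hz)."""
    return actual_feedback - predicted_feedback

def corrective_command(error, gain=0.5):
    """Scale the prediction error into an opposing motor adjustment."""
    return -gain * error

# When production matches the prediction, the error is zero and no
# correction is issued, consistent with speaking-induced suppression.
print(prediction_error(200.0, 200.0))                       # 0.0
# A +10 Hz perturbation of heard pitch yields a downward correction.
print(corrective_command(prediction_error(210.0, 200.0)))   # -5.0
```

The key point the sketch captures is that raw feedback never reaches the controller directly; only the residual after subtracting the prediction does.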
This project is important not only for understanding the neural circuits linking speech perception and production. It also applies state-of-the-art MSI methods to test predictions of an engineering control theory model of speech feedback processing. The research team is trained in the speech research, engineering control theory, signal processing, neuroscience, and magnetic source imaging methods needed to conduct this research. The project will enable the students and postdocs involved to gain experience in functional neuroimaging and to learn control theory concepts - an area of knowledge important for understanding motor control but usually neglected in the education of neuroscience, bioengineering, and cognitive science students. And although this project focuses on speech motor control, it emphasizes the importance of studying perception in conjunction with production to understand general problems in motor control and motor dysfunction.
2009 — 2013
Nagarajan, Srikantan (co-PI); Houde, John
N/A Activity Code Description: No activity code was retrieved.
Using Neuroimaging to Test Models of Speech Motor Control @ University of California-San Francisco
To speak so a listener understands, the speaker has to accurately produce the sounds of his or her language. While this may seem effortless for most people, actual speech production involves complex mental and physical processes: the activation of speech muscles and precisely timed movements of the vocal tract (e.g., coordinated movements of the mouth, jaw, and so on). The unique properties of an individual's speech organs (e.g., the size of the mouth), combined with developmental changes in these properties over a lifetime, directly influence the way each individual produces speech sounds. How does the human brain accomplish this feat of continually tuning the control of the vocal tract so that it always produces the desired sounds? With support from the National Science Foundation, the investigators will study how speaking involves the brain predicting sensory feedback and correcting the control of the vocal tract when the feedback does not match the prediction. While previous research suggests that this prediction-and-correction process does occur during speaking, little is known about how brain circuitry accomplishes it. In the proposed research, the investigators will examine the time course of neural responses to auditory feedback perturbations (brief changes in pitch, amplitude, or formant frequencies) during speaking. They will use magnetoencephalography (MEG) and electrocorticography (ECoG) to record from normal individuals and from epilepsy patients who have electrodes implanted in their brains to localize seizures. Both methods record neural activity with millisecond time resolution.
The results of these experiments will allow testing of the different models that have been proposed to explain the neural substrate of speech motor control. The outcome of the research will facilitate relating the control of speaking to what is known in other domains of motor control research, and lead to a more complete understanding of the control of movements in humans. The use of advanced functional neuroimaging to study the neural basis of speaking will provide a special opportunity to train and educate a wide range of graduate students, post-doctoral trainees, and medical students who will get involved in the research. The proposed research will also further the development of multi-user research facilities, especially the UCSF Biomagnetic Imaging Laboratory, which houses one of a limited number of MEG scanner facilities in the US.
2010 — 2014
Houde, John Francis
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Neuroimaging of Speech Motor Control @ University of California, San Francisco
DESCRIPTION (provided by applicant): Because communicating is so critical to functioning in the world, disorders of speech production are among the most debilitating neurological conditions. Developing effective treatments will require accurate, interpretable models of the neural processes controlling speaking. In defining such models, the role of sensory feedback has been a key issue: speaking appears to be both a feedforward process (you can speak with sensory feedback blocked) and a feedback process (altering sensory feedback modifies speech). One way to model this duality is to take an existing model of speech motor control based on sensory feedback control and augment it with a separate feedforward controller. This is the approach taken in DIVA, a currently dominant model of speech motor control, in which feedback and feedforward control subsystems combine their outputs in motor cortex to control the vocal tract. In our lab, however, we have been investigating another way of modeling the feedforward and feedback characteristics of speech, called observer-based state feedback control (SFC). Here, control of speech is based entirely on feedback control, but the feedback comes from a surrogate, called an observer, that is only indirectly affected by real sensory feedback. Both models can account for the behavioral characteristics of speaking, but they make very different and testable predictions about the underlying neural processes responsible for those behaviors. We will test the differing predictions of these two models by perturbing the auditory feedback of subjects as they speak and examining their neural responses to these feedback perturbations using two functional neuroimaging methods: magnetoencephalographic imaging (MEG-I) and electrocorticography (ECoG). Outside of speech motor research, SFC models of other motor behaviors (e.g., reaching, eye movements) are becoming more prevalent, in large part because people appear to move in optimal ways (i.e., minimizing expended energy and controlling only task-relevant aspects of their movements) and SFC is the foundation of modern optimal control theory. If the neural control of speaking were shown to be consistent with an SFC model, we could relate it to other domains of motor control research and leverage an extensive theoretical knowledge base, allowing us to make powerful predictions about the model's behavior.
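The observer idea described in the abstract above can be illustrated with a minimal scalar simulation. This is a generic textbook-style sketch under invented dynamics and gains; it is not the DIVA model or the grant's actual SFC equations. The point it demonstrates is the defining feature of observer-based control: the control law acts on an internal estimate, and real sensory feedback only nudges that estimate via a prediction error.

```python
# Minimal scalar sketch of observer-based state feedback control (SFC):
# the controller never sees raw sensory feedback; it acts on the
# observer's estimate, which is updated from an efference copy of the
# motor command plus a sensory prediction error.

def run_sfc(target=100.0, steps=20, obs_gain=0.4, ctrl_gain=0.8, noise=None):
    state = 0.0        # true vocal state (e.g., pitch in Hz)
    estimate = 0.0     # observer's internal estimate of that state
    history = []
    for t in range(steps):
        # Control law uses the ESTIMATE, never raw feedback directly.
        command = ctrl_gain * (target - estimate)
        state += command             # the vocal tract (plant) responds
        # Observer predicts the consequence of its own command
        # (efference copy), then corrects with the sensory residual.
        estimate += command
        sensed = state if noise is None else state + noise[t % len(noise)]
        estimate += obs_gain * (sensed - estimate)
        history.append(state)
    return history

print(round(run_sfc()[-1], 3))  # output converges to the 100.0 target
```

Because the observer keeps running even when `sensed` is unavailable or noisy, this style of model can speak "open loop" yet still respond to feedback perturbations, which is the duality the abstract describes.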
2013 — 2017
Houde, John; Chang, Edward (co-PI); Nagarajan, Srikantan
N/A Activity Code Description: No activity code was retrieved.
Function of Auditory Feedback Processing During Speech @ University of California-San Francisco
The goal of speaking is to produce the right sounds to convey an intended message. Accordingly, speakers monitor their sound output and use this auditory feedback to adjust their ongoing speech production. Drs. Nagarajan and Houde hypothesize that the brain not only generates the motor signals that control speech production but also generates a prediction of what this speech should sound like, and performs an ongoing comparison during speaking in order to dynamically adjust speech production. Whole-brain magnetoencephalographic imaging (MEG-I) experiments will be performed to monitor subjects' auditory neural activity as they hear themselves speak. The first experiment tests the task and feature specificity of the feedback prediction: if the prediction encodes only task-relevant acoustic features (e.g., pitch for a singing task), then auditory cortical activity will depend only on the acoustic goal for that task. The second experiment tests the importance of categorical identity in the process of comparing feedback with the prediction: if feedback is altered enough to change the meaning of a word (e.g., when /bad/ is altered to /dad/), this is expected to have a much larger impact on auditory cortical activity than non-categorical alterations. These experiments are expected to improve our understanding of how the brain uses auditory feedback to maintain accuracy in speech production.
The proposed research also has broader impacts. First, the expected findings may contribute to a better understanding of, and more effective treatments for, speech dysfunctions. For example, accurate models of the brain networks used to control speaking form the basis for testable hypotheses about the neural origins of speech disorders such as stuttering or spasmodic dysphonia. Second, the project will provide a special opportunity to train and educate graduate students and postdoctoral fellows in the use of real-time speech alteration and MEG-I techniques, which are available at only a few US institutions. Outreach with collaborators at San Francisco State University, and to the San Francisco Unified School District through the NSF-funded Science Education Partnership (SEP) program, will provide research experience to their students, specifically students from socioeconomically disadvantaged minorities who are under-represented in the sciences. The research team will also participate in the big-data sharing effort by making its data and analysis tools available, supporting efforts to use real data in the teaching of STEM-related courses and enabling participation in discovery science by those who would otherwise have no access to such data.
2014 — 2018
Houde, John Francis; Nagarajan, Srikantan S.
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Imaging Sensorimotor Adaptation and Compensation of Speech @ University of California, San Francisco
DESCRIPTION (provided by applicant): The motor act of speaking is the last and crucial step in translating intended messages into the physical processes that convey them, i.e., the articulator movements that produce the intended vocal sounds. Surprisingly, many aspects of this process remain poorly understood, in particular the role of feedback processing in controlling speech. Deficits in feedback processing are implicated in major speech impairments, including stuttering, conduction aphasia, spasmodic dysphonia, and apraxia of speech. However, due to a paucity of testable models and a lack of well-established methods, little is understood about the neural mechanisms underlying auditory feedback control in speech. In this application, we extend a quantitative model of speech motor control called state-feedback control (SFC) that we have previously developed. SFC posits that the brain controls speech using internal predictions of the state of the vocal tract and of the sensory consequences of speaking. The SFC model accounts for many behavioral and neural phenomena in speech motor control, including two key behavioral responses to unexpectedly altered auditory feedback: compensation and adaptation. Compensation refers to short-term changes in speech output in response to a feedback alteration. Adaptation refers to long-term changes in speech output that persist even after the feedback alteration is removed. We propose to use state-of-the-art methods for magnetoencephalographic imaging (MEGI) and electrocorticography (ECoG) in conjunction with cutting-edge methods of quantitative modeling (Bayesian estimation) and behavioral experimentation (real-time speech feedback alteration, audiomotor studies with a touch screen, speech-controlled visual stimulation). Our goals are to further elaborate our SFC model of speech motor control and to examine which aspects of speech constrain its adaptation behavior. The specific aims are to: 1) determine the neural correlates of sensorimotor adaptation in speech; 2) determine the role of somatosensory feedback in compensation and adaptation; and 3) isolate perceptual contributions to speech compensation and adaptation. The proposed studies will refine our SFC model of speech motor control, increase its predictive power, and examine which specific aspects of speech constrain its compensation and adaptation behaviors. Such an understanding of the neural basis of speaking has the potential to support better treatments for dysfunctions of speech motor control such as stuttering, conduction aphasia, spasmodic dysphonia, apraxia of speech, and hypophonic dysarthria in Parkinson's disease.
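The compensation/adaptation distinction defined above can be made concrete with a toy trial-by-trial simulation. All gains, trial counts, and the perturbation schedule are invented for illustration; this is not the grant's Bayesian model.

```python
# Toy illustration of compensation vs. adaptation. A within-trial
# corrective response (compensation) opposes a feedback shift while it
# is applied; a slowly updated feedforward term (adaptation) persists
# after the shift is removed, producing an after-effect.

def simulate(trials=60, shift_on=(20, 40), shift=50.0,
             comp_gain=0.3, adapt_rate=0.1):
    feedforward = 0.0                # learned bias, carried across trials
    outputs = []
    for t in range(trials):
        perturb = shift if shift_on[0] <= t < shift_on[1] else 0.0
        heard_error = perturb + feedforward      # deviation from target
        compensation = -comp_gain * heard_error  # within-trial correction
        outputs.append(feedforward + compensation)
        # Adaptation: the feedforward term drifts to cancel persistent error.
        feedforward += adapt_rate * compensation
    return outputs

out = simulate()
# Before the shift there is nothing to correct; during it, output moves
# opposite the perturbation; just after it ends, output remains shifted.
print(out[5], out[25] < 0.0, out[41] < 0.0)
```

Trial 41 is the interesting one: the perturbation is gone, yet output is still displaced because the feedforward term has not yet washed out. That persistence is what distinguishes adaptation from pure compensation.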
2015 — 2016
Courey, Mark S.; Houde, John Francis; Nagarajan, Srikantan S. (co-PI)
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Timing of Neural Activity in Spasmodic Dysphonia @ University of California, San Francisco
DESCRIPTION (provided by applicant): Spasmodic dysphonia (SD) is a debilitating voicing disorder in which the laryngeal muscles go intermittently into spasm. This prevents the vocal folds from vibrating efficiently and results in involuntary interruptions of speech. Treatment options for SD are limited, with only temporary symptom relief provided by botulinum toxin (Botox) injections. SD is thought to be a dysfunction originating in the central nervous system (CNS) and may involve abnormal cortical processing of sensory feedback during phonation. However, the underlying causes of SD remain largely unknown. Several neuroimaging studies have examined phonation in patients with SD and found aberrant activations in a number of cortical and subcortical regions. But because these studies were based on fMRI and PET, which have poor temporal resolution, they do not resolve when these aberrant activations occurred relative to the act of phonation. As a result, critical clues about what causes SD have likely been missed. Our lab has done extensive work modeling the dynamics of speech production. From our model, we conclude that inferring functional impairments in SD from aberrant CNS activity requires knowing more than where in the CNS the aberrant activity occurs; it also requires knowing when in the act of phonation (e.g., initial glottal movement, voice onset, sustained phonation) it occurs. Here we propose to address this issue using magnetoencephalographic imaging (MEGI), a functional imaging method our lab has developed based on MEG, which can reconstruct cortical activity with millisecond accuracy and sub-centimeter resolution. In Specific Aim 1, we will reconstruct cortical activity using MEGI while subjects (patients with SD and neurotypical controls) repeatedly produce a steady-state phonation and we monitor their glottal movement with electromyography (EMG). Abnormal pre/motor cortical activity in patients (compared to controls) prior to glottal movement would suggest impairments in feedforward motor preparation. Abnormal activity immediately after glottal movement onset but prior to phonation would suggest selective deficits in somatosensory feedback processing, whereas abnormal activity following phonation would suggest deficits in somatosensory and/or auditory feedback processing. To isolate whether SD specifically involves deficits in auditory feedback processing, in Specific Aim 2 we will use MEGI to monitor cortical activity as subjects phonate while we briefly perturb the pitch they hear in the audio feedback of their ongoing phonation. Abnormal cortical responses to the perturbation seen in patients prior to compensation would suggest deficits in auditory feedback processing, while those seen after the onset of compensation would suggest abnormal motor responses to auditory feedback. Results from the proposed studies will help us isolate where and when the deficits related to SD arise in the cortical pathways controlling phonation. This will help us devise novel, and perhaps more effective, treatments for SD and better understand how existing treatments work or can be improved.
2017 — 2021
Gorno Tempini, Maria Luisa; Houde, John Francis; Nagarajan, Srikantan S.
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Dynamic Brain Imaging of Speech in Primary Progressive Aphasia @ University of California, San Francisco
Project Summary: Primary progressive aphasia (PPA) is a clinical syndrome characterized by isolated, progressive loss of speech and language abilities. PPA occurs when pathological and molecular changes of frontotemporal lobar degeneration (FTLD) or Alzheimer's disease (AD) selectively damage language-specific networks of the brain. There is considerable variability in the distribution of brain atrophy in PPA, and patterns of language deficits vary accordingly. In particular, three clinical variants of PPA have been identified: i) the logopenic variant (lvPPA), associated with loss of phonological abilities, left temporal-parietal atrophy, and most often AD pathology; ii) the nonfluent/agrammatic variant (nfvPPA), with motor speech and grammar deficits, left inferior frontal damage, and often FTLD pathology; and iii) the semantic variant (svPPA), with loss of conceptual knowledge, anterior temporal damage, and also most often FTLD-type pathology. This classification has greatly improved PPA diagnosis, but clinical heterogeneity remains an issue, even within each variant, as individual patients differ in their specific patterns of atrophy, language deficits, and pathology. In the early stages of the disease, differential diagnosis between lvPPA and nfvPPA is particularly challenging, as speech errors can occur in both conditions and atrophy might initially be subtle. To better distinguish between PPA variants, we propose in this grant to examine neural oscillations in PPA using high-temporal-resolution brain imaging with magnetoencephalography (MEGI). We will examine regional neural oscillatory activity associated with speaking with a precision unmatched by any other imaging modality. MEGI data will be examined in conjunction with detailed cognitive and language testing, MRI, and molecular PET imaging with the amyloid-binding tracer PIB (a biomarker for AD) that will be available in all our subjects. The specific aims are: 1) to identify differential patterns of frequency-specific resting-state oscillatory activity and functional connectivity in early stages of PPA variants; 2) to examine cortical oscillatory network activity during speech feedback processing in PPA variants; and 3) to examine cortical oscillatory network activity during sequential speech production in PPA variants. Overall, our findings will enable us to identify some of the earliest functional manifestations of brain network dysfunction in PPA, leading to the development of useful biomarkers to detect and longitudinally assess progressive speech decline in PPA.
2018 — 2020
Houde, John Francis; Nagarajan, Srikantan S. (co-PI)
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Crcns: Modeling the Role of Auditory Feedback in Speech Motor Control @ University of California, San Francisco
When we speak, listeners hear and understand us if we speak correctly. But we also hear ourselves, and this auditory feedback affects our ongoing speech: delaying it causes dysfluency; perturbing its pitch or formants induces compensation. Yet we can also speak intelligibly even when we cannot hear ourselves. For this reason, most models of speech motor control suppose that during speaking, auditory processing is engaged only when auditory feedback is available. In this grant, we propose to investigate a computational model of speaking that represents a major departure from this view. Our model proposes that the auditory system always plays a major role in controlling speaking, regardless of whether auditory feedback is available. In our state-feedback control (SFC) model of speech production, we posit two things about the role of the auditory system. First, the auditory system continuously maintains an estimate of current vocal output. This estimate is derived not only from available auditory feedback but also from multiple other sources of information, including motor efference, other sensory modalities, and phonological and lexical context. Second, this estimate of current vocal output is used both at a low level, to monitor and correct ongoing speech motor output, and at a higher level, to regulate the production of utterance sequences. By comparing computational simulations of our model with functional imaging experiments, we will test key predictions of the model as they apply to a wide range of speech production, from single utterances to utterance sequences. The specific aims of this grant are (1) to demonstrate that the auditory system continuously maintains an estimate of current vocal output, and (2) to determine how auditory feedback processing controls the production of utterance sequences.
The proposed work not only addresses fundamentally important basic science questions about speech production, but also has broad clinical impact since abnormalities in auditory feedback processing are implicated in many speech impairments.
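The multi-source estimate posited above can be sketched as a generic precision-weighted cue fusion, a standard textbook construction assumed here for illustration only; it is not the grant's actual computational model, and all cue values and variances below are invented.

```python
# Sketch of maintaining a vocal-output estimate by fusing several
# information sources (efference copy, auditory feedback, context),
# each weighted by its reliability (inverse variance).

def fuse(sources):
    """Precision-weighted average of (value, variance) cues."""
    weights = [1.0 / var for _, var in sources]
    total = sum(weights)
    return sum(w * val for w, (val, _) in zip(weights, sources)) / total

# With auditory feedback available, the estimate leans on it heavily
# because its (assumed) variance is lowest.
with_audio = fuse([(100.0, 25.0),    # motor efference copy
                   (104.0, 4.0),     # auditory feedback (reliable)
                   (98.0, 100.0)])   # lexical/phonological context
# With feedback masked, the estimate survives, carried by other sources.
no_audio = fuse([(100.0, 25.0),
                 (98.0, 100.0)])
print(round(with_audio, 2), round(no_audio, 2))
```

This captures the abstract's central claim in miniature: removing the auditory cue degrades but does not abolish the running estimate, so the auditory system can keep contributing to control even when feedback is unavailable.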
2019 — 2021
Houde, John Francis; Nagarajan, Srikantan S. (co-PI)
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
The Role of the Cerebellum in Speech @ University of California, San Francisco
PROJECT SUMMARY: This proposal investigates the role of the cerebellum in speech, building upon theoretical models and experimental methods that have proven useful in understanding cerebellar function in reaching and walking. Neuroimaging and lesion studies have provided compelling evidence that the cerebellum is an integral part of the speech production network, though its precise role in the control of speech remains unclear. Furthermore, damage to the cerebellum (either degenerative or focal) can lead to ataxic dysarthria, a motor speech disorder characterized, in part, by impaired articulation and severe temporal deficits. This grant seeks to bridge the gap between theoretical models of cerebellar function and the speech symptoms associated with ataxic dysarthria. Two mechanisms underlie speech motor control: feedback and feedforward control. In feedback control, speakers use sensory feedback (e.g., of their own voice) to control their speech. In feedforward control, speakers use knowledge gained from their past speech productions, rather than on-line feedback, to control their speech. This proposal entails a systematic plan to elucidate the role of the cerebellum in feedforward and feedback control of speech. A central hypothesis is that the cerebellum is especially critical for the feedforward control of speech but has little involvement in feedback control. To explore this hypothesis, we will obtain converging evidence from three innovative methodologies: 1) neuropsychological studies of speech-motor responses to real-time altered auditory feedback in patients with cerebellar atrophy (CA) and matched healthy controls; 2) parallel studies in healthy controls undergoing theta-burst transcranial magnetic stimulation to create "virtual lesions" of the cerebellum; and 3) structural and functional studies in CA patients to examine the relationship between cerebellar lesion location, dysarthria symptoms, and feedforward and feedback control ability.
Speech provides an important opportunity to examine how well current theories of cerebellar function generalize to a novel effector (the vocal tract) and sensory (auditory) domain. Its purpose for communication imposes exacting spectro-temporal constraints not seen in other motor domains. Furthermore, the distinctive balance of feedback and feedforward control in speech allows us to examine changes in both control types after cerebellar damage. Critically, this is the first work examining the link between theoretically motivated control deficits in CA patients and the speech symptoms associated with ataxic dysarthria, as well as their neural correlates.