1997 — 1998 |
Nagarajan, Srikantan S |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Somatosensory Duration Discrimination Plasticity @ University of California San Francisco
sensory discrimination; discrimination learning; neural information processing; somesthetic sensory cortex; neural plasticity; stimulus/response; behavioral/social science research; human subject; Primates; clinical research;
|
1 |
2002 — 2011 |
Nagarajan, Srikantan S |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Cortical Spatiotemporal Plasticity in Humans @ University of California San Francisco
DESCRIPTION (Provided by Applicant): Understanding the relationship between the complexity of human learning and associated brain function is one of the most fascinating journeys of basic science. In addition to being an important academic question, studies of brain function associated with learning have very practical applications for improving diagnosis and therapy of learning disabilities. Learning disability affects 10 to 20 percent of Americans, with severe socioeconomic consequences for their quality of life and health. This proposal focuses on understanding the neural processes underlying normal human learning of auditory information that is transient and occurs in rapid succession. The most intuitive example of such processing is reflected in our ability to learn and understand speech. Deficits in learning such forms of information are associated with dyslexia and language-learning impairment. A few of the currently popular tools used to study the relationships between human learning and associated brain processes are Positron Emission Tomography (PET), Functional Magnetic Resonance Imaging (fMRI), Magnetoencephalography (MEG) and Electroencephalography (EEG). However, of all these methods, only MEG and EEG offer adequate time resolution, which is essential for the proposed study because brain responses to auditory stimuli typically occur on the time scale of milliseconds. Data obtained using MEG and EEG are often analyzed without consideration of the dynamics of cortical activity, and simplified source and head models are often assumed; information about brain plasticity obtained in this fashion is hard to understand and interpret. Recently, several new methods have been developed to process MEG and EEG data. However, the usefulness of these methods has not been adequately demonstrated on real data. The first specific aim of this proposal is to research and to validate novel analysis methods that will enhance the interpretation of EEG and MEG data.
We will use realistic head modeling for imaging distributed sources and account for the spatio-temporal dynamics of brain activity. We will empirically validate the usefulness of these methods for understanding the dynamics of functional brain plasticity using computer simulations and experiments. The second specific aim of the proposal is to determine the relationship between the dynamics of functional brain plasticity in spatio-temporal responses to successive stimuli and changes in psychophysical thresholds that occur as a result of perceptual learning. We will focus on learning in rate discrimination of amplitude-modulated tone trains in normal adults as a first step towards understanding learning of simple time-varying auditory stimuli that occur in rapid succession. We will examine and correlate learning-induced behavioral changes with changes in the spatial and temporal patterns of activity within and across cortical areas. Such a multidisciplinary approach, which combines methods of scientific computing and functional brain imaging using MEG and EEG, should enhance our understanding of the general neural mechanisms underlying human perceptual learning. These results in normal individuals should provide crucial information for the development, refinement and evaluation of diagnosis and therapy for individuals with learning disabilities.
|
1 |
2004 — 2006 |
Nagarajan, Srikantan; Houde, John |
N/A Activity Code Description: No activity code was retrieved. |
The Role of Auditory Cortex in Speech Motor Control @ University of California-San Francisco
How do we hear ourselves while we're speaking? Understanding how speech perception interacts with speech production is a longstanding issue that has classically been investigated by looking at how altering auditory feedback affects speech. Recently, however, the advent of functional neuroimaging methods has allowed a new approach to the issue: examining how producing speech affects the neural processes serving auditory perception. Several studies have shown that the act of speaking suppresses the normal response to speech sounds in auditory cortex and associated regions. Previous studies by Dr. John F. Houde suggest that this suppression reflects a comparison between actual auditory input and a prediction of that auditory input. Based on these initial studies, Dr. Houde and colleagues developed a model, derived from modern control theory, for how auditory feedback is processed during speech production. With the support of NSF, Dr. Houde is testing this model by using whole-head magnetic source imaging (MSI) to monitor activity in auditory cortex as speakers respond to brief perturbations of their auditory feedback. Prior studies have shown that such speech perturbations cause compensatory responses in speech motor output. In this project, researchers are first determining whether auditory cortex is part of the neural circuitry mediating these compensatory responses by examining how variations in speakers' perturbation responses are correlated with activity in auditory cortex. Their next step is determining whether their model of feedback processing explains the responses previously observed in auditory cortex. A key concept of the model is that auditory feedback does not directly affect speech motor output. Instead, incoming feedback is compared with an internally generated prediction of the expected feedback, with the resulting feedback prediction error used to control speech output.
The research tests whether the responses of auditory cortex to feedback perturbations are consistent with this model.
This project is important not only for understanding the neural circuits linking speech perception and production; it also uses state-of-the-art MSI methods to test predictions of an engineering control theory model of speech feedback processing. The research team is trained in speech research, engineering control theory, signal processing, neuroscience and the magnetic source imaging methods needed to conduct this research. The project will enable the students and postdocs involved to gain experience in functional neuroimaging as well as learn about control theory concepts, an area of knowledge important for understanding motor control but usually neglected in the education of neuroscience, bioengineering and cognitive science students. Although this research is focused on speech motor control, it emphasizes the importance of studying perception in conjunction with production to understand general problems in motor control and motor dysfunctions.
|
0.915 |
2004 — 2008 |
Nagarajan, Srikantan S |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Mechanisms of Auditory Feedback During Speech @ University of California San Francisco
DESCRIPTION (provided by applicant): Understanding how auditory feedback is processed during speaking provides insights into fundamental mechanisms underlying speech production and perception. This knowledge might also ultimately contribute to early detection of, and lead to treatment strategies for, a number of prevalent clinical conditions in which abnormal processing of auditory feedback has been reported (e.g. stuttering, Parkinson's disease, schizophrenia). While many behavioral studies have examined how auditory perception affects speech production, only recently have functional neuroimaging studies begun examining how producing speech affects the neural processes serving auditory perception. Recent studies have shown that in auditory cortex and other areas in the superior temporal plane, speaking causes "speaking-induced suppression" (SIS): the response to self-produced speech is suppressed when compared to identical speech from an external source. In our recent work, we have shown that SIS in auditory cortex does not result from overall inhibition of this area during speaking. Rather, SIS appears to be a neural correlate of a feedback prediction error (FPE): a comparison between actual auditory input and an internal "speaking-induced prediction" (SIP) of that auditory input. SIS expression in auditory cortex has led to the hypothesis that SIS reflects auditory discrimination of self-produced from externally produced stimuli (Self-non-Self Hypothesis). However, refinements in our understanding of auditory feedback in speech motor control, which are supported by behavioral studies and our preliminary data, suggest that SIS may also reflect feedback processing for speech motor control (Speech Motor Control Hypothesis). We have developed a unifying conceptual model that embodies both hypotheses, and our proposed experiments use SIS to test the neural correlates and the validity of this model.
The specific aims are to determine how SIS is modulated by 1) altered feedback, 2) speech target dynamics and 3) speech motor adaptation. These manipulations not only help us to unravel the functional significance of SIS but also help us determine if there is a differentiation of the function of SIS across the superior temporal plane. Furthermore, how activity in other parts of the brain is affected by our experimental manipulations will allow us to determine the neural correlates of the mechanisms that generate SIS. Our approach capitalizes on unique real-time speech feedback alteration methods used with functional magnetic resonance imaging (fMRI) and magnetic source imaging (MSI). The excellent spatial resolution of fMRI will enable reconstruction of spatial locations of activity related to SIS and SIP while the excellent temporal resolution of MSI will enable us to reconstruct the sequence of activation in these areas.
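The FPE account of SIS lends itself to a toy illustration. The sketch below is a hypothetical, minimal rendering of the idea only: the signals, the 0.9 prediction accuracy, and the absolute-error response rule are illustrative assumptions, not the proposal's actual model.

```python
import numpy as np

def auditory_response(actual_feedback, predicted_feedback, gain=1.0):
    """Auditory cortical response modeled as a scaled |prediction error|."""
    return gain * np.abs(actual_feedback - predicted_feedback)

t = np.linspace(0, 0.5, 500)                 # 500 ms of "audio"
speech = np.sin(2 * np.pi * 10 * t)          # stand-in auditory input

# Self-produced speech: the speaking-induced prediction (SIP) closely
# matches the input, so the prediction error -- and the response -- is small.
self_resp = auditory_response(speech, 0.9 * speech)

# External playback: no motor-driven prediction is available.
ext_resp = auditory_response(speech, np.zeros_like(speech))

print(self_resp.mean() < ext_resp.mean())    # True: self-response suppressed (SIS)
```

Under this caricature, any manipulation that degrades the prediction (altered feedback, novel speech targets) shrinks the match and hence reduces SIS, which is the logic the proposed manipulations exploit.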
|
1 |
2009 — 2013 |
Nagarajan, Srikantan; Houde, John |
N/A Activity Code Description: No activity code was retrieved. |
Using Neuroimaging to Test Models of Speech Motor Control @ University of California-San Francisco
To speak so a listener understands, the speaker has to accurately produce the sounds of his or her language. While this may seem effortless for most people, actual speech production involves complex mental and physical processes: the activation of speech muscles and precisely timed movements in the vocal tract (e.g., the combination of movements of the mouth, jaw, and so on). The unique properties of an individual's speech organs (e.g., size of mouth), combined with the developmental changes of these properties over a lifetime, directly influence the way speech sounds are produced by each individual. How does the human brain accomplish this feat of continually tuning the control of the vocal tract so that it always produces the sounds desired? With support from the National Science Foundation, the investigators will study how the speaking process involves the brain predicting the sensory feedback and correcting the control of the vocal tract when the feedback does not match the prediction. While previous research suggests that this prediction and correction process does occur during speaking, there is little information about how the circuitry in the brain would accomplish such a process. In the proposed research, the investigators will examine the time course of neural responses to audio feedback perturbations (brief changes in pitch, amplitude, or formant frequencies) during speaking. They will use magnetoencephalography (MEG) and electrocorticography (ECOG) methods to record normal individuals and epilepsy patients who have electrodes implanted in their brains to localize seizures. Both methods allow neural activity in the brain to be recorded at millisecond time resolution.
The results of these experiments will allow for the testing of different models that have been proposed to explain the neural substrate of speech motor control. The outcome of the research will facilitate relating the control of speaking to what is known in other domains of motor control research, and lead to a more complete understanding of the control of movements in humans. The use of advanced functional neuroimaging to study the neural basis of speaking will provide a special opportunity to train and educate a wide range of graduate students, post-doctoral trainees, and medical students who will get involved in the research. The proposed research will also further the development of multi-user research facilities, especially at the UCSF Biomagnetic Imaging Laboratory that has one of a limited number of MEG scanner facilities in the US.
|
0.915 |
2011 — 2012 |
Nagarajan, Srikantan S |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Fusion of Electromagnetic Brain Imaging and fMRI @ University of California, San Francisco
DESCRIPTION (provided by applicant): Multimodal non-invasive functional brain imaging has made a tremendous impact in improving our understanding of the neural correlates of human behavior, and is now an indispensable tool for systems and cognitive neuroscientists. We propose to develop state-of-the-art multimodal functional imaging fusion algorithms for accurate visualization of the brain's dynamic activity at high spatial and temporal resolution. We propose to develop algorithms that combine the complementary high spatial resolution of functional MRI (fMRI) with the high temporal resolution of magnetoencephalography (MEG) and electroencephalography (EEG) data for high-fidelity reconstruction of brain activity. In recent years, our research group has developed a suite of novel and powerful algorithms for MEG/EEG imaging that are superior to existing benchmark algorithms, and we have compared these results with electrocorticography (ECOG). Specifically, our algorithms can solve for many brain sources, including sources located far from the sensors, in the presence of large interference from unrelated brain sources, using fast and robust probabilistic inference techniques. Here, we propose to extend this success in M/EEG inverse algorithms into the domain of multimodal imaging data fusion. Our overall goal is to ultimately produce robust, high-fidelity videos of event-related brain activation at sub-millimeter and sub-millisecond resolution from noisy MEG/EEG and fMRI data using state-of-the-art machine learning algorithms. Specifically, we propose to extend a powerful new algorithm that we have recently developed, called Champagne, into two new fusion algorithms that combine fMRI, MEG and EEG data in different ways. Performance of both algorithms will first be rigorously evaluated in simulations, including performance comparisons with existing benchmark fusion algorithms.
Algorithms will then be tested for consistency on four fMRI-MEG+EEG datasets from healthy controls obtained for identical paradigms (auditory, motor, picture naming and verb generation) and two fMRI-EEG datasets (face and motion perception). Additional validation studies will also be performed on fMRI-MEG/EEG datasets obtained from epilepsy patients and compared to electrocorticography (ECoG). Following successful testing and evaluation, all algorithms developed in this grant proposal, as well as example validation datasets, will be distributed using NUTMEG (nutmeg.berkeley.edu), an open-source software toolbox that we have developed. PUBLIC HEALTH RELEVANCE: Multimodal non-invasive functional brain imaging has made a tremendous impact in improving our understanding of the neural correlates of human behavior, and is now an indispensable tool for systems and cognitive neuroscientists. With the development of appropriate analytical tools, multimodal functional brain imaging is in the process of revolutionizing the diagnosis and treatment of a variety of neurological and psychiatric disorders, such as autism, schizophrenia, dementia, and epilepsy, that affect tens of millions of Americans.
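As a rough illustration of how fMRI can constrain an M/EEG inverse problem, the sketch below implements a classical fMRI-weighted minimum-norm estimate, in which fMRI activation inflates the prior variance of sources in the active region. This is a generic textbook technique, not the Champagne-based fusion algorithms proposed here; the lead field, weights, and noise level are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200

L = rng.standard_normal((n_sensors, n_sources))     # simulated lead field
fmri_weight = np.ones(n_sources)
fmri_weight[50:60] = 10.0                           # fMRI "hot spot" prior

# Simulate one active source inside the fMRI-active region.
x_true = np.zeros(n_sources)
x_true[55] = 1.0
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Weighted minimum-norm estimate: x = R L^T (L R L^T + lam I)^-1 y,
# where R is the fMRI-informed source prior covariance.
R = np.diag(fmri_weight)
lam = 1e-2
x_hat = R @ L.T @ np.linalg.solve(L @ R @ L.T + lam * np.eye(n_sensors), y)

peak = int(np.argmax(np.abs(x_hat)))
print(peak)   # peak should fall inside the fMRI-weighted region
```

The design choice mirrored here is the one the fusion literature debates: fMRI enters only as a soft prior on source variance, so M/EEG data can still override an fMRI hot spot that carries no electrophysiological signal.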
|
1 |
2013 — 2017 |
Houde, John (co-PI); Chang, Edward (co-PI); Nagarajan, Srikantan |
N/A Activity Code Description: No activity code was retrieved. |
Function of Auditory Feedback Processing During Speech @ University of California-San Francisco
The goal of speaking is to produce the right sounds that convey an intended message. Accordingly, speakers monitor their sound output and use this auditory feedback to further adjust their speech production. Drs. Nagarajan and Houde hypothesized that the brain not only generates the motor signals that control speech production but also generates a prediction of what this speech should sound like, and performs an ongoing comparison during speaking in order to dynamically adjust speech production. Whole-brain magnetoencephalographic imaging (MEG-I) experiments will be performed to monitor subjects' auditory neural activity as they hear themselves speak. The first experiment tests the task and feature specificity of the feedback prediction. If the prediction encodes only task-relevant acoustic features (i.e., pitch for a singing task), then auditory cortical activity will depend only on the acoustic goal for that task. The second experiment tests the importance of categorical identity in the process of comparing feedback with the prediction. If feedback is altered enough to change the meaning of a word (e.g., when /bad/ is altered to /dad/), this is expected to have a much larger impact on auditory cortical activity than non-categorical alterations. These experiments are expected to improve our understanding of how the brain uses auditory feedback to maintain accuracy in speech production.
The proposed research activity also has broader impact. First, the expected findings may contribute to a better understanding of, and more effective treatments for, speech dysfunctions. For example, accurate models of the brain networks used to control speaking form the basis for testable hypotheses about the neural origins of speech disorders such as stuttering or spasmodic dysphonia. Second, the research project will provide a special opportunity to train and educate graduate students and postdoctoral fellows in the use of real-time speech alteration and MEG-I techniques, which are available at only a few US institutions. Outreach with collaborators at San Francisco State University, and to the San Francisco Unified School District through the NSF-funded Science Education Partnership (SEP) program, will provide research experience to their students, specifically students from socioeconomically disadvantaged minorities who are under-represented in the sciences. The research team will also participate in the big data sharing effort by making the data and analysis tools available to support efforts to make use of real data in the teaching of STEM-related courses and to enable participation in discovery science by those who would otherwise have no access to such data.
|
0.915 |
2014 — 2018 |
Houde, John Francis; Nagarajan, Srikantan S. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Imaging Sensorimotor Adaptation and Compensation of Speech @ University of California, San Francisco
DESCRIPTION (provided by applicant): The motor act of speaking is the last and crucial step in the process of translating intended messages into the physical processes that convey those messages, i.e., the movements of articulators that produce the intended vocal sounds. Surprisingly, many aspects of this process remain poorly understood, in particular the role of feedback processing in controlling speech. Deficits in feedback processing are implicated in major speech impairments including stuttering, conduction aphasia, spasmodic dysphonia, and apraxia of speech. However, due to a paucity of testable models and a lack of well-established methods, little is understood about the neural mechanisms underlying auditory feedback control in speech. In this grant application, we extend a quantitative model for speech motor control called state-feedback control (SFC) that we have previously developed. SFC posits that the brain controls speech using internal predictions of the state of the vocal tract and of the sensory consequences of speaking. The SFC model accounts for many behavioral and neural phenomena in speech motor control, including two key behavioral responses to unexpectedly altered auditory feedback: compensation and adaptation. Compensation refers to short-term changes in speech output in response to feedback alteration. Adaptation refers to long-term changes in speech output that persist even after the feedback alteration is removed. Here, we propose to use state-of-the-art methods for magnetoencephalographic imaging (MEGI) and electrocorticography (ECOG) in conjunction with cutting-edge methods of quantitative modeling (Bayesian estimation) and behavioral experimentation (real-time speech feedback alteration, audiomotor studies with a touch screen, and speech-controlled visual stimulation). Our goals are to further elaborate our SFC model of speech motor control and examine what aspects of speech constrain its adaptation behavior.
The specific aims are to: 1) determine the neural correlates of sensorimotor adaptation in speech; 2) determine the role of somatosensory feedback in compensation and adaptation; and 3) isolate perceptual contributions to speech compensation and adaptation. The proposed studies will refine our SFC model of speech motor control, increase its predictive power, and examine what specific aspects of speech constrain its compensation and adaptation behaviors. Such an understanding of the neural basis of speaking has the potential to enable better treatments of dysfunctions of speech motor control such as stuttering, conduction aphasia, spasmodic dysphonia, apraxia of speech, and hypophonic dysarthria in Parkinson's disease.
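The compensation behavior that SFC predicts can be caricatured with a toy observer-controller loop: an internal state estimate is corrected by the auditory prediction error, and the motor command pushes output toward the target. All gains and pitch values below are hypothetical assumptions, and this idealized loop fully opposes the perturbation, whereas human compensation is typically only partial.

```python
# Toy sketch of the state-feedback-control (SFC) idea for a pitch
# perturbation experiment. Gains, target, and perturbation size are
# illustrative assumptions, not fitted model parameters.

target = 200.0          # target pitch (Hz), hypothetical
K_obs = 0.3             # observer gain on the auditory prediction error
K_ctrl = 0.5            # controller gain toward the target
perturb = 20.0          # experimenter-applied feedback shift (Hz)

state_est = target      # internal estimate of the produced pitch
produced = target       # actual produced pitch

for step in range(200):
    heard = produced + perturb                 # altered auditory feedback
    pred_error = heard - state_est             # feedback prediction error
    state_est += K_obs * pred_error            # observer update
    produced += K_ctrl * (target - state_est)  # corrective motor command

print(round(produced, 1))   # 180.0: output shifts down to oppose the +20 Hz shift
```

The loop settles where the heard pitch matches the target, i.e., produced output drops by the perturbation magnitude; the opposing direction of the response is the compensation signature the proposed experiments measure.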
|
1 |
2015 — 2016 |
Courey, Mark S.; Houde, John Francis; Nagarajan, Srikantan S. |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Timing of Neural Activity in Spasmodic Dysphonia @ University of California, San Francisco
DESCRIPTION (provided by applicant): Spasmodic dysphonia (SD) is a debilitating disorder of voicing in which the laryngeal muscles are intermittently in spasm. This prevents the vocal folds from vibrating efficiently and results in involuntary interruptions during speech. Treatment options for SD are limited, with only temporary symptom relief provided by Botulinum toxin (Botox) injections. SD is thought to be a dysfunction originating in the central nervous system (CNS), and may involve abnormal cortical processing of sensory feedback during phonation. However, the underlying causes of SD remain largely unknown. Several neuroimaging studies have examined phonation in patients with SD and have found aberrant activations in a number of cortical and subcortical regions. However, because these studies were based on fMRI and PET, which have poor temporal resolution, they do not resolve when these aberrant activations occurred in relation to the act of phonation. As a result, critical clues about what causes SD have likely been missed. Our lab has done extensive work modeling the dynamics of speech production. From our model, we conclude that inferring functional impairments in SD from aberrant CNS activity requires knowing more than where in the CNS the aberrant activity occurs. It also requires knowing when in the act of phonation (e.g., initial glottal movement, voice onset, and sustained phonation) it is occurring. Here we propose to address this issue using magnetoencephalographic imaging (MEGI), a functional imaging method our lab has developed based on MEG, which can reconstruct cortical activity with millisecond accuracy and sub-centimeter resolution. In Specific Aim 1, we will reconstruct cortical activity using MEGI while subjects (patients with SD and neurotypical controls) repeatedly produce a steady-state phonation and we monitor their glottal movement with electromyography (EMG).
Abnormal pre/motor cortical activity in patients (compared to controls) prior to glottal movement would suggest impairments in feedforward motor preparation. Abnormal activity immediately after glottal movement onset but prior to phonation would suggest selective deficits in somatosensory feedback processing, whereas abnormal activity following phonation would suggest deficits in somatosensory and/or auditory feedback processing. To isolate whether SD specifically involves deficits in auditory feedback processing, in Specific Aim 2 we will use MEGI to monitor cortical activity as subjects phonate while we briefly perturb the pitch they hear in the audio feedback of their ongoing phonation. Abnormal cortical responses to the perturbation seen in patients prior to compensation would suggest deficits in auditory feedback processing, while those seen after the onset of compensation would suggest abnormal motor responses to auditory feedback. Results from the proposed studies will help us to isolate where and when the deficits related to SD arise in the cortical pathways controlling phonation. This will help us to devise novel, and perhaps more effective, treatments for SD and also help us to better understand how existing treatments work or can be improved.
|
1 |
2016 — 2018 |
Nagarajan, Srikantan S.; Raj, Ashish |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Multimodal Modeling Framework For Fusing Structural and Functional Connectome Data @ Weill Medical Coll of Cornell Univ
PROJECT SUMMARY / ABSTRACT A key goal of computational neuroscience is to discover how the brain's structural organization produces its functional behavior, and how impairment of the former causes dysfunction and disease. Rapid advances in neural measurement technologies are finally beginning to enable in vivo measurements of large-scale functional organization (via EEG, MEG, fMRI, PET, optical imaging) and the underlying structural connectivity architecture (via diffusion MRI, tractography). Traditional non-linear numerical simulations of single neurons or local circuits are challenging to extrapolate to macroscopic brain dynamics, and deterministic brain network models are needed that can integrate across modalities and scales. We propose an ambitious multi-scale, parsimonious and analytic model of brain function based on spectral graph theory. Bayesian inference using graphical modeling is proposed to deduce structure from function. These algorithms will be implemented and shared via a Network Dynamics Workbench that can be used by neuroscientists and clinicians to perturb structure and generate hypotheses regarding functional impairment in stimulus and disease conditions. The key insight underlying this proposal is that the emergent macroscopic behavior of the brain is essentially deterministic and is undergirded by network "eigen-modes". We will develop graph models of neural dynamics that are accessible analytically via simple equations rather than numerical simulations. These models will be minimal and simple, and linear wherever appropriate. The final deliverable is a Network Dynamics Workbench for experimentally interrogating brain function and dysfunction. Relevance: Neurological and psychiatric disorders constitute an overwhelming burden of disease today, especially in a rapidly aging population. A validated model of brain function predicted from structure will provide a critical tool in understanding and fighting these disorders.
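The "eigen-mode" idea can be made concrete with a toy structural connectome: the eigenvectors of its graph Laplacian provide candidate spatial patterns for macroscopic dynamics. The ring network below is an illustrative assumption, standing in for real tractography-derived connectivity.

```python
import numpy as np

n = 10
C = np.zeros((n, n))
for i in range(n):                      # toy ring-shaped structural network
    C[i, (i + 1) % n] = C[(i + 1) % n, i] = 1.0

deg = C.sum(axis=1)
L = np.diag(deg) - C                    # graph Laplacian of the connectome

# Eigen-decomposition: eigvecs columns are the network eigen-modes,
# ordered by eigenvalue (spatial "frequency") from coarse to fine.
eigvals, eigvecs = np.linalg.eigh(L)

# The smallest eigenvalue is 0 with a constant eigenvector (the global
# mode); higher modes are progressively finer spatial patterns.
print(eigvals[:3].round(3))
```

In spectral graph models of this kind, observed functional patterns are expressed as combinations of a few low-frequency eigen-modes, which is what makes the structure-to-function mapping analytic rather than simulation-based.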
|
1 |
2017 |
Nagarajan, Srikantan S.; Vinogradov, Sophia |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Is Cognitive Training Neuroprotective in Early Psychosis? @ University of Minnesota
The purpose of this study is to perform longitudinal high-resolution 7T MRI in participants with first-episode psychosis (FEP) enrolled in our ongoing randomized controlled clinical trial (RCT) of cognitive training. Despite the recent explosion in treatment programs for FEP, many patients still do not experience optimal outcomes, with impaired cognition in particular being a poor prognostic indicator. Indeed, we now know that the early phases of psychotic illness are characterized by progressive neural system dysfunction, including accelerated frontal and temporal gray matter loss, as well as white matter changes and neuroinflammation. In this innovative high-risk project, we seek to determine whether a 12-week course of intensive cognitive training of auditory processing in young FEP patients, delivered remotely as a stand-alone treatment, is neuroprotective against neural tissue loss in auditory cortex (superior temporal gyrus, STG), and possibly in other cortical regions. We will also investigate the effects of training on white matter integrity. Our prior work has shown that intensive cognitive training of auditory processing drives significant cognitive improvement in FEP patients; our unpublished data indicate that improvements in positive symptoms are seen 6 months later. In studies with persistently ill patients, we have demonstrated significant functional plasticity in prefrontal and auditory cortex after this form of training. Here, we integrate our findings with emerging data from basic science and ask a high-risk/high-gain research question: Can a short course of intensive cognitive training not only improve cognition, but prevent accelerated gray matter loss in left STG, and possibly in other regions, such as prefrontal cortex? Additionally, does it mitigate white matter changes? Finally, we will explore its possible effects on a putative marker of neuroinflammation.
We will answer these questions by acquiring state-of-the-art high-resolution 7T MRI longitudinal imaging data in a subset of young FEP patients who are enrolled in our current NIMH-funded RCT of cognitive training in community mental health centers. The goal of the original RCT is to investigate the clinical and cognitive effects of 30 hours of cognitive training delivered via iPads, as compared to treatment-as-usual. This R21 Exploratory/Developmental grant will permit us to leverage our unique subject population and research infrastructure in order to obtain sophisticated imaging data at two time points 12 months apart in a highly informative preliminary sample of young FEP individuals. The data we obtain will contribute to our understanding of how to develop scalable, optimally effective personalized treatments that pre-empt cognitive and neural system deterioration and promote recovery in the early phases of psychotic illness.
|
0.958 |
2017 — 2021 |
Gorno Tempini, Maria Luisa Houde, John Francis Nagarajan, Srikantan S. |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Dynamic Brain Imaging of Speech in Primary Progressive Aphasia @ University of California, San Francisco
Project Summary Primary progressive aphasia (PPA) is a clinical syndrome characterized by isolated, progressive loss of speech and language abilities. PPA occurs when pathological and molecular changes of frontotemporal lobar degeneration (FTLD) or Alzheimer's disease (AD) selectively damage language-specific networks of the brain. There is considerable variability in the distribution of brain atrophy in PPA, and patterns of language deficits vary accordingly. In particular, three clinical variants of PPA have been identified: i) the logopenic variant (lvPPA), associated with loss of phonological abilities, left temporal-parietal atrophy, and most often AD pathology; ii) the nonfluent/agrammatic variant (nfvPPA), with motor speech and grammar deficits, left inferior frontal damage, and often FTLD pathology; and iii) the semantic variant (svPPA), with loss of conceptual knowledge, anterior temporal damage, and also most often FTLD-type pathology. This classification has greatly improved PPA diagnosis, but clinical heterogeneity remains an issue, even within each variant, as individual patients differ in their specific patterns of atrophy, language deficits, and pathology. In the early stages of the disease, differential diagnosis between lvPPA and nfvPPA is particularly challenging because speech errors can occur in both conditions and atrophy may initially be subtle. To better distinguish between PPA variants, in this grant we propose to examine neural oscillations in PPA using high temporal resolution brain imaging with magnetoencephalographic imaging (MEGI). We will examine regional neural oscillatory activity associated with speaking with a precision unmatched by any other imaging modality. MEGI data will be examined in conjunction with detailed cognitive and language testing, MRI, and molecular PET imaging with the amyloid-binding tracer PIB, a biomarker for AD, which will be available in all our subjects. The specific aims are: 1.
To identify differential patterns of frequency-specific resting-state oscillatory activity and functional connectivity in early stages of PPA variants; 2. To examine cortical oscillatory network activity during speech feedback processing in PPA variants; and 3. To examine cortical oscillatory network activity during sequential speech production in PPA variants. Overall, our findings will enable us to identify some of the earliest functional manifestations of brain network dysfunction in PPA, leading to the development of useful biomarkers to detect and longitudinally assess progressive speech decline in PPA.
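The frequency-specific functional connectivity measures in Aim 1 can be illustrated with a standard quantity: magnitude-squared coherence between two sensor time series, averaged over a canonical band. The sketch below uses Welch-style segment averaging; the function name, segment count, and band choice are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def band_coherence(x, y, fs, band, n_seg=8):
    """Magnitude-squared coherence between two equal-length signals,
    averaged over a frequency band (f_lo, f_hi) in Hz.

    Spectra are estimated Welch-style by averaging over n_seg
    non-overlapping segments; fs is the sampling rate in Hz.
    """
    seg_len = len(x) // n_seg
    freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)
    sxx = syy = sxy = 0.0
    for i in range(n_seg):
        xs = np.fft.rfft(x[i * seg_len:(i + 1) * seg_len])
        ys = np.fft.rfft(y[i * seg_len:(i + 1) * seg_len])
        sxx = sxx + np.abs(xs) ** 2          # auto-spectrum of x
        syy = syy + np.abs(ys) ** 2          # auto-spectrum of y
        sxy = sxy + xs * np.conj(ys)         # cross-spectrum
    coh = np.abs(sxy) ** 2 / (sxx * syy)     # per-frequency coherence
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return coh[mask].mean()
```

A signal is perfectly coherent with itself (coherence 1 at every frequency), while two independent noise recordings show low coherence once several segments are averaged; band-averaged values like this, computed between source-localized regions, are one common basis for frequency-specific connectivity maps.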
|
1 |
2018 — 2020 |
Houde, John Francis Nagarajan, Srikantan S. |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Crcns: Modeling the Role of Auditory Feedback in Speech Motor Control @ University of California, San Francisco
When we speak, listeners hear us and understand us if we speak correctly. But we also hear ourselves, and this auditory feedback affects our ongoing speech: delaying it causes dysfluency; perturbing its pitch or formants induces compensation. Yet we can also speak intelligibly even when we can't hear ourselves. For this reason, most models of speech motor control suppose that during speaking auditory processing is only engaged when auditory feedback is available. In this grant, we propose to investigate a computational model of speaking that represents a major departure from this. Our model proposes that the auditory system always plays a major role in controlling speaking, regardless of whether auditory feedback is available. In our state-feedback control (SFC) model of speech production, we posit two things about the role of the auditory system. First, the auditory system continuously maintains an estimate of current vocal output. This estimate is derived not only from available auditory feedback, but also from multiple other sources of information, including motor efference, other sensory modalities, and phonological and lexical context. Second, this estimate of current vocal output is used both at a low level to monitor and correct ongoing speech motor output and at a higher level to regulate the production of utterance sequences. By comparing computational simulations of our model with functional imaging experiments, we will test key predictions from our computational model as they apply to a wide range of speech production, from production of single utterances to utterance sequences. The specific aims of this grant are (1) to demonstrate that the auditory system continuously maintains an estimate of current vocal output, and (2) to determine how auditory feedback processing controls the production of utterance sequences.
The proposed work not only addresses fundamentally important basic science questions about speech production, but also has broad clinical impact since abnormalities in auditory feedback processing are implicated in many speech impairments.
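The SFC model's first posit, that an estimate of current vocal output is continuously maintained from motor efference and corrected by auditory feedback when it is available, maps naturally onto a Kalman-filter-style state estimator. The scalar sketch below is a minimal illustration under that assumption; the function name, the scalar state (e.g., pitch), and the noise parameters are hypothetical, not the grant's actual model.

```python
import numpy as np

def sfc_estimate(efference, feedback, q=0.01, r=0.1):
    """Track a scalar vocal-output state with a Kalman filter.

    efference: per-step predicted state changes from the motor efference copy
    feedback:  per-step auditory observations (NaN when feedback is unavailable)
    q, r:      assumed process and observation noise variances
    """
    x_hat, p = 0.0, 1.0                  # state estimate and its variance
    estimates = []
    for u, y in zip(efference, feedback):
        # Predict: advance the estimate using the motor efference copy.
        x_hat, p = x_hat + u, p + q
        # Correct: fold in auditory feedback only when it is available.
        if not np.isnan(y):
            k = p / (p + r)              # Kalman gain
            x_hat += k * (y - x_hat)
            p *= (1.0 - k)
        estimates.append(x_hat)
    return np.array(estimates)
```

Masking the feedback (all NaN) leaves the estimate running on efference alone, mirroring the model's claim that the auditory estimate persists even when we cannot hear ourselves; supplying discrepant feedback pulls the estimate toward the observation, the substrate for compensatory corrections.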
|
1 |
2019 |
Nagarajan, Srikantan S. Raj, Ashish [⬀] |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Network Modelling of Multimodal Dynamics in Alzheimer's Disease and Dementia @ University of California, San Francisco
PROJECT SUMMARY / ABSTRACT Alzheimer's Disease (AD) is caused by misfolded proteins that march through brain circuits trans-neuronally, causing stereotyped patterns of damage to the brain over decades of progression, and increasing clinical and cognitive impairments. Using new imaging techniques, spatiotemporal mapping of the biomarkers of AD, including atrophy, metabolism, and pathology deposition, is becoming possible. However, the precise relationship of these biomarkers to each other is not known. These factors, coupled with insidious onset, clinical heterogeneity, overlap with other dementias and variability in progression, make a rigorous characterization and prognosis difficult. Although the "trans-neuronal" mechanism of pathology naturally suggests that pathology spread must follow the brain's fiber connectivity, existing methods of predicting progression and cognitive decline do not currently exploit the network information, relying instead on phenomenological or statistical approaches unanchored in the biophysics of networked spread. These gaps hinder understanding of the biophysical mechanism underlying dementias, and preclude accurate quantitative predictors of patients' future trajectory. The objective of this application is to learn, test and apply biophysical models of networked spread in AD. Our central hypothesis is that once a patient's baseline disease status is known, all subsequent disease-related processes are enacted on the brain's fiber connectivity network, i.e. the "connectome", in a fully predictable manner. The influence of genetic and environmental factors is already reflected in the baseline data. This project will build on and extend our recent novel graph theoretic Network Diffusion model, which mathematically captures the process of trans-neuronal network spread, and is ideally suited for investigating these issues. With this network model as a foundation, we will bring together all key elements of the causal AD progression chain.
Then, using human imaging data (atrophy from MRI, Aβ from AV45-PET, tau from T807-PET and metabolism from FDG-PET) from the public ADNI study, we will mathematically characterize 1) network-based spread of tau and amyloid-beta, 2) the relationship between tau deposition and regional atrophy, and between amyloid deposition and regional metabolism. Next, these validated models will be entered into a state-space generative model of progression that will predict future spatial patterns of the biomarkers. An alternative deep learning approach will also be developed as a comparison with the proposed biophysical modeling approach. Success of this proposal could have wide implications in treatment, care, planning and monitoring of dementia in susceptible populations. Our long-term goal is to develop a common connectome-based biophysics model underlying all dementias, forming the core of novel computational diagnostic and prognostic biomarkers.
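The Network Diffusion model described above evolves a baseline pathology pattern on the connectome through the graph Laplacian, x(t) = e^{-βHt} x(0). A minimal sketch of that dynamic follows; the toy three-region connectome, the parameter values, and the function name are illustrative stand-ins, not ADNI data or the project's actual implementation.

```python
import numpy as np

def network_diffusion(adjacency, x0, beta, t):
    """Evolve a baseline pathology pattern x0 on a connectome for time t.

    Implements x(t) = exp(-beta * H * t) @ x0, where H is the graph
    Laplacian of the symmetric fiber-connectivity matrix. Because H is
    symmetric, the matrix exponential is computed by eigendecomposition.
    """
    degree = adjacency.sum(axis=1)
    laplacian = np.diag(degree) - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs @ (np.exp(-beta * eigvals * t) * (eigvecs.T @ x0))

# Toy 3-region connectome: regions 0 and 1 are strongly wired to each
# other; region 2 is only weakly connected to both.
A = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
x0 = np.array([1.0, 0.0, 0.0])   # pathology seeded in region 0
x_late = network_diffusion(A, x0, beta=1.0, t=2.0)
```

Two properties of the model are visible in this sketch: total pathology is conserved (the Laplacian's rows sum to zero), and pathology preferentially accumulates in the region most strongly connected to the seed, which is what ties predicted spread to the fiber network rather than to spatial proximity.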
|
1 |
2019 |
Houde, John Francis Nagarajan, Srikantan S. |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Role of Auditory Feedback in Speech Motor Control in Alzheimer's Disease @ University of California, San Francisco
PROJECT SUMMARY/ABSTRACT/RELEVANCE TO ADRD Abnormalities in speech production in Alzheimer's disease (AD) have received scant attention in the literature. Yet the network architecture of the speech motor control circuit, with its anatomic distribution involving the superior temporal, posterior parietal, premotor, and prefrontal regions, is indeed a highly vulnerable target in Alzheimer's disease. Determining neural mechanisms of impairments in speech motor control could therefore provide useful scales of network integrity to gauge disease progression and therapeutic efficacy in clinical trials of AD. In our own prior studies in AD (funded by the Alzheimer's Association) we have found that abnormalities in speech motor control clearly exist and can be seen in how patients with AD (ADs) respond to perturbations of pitch in the auditory feedback they hear as they speak. Behaviorally, when pitch feedback is perturbed, ADs respond with significantly larger compensatory responses than controls. Neurally, ADs show greater posterior temporal lobe (pTL) activity and smaller activity in medial prefrontal cortex (mPFC) in response to altered pitch feedback, compared to controls, and these activity changes are correlated with degree of compensation. Both degree of compensation and mPFC activity during compensation are also correlated with measures of cognitive abilities in the ADs, particularly measures of executive function. Here, we propose to investigate the nature of these AD speech motor control abnormalities within the scope of our original funded parent grant. In the parent grant, we develop a computational model of speech motor control and test its predictions using experiments based on magnetoencephalographic imaging (MEGI). In this administrative supplement, we will use these same components of the parent grant to investigate speech abnormalities in AD.
We will use the computational model we develop in the parent grant to simulate and mechanistically explain AD speech abnormalities. We will also use a simplified version of the first experiment proposed in the parent grant, an experiment based on MEGI, to test hypotheses suggested by the model about the underlying cause of the AD speech abnormalities. The goal of this proposed work is to have a model that can accurately reproduce what goes wrong in AD speech. Such a model, the first of its kind, would give us a powerful basis for making many predictions about speech in AD, including predictions about the effects of different patterns of disease progression on speech in AD, which would also allow us to predict patients' responses to different treatments.
|
1 |
2019 — 2021 |
Houde, John Francis Nagarajan, Srikantan S. |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Role of the Cerebellum in Speech @ University of California, San Francisco
PROJECT SUMMARY This proposal investigates the role of the cerebellum in speech, building upon theoretical models and experimental methods that have proven useful in understanding cerebellar function in reaching and walking. Neuroimaging and lesion studies have provided compelling evidence that the cerebellum is an integral part of the speech production network, though its precise role in the control of speech remains unclear. Furthermore, damage to the cerebellum (either degenerative or focal) can lead to ataxic dysarthria, a motor speech disorder characterized, in part, by impaired articulation and severe temporal deficits. This grant seeks to bridge the gap between theoretical models of cerebellar function and the speech symptoms associated with ataxic dysarthria. Two mechanisms underlie speech motor control: feedback and feedforward control. In feedback control, speakers use sensory feedback (e.g., of their own voice) to control their speech. In feedforward control, speakers use knowledge gained from their past speech productions, rather than on-line feedback, to control their speech. This proposal entails a systematic plan to elucidate the role of the cerebellum in feedforward and feedback control of speech. A central hypothesis is that the cerebellum is especially critical in the feedforward control of speech, but has little involvement in feedback control. To explore this hypothesis, we will obtain converging evidence from three innovative methodologies: 1) Neuropsychological studies of speech-motor responses to real-time altered auditory feedback in patients with cerebellar atrophy (CA) and matched healthy controls, 2) Parallel studies in healthy controls undergoing theta-burst transcranial magnetic stimulation to create "virtual lesions" of the cerebellum, and 3) Structural and functional studies in CA patients to examine the relationship between cerebellar lesion location, dysarthria symptoms, and feedforward and feedback control ability.
Speech provides an important opportunity to examine how well current theories of cerebellar function generalize to a novel effector (vocal tract) and sensory (auditory) domain. Its purpose for communication imposes exacting spectro-temporal constraints not seen in other motor domains. Furthermore, the distinctive balance of feedback and feedforward control in speech allows us to examine changes in both control types subsequent to cerebellar damage. Critically, this is the first work examining the link between theoretically motivated control deficits in CA patients and the speech symptoms associated with ataxic dysarthria, as well as their neural correlates.
|
1 |
2021 |
Cheung, Steven Wan [⬀] Nagarajan, Srikantan S. |
R56Activity Code Description: To provide limited interim research support based on the merit of a pending R01 application while applicant gathers additional data to revise a new or competing renewal application. This grant will underwrite highly meritorious applications that if given the opportunity to revise their application could meet IC recommended standards and would be missed opportunities if not funded. Interim funded ends when the applicant succeeds in obtaining an R01 or other competing award built on the R56 grant. These awards are not renewable. |
Brain Plasticity and Clinical Consequences of Adult-Onset Asymmetric Hearing Loss @ University of California, San Francisco
PROJECT SUMMARY Permanent sensorineural asymmetric hearing loss (AHL) disrupts extraction of interaural information for binaural processing. Using a cutoff of at least 15 dB interaural difference as the definition of AHL, prevalence estimates vary widely, from 1% to 50%. Among cohorts with occupational noise exposure, AHL prevalence ranges from 15% to 49%. Critical clinical consequences include difficulty with sound target identification in noisy environments and degradation of spatial hearing. Beyond those impairments, the aidable poorer ear in AHL is at risk for accelerated decline and often burdened by tinnitus. There is a wide gap in our understanding of the relationship between central nervous system changes along the continuum of AHL magnitudes, audiological and psychoacoustical outcomes, and tinnitus perception. Closing this knowledge gap would be the first step to advance diagnostic tools and inspire innovative treatments for AHL. Informed by anchoring neuroimaging and audiological data from normal hearing and single-sided deafness, the most extreme form of AHL, we propose to close this knowledge gap. A comprehensive study on the clinical consequences of AHL should address hearing performance under adverse conditions, spatial hearing, and tinnitus outcomes, and their central neural correlates. We propose a longitudinal within-subject study of neuroimaging features and clinical assessments in AHL before and after treatment by amplification. We will use resting-state magnetoencephalographic imaging (RS-MEGI) and functional magnetic resonance imaging (RS-fMRI), task-based MEGI, and diffusion MRI to examine temporal, functional and structural features, and audiological and psychoacoustical tests to evaluate hearing performance and tinnitus outcomes. This observational study will collect data from participants who will be treated by routine amplification with individualized tinnitus sound therapies, as required, for AHL.
We will evaluate test-retest reliability of neuroimaging features, and assess neuroimaging features, hearing performance, and tinnitus outcomes at baseline and at months 3, 6 and 12 following treatment. The specific aims of this research are to examine: 1) AHL clinical outcomes, 2) AHL auditory interhemispheric temporal organization using MEGI, and 3) AHL whole brain functional and structural neuroimaging features using resting-state MEGI and fMRI (functional), task-based MEGI (functional), and diffusion MRI (structural).
|
1 |