Edward Chang - US grants
Affiliations: University of California, San Francisco, San Francisco, CA
Area: auditory system, language, neurosurgery
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Edward Chang is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2002 — 2005 | Chang, Edward | N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information |
Itr: Learning and Measuring Perceptual Similarity @ University of California-Santa Barbara Image retrieval has been an active research area for many years, but |
0.973 |
2002 — 2008 | Chang, Edward | N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information |
Career: Intelligent Sampling For Learning Complex Query Concepts @ University of California-Santa Barbara This research project will advance intelligent search engines by developing query-concept learners. Techniques for detecting concept drifts as well as multi-resolution image characterization will be developed. In particular, the research will be based on two online algorithms, MEGA and SVMActive. It will also integrate text attributes into image searching and apply the technology to video data. The career development plan will include sponsoring undergraduate and graduate projects as well as incorporating the research into the Computer Science curriculum. |
0.973 |
2006 — 2009 | Chang, Edward | N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information |
Scalable, Multimodal Algorithms For Multimedia Information Retrieval @ University of California-Santa Barbara The aim of this research is to advance the ability of a search engine to understand a user's query seeking multimedia data, to speed up machine-learning algorithms for comprehending a query, and to index high-dimensional imagery data to permit fast matching of found data to a query concept. This study comprises three thrusts: multimodal active learning, scalable kernel machines, and kernel indexing. The first thrust explores ways to profile the complexity of a query concept and ways a concept can be learned using information from image context, image content, text, and camera parameters. The second thrust investigates approximate factorization algorithms and parallel algorithms to speed up kernel machines such as Support Vector Machines (SVMs) and kernel PCA. The third thrust devises indexing algorithms to work with the kernel methods in a potentially infinite dimensional space. Together, these three integrated research thrusts provide a solid foundation for building large-scale, next-generation, multimedia information retrieval systems. Speeding up the kernel methods in both training and indexing is critical for making learning feasible in real time and on a large scale. Broader impacts of this work are expected to be very significant because a variety of applications depend on high-performance kernel methods to scale up to larger databases. The expected results of this research include: a faster version of SVMs, a kernel-indexing algorithm, and a large-scale development of an image-sharing and image-search engine. These results will be disseminated via open-source software or World Wide Web services via the project Web site (http://www.mmdb.ece.ucsb.edu/~echang/IIS-0535085.html). |
0.973 |
2008 | Chang, Edward | F32. Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Electrocorticographic Study of Categorical Phoneme Perception in the Human Tempor @ University of California San Francisco DESCRIPTION (provided by applicant): The basic mechanisms underlying speech perception are unclear. The human auditory system regularly processes highly acoustically variable inputs into invariant representations of speech, a critical function for human communication. In order to better understand this property, I propose to investigate the cortical representations of speech sounds during categorical perception (CP). CP occurs when a change in a variable such as a phonemic contrast along a physical continuum is perceived not as gradual but as an instance of a discrete category. Previous functional neuroimaging research has implicated the superior temporal gyrus and sulcus of the lateral temporal cortex in phonemic processing, but not the precise means by which the cortex represents those sounds. During a one-year training program, I plan to use electrocorticography (ECoG), the direct application of electrodes to the brain surface for recording cortical activity, to examine categorical speech processing in the superior temporal gyrus. I will utilize a high-density ECoG array and signal analysis techniques during passive listening to speech stimuli in six epilepsy patients with chronically implanted subdural grids. Phonetic discrimination and identification behavioral tasks will be performed pre-operatively, and in additional healthy normal subjects, to determine boundaries for CP transitions. Phoneme stimuli will focus on three contrasts with well-known categorical psychophysical properties: /b/ vs /d/, /r/ vs /l/, and /d/ vs /t/.
We hypothesize that a highly specific spatiotemporal pattern of evoked potential and/or neural oscillatory activity will reveal an emergent categorical representation of speech phonemic contrasts which closely reflects behavioral thresholds, as opposed to a continuously linear representation of acoustic parameters. An improved understanding of speech processing mechanisms has direct implications for the origin and remediation of communication disorders, including autism, dyslexia, and language learning impairment. |
1 |
2009 — 2012 | Chang, Edward | K99. Activity Code Description: To support the initial phase of a Career/Research Transition award program that provides 1-2 years of mentored support for highly motivated, advanced postdoctoral research scientists. R00. Activity Code Description: To support the second phase of a Career/Research Transition award program that provides 1-3 years of independent research support (R00) contingent on securing an independent research position. Award recipients will be expected to compete successfully for independent R01 support from the NIH during the R00 research transition award period. |
Neocortical Mechanisms of Categorical Speech Perception @ University of California, San Francisco The basic mechanisms underlying comprehension of spoken language are unknown. We do not understand, for example, how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. An investigation of the cortical representation of speech sounds during categorical perception can likely shed light on this fundamental question. Categorical perception occurs when a change in a variable such as a phonemic contrast along a continuum is perceived, not as a gradual function but rather as a discrete category change. Previous research has implicated the superior temporal cortex in the processing of speech sounds. However, how the cortex actually represents (i.e. encodes) phonemes is undetermined, mainly due to limitations of non-invasive recording techniques. The recording of neural activity directly from the cortical surface is a promising approach since it can provide both high spatial and temporal resolution. Here, I propose to examine the mechanisms of categorical speech processing by utilizing neurophysiological recordings obtained during neurosurgical procedures. The principal focus of the independent R00 phase will be to elucidate the emergent invariant representation of phonemes in the superior temporal gyrus that underlies categorical perception. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be utilized to unravel both local population encoding of speech sounds in the lateral temporal cortex and global processing across multiple sensory and cognitive areas. |
1 |
2011 | Chang, Edward | DP2. Activity Code Description: To support highly innovative research projects by new investigators in all areas of biomedical and behavioral research. |
Functional Architecture of Human Speech Motor Cortex @ University of California, San Francisco DESCRIPTION (provided by the applicant) Abstract: Speaking is one of the most complex actions that we perform, yet nearly all of us learn to do it effortlessly. The ability to communicate through speech is often described as the unique and defining trait of human behavior. Despite its importance, the basic neural mechanisms that govern our ability to speak fluently remain unresolved. This proposal addresses two fundamental questions at the crossroads of linguistics, systems neuroscience, and biomedical engineering: 1) How are the coordinated movements of articulation functionally represented in human speech motor cortex? 2) Can we apply this new knowledge to decode speech motor cortex for the practical implementation of a neuro-prosthetic communication device? Our studies should greatly advance understanding of how the speech motor cortex encodes the precise control of articulation during speech production as well as determine whether this control system can be harnessed for novel rehabilitative strategies. Three potential areas of impact are: Neurobiology of Language, where results will shed light on neurophysiologic mechanisms of speech motor control; Human Neurophysiology, where insight gained may suggest novel methods for multivariate analysis of distributed population neural activity; and Translational NeuroEngineering, where results will apply directly to the development of a speech neuro-prosthetic device. We propose to investigate the functional organization of the speech motor cortex during consonant-vowel syllable production, and during articulatory compensation in the context of perturbation. Our methods utilizing safe, high-density, large-scale intracranial electrode recordings in humans represent a significant advancement over current noninvasive neuroimaging approaches.
To accomplish this, we must innovate new, integrative approaches to speech motor control research. We will also employ novel and as yet unproven analyses to 'read out' speech motor cortex in real time. Given that these ideas concerning the neural basis of speech production and its strategic application for neuroprosthetics are new and relatively untested, the level of risk in our proposal is substantially higher than in traditional investigator grants. The most debilitating aspect of profound paralysis due to trauma, stroke, or disease is loss of the ability to speak, which leads to profound social isolation. Our research builds on novel methodologies gained in previous studies of speech perception in the human temporal lobe. We wish to broaden the impact of our research and innovate in a related, highly complementary field in the neurobiology of speech motor control. Public Health Relevance: Discovering the neural mechanisms of speech production has major implications for understanding a large number of communication disorders including mutism, stuttering, apraxia of speech, and aphasia. In addition, the proposed research seeks to translate critical knowledge on the neural control of speech to develop algorithms for a practical communication neuroprosthetic device to provide immediate, practical benefit to patients suffering from these disabling neurological conditions. |
1 |
2012 — 2021 | Chang, Edward | R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Functional Organization of the Superior Temporal Gyrus For Speech Perception @ University of California, San Francisco PROJECT SUMMARY The basic mechanisms underlying comprehension of spoken language are unknown. We do not understand, for example, how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. An investigation of the cortical representation of speech sounds can likely shed light on this fundamental question. Previous research has implicated the superior temporal cortex in the processing of speech sounds. However, how the cortex actually represents (i.e. encodes) phonemes is undetermined. The recording of neural activity directly from the cortical surface is a promising approach since it can provide both high spatial and temporal resolution. Here, I propose to examine the mechanisms of phonetic encoding by utilizing neurophysiological recordings obtained during neurosurgical procedures. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be utilized to unravel both local population encoding of speech sounds in the lateral temporal cortex and global processing across multiple sensory and cognitive areas. |
1 |
2013 — 2015 | Chang, Edward | N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information |
@ University of California-San Francisco This project will use new technologies for measuring brain activity to understand in detail how human listeners are able to separate competing, overlapping voices, and thereby to help design automatic systems capable of the same feat. Natural environments are full of overlapping sounds, and successful audio processing by both humans and machines relies on a fundamental ability to separate out sound sources of interest. This is commonly referred to as the "cocktail party effect," based on the ability of people to hear what a single person is saying despite the noisy background audio from other speakers. Despite the long history of research in hearing, this exceptional human capability for sound source separation is still poorly understood, and efforts to automatically separate overlapping voices by machine are correspondingly crude: although great advances have been made in robust processing of noisy speech by machine, separation of complex natural sounds (such as overlapping voices) remains a challenge. Advances in sensor technology now enable the modeling of this function in humans, giving an unprecedented, detailed view of sound representation and processing in the brain. This project works specifically with measurements of neuroelectric response made directly on the surface of the human cortex (currently with a 256-electrode sensor array) for patients awaiting neurosurgery. Using such measurements made for controlled mixtures of voices, the project will endeavor both to develop models of voice separation in the human cortex by reconstructing an approximation to the acoustic stimulus from the neural population response, and, in the process, to learn the linear mapping from the neural response back to a spectrogram measure of the stimulus.
To attempt to significantly improve the ability of machine algorithms to mimic human source separation capability, the project will also focus on a signal processing framework that supports experiments with different combinations of cues and strategies to optimize agreement with the recordings of neural activity. The engineering model is based on the Computational Auditory Scene Analysis (CASA) framework, a family of approaches that have shown competitive results for handling sound mixtures. |
1 |
2013 — 2017 | Houde, John (co-PI), Chang, Edward, Nagarajan, Srikantan |
N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information |
Function of Auditory Feedback Processing During Speech @ University of California-San Francisco The goal of speaking is to produce the right sounds that convey an intended message. Accordingly, speakers monitor their sound output and use this auditory feedback to further adjust their speech production. Drs. Nagarajan and Houde hypothesized that the brain not only generates the motor signals that control speech production but also generates a prediction of what this speech should sound like, and performs an ongoing comparison during speaking in order to dynamically adjust speech production. Whole-brain magnetoencephalographic imaging (MEG-I) experiments will be performed to monitor subjects' auditory neural activity as they hear themselves speak. The first experiment tests the task and feature specificity of the feedback prediction. If the prediction encodes only task-relevant acoustic features (i.e., pitch for a singing task), then auditory cortical activity will depend only on the acoustic goal for that task. The second experiment tests the importance of categorical identity in the process of comparing feedback with the prediction. If feedback is altered enough to change the meaning of a word (e.g., when /bad/ is altered to /dad/), this is expected to have a much larger impact on auditory cortical activity than non-categorical alterations. These experiments are expected to improve our understanding of how the brain uses auditory feedback to maintain accuracy in speech production. |
1 |
2016 — 2018 | Chang, Edward | U01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Functional Architecture of Speech Motor Cortex @ University of California, San Francisco PROJECT SUMMARY Speaking is one of the most complex actions that we perform, yet nearly all of us learn to do it effortlessly. The ability to communicate through speech is often described as the unique and defining trait of human behavior. Despite its importance, the basic neural mechanisms that govern our ability to speak fluently remain unresolved. This proposal addresses three fundamental questions at the crossroads of linguistics, systems neuroscience, and biomedical engineering: 1) How are the kinematic and acoustic targets of articulation represented in human speech motor cortex? 2) What are the coordinated patterns of cortical activation that give rise to fluent, continuous speech? 3) How does prefrontal cortex govern the cognitive inhibitory control of speech (e.g. stopping)? Our studies should greatly advance understanding of how the speech motor cortex encodes the precise control of articulation during speech production as well as determine whether this control system can be harnessed for novel rehabilitative strategies. Three potential areas of impact are: Neurobiology of Language, where results will shed light on neurophysiologic mechanisms of speech motor control; Human Neurophysiology, where insight gained may suggest novel methods for machine learning-based analyses of distributed population neural activity; and Translational NeuroEngineering, where novel cortical recording technologies will be utilized at unparalleled spatiotemporal resolution and duration. We propose to investigate the functional organization of the speech motor cortex not only during controlled vowel and syllable productions but also during natural, continuous speech. Our methods utilizing safe, high-density, large-scale intracranial electrode recordings in humans represent a significant advancement over current noninvasive neuroimaging approaches.
To accomplish this, we must innovate new, integrative approaches to speech motor control research. We have assembled a team with significant multidisciplinary strengths in neurosurgery, neurology, ethics, computational modeling, machine learning, neuroscience, engineering, and linguistics. The most debilitating aspect of profound paralysis due to trauma, stroke, or disease is loss of the ability to speak, which leads to profound social isolation. Our research leverages foundational knowledge gained during research piloted under an NIH New Innovator (DP2) award. We wish to broaden the impact of our research in the neurobiology of speech motor control. |
1 |
2018 | Barbaro, Nicholas M (co-PI), Chang, Edward, Quigg, Mark S |
U01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Radiosurgery Vs Lobectomy For Temporal Lobe Epilepsy: Phase 3 Clinical Trial @ University of California, San Francisco DESCRIPTION (provided by applicant): It is estimated that 0.5%-1.0% of the U.S. population has epilepsy, and that 20% of patients with epilepsy have medically refractory seizures. Patients with unilateral temporal lobe seizure onsets have an excellent chance of becoming seizure-free following temporal lobectomy. Based on the successful completion of a Pilot Clinical Trial showing seizure-free rates in excess of 80% with acceptable toxicity, there is equipoise for treating well-selected patients with either radiosurgery or temporal resection. The purpose of this study is to compare the effectiveness of radiosurgery with temporal lobectomy in the treatment of patients with pharmaco-resistant temporal lobe epilepsy, including freedom from seizures, seizure reduction, neuropsychological outcomes, quality of life, and cost-effectiveness. Aim 1: To compare the seizure-free outcomes and morbidity of Gamma Knife radiosurgery (GKS) for patients with pharmaco-resistant temporal lobe epilepsy with those of open temporal lobectomy. Our primary hypothesis is that radiosurgery and lobectomy will have equivalent seizure-free rates at 25-36 months following therapy (one year of seizure freedom beginning 2 years after treatment). The two arms will be considered equivalent if a one-sided 95% confidence interval precludes a decrease in seizure-free rate of 15%. Our secondary hypothesis is that radiosurgery will result in significant reductions in seizures compared to baseline and that by 2 years following treatment the percentage reduction in seizures will be identical for these two treatments. Aim 2: To compare the neuropsychological outcomes in patients undergoing radiosurgery and temporal lobe surgery, in particular with respect to verbal memory function for language-dominant hemisphere treated patients.
Our hypothesis is that patients treated for speech-dominant temporal lobe seizures with temporal lobectomy will show significant reductions in verbal memory, while those patients treated with radiosurgery will not have significant reductions in measures of verbal memory. Aim 3: To determine what changes occur in the quality of life of patients with temporal lobe epilepsy following radiosurgical treatment as compared with open surgery. Our primary hypothesis is that there will be improvements (comparing baseline with 3 years post-treatment) in quality of life measures in both groups. Our secondary hypothesis is that both open surgery and radiosurgery subjects will undergo transient reductions in quality of life measures caused by treatment effects during the first year following treatment, but that quality of life will improve for subjects who become seizure-free, independent of treatment group. Aim 4: To compare the cost-effectiveness of radiosurgery compared with open surgery. We hypothesize that radiosurgery will be cost-effective compared to temporal lobectomy over the lifetime of the patient. The purpose of this study is to compare two methods of treatment of surgically-amenable epilepsy: standard anterior temporal lobectomy versus noninvasive Gamma Knife radiosurgery. Beyond the main outcome of the number of patients rendered seizure-free, we will compare preservation of language functions, quality of life measures, and the cost of treatment. |
1 |
2020 — 2021 | Chang, Edward, Starr, Philip Andrew (co-PI) |
UH3. Activity Code Description: The UH3 award provides a second phase of support for innovative exploratory and development research activities initiated under the UH2 mechanism. Although only UH2 awardees are generally eligible to apply for UH3 support, specific program initiatives may establish eligibility criteria under which applications could be accepted from applicants demonstrating progress equivalent to that expected under UH2. |
Technology Development For Closed-Loop Deep Brain Stimulation to Treat Refractory Neuropathic Pain @ University of California, San Francisco PROJECT SUMMARY Many pain syndromes are notoriously refractory to almost all treatment and pose significant costs to patients and society. Deep brain stimulation (DBS) for refractory pain disorders showed early promise, but demonstration of long-term efficacy is lacking. Current DBS devices provide "open-loop" continuous stimulation and thus are prone to loss of effect owing to nervous system adaptation and a failure to accommodate natural fluctuations in chronic pain states. DBS could be significantly improved if neural biomarkers for relevant disease states could be used as feedback signals in "closed-loop" DBS algorithms that would selectively provide stimulation when it is needed. This approach may help avert the development of tolerance over time and enable the dynamic features of chronic pain to be targeted in a personalized fashion. Optimizing the brain targets for both biomarker detection and stimulation delivery may also markedly impact efficacy. Recent imaging studies in humans point to the key role of frontal cortical regions in supporting the affective and cognitive dimensions of pain, which may be more effective DBS targets than previous targets involved in basic somatosensory processing. Pathological activity in the anterior cingulate cortex (ACC) and orbitofrontal cortex (OFC) is correlated with the higher-order processing of pain, and recent clinical trials have identified ACC as a promising stimulation target for the neuromodulation of pain. In this study we will target ACC and OFC for biomarker discovery and closed-loop stimulation. We will develop data-driven stimulation control algorithms to treat chronic pain using a novel neural interface device (Medtronic Activa PC+S) that allows longitudinal intracranial signal recording in an ambulatory setting.
By building and validating this technological capacity in an implanted device, we will empower DBS for chronic pain indications and advance personalized, precision methods for DBS more generally. We will enroll ten patients with post-stroke pain, phantom limb syndrome and spinal cord injury pain in our three-phase clinical trial. We will first identify biomarkers of low and high pain states to define optimal neural signals for pain prediction in individuals (Aim 1). We will then use these pain biomarkers to develop closed-loop algorithms for DBS and test the feasibility and efficacy of performing closed-loop DBS for chronic pain in a single-blinded, sham controlled clinical trial (Aim 2). Our main outcome measures will be a combination of pain, mood and functional scores together with quantitative sensory testing. In the last phase, we will assess the efficacy of closed-loop DBS algorithms against traditional open-loop DBS (Aim 3) and assess mechanisms of DBS tolerance in response to chronic stimulation. Successful completion of this study would result in the first algorithms to predict real-time fluctuations in chronic pain states for the delivery of analgesic stimulation and would prove the feasibility of closed-loop DBS for pain-relief by advancing implantable device technology. |
1 |
2021 | Chang, Edward | U01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
A Pilot Clinical Trial For Speech Neuroprosthesis @ University of California, San Francisco |
1 |
2021 | Chang, Edward, Sturm, Virginia Emily (co-PI) |
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Spatiotemporal Dynamics of the Human Emotion Network @ University of California, San Francisco ABSTRACT Affective symptoms are a common feature of neuropsychiatric disorders that reflect dysfunction in a distributed brain network that supports emotion. How aberrant functioning in a single emotion network underlies a wide range of affective symptoms, such as depression and anxiety, is not well understood. Anchored by the anterior cingulate cortex and ventral anterior insula, the emotion network responds to numerous affective stimuli. A more sophisticated understanding of how the emotion network produces emotions, and how atypical emotion network functioning relates to affective symptoms, will be critical for advancing current neuroanatomical models of neuropsychiatric disorders. Intracranial electroencephalography (iEEG) provides direct estimates of neuronal population activity and can be used to map the spatiotemporal dynamics of the emotion network at millisecond-level resolution. Although functional neuroimaging studies have uncovered little evidence for neural differentiation among emotions, these studies lack the spatiotemporal and spectral resolution to determine whether emotions are characterized by unique neural signatures. The overall goals of the proposed project are to elucidate how emotion network dynamics relate to the behavioral, autonomic, and experiential changes that accompany emotions and to investigate how emotion network dysfunction relates to affective symptoms. Anatomically specific biomarkers of emotion network dysfunction could be used to guide development of novel treatments, monitor symptoms and treatment response, and improve animal models of affective symptoms. We will study 100 patients with intractable epilepsy undergoing surgery for seizure localization. Subjects with iEEG electrodes within the emotion network will undergo continuous neural and video recordings during a multi-day hospital stay.
Naturalistic affective behaviors that subjects display spontaneously throughout their hospitalization, emotional reactivity in response to standardized affective stimuli, and emotional reactions following electrical stimulation of emotion network hubs will be quantified. We will examine how activity within emotion network hubs changes during emotions and how emotion network properties make some individuals more vulnerable to affective symptoms than others. We will address three key aims. In Aim 1, we will determine how emotion network activity relates to naturalistic affective behaviors. In Aim 2, we will uncover the unique neural signatures of discrete emotions and their relations to task-based measures of emotional reactivity. In Aim 3, we will probe whether electrical stimulation of emotion network hubs changes network activity and alters emotions, mood, and anxiety. By utilizing a multidisciplinary approach, the proposed project has the potential to ask novel questions about the neural origins of emotions and to advance current models of the neurobiological basis of emotions and affective symptoms. |
1 |
2021 | Chang, Edward | U01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Neural Coding of Speech Across Human Languages @ University of California, San Francisco PROJECT SUMMARY The basic mechanisms underlying comprehension of spoken language are unknown. We are only beginning to understand how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. Traditional theories have posited a "universal" phonetic inventory shared by all humans, but this has been challenged by newer theories positing that each language has its own unique and specialized code. An investigation of the cortical representation of speech sounds across languages can likely shed light on this fundamental question. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but most of this work has been carried out entirely in English. The recording of neural activity directly from the cortical surface from individuals with different language experience is a promising approach since it can provide both high spatial and temporal resolution. Here, we propose to examine the mechanisms of phonetic encoding by utilizing neurophysiological recordings obtained during neurosurgical procedures. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be utilized to unravel both local and population encoding of speech sounds in the lateral temporal cortex. We will examine neural encoding of speech in patients who are monolingual or bilingual in Mandarin, Spanish, and English, which are among the most common spoken languages worldwide and feature important contrastive differences in pitch, formants, and temporal envelope. We will test a novel hypothesis that speech processing across languages reflects a general auditory encoding of relevant phonetic properties, but that processing is modified by language-specific "tuning".
A cross-linguistic approach to the neural encoding of speech will powerfully advance our understanding of how the brain processes sound pattern variability within and across languages. This will provide fundamental insights into the shared mechanisms of auditory processing and experience-dependent plasticity in humans. The results may have significant implications for the development of new diagnostic and rehabilitative strategies for language and neurological disorders (e.g., aphasia, dyslexia, autism). Furthermore, this proposal strives to achieve a broader view of diversity and inclusion in the neuroscience of language. |
1 |