2014 — 2018
Chandrasekaran, Bharath
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Online Modulation of Auditory Brainstem Responses to Speech @ University of Texas, Austin
DESCRIPTION (provided by applicant): The goal of this project is to use multimodal neuroimaging methods (functional magnetic resonance imaging (fMRI) and electroencephalography (EEG)) to examine the nature of linguistic and non-linguistic influences on brainstem encoding of speech signals in adults. In direct conflict with the concept of auditory brainstem nuclei as passive relay stations for behaviorally-relevant signals, recent studies have demonstrated active transformation of the signal, as represented in the auditory midbrain and brainstem. However, the mechanisms underlying such early sensory plasticity are unclear. In this proposal, an integrative model of subcortical auditory plasticity is posited (predictive tuning), which argues for a continuous, online modulation of bottom-up signals via corticofugal pathways, based on an algorithm that constantly anticipates incoming stimulus regularities, thereby transforming representations in the auditory pathway. This proposal utilizes cross-language and case-control designs and innovative EEG methods to directly address the role of brainstem circuitry in dynamic encoding of speech and to test competing neural models (local modulation vs. predictive tuning). Causal influences (top-down vs. bottom-up) during speech processing will be tested using fMRI effective connectivity analyses. The proposed experiments will provide a comprehensive examination of mechanisms underlying brainstem plasticity and expand the understanding of the neurobiology of speech perception beyond the current corticocentric focus. Recent studies show that a number of clinical populations exhibit speech-encoding deficits at the level of the brainstem. The design and analysis methods developed in this proposal can be used to evaluate the locus (bottom-up versus top-down) of such encoding deficits.
2017 — 2021
Chandrasekaran, Bharath
R01
Neural Systems in Auditory and Speech Categorization @ University of Texas, Austin
Using complementary multi-modal neuroimaging methods (functional magnetic resonance imaging (fMRI) and electrocorticography (ECoG)) in conjunction with rigorous behavioral approaches, we will examine the role of multiple cortico-striatal and sensory cortical networks in the acquisition and automatization of novel non-speech and speech categories in the mature adult brain. We test the scientific premise of a dual-learning systems (DLS) model by probing neural function using fMRI or ECoG during the process of feedback-dependent category learning. In contrast to popular single-learning system (SLS) approaches, DLS posits that two neurally-dissociable cortico-striatal systems are critical to speech learning: an explicit, sound-to-rule cortico-striatal system that maps sounds onto rules, and an implicit, sound-to-reward cortico-striatal system that implicitly associates sounds with actions that lead to immediate reward. Per DLS, the two systems contribute to the emerging expertise of the learner. Via closed loops, the highly plastic cortico-striatal systems "train" key, less labile temporal lobe networks to categorize information by validated rules or rewards. Once categories are learned to the point of automaticity, cortico-striatal networks are no longer required to mediate behavior. Instead, abstract categorical information within the temporal cortex drives highly accurate speech categorization. In Aim 1.1, we use fMRI to examine the relative dominance of the two cortico-striatal networks in learning multidimensional non-speech category structures that are experimenter-constrained to either rely on rules (rule-based, RB), or on implicit integration of multidimensional cues (information-integration, II).
We predict that key regions of the sound-to-rule network, the prefrontal cortex (PFC), hippocampus, and caudate nucleus, show greater activation during RB, relative to II learning; in contrast, key regions within the sound-to-reward network, the putamen and the ventral striatum, show greater activation during II, relative to RB learning. In Aims 1.2 and 1.3, we leverage the temporal precision of ECoG measurements from high-density grids in temporal, PFC, and hippocampal regions to examine the extent to which temporal lobe representational changes during RB learning are an outcome of error-monitoring processes within the PFC and hippocampus. In Aim 2, we probe neural function using fMRI or ECoG to assess network and representational changes during the acquisition of non-native supra-segmental and segmental categories to native-like performance levels. We predict that early "novice" speech acquisition involves sound-to-rule mapping, while later "experienced" acquisition involves sound-to-reward mapping. In contrast, only cortical networks are active at the point of "native-like automaticity" in categorization. Using innovative single-trial classification and network-level decoding analyses on ECoG data, we examine learning-induced changes in speech representation within the temporal lobe. Further, we examine the extent to which error-monitoring processes within the PFC and the hippocampus drive emergent temporal lobe representations of novel speech categories.
2020 — 2021
Chandrasekaran, Bharath; Holt, Lori L; Shinn-Cunningham, Barbara
R13 Activity Code Description: To support recipient sponsored and directed international, national or regional meetings, conferences and workshops.
Symposium on Cognitive Auditory Neuroscience (SCAN) @ Carnegie Mellon University
PROJECT SUMMARY/ABSTRACT In recent years, human cognitive auditory neuroscience has made rapid strides due to advances in human neuroimaging, the advent of innovative machine learning/big data analytic approaches, and a greater mechanistic understanding of cognitive-sensory interactions in animal models. The dynamic landscape of this emergent field necessitates a highly interdisciplinary, human- and translation-centric symposium that brings together expertise across academia and industry. This application requests partial funding for the Symposium on Cognitive Auditory Neuroscience (SCAN) to be hosted in Pittsburgh, PA in July 2020 and 2022, as a joint venture between Carnegie Mellon University (CMU) and the University of Pittsburgh (Pitt). As a biennial meeting, SCAN aims to become the premier intellectual and professional venue for current research in the emerging field of human cognitive auditory neuroscience. SCAN will incorporate elements typical of academic conferences (research talks, posters) as well as novel ideas that promote "blue sky" thinking in this rapidly evolving field. SCAN will assiduously and innovatively work towards inclusivity, creating an atmosphere that encourages intellectual and professional engagement from women, underrepresented minorities, and individuals with disabilities. Another critical aim of SCAN is to foster industry-academic partnerships with an eye towards translation of basic research and fostering career opportunities for trainees. Pittsburgh is uniquely situated to launch SCAN. With an enviable concentration of co-located auditory neuroscience expertise, Pittsburgh is also an intellectual hub for industries/start-ups engaged in machine learning, natural language processing, and speech recognition. SCAN will leverage these advantages to foster growth and innovation tied to core missions of the National Institute on Deafness and Other Communication Disorders.