2007 — 2011
Beauchamp, Michael
Activity Code: N/A. No activity code was retrieved for this award.
Collaborative Research: Multisensory Influences on Touch Perception -- fMRI, MEG, and TMS Studies @ University of Texas Health Science Center Houston
In our everyday lives, we are frequently confronted with information from multiple sensory modalities. Recently, there has been increasing interest in the circumstances under which stimuli presented in one sensory modality influence sensations in a different modality. For instance, the sound of a mosquito buzzing seemingly enhances sensitivity to touch (tactile stimulation) on our skin, and seeing an insect crawling on someone else's arm seems to affect our own tactile perception. Despite several recent studies examining the influence of audition and vision on touch, the brain mechanisms responsible for these interactions are poorly understood. An NSF-funded collaborative effort of Tony Ro (Rice University) and Michael Beauchamp (University of Texas Health Science Center, Houston) will use a combination of converging methods to examine tactile processing in isolation and the influence of vision and audition on touch in the human brain. Psychophysical studies will be conducted to determine the optimal stimulus parameters that demonstrate an influence of vision and audition on tactile perception. Functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), and magnetoencephalography (MEG) will be used to localize the brain regions involved in integrating multisensory information. While most of these experiments will be conducted with normal controls, an additional series of experiments will be conducted in a unique patient who acquired auditory-tactile synesthesia following a stroke. Tactile sensitivity on the patient's left hand and arm was impaired, but he now feels tactile sensations in that area in response to sounds. Psychophysical and imaging experiments will be conducted on this patient to determine the neural mechanisms responsible for the synesthesia, especially whether plastic neural changes have reconstituted the patient's somatosensory cortex so that it now responds to sounds.
These studies will not only improve our understanding of multisensory integration, but will also provide a deeper appreciation of the general information-processing mechanisms of the human brain. Such knowledge will contribute toward the development of better rehabilitative tools for patients with congenital or acquired deficits in one or more sensory systems. Additionally, this research will provide a better understanding of the mechanisms of natural and brain-damage-induced changes that take place in the adult human brain. The funding will be used to support research training opportunities for undergraduate, graduate, and post-doctoral trainees in cognitive neuroscience and brain imaging in the Houston area. In addition to training the next generation of brain scientists, the findings of this research will be disseminated through scientific and lay publications, as well as other media outlets, allowing for a deeper understanding and appreciation of the human brain in society.
2010 — 2013 |
Beauchamp, Michael S |
Activity Code: R01. Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Neural Mechanisms of Optimal Multisensory Integration @ University of Texas Health Science Center Houston
DESCRIPTION (provided by applicant): Multisensory integration is at the core of many cognitive phenomena. It provides a survival advantage because it allows the brain to combine the independent estimates available from different sensory modalities into a single estimate that is more accurate than any single modality in isolation. A key obstacle to progress is our lack of knowledge about how the brain combines different modalities. If sensory modality #1 claims that the environment is "X" while sensory modality #2 claims that the environment is "Y", how can the estimates best be combined to guide behavior? An important finding in behavioral studies is that multisensory integration is Bayes-optimal: the reliabilities of the different sensory modalities are taken into account when integrating them. Sensory inputs that are reliable (more informative) receive greater weight, while sensory inputs that are less informative receive less weight. The goal of this proposal is to uncover the neural mechanisms for optimal visual-tactile integration. Our central hypothesis takes the form of a simple model in which the strengths of connections from unisensory to multisensory brain areas are modulated by the reliability of the stimulus in each modality. An unreliable stimulus results in a weak connection, decreasing the effectiveness of that modality in the integration area, while a reliable stimulus results in a strong connection and an increased ability to drive behavior. To test our model, we propose four specific aims that will examine two distinct paradigms: a touch delivered to the hand that is both seen and felt, and speech that is both seen and heard. In the first aim, we will determine the brain areas involved in processing these two types of stimuli using blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI). We will test the hypothesis that the intraparietal sulcus (IPS) will respond to visual and tactile touch and that the superior temporal sulcus (STS) will respond to auditory and visual speech. In the second aim, we will show that neural connection strengths are proportional to stimulus reliability. We will test the hypothesis that the effective connectivity between unisensory and multisensory areas is proportional to the reliability of the stimulus presented in that modality. In the third aim, we will demonstrate a correlation between multisensory brain activity and behavior using multi-voxel pattern analysis (MVPA). In the fourth aim, we will reveal a causal link between brain activity and behavioral multisensory integration. Using fMRI-guided transcranial magnetic stimulation (TMS), we will test the hypothesis that TMS of multisensory areas will eliminate the behavioral advantage of multisensory stimuli and the hypothesis that TMS of unisensory areas will impair behavioral performance in proportion to the reliability of the stimulus in that modality.

PUBLIC HEALTH RELEVANCE: Multisensory integration is at the core of many cognitive phenomena. We will use functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in normal human subjects to study the organization and operation of the brain during multisensory integration.
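As a concrete illustration of the reliability weighting described above, the following sketch shows the standard maximum-likelihood (Bayes-optimal) rule for combining two independent Gaussian cue estimates; the code and its variable names are illustrative and are not taken from the proposal.

    def integrate_estimates(x_vis, var_vis, x_tac, var_tac):
        """Combine a visual and a tactile estimate of the same quantity.

        Each cue is weighted by its reliability (inverse variance), the
        standard maximum-likelihood rule for independent Gaussian cues.
        """
        w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_tac)
        w_tac = 1.0 - w_vis
        x_combined = w_vis * x_vis + w_tac * x_tac
        # The combined estimate is at least as reliable as either cue alone.
        var_combined = 1.0 / (1.0 / var_vis + 1.0 / var_tac)
        return x_combined, var_combined

    # A reliable visual cue (small variance) dominates an unreliable tactile cue.
    print(integrate_estimates(x_vis=10.0, var_vis=1.0, x_tac=14.0, var_tac=4.0))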
2014 — 2020 |
Beauchamp, Michael S |
Activity Code: R01. Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Neural Substrates of Optimal Multisensory Integration @ Baylor College of Medicine
DESCRIPTION (provided by applicant): Speech perception is one of the most important cognitive operations performed by the human brain and is fundamentally multisensory: when conversing with someone, we use both visual information from their face and auditory information from their voice. Multisensory speech perception is especially important when the auditory component of the speech is noisy, whether due to a hearing disorder or to normal aging. However, much less is known about the neural computations underlying visual speech perception than about those underlying auditory speech perception. To remedy this gap in existing knowledge, we will use converging evidence from two complementary measures of brain activity, BOLD fMRI and electrocorticography (ECoG). The results of these neural recording studies will be interpreted in the context of a flexible computational model based on the emerging tenet that the brain performs multisensory integration using optimal, or Bayesian, inference, combining the currently available sensory information with prior experience. In the first Aim, a Bayesian model will be constructed to explain individual differences in multisensory speech perception along three axes: subjects' ability to understand noisy audiovisual speech; subjects' susceptibility to the McGurk effect, a multisensory illusion; and the time spent fixating the mouth of a talking face. In the second Aim, we will explore the neural encoding of visual speech using voxel-wise forward encoding models of the BOLD fMRI signal. We will develop encoding models to test seven different theories of visual speech representation drawn from the linguistic and computer vision literature. In the third Aim, we will use ECoG to examine the neural computations for integrating visual and auditory speech, guided by the Bayesian models developed in Aim 1. First, we will study the reduction in neural variability for multisensory speech predicted by our model. Second, we will study the representational space of unisensory and multisensory speech.
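As a sketch of what a voxel-wise forward encoding model of this kind typically looks like (a generic ridge-regression formulation, not the specific analysis pipeline of this proposal; all array shapes and names are illustrative):

    import numpy as np

    def fit_encoding_model(X, Y, alpha=1.0):
        """Fit voxel-wise linear encoding models Y ~= X @ W by ridge regression.

        X: (n_timepoints, n_features) stimulus features, e.g. mouth-movement
           or phoneme features derived from the talking face and voice.
        Y: (n_timepoints, n_voxels) BOLD responses.
        Returns W: (n_features, n_voxels), one weight vector per voxel.
        """
        n_features = X.shape[1]
        return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

    # Illustrative use with random data standing in for features and responses.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))      # 200 timepoints, 10 features
    Y = X @ rng.standard_normal((10, 50))   # 50 voxels
    Y += 0.1 * rng.standard_normal(Y.shape)
    W = fit_encoding_model(X, Y)
    # Competing feature spaces would be compared by how well each predicts
    # held-out responses, voxel by voxel.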
2018 — 2020 |
Beauchamp, Michael S |
Activity Code: R24. No description was retrieved for this code.
RAVE: A New Open Software Tool for Analysis and Visualization of Electrocorticography Data @ Baylor College of Medicine
Project Summary/Abstract
A fast-growing technique in human neuroscience is electrocorticography (ECoG), the only technique that allows the activity of small populations of neurons in the human brain to be directly recorded. We use the term ECoG to refer to the entire range of invasive recording techniques (from subdural strips and grids to penetrating electrodes) that share the common attribute of recording neural activity from the human brain with high spatial and temporal resolution. While this ability has resulted in many high-impact advances in understanding fundamental mechanisms of brain function in health and disease, it generates staggering amounts of data: a single patient can be implanted with hundreds of electrodes, each sampled thousands of times a second for hours or even days. The difficulty of exploring these vast datasets is the rate-limiting step in using them to improve human health. We propose to overcome this obstacle by creating an easy-to-use, powerful platform designed from the ground up for the unique properties of ECoG. We dub this software tool RAVE ("R Analysis and Visualization of Electrocorticography data"). The first goal of Aim 1 is to release RAVE 1.0 to the entire ECoG community by month 6 of the first funding period. This will maximize transformative impact by putting the new tools in the hands of users as quickly as possible, facilitating rapid adoption. The design philosophy of RAVE is driven by three imperatives. The first is to keep users close to the data, so that users can make discoveries about the brain without being misled by artifacts. The second imperative is rigorous statistical methodology. The final imperative is to "play well with others." As described in Aim 2, our approach will make it easy to seamlessly incorporate new and existing analysis tools written in Matlab, C++, Python, or R into RAVE, giving users the best of both worlds: advanced but easy-to-use visualization of results from ECoG experiments, whether they are analyzed with the off-the-shelf routines provided with RAVE or with novel tools developed by others.
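To make the data-volume claim concrete, here is a rough back-of-the-envelope estimate under assumed recording parameters (the specific electrode count, sampling rate, and sample size below are illustrative, not figures from the proposal):

    n_electrodes = 200        # "hundreds of electrodes"
    sampling_rate = 2000      # samples per second ("thousands of times a second")
    bytes_per_sample = 4      # e.g., 32-bit samples
    seconds_per_day = 24 * 3600

    bytes_per_day = n_electrodes * sampling_rate * bytes_per_sample * seconds_per_day
    print(f"{bytes_per_day / 1e9:.0f} GB per patient per day")  # roughly 138 GB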
2020 — 2021 |
Beauchamp, Michael S; Schroeder, Charles E
Activity Code: U01. Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Dynamic Neural Mechanisms of Audiovisual Speech Perception @ University of Pennsylvania
Project Summary/Abstract
Speech perception is inherently multisensory: when conversing with someone we can see, our brains combine auditory information from the voice with visual information from the face. Speech perception lies at the heart of our interactions with other people and is thus one of our most important cognitive abilities. However, there is a large gap in our knowledge about this uniquely human skill because most experimental techniques available in humans suffer from poor spatiotemporal resolution. To remedy this gap, we will examine the neural mechanisms of audiovisual speech perception using intracranial recording (iEEG) in humans. Audiovisual speech perception occurs in the posterior superior temporal gyrus and sulcus (pSTG). Understanding the dynamics of the neural computations within pSTG at the mesoscale (neurons organized into columns and patches) has been impossible in humans. We propose to leverage two technical innovations within the fast-changing field of iEEG to study them for the first time: first, high-resolution intracranial electrode grids, which allow recording from a cortical volume hundreds of times smaller than that sampled by the electrodes in standard iEEG grids; second, NeuroGrids, which record single-neuron activity from a non-penetrating film of electrodes placed on the cortical surface. Our causal inference model requires the existence of distinct auditory, visual, and audiovisual speech representations. Aim 1 will search for these representations in pSTG. Aim 2 will examine low-frequency oscillations in pSTG to determine their role in multisensory speech perception. If successful, the Aims will provide a comprehensive account of the neural mechanisms of multisensory speech perception, including the long-standing mystery of the perceptual benefit of visual speech.
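The causal inference computation referred to above can be illustrated with the standard Bayesian causal-inference formulation for two Gaussian cues (a textbook form common in the multisensory literature, not the project's specific model; all parameter values below are illustrative):

    import numpy as np

    def posterior_common_cause(x_aud, x_vis, var_aud, var_vis,
                               var_prior=10.0, p_common=0.5):
        """Posterior probability that auditory and visual cues share one source.

        Cues are modeled as noisy Gaussian observations of a stimulus feature,
        with a zero-mean Gaussian prior over that feature.
        """
        # Likelihood of both cues under a single (common) audiovisual cause.
        var_c = var_aud * var_vis + var_aud * var_prior + var_vis * var_prior
        like_common = np.exp(-0.5 * ((x_aud - x_vis) ** 2 * var_prior
                                     + x_aud ** 2 * var_vis
                                     + x_vis ** 2 * var_aud) / var_c) / (2 * np.pi * np.sqrt(var_c))
        # Likelihood under two independent causes, one per modality.
        var_a, var_v = var_aud + var_prior, var_vis + var_prior
        like_indep = np.exp(-0.5 * (x_aud ** 2 / var_a + x_vis ** 2 / var_v)) \
            / (2 * np.pi * np.sqrt(var_a * var_v))
        return p_common * like_common / (p_common * like_common
                                         + (1 - p_common) * like_indep)

    # Similar cues favor a common (audiovisual) cause; discrepant cues do not.
    print(posterior_common_cause(x_aud=0.5, x_vis=0.7, var_aud=1.0, var_vis=1.0))
    print(posterior_common_cause(x_aud=0.5, x_vis=6.0, var_aud=1.0, var_vis=1.0))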