2013 — 2014 |
Rowland, Benjamin A |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable. |
Real-Time Multisensory Integration For Time-Varying Signals @ Wake Forest University Health Sciences
DESCRIPTION (provided by applicant): Traditional models of perception have a foundational principle that sensory processing is hierarchical: sensory filters decompose the physical world into primitive features that are recombined through a series of stages into a coherent supramodal representation. These models are typically conceptualized as processing time-invariant (i.e., constant) signals or representations, and are evaluated in experiments whose structure (baseline->stimulus->response) parallels the discrete and staged nature of the presumed underlying processing dynamics. Real environments, however, contain continuous time-varying signals and demand continuous perceptual and behavioral solutions. While it is often assumed that the operations of time-invariant models will trivially generalize to the continuous time domain, there are reasons to suspect that they will not. For example, in a continuous, dynamic environment, fluctuations in input traces may be interpreted either as noise contamination or as changes in the source signals. How the brain achieves stable perceptual solutions in such circumstances is of critical interest. The present application describes a series of short-term experiments that seek to evaluate this issue in the context of multisensory integration. The merging of information across the senses has been shown to improve perceptual and behavioral judgments and to speed responses to external events, particularly those whose unisensory representations are significantly contaminated or obscured by noise. To realize these benefits, the brain must coordinate activity across senses that have different operational parameters, in particular, different reliabilities that depend on the immediate environmental circumstances. To achieve optimal integration, the brain must weight the signals derived from each sense according to its reliability when combining them.
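The reliability weighting described above corresponds to maximum-likelihood (inverse-variance) cue fusion, the standard formalization of optimal integration. A minimal sketch, assuming independent Gaussian visual and auditory estimates; the function name and the specific numbers are illustrative, not from the proposal:

```python
def combine_cues(mu_v, var_v, mu_a, var_a):
    """Maximum-likelihood fusion of two independent Gaussian estimates.

    Each cue is weighted by its reliability (inverse variance); the fused
    estimate has lower variance than either cue alone.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    mu = w_v * mu_v + w_a * mu_a
    var = 1.0 / (1.0 / var_v + 1.0 / var_a)
    return mu, var

# Example: reliable visual cue (variance 1) vs. noisy auditory cue (variance 4);
# the fused estimate is pulled toward the more reliable visual cue.
mu, var = combine_cues(10.0, 1.0, 14.0, 4.0)   # -> (10.8, 0.8)
```

Note that the fused variance (0.8) is smaller than either cue's alone, which is the formal counterpart of the multisensory benefit described in the abstract.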
The framework proposed here seeks to understand this phenomenon by evaluating whether subjects use Kalman filter dynamics to achieve optimal performance on a continuous-time multisensory task. Subjects are tasked with tracking dynamic and noise-corrupted patterns of visual and auditory stimuli presented alone or in concert. The ability of subjects to adapt to changes in the signal and the relative reliabilities of the senses is interpreted in the context of the proposed model framework. Subject expectations are shaped by prior experience and forewarning of changes in the signals and signal statistics, and the timing and impact of the adjustments they make in their responses consequent to this information are evaluated. The experimental approach applied in the proposed studies will provide important insights into the principles upon which sensory channels are combined and help to establish the boundary conditions upon which real-time integration is achieved.
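The Kalman filter dynamics invoked above extend inverse-variance weighting to continuous time: at each step the filter balances sensory noise against how fast the underlying signal can drift. A one-dimensional textbook sketch, not the proposal's actual model; the random-walk signal and the noise parameters are assumptions for illustration:

```python
import numpy as np

def kalman_track(observations, obs_var, process_var, x0=0.0, p0=1.0):
    """1-D Kalman filter: random-walk state observed through sensory noise.

    process_var models how quickly the source signal can drift; obs_var is
    the noise on each sample. The Kalman gain automatically decides how much
    each new fluctuation should move the running estimate (signal change)
    versus be discounted (noise).
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + process_var            # predict: signal may have drifted
        k = p / (p + obs_var)          # gain: reliability of the new sample
        x = x + k * (z - x)            # update estimate toward observation
        p = (1.0 - k) * p              # update estimate uncertainty
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_signal = np.cumsum(rng.normal(0.0, 0.1, 200))   # slowly drifting source
noisy_obs = true_signal + rng.normal(0.0, 1.0, 200)  # heavy sensory noise
est = kalman_track(noisy_obs, obs_var=1.0, process_var=0.01)
```

With these (assumed) parameters the filtered trace tracks the drifting source far more closely than the raw samples do, which is the sense in which fluctuations are adaptively parsed into noise versus signal change.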
|
2016 — 2020 |
Rowland, Benjamin A (co-PI) Stein, Barry E |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Reversing Hemianopia With Cross-Modal Training @ Wake Forest University Health Sciences
Project Summary The midbrain superior colliculus (SC) typically requires influence from ipsilateral visual cortex to play its critical role in generating visuomotor responses to contralateral cues. Thus, visual cortex lesions eliminate both normal visual feature processing and the visual functions of the ipsilateral SC. The result is a contralateral hemianopia. Although insights from animal models suggest amelioration of this deficit is possible through a number of interventions, none of these offers viable therapeutic options for human patients. However, using an animal model, we have recently demonstrated that a non-invasive rehabilitative training paradigm (using auditory-visual cues) can permanently reinstate vision in animals rendered hemianopic by unilateral removal of all contiguous areas of visual cortex. Unfortunately, we are largely ignorant of the neural changes that induce this reinstatement of vision. Nevertheless, our preliminary data do suggest that cross-modal training produces a functional reorganization in a cortico-SC circuit that involves specific regions of association cortex (i.e., the anterior ectosylvian sulcus, AES). These adaptive changes render SC neurons once again capable of visual responses and of supporting visual behavior in the absence of ipsilateral visual cortex, presumably via compensatory inputs from AES. Our objective here is to use physiological and behavioral techniques to evaluate the physiological consequences of large visual cortex lesions on the neuronal properties in the AES and SC of hemianopic animals, and to determine how their properties are modified by cross-modal training so that vision is restored. Our overarching hypothesis is that cross-modal training, via Hebbian mechanisms, is able to amplify the normally subthreshold inputs to these regions from sources other than visual cortex.
Understanding how the inherent plasticity of this circuit can be harnessed via non-surgical, behavioral training techniques to ameliorate hemianopia will help us understand the latent functional capabilities of this system, and provide invaluable insights to facilitate strategies for dealing with this debilitating condition in human patients.
|
2020 — 2021 |
Rowland, Benjamin A (co-PI) Stein, Barry E |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Multisensory Development: Cortical-Midbrain Interactions @ Wake Forest University Health Sciences
Project Summary A major issue of ignorance in sensory processing is how the brain develops its remarkable ability to use its senses synergistically, a critical requirement for normal perception. We do know, however, that acquiring this capability is a protracted postnatal process, and the ability to use visual and auditory information cooperatively must be learned. This process is best understood in terms of the detection and orientation behaviors mediated by the superior colliculus (SC), a midbrain structure well-endowed with multisensory neurons. After extensive visual-auditory experience, animals show enhanced visual-auditory detection and localization behaviors. Their multisensory SC neurons show similar changes, now integrating their different sensory inputs to enhance their response and the physiological salience of the initiating events. The brain has come to treat these cross-modal stimuli as a coherent whole rather than as a set of competitive or unrelated cues. These changes are not seen in animals reared in darkness or with masking noise, and chemical lesions preferentially eliminating SC multisensory neurons eliminate the enhanced multisensory detection and orientation behaviors without disrupting responses to their individual component cues. Interestingly, this integrative capacity and its performance benefits in detecting and orienting to external events can be acquired in dark-reared and noise-reared animals by giving them appropriate experience later in life. But the conceptual and practical use of this information is limited by a poor understanding of the factors underlying its acquisition and operation. We suggest the acquisition of this SC capacity does not depend on forming generic associations between the sensory modalities as is widely believed. Rather, it involves a far more sophisticated form of statistical learning in which the probability that any set of cross-modal inputs derives from the same event is encoded.
This information is then used by the circuit to determine how it will later respond to such events. But to be effective in this regard, those cross-modal inputs must access the SC through unisensory projections from association cortex (and be filtered by the SC's inherent biases). We posit that this natural process can be reproduced artificially by inducing covariant activation of these converging cortico-SC afferents, in the absence of external cues and without any of the reinforcement contingencies or cognitive factors normally associated with overt behavior. Finally, we hypothesize that NMDA receptors provide the crucial mechanistic basis for encoding this experience by initiating Hebbian-like learning algorithms. The end result is a multisensory system that is extremely sensitive to the particular cross-modal stimulus configurations that were learned to belong to the same events. This gives them preferential access to the neural machinery that will still further enhance their physiological salience and their ability to elicit SC-mediated behavior, ensuring that the system is adapted to the environment in which it was formed, and in which it will likely be used.
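The hypothesized Hebbian encoding of covariant cortico-SC activity can be caricatured with a simple correlation-based learning rule. This is an illustrative toy, not the proposed neural mechanism: the two afferents, the learning rate, and the signal statistics are all assumptions. It shows only the core idea that a Hebbian update selectively strengthens the afferent whose activity covaries with the postsynaptic response:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.001                        # learning rate (assumed)
w = np.zeros(2)                    # weights of two hypothetical cortico-SC afferents

for _ in range(5000):
    s = rng.normal()               # common "event" signal on a given trial
    x = np.array([
        s + 0.1 * rng.normal(),    # afferent 0: covaries with the event
        rng.normal(),              # afferent 1: statistically unrelated noise
    ])
    y = s                          # postsynaptic SC activity tracks the event
    w += eta * y * x               # Hebbian update: coactivity strengthens the synapse

# w[0] accumulates ~eta * sum(s^2) and grows; w[1] performs a random walk near zero,
# so only the covariant input gains preferential access to the neuron.
```

The selectivity for covariant inputs, rather than for any coincidentally active pair, is the toy analogue of the statistical learning described above.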
|