2015 — 2017
Rosenberg, Ari
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.
Vestibular Contribution to the Encoding of Object Orientation Relative to Gravity @ Baylor College of Medicine
DESCRIPTION (provided by applicant): Gravity plays a critical role in shaping our experience of the world, influencing both sensory perception and motor planning at fundamental levels. Understanding how vestibular information, which signals the orientation of the self relative to gravity, can be used to create a stable gravity-centered representation of the visual scene is thus important for understanding perception and action. Surprisingly little is known about where and how the brain may use a vestibular estimate of gravity to transform visual signals first encoded in eye-centered (retinal) coordinates into the gravity-centered representation we perceive. The proposed experiments aim to close this knowledge gap. Two lines of research suggest two probable loci. The first is the caudal intraparietal area (CIP), which is known to encode a high-level visual representation of object orientation. The second is the visual posterior sylvian area (VPS), which is known to respond to both vestibular and visual stimulation, and which clinical reports suggest may be involved in creating a gravity-centered visual representation. I hypothesize that the transformation occurs progressively, beginning with an egocentric representation in V3A (CIP's main visual input) and culminating in a primarily gravity-centered representation: V3A (egocentric) → CIP → VPS (mostly gravity-centered). It is thus expected that V3A represents object orientation in strictly egocentric (head and/or eye) coordinates, and that the computations implementing the transformation occur at the level of CIP and/or VPS. In Aim 1, the visual orientation selectivity of single neurons will be recorded with the monkey in multiple spatial orientations (rolled left/right ear down). This experiment dissociates egocentric (eye/head) from gravity-centered representations, allowing the reference frame in which single neurons encode object orientation to be determined. Even if the transformation to a gravity-centered representation is incomplete at the level of single cells in CIP and/or VPS, it is hypothesized that population activity in these areas can represent object orientation relative to gravity. This will be tested using neural network modeling and the framework of probabilistic population codes to develop a neural theory of how a gravity-centered representation of object orientation is achieved. In Aim 2, the role of the vestibular system in implementing this transformation will be tested directly by performing a bilateral labyrinthectomy and repeating the experiments from Aim 1. Since electrical stimulation of vestibular afferents can change perceived visual object orientation, the elimination of vestibular signals is expected to profoundly reduce, if not completely abolish, gravity's effects on visual responses. Any residual effect will be attributed to proprioceptive signals (not vision, since no visual cues to gravity will be present). After the lesion, the effect of gravity on visual responses may increase with time, which would suggest a re-learning period in which the role of proprioceptive signals increases. This research is important for understanding vestibular-visual interactions and establishing novel directions for both basic and clinical research.
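To make the logic of the roll manipulation in Aim 1 concrete, the following is a minimal illustrative sketch (not code from the proposal): if a neuron's orientation tuning is egocentric (anchored to the eye/head), the preferred orientation measured in world, i.e., gravity, coordinates shifts with the animal's roll angle, whereas a gravity-centered neuron's preference stays fixed across roll. The von Mises tuning shape, tuning width, preferred orientation, and roll angles are hypothetical choices, and ocular counter-roll is ignored for simplicity.

```python
# Illustrative sketch only (assumptions noted in the lead-in): simulated
# orientation tuning under body roll, dissociating egocentric from
# gravity-centered coding.
import numpy as np

def von_mises_tuning(theta_deg, pref_deg, kappa=2.0):
    """Orientation tuning curve, 180-degree periodic (double-angle von Mises)."""
    delta = np.deg2rad(2.0 * (theta_deg - pref_deg))
    return np.exp(kappa * (np.cos(delta) - 1.0))

def measured_preference(pref_upright_deg, roll_deg, frame):
    """Preferred orientation in world (gravity) coordinates after rolling the animal.

    frame='egocentric': tuning is anchored to the eye/head, so the preference
    measured in world coordinates shifts by the roll angle.
    frame='gravity': tuning is anchored to gravity, so the preference is unchanged.
    """
    if frame == "egocentric":
        return (pref_upright_deg + roll_deg) % 180.0
    return pref_upright_deg % 180.0

orientations = np.arange(0.0, 180.0, 1.0)   # stimulus orientation in world coordinates
for roll in (-30.0, 0.0, 30.0):             # e.g., left ear down, upright, right ear down
    for frame in ("egocentric", "gravity"):
        pref = measured_preference(45.0, roll, frame)
        curve = von_mises_tuning(orientations, pref)
        peak = orientations[np.argmax(curve)]
        print(f"roll={roll:+5.1f} deg, {frame:10s} frame -> tuning peak at {peak:.0f} deg")
```

Shifts intermediate between these two extremes would indicate a partial transformation, which is where the population-level analysis with probabilistic population codes described above would come in.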
2018 — 2021
Rosenberg, Ari
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Hierarchical Cortical Circuits Implementing Robust 3D Visual Perception @ University of Wisconsin-Madison
PROJECT SUMMARY/ABSTRACT How do we perceive the three-dimensional (3D) structure of the world when our eyes sense only two-dimensional (2D) projections, like a movie on a screen? Reconstructing 3D scene information from 2D retinal images is a highly complex problem, made evident by the great difficulty robots have in turning visual inputs into appropriate 3D motor outputs to move physical chessmen on a cluttered board, even though they can beat the best human chess players. The goal of this proposal is to elucidate how hierarchical cortical circuits implement robust (i.e., accurate & precise) 3D visual perception. Towards this end, we will answer two fundamental questions about how the brain achieves the 2D-to-3D visual transformation using behavioral, electrophysiological, and neuroimaging approaches. In Aim 1, we will answer the question of how the visual system represents the spatial pose (i.e., position & orientation) of objects in 3D space. Our hypothesis is that 3D scene information is reconstructed within the V1 → V3A → CIP pathway. We will test this hypothesis by simultaneously recording 3D pose tuning curves from V3A and CIP neurons in macaque monkeys while the animals perform an eight-alternative 3D orientation discrimination task. This experiment will dissociate neural responses to 3D pose that reflect elementary receptive field structures (resulting in 3D orientation preferences that vary with position-in-depth, which we anticipate finding in V3A) from those that represent 3D object features (resulting in 3D orientation preferences that are invariant to position-in-depth, which we anticipate finding in CIP). Using these data, we will additionally test for functional correlations between neural activity in each area and perceptual sensitivity. Through application of Granger causality analysis to simultaneous local field potential recordings in V3A and CIP, we will further test for feedforward/feedback influences between the areas to evaluate their hierarchical structure. In Aim 2, we will answer the question of how binocular disparity cues (differences in where an object's image falls on each retina) and perspective cues (features resulting from 2D retinal projections of the 3D world) are integrated at the perceptual and neuronal levels to achieve robust 3D visual representations. Both cues provide valuable 3D scene information, and human perceptual studies show that their integration is dynamically reweighted depending on the viewing conditions (i.e., position-in-depth & orientation-in-depth) to achieve robust 3D percepts. Specifically, greater weight is assigned to the more reliable cue given the viewing conditions, but where and how this sophisticated integrative process is implemented in the brain is unknown. We anticipate that V3A and CIP will each show sensitivity to both cue types, but that only CIP will dynamically reweight the cues to achieve robust 3D representations. This research is important for understanding ecologically relevant sensory processing and the neural computations required for us to successfully interact with our 3D environment. Insights from this work will also extend beyond 3D vision by elucidating processes implemented by neural circuits to solve highly nonlinear optimization problems that turn ambiguous sensory signals into robust percepts.
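As a concrete illustration of the reliability-based reweighting described in Aim 2, here is a minimal sketch of the standard inverse-variance (maximum-likelihood) cue-combination rule commonly used to model such behavior. It is not code or data from the proposal; the disparity and perspective slant estimates and variances are hypothetical numbers chosen only to show how the weighting shifts between near and far viewing.

```python
# Minimal sketch, hypothetical numbers: reliability-weighted combination of a
# disparity-based and a perspective-based slant estimate (independent Gaussian cues).

def combine_cues(est_disp, var_disp, est_persp, var_persp):
    """Weight each cue by its reliability (inverse variance) and combine linearly.

    The more reliable cue dominates the combined estimate, and the combined
    variance is lower than either single-cue variance.
    """
    w_disp = (1.0 / var_disp) / (1.0 / var_disp + 1.0 / var_persp)
    w_persp = 1.0 - w_disp
    combined_est = w_disp * est_disp + w_persp * est_persp
    combined_var = 1.0 / (1.0 / var_disp + 1.0 / var_persp)
    return combined_est, combined_var, w_disp

# Near viewing: disparity is typically the more reliable slant cue, so it dominates.
print(combine_cues(est_disp=30.0, var_disp=4.0, est_persp=36.0, var_persp=25.0))

# Far viewing: disparity reliability falls with distance, so perspective takes over.
print(combine_cues(est_disp=30.0, var_disp=40.0, est_persp=36.0, var_persp=25.0))
```

Under this rule the weight on each cue tracks its reliability across viewing conditions, so as disparity reliability falls with distance the combined estimate leans increasingly on perspective; the proposal anticipates finding this dynamic reweighting neurally in CIP but not in V3A.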