1998 — 2000
Backus, Benjamin T
F32. Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas.
Neural Correlates of Stereoscopic Perception
The proposed research is designed to identify and measure the neural substrates of stereo depth perception by comparing perceptual data with functional magnetic resonance imaging (fMRI) measurements of brain activity in specific brain areas. Brain areas where the neural activity correlates with the percept may be involved in, or responsible for, stereo depth perception. Our general approach will be to replicate experiments that have yielded landmark findings in stereo psychophysics, while collecting fMRI measurements of brain activity. The fMRI data will be analyzed separately in each of several identifiable visual brain areas to link brain activity with perception, and to test psychophysically motivated theories about how stereoscopic vision works. The specific aims of the proposed research are: (1) to measure and characterize fMRI responses to binocular stimulation in each of several brain areas, and to identify brain areas that respond to stereoscopic stimuli; (2) to compare brain activity and psychophysical upper depth (maximum disparity) limits; and (3) to compare brain activity and temporal depth modulation thresholds (rate of stereo vision processing). These experiments could improve our understanding of stereoblindness, which affects 6-7 percent of the US population.
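The correlational logic at the heart of this approach can be illustrated with a short sketch. The code below is a hypothetical illustration, not the proposal's actual analysis: it assumes condition-wise mean BOLD amplitudes for each region of interest (ROI) and a matched perceptual measure, and computes a simple Pearson correlation per area. The ROI names, numbers, and choice of statistic are all assumptions for illustration.

```python
# A minimal sketch of the correlational logic described above: for each
# identifiable visual area (ROI), correlate its mean fMRI response per
# stimulus condition with the perceptual measure from the same conditions.
# Data, ROI names, and the Pearson statistic are illustrative assumptions;
# the proposal does not specify the analysis.
import numpy as np

def percept_correlation(bold_by_roi, percept_strength):
    """Correlate each ROI's condition-wise BOLD amplitudes with perception.

    bold_by_roi: dict mapping ROI name -> array of mean responses, one per condition
    percept_strength: array of perceptual measures (e.g., depth ratings), same order
    Returns dict mapping ROI name -> Pearson r.
    """
    return {
        roi: float(np.corrcoef(responses, percept_strength)[0, 1])
        for roi, responses in bold_by_roi.items()
    }

# Illustrative numbers: an area whose response tracks perceived depth (high r)
# versus one that responds to the stimuli but not the percept (low r).
percepts = np.array([0.1, 0.4, 0.8, 1.0, 0.9, 0.3])
rs = percept_correlation(
    {"V1": np.array([0.5, 0.6, 0.5, 0.6, 0.5, 0.6]),
     "V3A": np.array([0.2, 0.5, 0.7, 1.1, 0.9, 0.4])},
    percepts,
)
print(rs)  # areas with high r are candidate substrates of stereo depth perception
```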
2003 — 2005
Backus, Benjamin T
R01. Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Cue Reliability/Depth Calibration in Space Perception @ University of Pennsylvania
DESCRIPTION (provided by applicant): Humans routinely and confidently base their physical actions on the visual perception of space. We step off curbs but not cliffs, merge successfully with oncoming traffic, and dice chicken without chopping off our fingers. How does the visual system build representations of the environment that are so reliable? Recent work has shown that visual performance is in many ways nearly optimal. An important example of this occurs when multiple types of visual information (such as stereo, perspective, and motion parallax) are present, and the scene's layout could be determined from any of them. This is often the case in natural vision. In this situation, the visual system often constructs a percept that not only uses all the sources of information, but averages them together to create the perceived scene, with the most reliable sources given the greatest weight in the average. In principle, such weighted averaging should affect not only the appearance of the scene, but also the performance of tasks that use the percept. It is not yet known whether this is the case. The first study in the proposal quantifies the improvement in performance, using high quality visual displays and a task that is important for driving. There are also situations in which different sources of information could, in principle, be combined to give an extra boost to performance, above and beyond the use of a weighted average. This can happen because different cues excel at providing different sorts of information about shape and distance. If the information from different cues could be combined before each cue is used to estimate various aspects of the scene layout, a "nonlinear" improvement in performance could be realized. Does the visual system exploit this opportunity? The answer to this question is important for understanding the neural mechanisms of visual perception. The second study addresses this question by measuring performance in a task in which observers adjust the shapes of simulated objects. Finally, the visual system builds accurate percepts and is exquisitely sensitive to changes in spatial layout. This requires that the system be kept finely tuned. Any drift in its computational mechanisms must be quickly detected and corrected. How this is done is not understood, but there is reason to believe the visual system can compare the outputs from different mechanisms with each other, and recalibrate itself when discrepancies are found. We propose that this process can be understood using the same conceptual tools that have already been developed to understand cue combination. We exploit a depth recalibration phenomenon discovered forty years ago to test predictions about how fast different visual mechanisms will be recalibrated when they disagree with each other.
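The weighted averaging described above is standardly formalized as inverse-variance (maximum-likelihood) cue combination: each cue's weight is proportional to its reliability, the inverse of its noise variance. The abstract gives no formula, so the sketch below illustrates that standard model rather than the project's own method; the cue names and numbers are hypothetical.

```python
# Reliability-weighted cue combination: a minimal sketch of the standard
# maximum-likelihood model the abstract alludes to. The cue values and
# variances below are illustrative assumptions, not data from the project.

def combine_cues(estimates, variances):
    """Combine per-cue depth estimates by inverse-variance weighting.

    estimates: list of depth estimates, one per cue (e.g., stereo, perspective)
    variances: list of the corresponding noise variances (lower = more reliable)
    Returns (combined_estimate, combined_variance).
    """
    reliabilities = [1.0 / v for v in variances]   # reliability = 1 / sigma^2
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]   # weights sum to 1
    combined = sum(w * d for w, d in zip(weights, estimates))
    return combined, 1.0 / total                   # fused variance <= any single cue's

# Example: a reliable stereo cue (variance 1) disagreeing with a noisier
# perspective cue (variance 4) about a surface's depth.
depth, var = combine_cues(estimates=[10.0, 13.0], variances=[1.0, 4.0])
print(depth, var)  # 10.6, 0.8 -- pulled toward the more reliable cue
```

Note that the fused variance (0.8) is lower than either single cue's variance; this predicted improvement in precision is what the first study's performance measurements would test.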
2007 — 2011
Backus, Benjamin
N/A. Activity Code Description: No activity code was retrieved.
Pavlovian Conditioning of Visual Perception @ University of Pennsylvania
The eyes are responsible for transducing light, but the brain must interpret the retinal images before a person can see the properties of the objects around them. To do this the brain extracts signals (such as motion, color, and binocular disparities) from the retinal images and then uses these signals as "cues" to construct visual percepts that accurately represent the local environment. For example, binocular disparities are signals that are used by the brain as a cue for depth, which can be demonstrated by looking at a stereogram. A question of long-standing interest is how the brain knows which cues to use during the construction of a given percept. In other words, how does the brain learn to use cues appropriately during perception?
With the support of the National Science Foundation, Dr. Backus and his colleagues are conducting experiments to clarify an important aspect of this problem, namely, how the brain decides to start utilizing a new cue during perception. Recent work in the Backus lab confirmed that under certain conditions the brain's visual system can be trained to use new cues. This training was achieved by means of classical (Pavlovian) conditioning procedures in which a new signal (such as a motion direction) was paired with depth cues that were already trusted by the brain (such as binocular disparity). The experiments will use simulated 3D stimuli to measure the rate at which new cues are learned under a variety of conditions. In addition to advancing our basic understanding of perceptual learning, this work may lead to the development of new techniques for training human perception and to better computer vision systems that can improve themselves through learning.
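Learning rates under pairing procedures of this kind are often modeled with an error-driven associative update such as the Rescorla-Wagner rule. The sketch below illustrates that standard formalism, not the study's actual procedure or analysis; the learning rate, asymptote, and trial counts are hypothetical.

```python
# Rescorla-Wagner style sketch of cue recruitment: the associative strength of
# a new signal (e.g., a motion direction) grows each time it is paired with a
# trusted depth cue (e.g., binocular disparity). Model and parameters are
# illustrative assumptions, not the grant's procedure.

def train_new_cue(trials, learning_rate=0.1, asymptote=1.0):
    """Update the associative strength of a new cue over paired trials.

    trials: number of training trials pairing the new signal with trusted cues
    Returns the trial-by-trial strength (0 = cue unused, asymptote = fully trusted).
    """
    strength = 0.0
    history = []
    for _ in range(trials):
        # Error-driven update: learn in proportion to what remains to be learned.
        strength += learning_rate * (asymptote - strength)
        history.append(strength)
    return history

curve = train_new_cue(trials=20)
print(f"after 5 trials: {curve[4]:.2f}, after 20 trials: {curve[19]:.2f}")
# Prints 0.41 and 0.88: a negatively accelerated learning curve whose rate is
# the kind of quantity the proposed experiments would measure psychophysically.
```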