2009 — 2013 |
Whitney, David |
N/A | Activity Code Description: No activity code was retrieved. |
Career: Neural Mechanisms of Visual Crowding @ University of California-Berkeley
Every time we open our eyes, our visual system is overloaded with information. This is especially true in the peripheral visual field: although we feel that we are aware of the details of individual objects in our periphery, the number and density of objects in natural scenes mean that we are unable to perceive detailed information such as identity or quantity. This inability to scrutinize objects in the periphery when other objects are present is called visual crowding. Counterintuitively, crowding may be beneficial: a number of models have emphasized that crowding can improve processing efficiency or allow us to detect statistical regularities in natural scenes, tasks that are fundamental to all aspects of visual perception. Despite the existence of a unique psychophysical definition for crowding, and its clear importance in visual processing, there are major gaps in our understanding of where in the visual hierarchy crowding occurs, the sorts of objects on which it operates, and the underlying neural mechanism that causes crowding. With funding from the National Science Foundation, Dr. David Whitney and his colleagues at the University of California, Davis, are pursuing two major research goals. First, by measuring behavioral performance, Dr. Whitney will test the hypothesis that crowding operates independently at multiple levels of visual analysis, both for low-level visual features such as contours or gratings and for high-level objects such as faces. Second, Dr. Whitney's team will isolate and identify the neural mechanisms that mediate both low- and high-level crowding using a non-invasive brain imaging method known as fMRI-adaptation.
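A useful anchor for the psychophysical definition in the abstract above is Bouma's well-known rule of thumb from the crowding literature: flankers interfere with a peripheral target when their center-to-center spacing falls below roughly half the target's eccentricity. The sketch below is purely illustrative; the rule and the 0.5 factor come from the general literature, not from this grant:

```python
# Illustrative sketch of Bouma's rule of thumb for crowding:
# critical spacing grows roughly linearly with eccentricity,
# spacing_critical ~ b * eccentricity, with b around 0.5.
# These are textbook approximations, not values from this project.

def critical_spacing(eccentricity_deg: float, bouma_factor: float = 0.5) -> float:
    """Approximate critical center-to-center spacing (deg) below which
    flankers are expected to crowd a peripheral target."""
    return bouma_factor * eccentricity_deg

def is_crowded(eccentricity_deg: float, flanker_spacing_deg: float) -> bool:
    """True if the flankers fall inside the approximate crowding zone."""
    return flanker_spacing_deg < critical_spacing(eccentricity_deg)

# A target 10 deg in the periphery with flankers 3 deg away is crowded;
# the same 3-deg spacing at 4 deg eccentricity is not.
print(is_crowded(10, 3))  # True
print(is_crowded(4, 3))   # False
```

A behavioral experiment along the lines the abstract describes would vary flanker spacing around this predicted boundary and measure identification accuracy for low-level features and high-level objects separately.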
Understanding visual crowding is fundamental to understanding most other aspects of visual perception. Every natural scene we look at is densely filled with objects, but only a few of these can be simultaneously scrutinized, largely because visual crowding prevents the visual system from having access to all the details at once. Perception, therefore, is subject to the costs (and benefits) of crowding. Because human vision is perhaps the most thoroughly examined operational visual system, investigating the impact of crowding on human perception will be important in developing realistic artificial visual systems in the future. More broadly, understanding the limits of human spatial vision, including crowding, has the power to improve a range of applications, including data visualization (e.g., crowding interferes with visualizing abnormalities on an x-ray), advertising (e.g., too many words or images on a billboard cause car accidents), computer graphics (e.g., too many flashing icons on a website become ineffective), visual art and movies (e.g., do not crowd the star actor with too many other nearby faces), and a host of others. In addition to the broader impacts noted above, this proposal will support the training of a graduate student. Moreover, Dr. Whitney will establish a unique outreach program in local high schools with predominantly Hispanic populations that will use art as a way to introduce visual neuroscience research questions and methods. The educational outreach program is specifically aimed at stimulating interest and increasing the participation of under-represented groups in basic science research.
|
1 |
2013 — 2015 |
Whitney, David V |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Mechanisms of Perceptual Localization @ University of California Berkeley
DESCRIPTION (provided by applicant): Virtually every visual and visuomotor function hinges first and foremost on the visual system's ability to consistently and accurately localize stimuli, and this is especially true in cluttered and dynamic scenes. However, it is unknown what sorts of information are integrated to determine perceived position, how such integration occurs, or what the neural mechanism(s) and loci of this process are. Perceptual localization in typically cluttered and dynamic scenes requires the visual system to both assign and update object positions; to do this, it relies not just on the retinal location of the object of interest, but also on four additional factors: visual motion, frames of reference, eccentricity biases, and contrast adaptation. To understand how the visual system assigns and updates object positions, we must approach the task of localization not as an isolated process but as an integrative one, one that depends on contextual information in the scene. Our proposed experiments have two goals. First, we will psychophysically measure perceived position as a function of retinal position, contrast adaptation, visual motion, eccentricity bias, and frames of reference in order to generate a novel multifactorial integration field model; we will then use this model in fMRI experiments to test whether neurotopic organization across visual cortex is heterogeneous (unique position codes in each visual area) or homogeneous (identical across visual areas). Our pilot results suggest that there are unique position codes in different visual areas. The second goal of the proposal is to test whether these unique position codes (the differences in topographic organization) have perceptual consequences. We will use psychophysics to test the predicted double dissociations in perceived location and then use TMS to test the causal contribution of heterogeneous visual cortical topographic organization to perceived position.
One novelty of our approach lies in developing a new mixed generative and discriminative model of spatial coding that can be applied to psychophysical and fMRI data in tandem, and that further allows us to make predictions from fMRI results about perceptual outcomes in specific situations. The causal relationship between fMRI results and perceptual outcomes will then be tested with TMS. Our experiments will provide novel insight into how cues are integrated to determine perceived position at each stage of visual processing, which is crucial to understanding the fundamental localization deficits that occur in a range of visual and cognitive impairments, from amblyopia and macular degeneration to autism. Until we understand how position is assigned in the typical brain, we lack the necessary insight to develop diagnostic tools, predictive markers, and treatment outcome measures for these impairments.
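The abstract describes perceived position as the outcome of integrating several cues. As a purely hypothetical illustration, a minimal version of such a multifactorial model could be a weighted sum of cue contributions; the linear form, field names, and weight values below are all assumptions made for this sketch, not the model the project develops:

```python
# Hypothetical sketch of a multifactorial position-integration model.
# The grant proposes measuring how retinal position, visual motion,
# reference frames, eccentricity bias, and contrast adaptation jointly
# determine perceived position; a toy linear combination of those cues
# is shown here. All weights and field names are illustrative.

from dataclasses import dataclass

@dataclass
class PositionCues:
    retinal_deg: float            # retinal eccentricity of the object (deg)
    motion_shift_deg: float       # shift induced by nearby visual motion
    frame_shift_deg: float        # shift from a moving frame of reference
    eccentricity_bias_deg: float  # foveal/peripheral bias term
    adaptation_shift_deg: float   # shift from contrast adaptation

def perceived_position(cues: PositionCues,
                       weights=(1.0, 0.3, 0.2, 0.1, 0.1)) -> float:
    """Perceived position (deg) as a weighted sum of cue contributions."""
    w_ret, w_mot, w_frame, w_ecc, w_adapt = weights
    return (w_ret * cues.retinal_deg
            + w_mot * cues.motion_shift_deg
            + w_frame * cues.frame_shift_deg
            + w_ecc * cues.eccentricity_bias_deg
            + w_adapt * cues.adaptation_shift_deg)

cues = PositionCues(retinal_deg=5.0, motion_shift_deg=1.0,
                    frame_shift_deg=0.5, eccentricity_bias_deg=-0.5,
                    adaptation_shift_deg=0.0)
print(perceived_position(cues))  # approximately 5.35
```

The proposal's fMRI experiments would, in effect, ask whether the weights in such a model differ across visual areas (heterogeneous position codes) or are shared (homogeneous).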
|
0.958 |
2019 — 2020 |
Whitney, David V |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Isolating and Mitigating Sequentially Dependent Perceptual Errors in Clinical Visual Search @ University of California Berkeley
Project Summary: When looking at an x-ray, radiologists are typically asked to localize a tumor (if present) and to classify it, judging its size, class, position, and so on. Importantly, radiologists examine hundreds of x-rays daily, seeing the images one after the other. A central underlying assumption of this task is that radiologists' percepts and decisions on the current x-ray are completely independent of prior events. Recent results show that this is not true: our perception and decisions are strongly biased by our past visual experience. Although serial dependencies have been proposed as a purposeful mechanism for achieving perceptual stability of our otherwise noisy visual input, they play a crucial and deleterious role in the everyday tasks performed by radiologists. For example, an x-ray containing a tumor can be classified as benign depending on the content of the previously seen x-ray. Given the importance and impact of serial dependencies in clinical tasks, in this proposal we plan to (1) establish, (2) identify, and (3) mitigate the conditions under which serial effects determine our percepts and decisions in tumor search tasks. In Aim 1, we will establish the presence of serial effects in four clinically relevant domains: tumor detection, tumor classification, tumor position, and recognition speed. In Aim 2, we will identify the specific boundary conditions under which visual serial dependence impacts tumor search in radiology. In Aim 3, building on the boundary conditions characterized in Aim 2, we will propose a series of task and stimulus manipulations to control and mitigate the deleterious effects of visual serial dependence on tumor search. As a result of these manipulations, visual search performance should improve in measurable ways (detection, classification, position, speed).
Aim 3 is particularly crucial because it will allow us to propose new guidelines that should greatly improve tumor recognition in x-ray images, making this task more effective and reliable. Taken together, the proposed studies in Aims 1, 2, and 3 will allow us to establish, identify, and mitigate the deleterious effects of serial dependencies in radiological search tasks, which could have a significant impact on the health and well-being of patients everywhere.
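The serial dependence the summary describes, where a current judgment is pulled toward the previous stimulus, can be illustrated with a toy simulation. The pull fraction `alpha`, the simple sequential update rule, and the noise parameter below are illustrative assumptions, not parameters from the proposed studies:

```python
# Illustrative toy simulation of serial dependence: each judgment is
# pulled a fraction alpha toward the previous stimulus. The pull
# strength and noise level are made-up parameters, not from the grant.

import random

def simulate_judgments(stimuli, alpha=0.2, noise_sd=0.0, seed=0):
    """Return judgments where each response is biased toward the
    previous stimulus by fraction alpha, plus Gaussian noise."""
    rng = random.Random(seed)
    judgments = []
    prev = None
    for s in stimuli:
        j = s if prev is None else (1 - alpha) * s + alpha * prev
        j += rng.gauss(0, noise_sd)
        judgments.append(j)
        prev = s
    return judgments

# A large tumor-size reading (10) followed by a small one (2): the
# second judgment is pulled upward toward the first stimulus.
print(simulate_judgments([10, 2], alpha=0.2))  # [10.0, 3.6]
```

In the last line, a small tumor-size judgment following a large one is biased toward the prior stimulus, the kind of sequential error Aims 1, 2, and 3 are designed to establish, bound, and mitigate.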
|
0.958 |