2008 — 2017 |
Bisley, James |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Neural Mechanisms Underlying the Guidance of Visual Attention @ University of California Los Angeles
Description (provided by applicant): We study visual attention because of its importance in visual perception. Behavioral paradigms, such as change blindness, have shown that while we think we perceive the whole visual world, we only take away information about the regions or objects we have attended. Because visual attention is a foundation of visual perception, it underlies most of our interactions with the perceived world, both our physical interactions and more intellectual ones, such as learning and memory. Thus, increasing our understanding of the mechanisms underlying the guidance of attention is critical in allowing us to gain a deeper insight into how the brain makes decisions based on both external and cognitive inputs and, in the long run, insight into the mechanisms underlying visual perception itself.

In this study, we test the hypothesis that the lateral intraparietal area (LIP) acts as a priority map: a map of the visual world that is used to guide the allocation of attention. The theory is that attention is allocated to the location on the map with the greatest activity. We have hypothesized that this map is used to guide both peripheral (covert) attention and eye movements (overt attention).

In aim 1, we will test this hypothesis by comparing the activity in LIP and visual area V4 under conditions in which covert attention is spread, focused, or biased to a particular location. We predict that activity in V4 will be modulated in a way that is directly related to the spatial distribution of activity in LIP: a peak of activity in LIP will produce strong attentional modulation in V4. We will further test this by stimulating LIP and showing predictable modulation in V4 activity.

In aim 2, we will test a prediction made by our model of the system, namely that once an object has been looked at, it is suppressed on the map so that the focus of gaze (i.e., overt attention) does not simply bounce between the two highest points on the map. We will test this by examining the activity in LIP to an identical stimulus under conditions in which it has or has not been looked at previously. We expect that the response will be significantly lower in the case in which the stimulus has already been seen. We will then test whether this reduction in activity is important to the behavior by stimulating LIP during the task. We expect that this will result in more eye movements being made to the visual stimulus at the stimulated location than to the same stimulus in trials in which stimulation does not occur. These experiments are aimed at understanding the role that LIP plays in the allocation of attention, and the results may be used to fine-tune our model of how attention is allocated.

PUBLIC HEALTH RELEVANCE: The results from this study will help us understand how the brain decides what is worth paying attention to. Given the importance of visual attention in everyday life and the deficits seen in patients with parietal lesions or attention deficit disorders, a greater understanding of this mechanism may aid in the development of pharmacological or behavioral methods to combat these problems.
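The priority-map account in aims 1 and 2 can be sketched in a few lines: gaze goes to the location with the greatest activity on the map, and each fixated location is then suppressed so that gaze does not bounce between the two highest peaks. The locations, priority values, and suppression factor below are hypothetical choices for illustration, not the project's actual model.

```python
# Toy sketch of a priority-map model of overt attention. All map values
# and the suppression factor are hypothetical, chosen only to illustrate
# the "suppress what has been looked at" prediction of aim 2.

def next_fixation(priority_map):
    """Overt attention is directed to the location with the greatest activity."""
    return max(priority_map, key=priority_map.get)

def fixate_and_suppress(priority_map, suppression=0.8):
    """Fixate the peak location, then suppress it on the map so gaze
    does not simply return to the same (or second-highest) peak."""
    loc = next_fixation(priority_map)
    priority_map[loc] *= (1.0 - suppression)  # reduced response to a seen item
    return loc

# Four locations with hypothetical priority values.
pmap = {"A": 0.9, "B": 0.8, "C": 0.5, "D": 0.3}
scanpath = [fixate_and_suppress(pmap) for _ in range(4)]
print(scanpath)  # ['A', 'B', 'C', 'D'] -- each location visited once
```

Without the suppression step, the scanpath would revisit the highest peak on every fixation; suppression is what lets gaze move on through the array, matching the prediction that LIP responses are lower to stimuli that have already been seen.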
|
2018 — 2021 |
Bisley, James |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Processing in Covert Attention @ University of California Los Angeles
Project Summary: For centuries, magicians and illusionists have known that if they distract your gaze and attention, you are unlikely to notice what they are doing in plain sight. This inattentional blindness is a result of our limited ability to take in everything about the visual scene and is traditionally illustrated in the lab by set-size effects in visual search: the more items you have to search through, the harder it is to find what you are looking for and the longer it takes. We have recently found that this limitation does not appear to involve visual processing in visual areas, but takes place as the visual information is passed up to association cortices. We hypothesized that divisive normalization in the lateral intraparietal area (LIP) of posterior parietal cortex plays a key role in this process. Our hypothesis combines our knowledge of the neurophysiology of attention, divisive normalization, and decision making to explain the behavioral effects associated with changes in set size. Specifically, we have suggested that when an animal is looking for an item, activity in LIP represents the accumulation of evidence from visual areas that the item is in the neuron's response field; if the activity reaches a threshold before a deadline, the animal indicates that the item is in that location. This accumulation is affected by divisive normalization: the more items in the visual world, the more activity across the area and the greater the normalization, resulting in lower accumulation rates. We know that visual attention enhances responses in visual areas, and we propose that this effective increase in gain in the input to LIP helps override the effective gain decrease due to divisive normalization. In this project, we will test this hypothesis using a visual search task utilizing moving dot stimuli.

Specifically, we will record from single neurons in areas MT (a motion processing area) and LIP while animals perform a visual search task in which the array contains 1, 2, or 4 moving dot patterns. If a target direction is present, the animals must look at it; otherwise, they must maintain fixation. We will vary the signal and the color of the dots in each patch and will use three attentional conditions. In the spatial attention condition, the animal is spatially cued, indicating which location the target will appear in, if it appears at all. In the feature-based attention condition, the animal is given a color cue, and the target, if it appears, will appear in a patch with dots the same color as the cue. In the spread attention condition, the animal is not given any cue. We will model the behavior (both percent correct and reaction times) based on our hypothesis and the activity in MT and LIP. We will then directly test this hypothesis by pharmacologically manipulating responses in the frontal eye field while recording behavioral and neuronal data from LIP and MT. We expect that our results will shed light on the neuronal mechanisms underlying our limited ability to process visual information in the scene, but they could also indicate a functional role for the attentional modulation that has been studied for almost 30 years.
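The proposed mechanism, in which divisive normalization lowers accumulation rates as set size grows while attentional gain partially offsets it, can be illustrated with a toy accumulator. All parameters below (input drives, semi-saturation constant, threshold, gain) are hypothetical values chosen for illustration; this is a sketch of the general idea, not the project's fitted model.

```python
# Toy illustration of set-size effects from divisive normalization:
# the target's input is divided by the pooled activity across the array,
# so more items mean a lower accumulation rate and a longer time to
# reach the decision threshold. All parameter values are hypothetical.

def steps_to_threshold(target_drive, distractor_drive, set_size,
                       attn_gain=1.0, sigma=0.5, threshold=1.0):
    """Steps for a noiseless accumulator, whose rate is the divisively
    normalized target input, to reach the decision threshold.
    sigma is an assumed semi-saturation constant."""
    driven = target_drive * attn_gain                # attention boosts the input
    pooled = driven + distractor_drive * (set_size - 1)
    rate = driven / (sigma + pooled)                 # divisive normalization
    evidence, steps = 0.0, 0
    while evidence < threshold:
        evidence += rate
        steps += 1
    return steps

# More items -> stronger normalization -> slower accumulation -> longer RT.
for n in (1, 2, 4):
    print(n, steps_to_threshold(1.0, 0.8, n))       # 2, 3, 4 steps

# A gain increase at the cued location partially offsets the normalization.
print(steps_to_threshold(1.0, 0.8, 4, attn_gain=1.5))  # 3 steps (vs 4 uncued)
```

The two prints capture the hypothesis in miniature: reaction time grows with set size because normalization shrinks the accumulation rate, and an attentional gain on the target's input claws some of that rate back.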
|