2019 — 2022
Hummel, John (co-PI); Lleras, Alejandro (co-PI); Buetti, Simona
CompCog: Template Contrast and Saliency (TCAS) Toolbox: A Tool to Visualize Parallel Attentive Evaluation of Scenes @ University of Illinois at Urbana-Champaign
One of the most common visual tasks humans perform is searching the world around them for objects with their eyes. This is a difficult task: the brain must separate objects from the background and process the color, shape, and size of every object in the scene. The aim of this research is to build a mathematical model, inspired by the human visual system, that can find objects in scenes despite this complexity. The model processes information in two ways. It uses central vision to obtain a fine-grained analysis of the object currently being looked at, and it uses peripheral vision, the area around and away from central vision, to analyze several objects at the same time, although less precisely than central vision. The ultimate goal of the project is a free, open-source software toolbox that anyone can use to visualize how the visual system processes complex scenes: it will determine which regions of a scene should be ignored and which regions the eyes should inspect. One strength of the proposal is that it makes specific predictions that can be tested in various fields of neuroscience. The work may also lead to improved visual aids for visually impaired individuals, because the model can guide users toward areas of a scene that are likely to contain the target object.
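To make the two-pathway idea concrete, here is a minimal Python sketch of one plausible reading of it. This is not the TCAS implementation; the scene representation, the eccentricity-dependent noise model, and all names are assumptions made for illustration: peripheral vision evaluates every object in parallel but with precision that degrades away from fixation, and central vision then verifies the most promising candidate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: 8 objects, each coded by 3 feature values (e.g., color,
# shape, size), at random positions in degrees of visual angle from center.
features = rng.normal(size=(8, 3))
positions = rng.uniform(-10, 10, size=(8, 2))
target = features[5].copy()          # the sought object happens to be object 5
fixation = np.array([0.0, 0.0])      # current gaze position

# Peripheral pass: all objects are evaluated in parallel, but feature
# estimates get noisier with eccentricity (distance from fixation).
ecc = np.linalg.norm(positions - fixation, axis=1)
noisy = features + rng.normal(scale=0.1 * ecc[:, None], size=features.shape)
similarity = -np.linalg.norm(noisy - target, axis=1)

# Central pass: fixate the most target-like object and analyze it precisely.
candidate = int(np.argmax(similarity))
confirmed = np.array_equal(features[candidate], target)
print(f"peripheral vision proposes object {candidate}; "
      f"central vision confirms target: {confirmed}")
```

Under this toy noise model, objects far from fixation are more likely to be mistaken for the target, so search unfolds as a sequence of peripheral proposals and central verifications.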
The starting point for the proposed work is a mathematically explicit model of goal-directed visual processing. The model incorporates two components of visual complexity: a parameter measuring the visual difference between each object in the scene and the object the observer is looking for (the target), and a parameter measuring how similar the objects in the scene are to one another. Preliminary work indicated that the model predicts well how long observers take to find targets in visually complex scenes. The first two goals of the present research evaluate additional components of visual complexity to improve the model and its ability to predict processing of more complex visual scenes. The experiments in Goals 1 and 2 will determine how to combine the visual attributes of objects (such as color, shape, and texture) and how to account for the contrast between objects and their background. Results from Goals 1 and 2 will directly guide the development of a computational toolbox that lets users visualize the processing of simple and complex scenes and predict where observers are likely to move their eyes as a function of their current goal (freely inspecting the scene or finding a specific object within it). The proposed work combines behavioral psychophysics and computational simulations (Goals 1 and 2) with toolbox implementation and eye-tracking validation (Goal 3). The merits of the toolbox are that: 1) it combines different types of visual processing (visual conspicuity contrast and target-template contrast); 2) it can predict eye movements over different time scales; and 3) it can evaluate the contribution of each type of processing to performance. This matters because the contribution of the two processes is known to vary with the search goal (free viewing vs. goal-directed search) and with the strategy observers adopt (active vs. passive search). Finally, another innovation of the toolbox is that it will be able to make predictions when the target is defined only in abstract terms, that is, when observers have only a vague description of the item they are supposed to find, something that is particularly challenging for current computer vision systems.
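As a rough illustration of how such a model can turn contrast parameters into search-time predictions, the sketch below follows the logarithmic form reported in Buetti and Lleras's earlier published work on efficient search, where each group of non-target objects (lures) contributes a term that grows with the log of its count and with its similarity to the target. The specific mapping from contrast to slope (D = 1/contrast) and the parameter values are assumptions for illustration only, not the model's actual equations.

```python
import numpy as np

def predicted_search_rt(lure_counts, lure_contrasts, a=400.0):
    """Predicted time (ms) to find the target among heterogeneous lures.

    Each lure type j contributes D_j * ln(N_j + 1) to the search time,
    where the slope D_j is larger for lures that are more similar to the
    target (i.e., have lower target contrast). `a` is a baseline time.
    """
    lure_counts = np.asarray(lure_counts, dtype=float)
    lure_contrasts = np.asarray(lure_contrasts, dtype=float)
    D = 1.0 / lure_contrasts  # assumed mapping: low contrast -> steep slope
    return a + np.sum(D * np.log(lure_counts + 1.0))

# Example: 6 target-similar lures (slow to reject) and 10 very different
# ones (fast to reject) yield a modest increase over the baseline.
print(round(predicted_search_rt([6, 10], [0.02, 0.20]), 1))
```

The key property this captures is the one described above: adding easily rejected, high-contrast objects barely slows search, while adding target-similar objects slows it substantially, and the effect of display size is logarithmic rather than linear.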
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.