2005
Giesbrecht, Barry L
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally preliminary, short-term projects and are non-renewable.
Contextual Influences On Visual Attention and Perception @ University of California Santa Barbara
DESCRIPTION (provided by applicant): Our visual world is complex and in a continual state of flux, changing over time and space. Contextual information, such as expectations, current knowledge, and regularities in the sensory environment, interacts with selective attention to facilitate processing of our sensory world and, ultimately, coherent and adaptive human behavior. The research proposed here is aimed at investigating how contextual information influences the deployment of visual attention under conditions of limited-capacity processing. The aim of Experiment 1 is to investigate the influence of explicit contextual cues (cues in the environment of which we are aware) on preparing the system dynamically for upcoming information. This experiment uses a novel modification of the typical dual-task paradigm used to study the attentional blink (AB), in which the first target serves as a cue to the identity of the second target. Critically, this experiment will bridge a key theoretical gap between the dual-task and task-switching literatures. The aim of Experiment 2 is to investigate the influence of implicit contextual cues (cues in the environment of which we are not aware) on preparing the system dynamically for upcoming information. Although this experiment uses the same basic design as the first, the relationship between the first and second targets will not be revealed to the subjects. In addition to providing more information about how context can reconfigure the system in a dual-task situation, this experiment will also provide insight into the relationship between attention and memory. By rigorously investigating the interaction between sources of visual context and selective attention, both studies aim to achieve a better understanding of conscious behavior.
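The modified attentional-blink paradigm described above can be made concrete with a small sketch. The trial generator below is purely illustrative: the stimulus sets, the T1-to-T2 cue mapping (CUE_MAP), and the stream parameters are assumptions, not values taken from the proposal.

```python
# Illustrative trial generator for a dual-task RSVP (attentional blink) design
# in which the first target (T1) cues the identity of the second target (T2).
# All stimuli and parameters here are hypothetical.
import random

DISTRACTORS = list("BCDFGHJKLMNPQRSTVWXZ")
# Hypothetical cue mapping: T1's identity predicts which T2 will appear.
CUE_MAP = {"1": "A", "2": "E", "3": "I"}

def make_trial(lag=3, stream_len=16, valid=True):
    """Build one RSVP stream; T2 follows T1 at the given lag (in items)."""
    t1 = random.choice(list(CUE_MAP))
    t2 = CUE_MAP[t1] if valid else random.choice(
        [v for k, v in CUE_MAP.items() if k != t1])
    t1_pos = random.randint(3, stream_len - lag - 2)
    stream = [random.choice(DISTRACTORS) for _ in range(stream_len)]
    stream[t1_pos] = t1
    stream[t1_pos + lag] = t2  # short lags fall inside the AB window
    return stream, t1, t2

if __name__ == "__main__":
    stream, t1, t2 = make_trial(lag=3, valid=True)
    print(" ".join(stream), f"(T1={t1}, T2={t2})")
```

In Experiment 2, the same generator would be used, but the CUE_MAP contingency would simply not be disclosed to subjects.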
2013 — 2014
Eckstein, Miguel Patricio; Giesbrecht, Barry L
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Neural Representation of Scene Context During Visual Search @ University of California Santa Barbara
DESCRIPTION (provided by applicant): Synthetic cues (e.g., arrows and boxes) that are predictive of a target location speed up search times and increase decision accuracy. Similarly, when observers search in natural scenes, a highly visible object (e.g., a house) that often co-occurs in natural environments with a sought target (e.g., a chimney) will influence eye movements and facilitate search when the target appears close to the object. While the last few decades have seen single-cell neurophysiology, human electrophysiology, and neuroimaging lead to great advances in understanding the effects of attention and synthetic cues on neural activity, little is known about the neural mechanisms mediating context effects during visual search in real scenes. Here, we propose to measure neural activity using functional magnetic resonance imaging (fMRI) while observers search for targets in real scenes and to use neural decoding methods (multivariate pattern analysis) and a novel variation of population receptive field methods to: 1) determine the brain areas (fMRI) that represent the spatial location of scene context and thus might mediate guidance of search in real scenes; and 2) evaluate whether the coding of scene context is automatic or modulated by top-down visual attention. The proposed work will improve our understanding of the neural mechanisms of scene context, which is arguably one of the most important strategies used by observers to optimize visual search in natural environments. Our results will also advance our understanding of the function and role of brain regions related to attention, objects/scenes, and contextual associations for visual search. These advances may help identify neural correlates of poor behavioral performance in patients with low vision and attentional deficits in an ecologically important task such as visual search in real scenes.
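The decoding step can be illustrated with a minimal multivariate pattern analysis (MVPA) sketch. The data here are random placeholders, and the choice of a linear support vector machine with 5-fold cross-validation is an assumption for illustration only, not the proposal's actual pipeline.

```python
# Minimal MVPA sketch: decode the spatial location of scene context
# (e.g., left vs. right visual field) from trial-by-voxel fMRI patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500
X = rng.standard_normal((n_trials, n_voxels))  # placeholder beta estimates
y = rng.integers(0, 2, n_trials)               # 0 = context left, 1 = context right

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validated decoding
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Above-chance accuracy in a given region would indicate that its activity patterns carry information about where the contextual object is, which is the logic behind Aim 1.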
2017 — 2018
Giesbrecht, Barry L
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Investigating the Interaction Between Fatigue States and Reward-Driven Attention @ University of California Santa Barbara
Project Summary: Reward-based learning can have a dramatic impact on behavior. Learning to associate particular stimuli in the environment with reward can help guide our attention to potentially rewarding outcomes. However, this can also be costly when reward-associated stimuli distract us from task-relevant information. The costs of reward-driven distraction include risks to health. Reward learning has been implicated in addiction, where it is thought to create long-term, persistent attention biases toward the substance of addiction and the environmental cues associated with that substance. The energy cost of resisting reward-driven behaviors, as well as the degree of fatigue, may affect the motivation to resist. It is possible that under conditions of fatigue the energy cost of resisting is higher and the motivation to resist is therefore reduced. It has even been suggested that fatigue is a "universal risk factor" for relapse in addiction, but this hypothesis has not been tested directly, and the mechanism underlying the predictive relationship between fatigue and relapse has not been identified. We propose that fatigue facilitates relapse by exacerbating automatic attention capture by reward-associated cues, and that reward-driven capture is facilitated by fatigue more generally, outside the context of addiction. However, there is as yet no evidence, in either humans or animal models, of how fatigue affects behaviors learned via reward. We will test three specific hypotheses: 1) physical fatigue exacerbates persistent attention capture by formerly reward-associated visual features; 2) physical fatigue exacerbates persistent attention capture by reward-associated visual features by impairing cognitive control; and 3) physical fatigue uniquely exacerbates attention capture by reward-associated features. This work integrates two lines of basic research from our lab, one on reward-driven attention and one on physical fatigue. We will use behavioral performance, EEG-fMRI, and cutting-edge multivariate analytical tools to elucidate the neural mechanisms of reward-driven attention that are modulated by fatigue states. The proposed work would provide evidence that physical fatigue uniquely modulates the persistent effects of reward learning on behavior, and that this effect is due to decreased cognitive control. This work would be the first to show that the state of an individual modulates reward-driven attention capture. This insight contributes to our understanding of the fundamental mechanisms by which reward learning affects attention, as well as to ways of mitigating the costly effects of reward-driven capture.
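Hypothesis 1 reduces to a simple behavioral contrast, sketched below. The condition names, effect sizes, and the use of a paired t-test are illustrative assumptions, not the proposal's specified analysis.

```python
# Sketch of the Hypothesis 1 contrast: does physical fatigue increase the
# response-time (RT) cost imposed by a formerly reward-associated distractor?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # hypothetical number of participants

# Capture effect = RT(distractor present) - RT(distractor absent), in ms,
# measured once while rested and once while physically fatigued.
capture_rested = rng.normal(25, 15, n)
capture_fatigued = rng.normal(40, 15, n)  # hypothesized larger capture

t, p = stats.ttest_rel(capture_fatigued, capture_rested)
print(f"Fatigued vs. rested capture: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```

A reliably larger capture effect in the fatigued condition would support the claim that fatigue exacerbates reward-driven attention capture.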
2018 — 2019
Turk, Matthew (co-PI); Hollerer, Tobias; Giesbrecht, Barry
N/A Activity Code Description: No activity code was retrieved.
EAGER: Attention-Aware Mixed Reality Interfaces @ University of California-Santa Barbara
This project develops new technologies that measure and model people's state of attention and applies them to virtual reality (VR) and augmented reality (AR) language-learning applications. To determine the degree of people's attention to central learning tasks presented in VR and AR, the project uses two sensor modalities that can index attention: electrical activity of the brain, as measured by electroencephalography (EEG), and eye-gaze behavior, as measured by eye trackers. Context recognition plays a key role in future VR and AR application scenarios, and users, as well as content providers, can benefit substantially from information about user attention states during information consumption. The project can inform the development of optimized VR and AR content, as well as individualized learning strategies. The project's motivating application is the optimization of language learning for users across the complete spectrum of ability. In the longer run, additional benefits include the creation of special tools for students with known attention deficits, as well as tools for increasing productivity and safety in various commercial and industrial applications.
This research explores the novel technologies necessary for attention-aware mixed reality (MR) interfaces. The project integrates signals from consumer-grade EEG and eye-tracking devices to determine whether, and how much, the participant's attention is divided (i.e., distracted or multi-tasking) or focused, and to assist appropriately. In this attention-assisted paradigm, users are monitored by EEG and eye-tracking devices while interacting with mixed reality user interfaces. Attention states are classified over time using both sensor modalities and can be spatially referenced in the user interface with eye tracking. Attention-activity feedback can be reported in real time while the user is interacting with the interface, or it may be stored and later visualized for a more thorough analysis of attentional patterns. The project delivers technology demonstrations of attention-aware interfaces for foreign-language vocabulary learning in VR (with an outlook toward AR possibilities). Exploratory experiments will uncover the possibilities for characterizing human attention states through EEG and eye-tracking data. The project opens new opportunities for advances in multi-modal interaction to contribute to MR interfaces and learning technologies.
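A minimal sketch of the attention-state classification idea follows. The features (EEG band power, gaze dispersion, fixation duration), the random-forest classifier, and all dimensions are assumptions chosen for illustration; the project's actual models may differ.

```python
# Sketch: classify time windows as "focused" vs. "divided" attention by
# fusing EEG band-power features with eye-tracking features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_windows = 200

eeg = rng.standard_normal((n_windows, 8))   # e.g., 4 channels x 2 bands
gaze = rng.standard_normal((n_windows, 2))  # dispersion, mean fixation duration
X = np.hstack([eeg, gaze])                  # fused multi-modal feature vector
y = rng.integers(0, 2, n_windows)           # 0 = focused, 1 = divided

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Because gaze is spatially referenced, the same window labels could be mapped back onto interface regions to indicate which elements were attended.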
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025
Hollerer, Tobias; Giesbrecht, Barry
N/A Activity Code Description: No activity code was retrieved.
Collaborative Research: HCC: Medium: HCI in Motion -- Using EEG, Eye Tracking, and Body Sensing for Attention-Aware Mobile Mixed Reality @ University of California-Santa Barbara
Mobile, wireless headsets for virtual and augmented reality, such as the Meta Quest 2 and Microsoft HoloLens 2, are becoming more widely used in many applications beyond video games, such as training, construction, and medicine. However, wearing these head-worn goggles while walking can make some people feel sick or distracted, which has even led to injury in some cases. This effect is similar to texting while walking, but potentially worse, because a person's entire periphery can be filled with distracting media elements. While previous research has investigated these issues when users are standing still or seated, it is unclear how problems unfold, and how they can be prevented, while users are in motion. Specifically, this project will investigate how and why virtual and augmented reality headsets affect attention and feelings of sickness. First, this work will record data, such as heart rate, brain waves, and the direction of users' eye gaze, while they are wearing virtual and augmented reality headsets and walking. Second, this project will develop ways to reduce sickness and distraction while walking with virtual and augmented reality headsets. This work will improve the safety of mobile virtual and augmented reality headsets, products in which virtually all big technology companies today heavily invest as possible companions or replacements for smartphones. This project will be introduced in courses and research mentorship projects at The University of Texas at San Antonio and the University of California at Santa Barbara, to advance the research training of both undergraduate and graduate students. Because both universities and research teams have a history of supporting many underrepresented minority students, the educational value of this project is expected to be high, especially in terms of recruiting and mentoring women and underrepresented minority students.

There is an increasing prevalence of mobile, immersive interfaces (e.g., mobile virtual reality (VR) and augmented reality (AR)) that may affect users' cognitive capacities and situational awareness, potentially leading to physical harm (e.g., impaired task performance, tripping over physical obstacles in VR, unsafe street crossings while seeing advertisements in AR). The landscape of human-computer interaction has expanded from fairly well-standardized stationary office configurations to more varied mobile and immersive settings involving active body movements (mobile and situated computing, AR, mobile VR) and simulated first-person perspective changes and motion experiences (immersive computing). To make matters worse, compared with more standardized platforms such as desktop and laptop UIs and tablet and smartphone interfaces, individual differences among users have a much larger usability impact in the context-driven, surround-focus usage scenarios found in mobile AR/VR. For example, motion sickness (i.e., cybersickness) in VR is known to inflict symptoms of widely varying severity, depending on the individual user. One serious consequence is that interaction designers have difficulty providing engaging general experiences that are universally usable by a wide variety of users. Despite the increasing prevalence of immersive technologies and their pitfalls, the precise cognitive and physiological mechanisms at play when 'computing in motion' are not well understood. This work is aimed at filling this knowledge gap.
The specific objectives are: 1) to assess the cognitive effects of interacting with mobile AR/VR while users are walking; 2) to provide automated tools that effectively reduce the cognitive demand of mobile AR/VR; and 3) to make mobile AR/VR safer and more usable. Based on preliminary data, the central hypothesis is that, through multi-modal sensing combined with machine learning approaches, mobile AR/VR applications can learn the characteristics of user behavior and provide real-time adaptations that reduce user error, increase ease of use, improve task performance, and reduce the impact of physical hazards. This work will improve the safety of mobile AR and mobile VR, paradigms in which virtually all big technology companies today heavily invest as a possible follow-up to the smartphone platform. Educational impact will occur through the incorporation of research outcomes into undergraduate and graduate courses offered at The University of Texas at San Antonio and the University of California at Santa Barbara, and through research training and mentorship opportunities for both undergraduate and graduate students. The courses include Machine Learning, Deep Learning for Visual Computing, Human-Computer Interaction, and Mobile Application Programming. Because the project integrates a topic of high social impact with cutting-edge machine learning and human-computer interaction research, along with proven mentorship strategies, the educational impact of the project will be high, especially in terms of recruiting and mentoring women and underrepresented minority students.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
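The adaptive-interface idea in objective 2 above can be sketched as a simple sense-predict-adapt loop. Everything below (feature names, weights, the 0.7 threshold, the adaptation policy) is a hypothetical stand-in for a trained model, not the project's implementation.

```python
# Sketch of a real-time adaptation loop: estimate momentary cognitive load
# from multi-modal sensor features and simplify the AR/VR interface when
# load is high while the user is walking.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    heart_rate: float        # beats per minute
    eeg_theta_power: float   # normalized frontal theta power (0-1)
    gaze_dispersion: float   # degrees of visual angle
    walking: bool

def predict_load(f: SensorFrame) -> float:
    """Stand-in for a trained model: map features to a 0-1 load estimate."""
    score = (0.4 * f.eeg_theta_power
             + 0.3 * min(f.gaze_dispersion / 5.0, 1.0)
             + 0.3 * min(f.heart_rate / 180.0, 1.0))
    return min(max(score, 0.0), 1.0)

def adapt_interface(f: SensorFrame, threshold: float = 0.7) -> str:
    if f.walking and predict_load(f) > threshold:
        return "reduce UI density; move content out of the periphery"
    return "show full interface"

print(adapt_interface(SensorFrame(150.0, 0.9, 4.0, walking=True)))
```

In a deployed system, the hand-tuned weights would be replaced by a model learned from the sensor data collected in objective 1.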