Area: motion perception, attention to motion
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
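The page does not document how its matching algorithm computes the probability scores shown below. Purely as an illustration, a minimal name-similarity scorer might look like the following sketch; the normalize and match_score functions are hypothetical and are not the site's actual method.

```python
# Rough illustration only: scores a grant-researcher match purely by
# name similarity (the site's real algorithm is not described here).
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and sort name tokens so that
    "Seiffert, Adriane E" and "Adriane E. Seiffert" compare equal."""
    cleaned = "".join(ch if ch.isalnum() else " " for ch in name)
    return " ".join(sorted(cleaned.lower().split()))

def match_score(researcher: str, grant_recipient: str) -> float:
    """Return a 0-to-1 similarity between a researcher's name and the
    recipient name on a grant record (1.0 = exact match after cleanup)."""
    return SequenceMatcher(None, normalize(researcher),
                           normalize(grant_recipient)).ratio()

print(match_score("Adriane E. Seiffert", "Seiffert, Adriane E"))  # 1.0
print(match_score("Adriane E. Seiffert", "Seifert, A"))           # lower
```

A real linkage system would also weight evidence such as institution, co-investigators, and research area, but a score near 1 in any such scheme would be displayed as a high-probability match.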
High-probability grants
According to our matching algorithm, Adriane E. Seiffert is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
2000 — 2002 | Seiffert, Adriane E | F32 | Target Recognition in Visual Search | 0.952

Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas.

From the moment you open your eyes, your visual system begins to efficiently analyze and interpret the barrage of input both simultaneously and sequentially. You are almost immediately aware of salient objects in your view, usually because such objects are visually distinct from other objects and the background. Recognition of visual objects, however, must engage many sequential stages of analysis, including segmentation of the visual array into groups and objects, deployment of attention to relevant attributes, identification of input by comparison to memory, and finally decision processes that match input to goals for the selection of a response. Visual search has traditionally been used to study early perceptual properties of the visual system and the factors of attentional deployment. Here, we propose to use a new visual search paradigm to investigate these other stages of analysis. This research will demonstrate three properties of recognition in visual search: 1) how grouping principles are used in segmenting targets, 2) the role of identification in target detection, and 3) the extent to which decision processes determine search speed.
2003 — 2005 | Seiffert, Adriane E | R01 | Attentional Tracking and the Perception of Control | 1

Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.

DESCRIPTION (provided by applicant): The long-term objective of the project is to understand how visual attention interacts with motion perception and visuo-motor systems to track the motion of target objects. The specific aims of the proposed research are: 1) to describe the mechanism that is used when visual attention tracks object movement; 2) to determine what factors cause errors in attentional tracking; and 3) to investigate whether attentional tracking is critical for the active manipulation of objects and the perception of control. Behavioral and neuroimaging experiments are proposed to elucidate the cognitive structure and neural basis of attentional tracking.

When visual attention is used to track objects, selection mechanisms must interact with perceptual systems coding motion trajectories and object positions. Our work has shown that noticing the change in position of an object constitutes a motion detection system that is dependent on visual attention. Proposed experiments will relate attentional tracking to this system. Specifically, we will compare the spatial limits of position-dependent motion detection to the spatial resolution of attention. Also, we will measure brain activity with functional magnetic resonance imaging (fMRI) to determine if the same brain areas are involved in position-sensitive motion perception and attentional tracking.

This project will also investigate why errors are made in attentional tracking. Evaluation of attentional tracking with displays containing different types of motion noise will test whether attentional tracking inappropriately integrates motion. Neuroimaging studies will measure brain activity in response to errors to determine if errors are caused by mistakes in coding motions or lapses in the continuity of attentional movements. The last series of experiments will investigate whether attentional tracking mediates the use of visual feedback for the perception of control during manipulation of an object. Proposed experiments will also test whether brain activity correlated with perceived control is coincident with activity related to attentional tracking or activity involved in intentionality.

The intellectual contribution of this project will be a better understanding of how visual attention tracks objects, why it fails, and how it is employed in the perception of control. This information will be valuable to a number of practical applications, such as human-machine interface design, as well as clinical issues, such as the development of visual prostheses and treatments for visual attention deficits.
2016 — 2018 | Biswas, Gautam (co-PI); Levin, Daniel; Seiffert, Adriane | N/A | EXP: Linking Eye Movements With Visual Attention to Enhance Cyberlearning | 0.915

Activity Code Description: No activity code was retrieved.

The Cyberlearning and Future Learning Technologies Program funds efforts that support envisioning the future of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. This project will lay the groundwork necessary for incorporating eye movements into cyberlearning. Although hardware and software solutions are rapidly advancing the ability to detect and track cyberlearners' eye movements, the scientific understanding of the link between these eye movements and actual learning remains tentative. This issue is particularly important because research demonstrates surprising limits to the visual information that people take in: even when it can be demonstrated that they have looked at something, this is no guarantee that learners gain knowledge of what they have seen. This project will address this problem in two ways. First, the researchers will develop a cognitive theory that can help specify how eye movements reveal what cyberlearners have absorbed when they view and interact with technology-based learning systems. Second, the researchers will develop a novel software application that helps cyberlearning content creators incorporate assessment of eye movements into their practice. These projects will converge not only to develop cognitive theory that can help cyberlearners achieve more effective interactions, but also to enrich cognitive theory with input from real-world cyberlearning practitioners who struggle every day with the need to understand the sometimes confounding link between showing a learner something and the learner's actual ability to understand and remember what they have seen.

In particular, the investigators hypothesize that the link between fixation patterns and learning is mediated by visual modes that vary the relationship between concrete coding of visual properties and abstract focus on causal relationships and the goals of actions. The project will include experiments in which learners have their eyes tracked while they view a screen-captured information technology lesson. Some learners will be induced to deploy an "encoding" mode in which they focus on the specific sequence of steps needed to complete the task, while other learners will view the same materials using a "causal" mode in which they focus on the concepts underlying the lesson. Initial research has demonstrated significant differences in fixation patterns in these tasks (the strongest of these is that learners follow the instructor's mouse movements more closely in the encoding mode), and the current project will test whether these modes are associated with different patterns of visual and conceptual learning. The project will leverage these results by incorporating mode-revealing analytics into a novel software application that allows content creators to record screen-capture videos of their lessons while recording their own eye movements. In addition, a panel of viewers will be equipped with their own eye trackers and will view the content creators' lessons. Viewer eye movements will be returned to content creators, who will be able to view fixation patterns in the application, along with analytics based on findings from the visual mode experiments. The prototype system will be integrated with an existing learning technology, courseware for computer science education titled "Betty's Brain," and deployed in both formal and informal learning environments, including the Nashville Adventure Science Center.
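The abstract above does not specify how its mode-revealing analytics are computed. As a rough sketch of one measurement it mentions, how closely a learner's gaze follows the instructor's mouse cursor, the function below computes the mean gaze-to-cursor distance from time-aligned samples; the function name, data format, and interpretation are assumptions for illustration, not the project's actual analytic.

```python
# Hypothetical sketch: smaller mean gaze-to-cursor distance means closer
# following, which the initial research associates with the "encoding" mode.
import math

def mean_gaze_cursor_distance(gaze, cursor):
    """gaze, cursor: equal-length lists of (x, y) screen coordinates
    sampled at the same timestamps. Returns the mean distance in pixels."""
    assert len(gaze) == len(cursor) and gaze, "need aligned, non-empty samples"
    total = sum(math.dist(g, c) for g, c in zip(gaze, cursor))
    return total / len(gaze)

# Example: a learner whose gaze stays near the cursor scores low.
gaze = [(100, 100), (140, 110), (180, 130)]
cursor = [(105, 102), (150, 115), (200, 140)]
print(f"mean gaze-cursor distance: {mean_gaze_cursor_distance(gaze, cursor):.1f} px")
```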