2002 — 2005 |
Levin, Daniel |
Thinking and Seeing: Visual Metacognition in the Legal Process
Recently, a number of researchers have emphasized the impact that beliefs about mental processes can have on the legal process. Research on metamemory (e.g., people's understanding of and control over memory) is therefore critical for understanding how jurors might incorrectly weigh evidence. Particularly important are situations where beliefs about mental functioning diverge from people's actual capabilities. Recent research has demonstrated that people are unaware of visual information that they do not attend to, and that they typically attend to a very small proportion of the visual information in a given scene. One particularly striking manifestation of this failure occurs when subjects have difficulty detecting large between-view visual changes. This finding is referred to as "change blindness" (CB), and it occurs regardless of whether the subject is actively searching for changes, and even when the changing object is the current focus of the subject's attention. In part, interest in these findings is based on the degree to which they conflict with intuition: many people are incredulous when they discover that seemingly obvious changes are missed by subjects. Three sets of experiments in this proposal explore visual metacognition, documenting the scope of incorrect beliefs about visual information processing. A first set of experiments tests the degree to which beliefs about visual organization and intention underlie these metacognitive errors in vision. In these experiments, subjects will make estimates about changes to well-organized natural scenes, jumbled scenes, and object arrays. A second set of experiments compares predictions about picture memory with predictions about on-line visual processes such as change detection to determine why subjects underestimate their performance in the former case, while they overestimate their performance in the latter case.
The final experiments in this proposal explore the possibility that inaccurate visual metacognition can lead jurors to misevaluate evidence in criminal and civil cases. Just as incorrect beliefs about the confidence-accuracy correlation can lead jurors to misjudge eyewitness evidence, inaccurate beliefs about vision may lead them to misunderstand what someone "should have seen."
2008 — 2014 |
Adams, Julie (co-PI) [⬀] Biswas, Gautam (co-PI) [⬀] Saylor, Megan (co-PI) [⬀] Levin, Daniel |
Thinking About, and Interacting With Living and Mechanical Agents
Recent advances in artificial intelligence and robotics are confronting individuals of all ages with a series of category-defying entities that combine features of living and nonliving things. As such, these entities increasingly challenge people's basic understanding of mind and intelligence. The goal in this project is to explore adults' and children's beliefs about a range of living and mechanical agents, and to test how these beliefs affect people's ability to track, remember, and understand mechanical agents in two specific computer interfaces. First, it will explore a computer interface designed to allow a human operator to interact with and control a set of semi-autonomous robots. The second environment will be a teachable agent system in which middle school children learn about complex science phenomena, such as river ecosystems, by actively teaching an animated software agent.
This project represents one of the few research programs to empirically test people's understanding of living and artificial agents, and it will employ a conceptual framework that starts with naïve understandings of mind (e.g., "Theory of Mind") and applies them to engineered environments where these understandings are used. This framework describes the conditions under which participants apply different agent concepts, and can help explain how these beliefs might change over time as people interact with novel agents. Although the framework is not yet a complete theory, it represents a broadened approach to reasoning about both typical and novel living and mechanical agents that goes beyond existing dual-process models of Theory of Mind. These experiments also make links between concepts about agents and the deployment of these concepts in realistic high-load perceptual tasks, so they can make an important contribution to our basic understanding of how knowledge affects vision.
The findings from this project may have important implications for educating both children and adults to deal with novel intelligent decision-making technologies that move beyond the simple command-and-response cycle inherent in most current computer applications. Previous research by the PIs has already documented ways in which different people vary in their approach to these technologies (e.g., older and younger adults seem to have subtly different beliefs about the nature of computer intelligence), so this project may help improve the accessibility of novel agent technologies to a wide range of populations. More generally, because this research uses interactive educational tools and realistic robot-command systems to explore agent understanding, it has the potential to improve user interfaces supporting social learning environments that focus on self-regulated learning, and that facilitate the effectiveness of human-machine emergency response teams. These technologies confront users with challenges to their most basic understandings of intelligence and thinking, and our research has the potential to guide both children and adults as they become successful users and creators of the interactive technologies of the future.
2016 — 2018 |
Biswas, Gautam (co-PI) [⬀] Levin, Daniel Seiffert, Adriane (co-PI) [⬀] |
Exp: Linking Eye Movements With Visual Attention to Enhance Cyberlearning
The Cyberlearning and Future Learning Technologies Program funds efforts that support envisioning the future of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. This project will lay the groundwork necessary for incorporating eye movements into cyberlearning. Although hardware and software solutions are rapidly advancing the ability to detect and track cyberlearners' eye movements, the scientific understanding of the link between these eye movements and actual learning remains tentative. This issue is particularly important because research demonstrates surprising limits to the visual information that people take in: Even when it can be demonstrated that they have looked at something, this is no guarantee that learners gain knowledge of what they have seen. This project will address this problem in two ways. First, the researchers will develop a cognitive theory that can help specify how eye movements reveal what cyberlearners have absorbed when they view and interact with technology-based learning systems. Second, the researchers will develop a novel software application that helps cyberlearning content creators to incorporate assessment of eye movements into their practice. These projects will converge not only to develop cognitive theory that can help cyberlearners achieve more effective interactions, but also to enrich cognitive theory with input from real-world cyberlearning practitioners who struggle every day with the need to understand the sometimes confounding link between showing a learner something and learners' actual ability to understand and remember what they have seen.
In particular, the investigators hypothesize that the link between fixation patterns and learning is mediated by visual modes that vary the relationship between concrete coding of visual properties and abstract focus on causal relationships and the goals of actions. The project will include experiments in which learners have their eyes tracked while they view a screen-captured information technology lesson. Some learners will be induced to deploy an "encoding" mode in which they focus on the specific sequence of steps needed to complete the task, while other learners will view the same materials using a "causal" mode in which they focus on the concepts underlying the lesson. Initial research has demonstrated significant differences in fixation patterns in these tasks (the strongest of these is that learners follow the instructor's mouse movements more closely in the encoding mode), and the current project will test whether these modes are associated with different patterns of visual and conceptual learning. The project will leverage these results by incorporating mode-revealing analytics into a novel software application that allows content creators to record screen-capture videos of their lessons while recording their own eye movements. In addition, a panel of viewers will be equipped with their own eye trackers and will view the content creators' lessons. Viewer eye movements will be returned to content creators, who will be able to view fixation patterns in the application, along with analytics based on findings from the visual mode experiments. The prototype system will be integrated with an existing learning technology, courseware for computer science education titled "Betty's Brain," and deployed in both formal and informal learning environments, including the Nashville Adventure Science Center.