1985 — 1987 |
McNamara, Timothy |
N/A |
The Representation and Integration of Spatial and Propositional Knowledge in Memory |
1989 — 1997 |
McNamara, Timothy |
N/A |
Mental Representations of Spatial and Nonspatial Relations
9222002 McNamara: This research will investigate aspects of human memory. One line of experiments will investigate the nature of spatial memory. A series of nine experiments will attempt to determine whether spatial memories are "orientation specific" or "orientation free." In orientation-specific representations, locations are specified in view-specific reference frames and the representation has a canonical orientation. One's memory of a map or of the globe is an example of an orientation-specific representation because it is oriented with north at the top. Orientation-free representations, on the other hand, are not view dependent and do not have a canonical orientation. An example of such a representation is memory of the interior of one's home; such memories typically do not have a canonical orientation. An obvious difference between these types of memories is the number of perspectives we had on the spatial layout when it was learned (typically only one for a map, but several for natural environments). The experiments will test this explanation and other explanations of the differences between memories of small- and large-scale spaces.

Another series of four experiments will investigate the causes of asymmetries in distance estimations. Previous research has shown that when people estimate distances from memory, estimates from salient landmarks to less salient locations (e.g., from the Washington Monument to the Pension Building) are larger than estimates from the less salient building to the landmark. The goal of the proposed experiments is to determine why these asymmetries occur. These studies of spatial memory will provide crucial new knowledge about the basic mechanisms that underlie our ability to recognize scenes and to navigate in familiar and unfamiliar environments. This knowledge will shed light on the causes of individual differences in spatial ability, and will aid in the development of freely moving robotic systems.

The second line of experiments will investigate the causes of associative priming. When people retrieve information from memory, performance is often affected by previous retrieval operations or by the context in which the retrieval takes place. In the lexical decision task, for example, one must decide whether a string of letters is a word or a nonword (e.g., blit). It has been widely documented that lexical decisions on a word are faster and more accurate when the word is preceded by an associated word (e.g., hospital-doctor) than when the word is preceded by an unassociated word (e.g., library-doctor). This facilitation is called "priming," and it occurs in nearly all tasks that require memory retrieval. The goal of this line of research is to test various theories of associative priming. The ubiquity of priming indicates that it is caused by a fundamental mechanism of memory retrieval. Thus, an understanding of the causes of priming will provide important new insights into basic properties of human memory.
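For readers unfamiliar with the class of theories at issue, associative priming is often explained by spreading activation. The following is a minimal sketch in that spirit (e.g., Collins and Loftus, 1975), not this project's model; the network, association strengths, and timing parameters are all hypothetical.

```python
# Illustrative sketch of a spreading-activation account of priming.
# The network, weights, and timing parameters are hypothetical.

# Associative network: edge weights are association strengths in [0, 1].
ASSOCIATIONS = {
    ("hospital", "doctor"): 0.8,
    ("hospital", "nurse"): 0.7,
    ("library", "book"): 0.9,
}

def predicted_rt(prime: str, target: str,
                 base_ms: float = 600.0, max_savings_ms: float = 100.0) -> float:
    """Predicted lexical-decision time (ms) for `target` after `prime`.

    Activation spreading from the prime pre-activates associated words,
    so an associated prime speeds the decision; an unassociated prime
    leaves the baseline time unchanged.
    """
    strength = ASSOCIATIONS.get((prime, target), 0.0)
    return base_ms - max_savings_ms * strength

print(predicted_rt("hospital", "doctor"))  # associated prime:   520.0 ms
print(predicted_rt("library", "doctor"))   # unassociated prime: 600.0 ms
```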
1998 — 2007 |
McNamara, Timothy P. |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Mental Representations of Spatial Relations
DESCRIPTION (provided by applicant): The research described in this proposal investigates human spatial memory. The long-term goal of the project is to understand how spatial relations among objects in the environment are represented in memory and how remembered spatial relations are used to guide navigation. The specific aims of the project are to advance the scientific understanding of (a) how location and orientation are updated in memory as people locomote in a previously learned environment; (b) the mental representations and processes used in spatial pointing tasks; (c) the extent to which spatial relations are represented more strongly in directions congruent than in directions incongruent with intrinsic axes of a spatial layout; (d) the acquisition of memories of large-scale environments; (e) whether learning a new environment produces multiple representations in memory; and (f) the nature of spatial memories acquired from non-visual modalities, and how they compare to spatial memories acquired visually. Participants will learn locations of objects in spaces ranging in size from a table-top to a large city park. Layouts will be learned by visual inspection, visually guided locomotion, or manual exploration without visual guidance. After learning the layouts, participants will take part in tasks that require them to point to target objects from their actual location or from imagined standing locations and facing directions, to discriminate familiar and novel views of a recently learned spatial layout from views of other spatial layouts, or to decide whether objects are in one layout vs. another. Individual differences, including gender-related effects, will be examined in all experiments. This basic science provides a theoretical and empirical foundation for understanding individual differences in spatial ability, and the debilitating deficits in spatial memory created by stroke, traumatic brain injury, and dementia.
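To make the pointing tasks concrete, here is a minimal sketch of the geometry behind a single trial: given an imagined standing location, an imagined facing direction, and a target object, compute the egocentric pointing angle. The coordinates, object names, and helper function are illustrative assumptions, not materials from the project.

```python
# Illustrative sketch of the geometry behind one pointing trial;
# coordinates, names, and the function itself are hypothetical.
import math

def pointing_angle(stand, face, target):
    """Egocentric angle (degrees) from the imagined heading to `target`.

    `stand` is the imagined standing location, `face` a point the person
    imagines facing, `target` the object to point to. Positive = clockwise
    (to the right); result is wrapped into [-180, 180).
    """
    heading = math.atan2(face[1] - stand[1], face[0] - stand[0])
    bearing = math.atan2(target[1] - stand[1], target[0] - stand[0])
    deg = math.degrees(heading - bearing)   # clockwise-positive difference
    return (deg + 180.0) % 360.0 - 180.0

# "Imagine standing at the lamp, facing the clock; point to the shoe."
print(pointing_angle(stand=(0, 0), face=(0, 1), target=(1, 1)))  # 45.0
```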
2007 — 2013 |
Rieser, John (co-PI); McNamara, Timothy; Carr, Thomas; Bodenheimer, Robert |
N/A |
HCC: Design and Evaluation of Spatially Compelling Virtual Environments
This interdisciplinary project investigates human cognition of spaces in order to improve virtual environments, from both a user's and an author's perspective. The objectives are to (1) improve virtual environments so that better learning can occur in them, and (2) develop authoring methods for virtual environments informed by the cognitive demands people face when learning spaces. This research should advance the design and authoring of virtual environments by leveraging human cognitive capabilities. The program seeks to develop a system that increases the user's sense of presence and sensitivity to the environmental scale of virtual environments. It further seeks to develop locomotion interfaces that assist in exploring large virtual environments from within small physical ones. A goal is to employ human-centered representations for locomotion in virtual environments and to develop methods for skill acquisition in virtual environments. This research also advances the scientific understanding of human cognition and learning. The proposed studies will be informative about the broad role that environmental geometry and self-representation play in perception, orientation, and navigation, while controlling factors that are extremely difficult, if not impossible, to control in the real world. A rigorous evaluation program for all components of the project is planned.
This work is important because virtual environments provide people with opportunities to experience places and situations remote from their actual physical surroundings. Virtual environments allow the simulation of real-world events in a controllable and re-usable setting. They potentially allow people to learn about an environment that, for reasons of time, distance, expense, or safety, would not otherwise be available. Virtual environments could have a huge impact in education, entertainment, medicine, architecture, and training, but they are not widely used because of their expense and delicacy. The research program described here should significantly improve the quality of learning in virtual environments, reduce the time and cost of authoring them, and overcome likely impediments to their widespread use. Moreover, the project builds a scientific program to develop a better understanding of the cognitive capabilities of humans in immersive virtual environments, and does so in a way that will inform the design process for such environments and our understanding of how humans reason about space.
2008 — 2011 |
Adams, Julie (co-PI); McNamara, Timothy; Rieser, John (co-PI); Bodenheimer, Robert; Sarkar, Nilanjan (co-PI) |
N/A |
MRI: Acquisition of Instruments for Interaction, Learning, and Perception in Virtual Environments
Proposal #: CNS 08-21640
PI(s): Bodenheimer, Robert E.; Adams, Julie A.; McNamara, Timothy P.; Rieser, John J.; Sarkar, Nilanjan
Institution: Vanderbilt University, Nashville, TN 37235-7749
Title: MRI/Acq.: Instruments for Interaction, Learning, and Perception in Virtual Environments

Project Proposed: This project, acquiring a high-fidelity instrument designed to facilitate and assess perception, interaction, and learning in immersive environments, pursues an ambitious research agenda dealing with people, their interactions with virtual environments, and the design factors underlying successful environments. The work aims to build a program to develop a better understanding of the cognitive capabilities of humans in immersive virtual environments, to inform the design process of such environments, and to understand how humans reason about space. The instrument will be shared among diverse, interdisciplinary groups collaborating in the area of virtual environments, including Computer Science and Engineering (graphics, animation, artificial intelligence, human factors, robotics, etc.) and the Psychological Sciences (cognitive psychology, child development, rehabilitation engineering, brain sciences, etc.). The component parts of the instrument (comprising optical motion capture equipment, a head-mounted display with binocular eye-tracking, and high-performance wireless data gloves) allow the measurement, tracking, rendering, and animation of subjects in virtual environments (from their overall position, to their posture, to the actions of their hands and fingers), coupled with the measurement of their gaze. The project ranges from low-level research in how people experience virtual environments to user evaluations involving high-level interface and simulation design. Children with autism will also be studied.

Broader Impacts: This project improves the quality of learning in virtual environments, reducing the time and cost of authoring them and overcoming likely impediments to their widespread use. The instrument enables courses in robotics currently infeasible with real robots and provides hands-on experience for students. The work builds a scientific program to develop a better understanding of the cognitive capabilities of humans in immersive virtual environments and may be applied to understanding the development of children's abilities to reason about space and to coordinate perceptual-motor skills as they develop. Moreover, it may help in treating autism spectrum disorder.
2015 — 2018 |
McNamara, Timothy; Bodenheimer, Robert |
N/A |
CHS: Small: Collaborative Research: Improving Wayfinding and Navigation in Immersive Virtual Environments
The objective of this research is to enable more effective design and use of virtual worlds. Virtual worlds are important in many domains, including architecture, education, medicine, simulation, and training. However, compared to the real world, virtual worlds are hard to move through effectively and pose challenges to navigation. If virtual worlds are going to be widely deployed, particularly for applications in education, training, and simulation, then these problems must be solved. This work will generate essential discoveries that improve the processes of wayfinding (orienting and navigating from place to place) and locomotion through immersive virtual worlds. It thus provides a critical and synergistic complement to the recent advent of low-cost, commodity-level virtual reality equipment.
This research is multi-disciplinary, employing methods from computer science, cognitive science, and geographical information science to accomplish these objectives. A transformation of wayfinding and navigation for large immersive virtual worlds can be accomplished by studying locomotion modes in conjunction with the spatial characteristics of virtual worlds and the individual differences and abilities of their users. In this work, virtual worlds are described and analyzed in terms of their connectivity, visual access, and integration using formal measures summarized as space syntax. Likewise, individuals traveling through virtual worlds may navigate and reason about space quite differently, and these differences can be quantified and measured. The goal is to develop locomotion modes that take into account both the characteristics described by space syntax and the individual attributes of users. Truly effective design and use of virtual worlds depend on an understanding of how an individual's abilities relate to the characteristics of the virtual world and the mechanisms for moving about in it. This interdisciplinary approach examines wayfinding and navigation in a multi-factor way, combining a focus on locomotion modes, a focus on the space syntax (characteristics) of the virtual world, and a focus on the abilities and differences of individual users. In addition to improving the design and use of virtual worlds, this work will impact multiple disciplines: it not only advances computer graphics and virtual reality, but also informs the fields of cognitive science and geographical information science.
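For readers unfamiliar with space syntax, the sketch below illustrates two of the measures named above (connectivity and an integration-like score) on a toy layout graph. Real space-syntax analyses operate on axial or visibility graphs; the layout here and the simplified integration formula (inverse mean topological depth) are assumptions for illustration only, not the project's actual measures.

```python
# Illustrative toy space-syntax-style analysis; the layout and the
# integration formula are simplifying assumptions.
import networkx as nx

# Toy virtual world: nodes are regions, edges are direct connections.
layout = nx.Graph([
    ("atrium", "hall_a"), ("atrium", "hall_b"),
    ("hall_a", "office"), ("hall_b", "office"),
    ("hall_b", "storage"),
])

for node in sorted(layout.nodes):
    # Topological depth from this node to every other node.
    depths = nx.single_source_shortest_path_length(layout, node)
    mean_depth = sum(depths.values()) / (len(layout) - 1)  # excludes self (depth 0)
    print(f"{node:8s} connectivity={layout.degree(node)} "
          f"integration~{1.0 / mean_depth:.2f}")
```

Well-connected, central regions (here, hall_b) score high on both measures; dead-end regions (storage) score low, which is the kind of structural difference the research relates to users' wayfinding performance.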
2018 — 2020 |
Rieser, John (co-PI); McNamara, Timothy; Bodenheimer, Robert; Narasimham, Gayathri (co-PI) |
N/A |
CRI: II-EN: High-Fidelity Real-Time Avatars for Virtual and Mixed Reality
Technology that can create compelling immersive virtual environments is now available on the general market, but it has limitations, and important frontiers for virtual environments require high-quality tracking equipment. One such frontier is the ability to build characters that move accurately in a virtual environment. A second is the ability to explore large virtual environments using methods that seem natural; our goal is to tailor these methods to the individual user. This project will equip a laboratory with instrumentation that will enable fundamental advances on these two problems. It will also train graduate students and provide research opportunities for a number of undergraduates.
This research will equip a laboratory with a high-quality motion capture system that will allow researchers to pursue novel scientific questions involving the perceptual fidelity of virtual environments, examine theoretical questions involving users and their relationship to their self-avatars, and determine how individual differences among users can be effectively utilized to provide better locomotion and navigation in virtual worlds. In particular, this equipment will enable research on how to design high-fidelity virtual environments and support understanding of the components of fidelity that facilitate learning and transfer of training, for which self-avatars are a critical component. Likewise, the equipment will enable significant progress on locomotion methods for improved navigation and wayfinding in large virtual environments by allowing examination of how spatial information sources are used by individuals as they move through virtual worlds.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025 |
McNamara, Timothy |
N/A |
Collaborative Research: Decision Processes in Human Navigation
Navigating successfully from one place to another can require difficult decisions. We often need to consider the costs and the benefits of possible routes. For example, the best walking path between downtown buildings may be a short outdoor path when the weather is pleasant or a longer path through indoor passageways during overly hot or cold months. We also use our knowledge to make decisions about where to search for something that we need. Experienced drivers know, for example, that a strip mall is a better place to find a gas station than is a residential neighborhood. We also may need to decide whether we know an environment well enough to rely on our memories and sense of direction or should use the mapping app on our cell phones. To make good choices and to keep from getting lost, we need to rely on several sources of information. One important source is what we see, such as roads, trails, and familiar places. Another important source is from our bodies: As we walk and turn, even with our eyes closed, we have a sense of how far we have traveled and which directions we are facing. These sources of information tell us where we are, where we are headed, and how hard it will be to get there (e.g., climbing a steep hill vs. walking around it). This research investigates how people make these sorts of decisions, deal with conflicting sources of information (e.g., our sense of direction indicates that we should turn left but a familiar landmark indicates that we should turn right), and use navigation aids (e.g., an overhead map of the environment). The investigators will use mathematical models of people’s choices and actions to understand how the human brain stores and uses spatial knowledge for navigation. The results can inform the use of technology, ranging from movement interfaces for video games to GPS-enabled maps.

The investigators explore the ways in which navigational decisions and actions are affected by (a) spatial cues about the navigator’s location and the goal location (e.g., landmarks in the environment, body-based cues from walking and turning), (b) costs associated with possible choices (e.g., effort, time), and (c) individual characteristics of the navigator (e.g., spatial ability, risk tolerance). Experiments use immersive virtual reality to maintain tight control over the visual scene while allowing for full physical movement during navigation; this technology allows navigators to walk and turn in virtual environments just as they do in the real world. The experiments examine 1) how navigators use information from multiple spatial cues to find a goal when those cues are inconsistent with one another; 2) how navigators account for navigational costs, as when choosing between a short path through deep sand or a longer path on firm ground; 3) how navigators use prior knowledge to make navigational decisions, as when selecting the most likely place to search for a restaurant within an unfamiliar city, knowing that a restaurant is more likely to be located in a business district than a residential neighborhood; and 4) how navigators combine spatial information from technology, such as that provided by a GPS-enabled map, with natural cues provided by vision and bodily movement. Computational models of the cognitive processes involved in human navigation will be used to expand explanatory theories of human decisions and actions.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
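As a concrete illustration of the kind of cue-combination computation such models formalize, the sketch below combines a visual landmark cue with a body-based path-integration cue by inverse-variance weighting, a standard maximum-likelihood scheme in the cue-integration literature. The specific numbers and the function are hypothetical assumptions, not the investigators' model.

```python
# Illustrative inverse-variance (maximum-likelihood) weighting of two
# conflicting heading estimates; numbers and function are hypothetical.

def combine_cues(mu_vision, var_vision, mu_body, var_body):
    """Optimally combine two noisy estimates of the direction to a goal.

    Each cue contributes in proportion to its reliability (1/variance),
    so the more reliable cue dominates the combined estimate.
    """
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_body)
    mu = w_vision * mu_vision + (1 - w_vision) * mu_body
    var = 1 / (1 / var_vision + 1 / var_body)  # combined estimate is less variable
    return mu, var

# Landmarks (reliable) say 30 deg; body-based path integration (noisy) says 50 deg.
mu, var = combine_cues(mu_vision=30.0, var_vision=25.0, mu_body=50.0, var_body=100.0)
print(f"combined heading = {mu:.1f} deg, variance = {var:.1f}")  # 34.0 deg, 20.0
```

Note that the combined estimate lands nearer the reliable visual cue and has lower variance than either cue alone; experiments like those described above test whether human navigators show this signature of near-optimal integration when cues conflict.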