1985 — 1987 |
Warren, William H |
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Age-Related Changes in the Visual Control of Locomotion
The essential ability to control locomotion with respect to the environment is based on the visual perception of optic flow patterns produced at a moving point of observation. Although a number of mathematical analyses of optic flow have been performed, little empirical research exists on the ability of observers to detect and utilize this information. The proposed project will examine age differences in the detection of optic flow information, as they pertain to the control of walking and driving. The specific aims of the study are to determine the abilities of young and old observers to detect properties of optic flow patterns that are specific to the following aspects of self-motion: (a) the direction or heading of rectilinear motion; (b) the path of motion during curve-taking; (c) the approach to and size of contours, brinks, obstacles, and apertures on the ground surface; (d) variables of the ground surface, such as texture and other markings, that influence the detectability of these properties; and (e) observer variables, such as the size of the visual field, fixation, and visibility of the focus of expansion, that may affect performance. Optic flow displays will be generated by computer animation techniques, stored on video disk, and presented to subjects for judgment on a large rear-projection screen. The variables of optic flow rate, angle of path to the surface, and response task will also be manipulated. A basic understanding of how the visual control of locomotion changes with age has immediate applications to the clinical assessment of perceptual abilities on complex everyday tasks, the development of optical aids and the retraining of perceptual skills to improve performance, and the enhancement of the visual structure of roadways and other environments to further the mobility of the elderly.
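To make the flow geometry concrete, here is a minimal illustrative sketch (not part of the original proposal; all parameter values are assumptions) of the optic flow field produced by pure observer translation over a ground plane. Every image point streams away from the focus of expansion, whose position specifies the heading direction, the kind of property observers would be asked to detect.

```python
# Illustrative sketch (not from the proposal): optic flow for an observer
# translating over a ground plane, under a simple pinhole-camera model.
# The flow radiates from the focus of expansion (FOE), whose image position
# specifies the heading direction. All numbers below are assumed values.
import numpy as np

f = 1.0                            # focal length (arbitrary units)
eye_height = 1.6                   # camera height above the ground (m)
T = np.array([0.2, 0.0, 1.4])      # observer translation (m/s), slight rightward heading

# Ground-plane points ahead of the observer (camera coords: Z forward, Y down).
X, Z = np.meshgrid(np.linspace(-5.0, 5.0, 9), np.linspace(2.0, 20.0, 9))
Y = np.full_like(X, eye_height)

# Perspective projection to the image plane.
x, y = f * X / Z, f * Y / Z

# Image velocity of a static point under pure observer translation (no rotation).
u = (x * T[2] - f * T[0]) / Z
v = (y * T[2] - f * T[1]) / Z

foe = f * T[:2] / T[2]             # focus of expansion = image of the heading direction
print("FOE (image coords):", foe)
print("near-point flow:", u[0, 0], v[0, 0], " far-point flow:", u[-1, 0], v[-1, 0])
```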
1989 — 2013 |
Warren, William H |
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Visual Control of Locomotion
1997 — 2002 |
Kaelbling, Leslie; Tarr, Michael; Warren, William |
N/A — Activity Code Description: no activity code was retrieved |
Learning and Intelligent Systems: Learning Minimal Representations For Visual Navigation and Recognition
This project is being funded through the Learning and Intelligent Systems (LIS) initiative. The project is concerned with the intelligence exhibited in interactions among sensory-motor activities and cognitive capacities such as reasoning, planning, and learning, in both organisms and machines. Such interaction is regularly shown in the act of navigation, which humans and other animals engage in from an early age, and which seems almost effortless in normal circumstances thereafter. Whatever in navigation is innate and whatever is learned, it is important to understand the interaction of the various cognitive, perceptual, and motor systems that are involved. The complexity of these interactions becomes clear in the development of mobile robots, such as the one recently deployed on Mars, not to mention the more autonomous ones planned for the future. It is still a major and imperfectly understood task to create programs that will coordinate sensors, keep an internal "map" of the area, and allow the robot to cross a space efficiently and without collisions with obstacles. An interdisciplinary approach is being taken in this research project: exploring human capabilities through experiments, developing models based on the experimental results and what is already known about human navigation, implementing these models in programs for robot control, and then testing these programs in robotic navigation experiments for their efficacy and their reasonableness as models of human navigation. The goals are both to understand the phenomena in humans and machines and to develop robust algorithms to be used in mobile robots. This alliance of researchers studying psychophysics, cognition, computation, and robotics will lead to gains in knowledge across many disciplines and will enhance our understanding of spatial cognition and visual navigation in agents, both artificial and natural.
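As a rough illustration of the internal "map" and collision-free planning problem described above (a toy sketch, not the project's actual algorithms; the grid layout and the start and goal cells are assumptions), a robot holding a small occupancy grid can find a path with breadth-first search:

```python
# Toy sketch (not the project's algorithm): an occupancy-grid "map" and a
# breadth-first search for a collision-free path from start to goal.
from collections import deque

grid = [                       # 0 = free cell, 1 = obstacle (assumed layout)
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
start, goal = (0, 0), (4, 0)

def bfs_path(grid, start, goal):
    """Return a shortest collision-free path as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                # no collision-free route exists

print(bfs_path(grid, start, goal))
```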
1997 — 2001 |
Warren, William H |
K02 — Activity Code Description: Independent Scientist Award. |
Visual Control of Adaptive Behavior--Locomotion
DESCRIPTION (Applicant's Abstract): The candidate for this NIMH K02 Independent Scientist Award was trained as an experimental psychologist and has performed empirical research on the visual perception of optic flow and the control of human locomotion. His long-term objective is to integrate work on perception and action in a dynamical account of visual control. Short-term career goals are to develop a strong theoretical component to his research, including modeling the neural processing of optic flow and the dynamics of visual control, and to incorporate research on perceptual-motor learning. The proposed career development plan includes a course sequence in mathematics and control theory, as well as training in neural and dynamical modeling techniques. The award would be used to release the applicant from teaching and administrative responsibilities to devote full time to research and training. The specific aims of the proposed research are to determine how information in optic flow is extracted by the visual system and used to control balance and steering during locomotion. Four interrelated projects are proposed. The first project, on perception of heading, will determine the information used to perceive heading during observer translation and rotation, and in the presence of moving objects, using psychophysical methods. A model of this process will be formalized and tested. A project on visual control of steering will determine whether this information is used to control steering with respect to stationary and moving objects, in a joystick control task. The third project, on visual control of walking, will examine how optic flow is used to control balance and steering during treadmill walking and will model the control dynamics of the perception-action loop. The final project focuses on learning control dynamics and will investigate how an infant in a "baby bouncer" explores and learns to control the dynamics of a simple task, as a model for the acquisition of locomotion. The results would contribute to basic knowledge about the perception of optic flow and the control of locomotion, providing a foundation for clinical research on visual-motor deficits, gait disorders, and mobility problems.
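For a flavor of the "control dynamics of the perception-action loop" mentioned above, here is an illustrative sketch (an assumption on my part, loosely in the spirit of later published goal-directed steering models from this line of work, with made-up parameters): heading is treated as a damped second-order variable attracted to the visual direction of a goal.

```python
# Illustrative sketch (assumed form and parameters): steering toward a goal
# modeled as second-order dynamics on the heading angle phi, where the
# visually specified heading error (phi - psi_goal) acts as an attractor.
import math

b, k = 3.0, 8.0                    # damping and stiffness (assumed values)
speed, dt = 1.2, 0.01              # walking speed (m/s) and time step (s)

x, y, phi, phi_dot = 0.0, 0.0, 0.0, 0.0     # start at origin, heading along +x
goal = (5.0, 3.0)

for _ in range(1000):
    if math.hypot(goal[0] - x, goal[1] - y) < 0.2:    # close enough: stop
        break
    psi_goal = math.atan2(goal[1] - y, goal[0] - x)   # visual direction of the goal
    error = math.atan2(math.sin(phi - psi_goal), math.cos(phi - psi_goal))  # wrap to (-pi, pi]
    phi_ddot = -b * phi_dot - k * error               # damped attraction to the goal direction
    phi_dot += phi_ddot * dt
    phi += phi_dot * dt
    x += speed * math.cos(phi) * dt
    y += speed * math.sin(phi) * dt

print(f"final position: ({x:.2f}, {y:.2f}), heading: {math.degrees(phi):.1f} deg")
```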
2003 — 2010 |
Tarr, Michael; Warren, William |
N/A — Activity Code Description: no activity code was retrieved |
Learning Minimal Representations For Visual Navigation and Recognition II
Consider how you find your way to the grocery store or learn the layout of a new mall, or how scientists might build a robot that can be dropped on Mars to navigate its surface. People, animals, and robots must navigate complex environments, but different strategies are applied in different situations. One may get to the grocery store by dead reckoning, like ants; by following landmarks, like honeybees; or by using a precise "memory map" of the environment. Moreover, clever combinations of strategies can make it easier to find the way. The present research effort specifically explores how these strategies are integrated to allow robust visual navigation.
With NSF support, Dr. Michael Tarr and Dr. William Warren study how people learn the layout of new environments, the geometry of the resulting spatial knowledge, and how it is used to navigate. What is unique about their approach is that they study actual navigation behavior, as people actively walk through a computer-generated virtual environment (the VENLab; see http://www.cog.brown.edu/Research/ven_lab/ ). Participants wear a head-mounted virtual reality display and walk freely in a 40 x 40 ft area. Their movements are recorded by a tracking system in the ceiling. After participants learn the layout, the environment can be surreptitiously changed, and they must, in effect, find a new route to the grocery store. By distorting the virtual world or changing the properties of landmarks, these scientists determine the navigational strategies people use and how they rely on routes, landmarks, and the geometry of space.
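For readers unfamiliar with the dead-reckoning strategy mentioned above, a minimal illustrative sketch (the path and values are assumed, not from the study): integrating one's own turns and step distances yields a homing vector back to the start, with no landmarks required.

```python
# Minimal sketch of dead reckoning (path integration): accumulate each
# translation in an allocentric frame; the homing vector is then simply
# the negative of the summed displacement. All movements below are assumed.
import math

heading = 0.0                       # radians, 0 = facing "east"
position = [0.0, 0.0]

# (turn in degrees, then walk this many meters) -- an arbitrary outbound path
outbound = [(0, 4.0), (90, 3.0), (45, 2.0)]

for turn_deg, dist in outbound:
    heading += math.radians(turn_deg)
    position[0] += dist * math.cos(heading)
    position[1] += dist * math.sin(heading)

home_distance = math.hypot(*position)
home_direction = math.degrees(math.atan2(-position[1], -position[0]))
print(f"homing vector: {home_distance:.2f} m at {home_direction:.1f} deg")
```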
2009 — 2013 |
Warren, William |
N/A — Activity Code Description: no activity code was retrieved |
The Geometry of Spatial Knowledge For Navigation
Investigator: William H. Warren
Imagine arriving in a new city. As you leave the train station and walk around the downtown area, you seem to build up knowledge of the spatial layout that enables you to get back to the train station (homing), find your way to restaurants and museums, and even take new shortcuts or detours. It is commonly believed that this spatial knowledge takes the form of a "cognitive map," something like a street map in your head. But what have you really learned about the layout of the city? What sort of spatial knowledge can you build up by walking around (by path integration)? How does it depend on the layout of the environment, on distinctive landmarks like statues and skyscrapers, and the manner in which you explore the city? What kinds of navigation aids might help you learn the layout and keep you from getting lost (signs, maps, GPS)?
This project aims to answer such questions by taking advantage of state-of-the-art virtual reality techniques. Investigators study active navigation by distorting a 40 x 50 ft virtual environment during ongoing walking, and recording the participant's natural behavior. For instance, as a participant learns or navigates in the environment, the virtual world may be stretched, changing distance relationships; the locations of paths and landmarks may be shifted; or "wormholes" may be inserted to teleport the participant from one place to another, creating "rips" and "folds" in virtual space. These manipulations allow the investigators to probe the geometric structure of spatial knowledge and how it is acquired.
In previous research, the investigators have found that multiple forms of spatial knowledge, with different geometric properties, are acquired when learning a new environment. But the results suggest that such knowledge may not be integrated into a consistent, unitary cognitive map. In particular, participants seem to acquire some local metric information about distances and directions, but fail to integrate it into a consistent global map. They learn topological structure such as the paths that connect places (a graph), relations among neighborhoods, and ordinal sequences of landmarks, but tolerate large discrepancies between them. The present research investigates the accuracy of the metric "map" that can be built up through path integration, how the various types of geometric knowledge are related, and how their acquisition depends on the structure and stability of the environment during learning.
For example, in one study the investigators will create a non-Euclidean virtual world called the "Escher Museum," consisting of six rooms with distinctive paintings and sculptures that do not fit into a plane, but overlap in space, something like a flat spiral staircase. Thus, by walking around a loop the participant will circle back to previous physical locations that are occupied by new Museum rooms. This allows the investigators to dissociate the neighborhood structure (rooms) from the ordinal structure of landmarks (sculptures) and metric distances and directions between them.
A better understanding of spatial knowledge and human navigation has broad applications to the design and presentation of spatial information (signage, maps, directions, web-based mapping applications), visual and verbal interfaces for GPS-based navigation aids (autos, cell phones), and biologically-inspired robot navigation systems, as well as to architectural design and city planning.
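One way to picture the graph-like spatial knowledge described above, places connected by paths with only rough local metric labels, is the following simplified sketch (not the investigators' model; the places, distances, and route-finding step are assumptions). Routes can be found on such a structure without any globally consistent map coordinates.

```python
# Simplified sketch (not the investigators' model): spatial knowledge as a
# labelled graph -- places as nodes, traversable paths as edges carrying rough
# local distance labels -- searched with Dijkstra's algorithm. No globally
# consistent coordinates are required. Places and distances are made up.
import heapq

graph = {
    "train station": {"museum": 300, "cafe": 150},
    "museum":        {"train station": 300, "restaurant": 200},
    "cafe":          {"train station": 150, "restaurant": 250},
    "restaurant":    {"museum": 200, "cafe": 250},
}

def shortest_route(graph, start, goal):
    """Return (total distance, list of places) for the shortest known route."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        dist, place, route = heapq.heappop(frontier)
        if place == goal:
            return dist, route
        if place in visited:
            continue
        visited.add(place)
        for neighbor, d in graph[place].items():
            if neighbor not in visited:
                heapq.heappush(frontier, (dist + d, neighbor, route + [neighbor]))
    return None

print(shortest_route(graph, "train station", "restaurant"))
```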
2014 — 2017 |
Warren, William |
N/A — Activity Code Description: no activity code was retrieved |
Collective Behavior of Human Crowds
Whether walking down a busy sidewalk or through a crowded mall, we effortlessly coordinate our movements with other pedestrians. Sometimes we weave through the crowd, dodging our neighbors; at other times we merge into a coherent "swarm," much like a flock of birds or a school of fish, and may spontaneously form lanes moving in opposite directions, similar to columns of ants. Where do these basic traffic patterns come from? It is generally believed that collective behavior in humans and animals emerges from local interactions between neighbors, rather than from a central plan or leader, but the actual mechanisms are unclear. By studying the perceptual-motor "rules" that govern interactions between neighbors, the investigator aims to determine whether crowd behavior can be explained by local interactions. The resulting model will enable realistic simulation of pedestrian traffic flow, with potential broader impacts on architectural design, evacuation planning, computer animation, and the development of assistive technology for blind and low-vision users.
There are many models of collective swarm behavior in fields ranging from physics and computer science to animal behavior and urban planning, yet they are based on little experimental data. The key weakness of existing theories is a dearth of knowledge about the visual coupling between neighbors -- the perceptual-motor rules, forces, or control laws that govern pedestrian interactions. The goal of this project is to develop a cognitively grounded pedestrian model and test the hypothesis that global crowd behavior emerges from these local interactions. An innovative research program combines (a) a local-to-global approach, in which the visual coupling is mapped out in experiments with virtual crowds, and used to predict crowd behavior in multi-agent simulations, and (b) a global-to-local approach, in which experiments on real crowds are analyzed and used to test the model. The aim is to account for pedestrian and crowd dynamics and elucidate the relation between "micro" and "macro" levels of collective behavior.
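As a concrete picture of the local-to-global idea, here is a toy sketch under assumed rules and parameters (not the model being developed): if each pedestrian simply turns toward the distance-weighted mean heading of nearby neighbors, a coherent "swarm" direction emerges from purely local interactions.

```python
# Toy sketch (assumed rules and parameters, not the project's model): each
# pedestrian turns toward the mean heading of neighbors within a radius, with
# influence decaying with distance, so global alignment emerges from purely
# local coupling. Periodic boundaries just keep the toy crowd together.
import math
import random

random.seed(1)
N, radius, gain, dt, speed, box = 30, 3.0, 1.5, 0.1, 1.3, 10.0
agents = [{"x": random.uniform(0, box), "y": random.uniform(0, box),
           "phi": random.uniform(-math.pi, math.pi)} for _ in range(N)]

def alignment(agents):
    """Mean resultant length of headings: 1.0 means a perfectly aligned crowd."""
    cx = sum(math.cos(a["phi"]) for a in agents) / len(agents)
    cy = sum(math.sin(a["phi"]) for a in agents) / len(agents)
    return math.hypot(cx, cy)

print("initial alignment:", round(alignment(agents), 2))
for step in range(300):
    for a in agents:
        fx = fy = 0.0
        for other in agents:
            if other is a:
                continue
            d = math.hypot(other["x"] - a["x"], other["y"] - a["y"])
            if d < radius:
                w = math.exp(-d)                       # neighbor influence decays with distance
                fx += w * math.cos(other["phi"])
                fy += w * math.sin(other["phi"])
        if fx or fy:
            target = math.atan2(fy, fx)                # neighbors' weighted mean heading
            err = math.atan2(math.sin(target - a["phi"]), math.cos(target - a["phi"]))
            a["phi"] += gain * err * dt                # turn toward it
    for a in agents:
        a["x"] = (a["x"] + speed * math.cos(a["phi"]) * dt) % box
        a["y"] = (a["y"] + speed * math.sin(a["phi"]) * dt) % box
print("final alignment:", round(alignment(agents), 2))
```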
2019 — 2021 |
Warren, William H |
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
A Vision-Based Model of Locomotion in Crowded Environments
People face complex mobility challenges in natural settings every day, when walking down a busy sidewalk, through a crowded train station, or in a shopping mall. To guide locomotion, the visual system detects information about self-motion through an evolving layout of objects and other pedestrians, and generates a safe and efficient path of travel. Individuals with low vision report mobility as one of the most difficult activities of daily living, particularly walking in crowds or using public transportation, with increased risks of collision, injury, and reduced independence. As yet, however, researchers do not understand how vision is used to control locomotor behavior in such complex, everyday settings. The long-term objective of the proposed project is to develop the first vision-based model of pedestrian behavior in dynamic, crowded environments, and to use the results to design more effective assistive technology. Most models of locomotor control (from robotics, computer animation, and biology) assume the 3D positions and velocities of environmental objects as input, and plan a collision-free path according to objective criteria. A vision-based model would instead take the optical information available to a pedestrian and generate human-like paths of locomotion, based on experimental data. The first specific aim is thus to determine the effective visual information that guides walking with a crowd. Specifically, we will test the hypotheses that (a) optic flow, (b) segmented 2D motion, or (c) perceived 3D motion is used to follow multiple neighbors, and determine how this information is spatially and temporally integrated. The second specific aim is to determine the visual control laws that regulate walking speed and direction in a crowd. Specifically, we will test competing models of collision avoidance, following, and overtaking, and formalize a vision-based pedestrian model. Based on these results, the third specific aim is to evaluate alternative approaches to sensory substitution for locomotor guidance. Specifically, we will compare coding schemes for a vibrotactile belt based on recoding the effective optical variables as tactile patterns, or on using the vision-based model to steer the user with directional cuing. Behavioral experiments will test the optical variables and control laws that govern locomotion in crowds by manipulating visual displays during walking in an immersive virtual environment (12 m x 14 m). Agent-based simulations will compare competing models of the experimental data and previously collected crowd data. This methodology will enable us to test alternative hypotheses about visual information and visual control laws, and to create an experimentally grounded, vision-based pedestrian model. Sensory substitution experiments will test normally-sighted participants in matched visual and tactile virtual environments; if the results are promising, tests with low-vision and blind participants will be pursued in subsequent applications. The research will contribute to basic knowledge about visually guided locomotion in complex, dynamic environments, and apply it to the design of an assistive mobility device.
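To illustrate the kind of optical variables and control laws at issue (an illustrative sketch with assumed quantities and an assumed rule, not one of the competing models being tested): a neighbor's bearing direction and the expansion rate of its optical angle can be computed from the relative geometry, and a simple rule might slow down and turn away when the neighbor is expanding at a roughly constant bearing, i.e., on a collision course.

```python
# Illustrative sketch (assumed quantities and rule, not the tested models):
# compute a neighbor's bearing and the expansion rate of its optical angle
# from relative position/velocity, and steer/slow when on a collision course.
import math

def optical_variables(rel_pos, rel_vel, neighbor_width=0.5):
    """Return bearing (rad), bearing rate (rad/s), optical angle (rad), expansion rate (rad/s)."""
    x, y = rel_pos
    vx, vy = rel_vel
    dist = math.hypot(x, y)
    bearing = math.atan2(y, x)
    bearing_rate = (x * vy - y * vx) / (dist ** 2)
    optical_angle = 2.0 * math.atan2(neighbor_width / 2.0, dist)
    range_rate = (x * vx + y * vy) / dist                # d(dist)/dt; negative = closing
    expansion_rate = -neighbor_width * range_rate / (dist ** 2 + (neighbor_width / 2.0) ** 2)
    return bearing, bearing_rate, optical_angle, expansion_rate

# A neighbor 4 m ahead and slightly to the side, closing at 1 m/s (assumed values).
bearing, bearing_rate, theta, theta_dot = optical_variables((4.0, 0.3), (-1.0, 0.0))

if theta_dot > 0 and abs(bearing_rate) < 0.05:           # expanding + nearly constant bearing
    print("collision course: slow down and turn away from bearing", round(bearing, 2))
else:
    print("no action needed")
```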