1997 — 2000 |
Philbeck, John W |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Parietal Lobe Role in Updating Location While Locomoting @ Carnegie-Mellon University
Humans are remarkably accurate and precise when walking blindly to a previously viewed target. This complex spatial behavior relies heavily upon the ability to keep track of one's changing location. The proposed research tests the hypothesis that the human parietal cortex plays an important role in updating one's location while walking. It will also investigate properties of the spatial representation constructed in the parietal lobe. The performance of healthy humans and humans with focal brain lesions in the parietal lobe will be compared in behavioral tests of spatial updating. The participants will indicate the location of targets in a variety of ways, some of which involve updating self-location (e.g., walking to previously viewed targets) and others that do not (e.g., verbal reports). The long-term goal of this research is to come to a more complete understanding of how vision is used to control complex spatial behavior. This research will help develop a more detailed picture of spatial deficits in patients with injuries in the parietal lobe, and ultimately may help psychologists design effective therapies to rehabilitate brain-injured patients.
|
2005 — 2008 |
Philbeck, John W |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Medial Temporal Lobe Role in Human Locomotor Navigation @ George Washington University
DESCRIPTION (provided by applicant): This project investigates a key issue in human navigation: how does the medial temporal lobe (MTL) process information about space, time, and self-motion to keep us oriented while walking about? Extensive experimentation in rodents indicates that brain structures in the MTL play an important role in navigation, but currently there is a pressing need to validate the relevance of animal studies for understanding human navigation. In addition, there are very few data concerning the effects of brain injury on navigation ability, particularly within the nearby environment. The work in this project will bridge this gap by testing the navigational abilities of patients who have had part of the MTL removed as therapy for severe epilepsy. The project will answer the following questions: (1) Are deficits in navigation after MTL surgery due specifically to removal of MTL tissue, or instead to other factors related to the disease necessitating the surgery? (2) What specifically are the consequences of MTL injury for navigation? (3) How specialized are MTL structures for navigation? (Are only some types of navigation impaired but not others?) (4) Does the right-hemisphere MTL play a more dominant role than the left in human navigation? The proposed experiments will provide a firm empirical foundation for understanding the effect of brain injuries and psychiatric disorders that impact the MTL. The work focuses specifically on navigation ability and will be particularly relevant for evaluating the homology of brain structures that subserve navigation in animals and humans. The studies are unique in that they will test a rare population of neurosurgical patients, using methods that focus on the perception of self-motion at an unprecedented level of detail. Furthermore, the methodology will allow an assessment of the possible non-specific effects on navigation of epilepsy and epilepsy medication.
These basic data will be highly valuable for interpreting future research involving similar patients.
|
2011 — 2013 |
Philbeck, John W |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Role of the Ground Plane in Judging Absolute Distance After Brief Glimpses of Rea @ George Washington University
DESCRIPTION (provided by applicant): Although the visual world appears continuous and stable, visual information is actually sampled from the environment around 3 times per second. Observers must therefore process the contents of brief glimpses to form representations that can support effective behavior. Brief stimulus presentations have been crucial for illuminating the early stages involved in constructing scene representations. Very little is known, however, about the time required to extract information about observer-to-object distances. There is a pressing need to understand these issues, because many factors, including real-world situations, visual impairment, normal aging, and neurological disorders, can place constraints on the time available for extracting and processing visual information. Thus, many people risk suffering the consequences of poor object localization due to insufficient viewing time (e.g., falling, or colliding with objects when walking or driving). These consequences can be dire: the annual cost of falling, for example, is predicted to reach $54.9 billion in the next 10 years. There is a critical lack of knowledge about the consequences of insufficient viewing time on localization in distance, and this impedes identification of at-risk populations and slows development of evidence-based remediation plans. An important product of our investigation is that it will remove these critical barriers by quantifying the impact of insufficient viewing time on localization. This project's health relatedness thus derives from its ability to illuminate possible precursors to driving collisions and falling. Our long-term objectives are to characterize the time course of distance perception and to determine the psychological and neural mechanisms that govern this time course. We will address these issues using a novel, custom-built apparatus capable of providing very brief glimpses (e.g., 10 ms) of a real, 3D environment, followed by a masking image.
After briefly glimpsing the environment, observers will use various methods (e.g., verbal report; blind walking) to indicate the egocentric distance of objects seen during the glimpse. This method allows us to study the factors that shape the early stages of distance perception. Experiments in this proposal will test our overarching hypothesis that both the stimulus-driven and top-down factors that govern early distance perception mechanisms are organized to confer a processing advantage for targets on the ground. Our specific aims are to (1) determine the visual requirements for extracting distance information from brief glimpses, focusing particularly on the powerful angular declination (height in the field) cue; (2) determine the top-down influences on extraction of distance information from brief glimpses, focusing on perceptual and cognitive biases related to the ground plane; and (3) confirm that our results are not crucially dependent upon one particular environment, but instead are more fundamental and broadly applicable to a variety of environments. PUBLIC HEALTH RELEVANCE: This project investigates the ability of people to localize objects seen during brief glimpses of the surrounding environment, a critically important skill given the potentially devastating consequences of mislocalization when insufficient time is available to extract distance cues (e.g., collisions when walking or driving, falling, etc.). Limitations in the time available to extract distance information can arise from many factors (e.g., visual impairment, neurological disorders, normal aging, and situational constraints in everyday life). By finding out how perceptual and cognitive factors govern the speed with which people localize objects in the environment, our work promises to help improve efforts to minimize the tremendous personal and health care costs of falls and driving accidents in a broad range of populations.
|
2013 — 2017 |
Sibley, Gabriel; Hahn, James; Philbeck, John; Almecija, Sergio; Lee, Taeyoung (co-PI); Richmond, Brian (co-PI) |
N/A Activity Code Description: No activity code was retrieved; click on the grant title for more information. |
Mri: Development of Large-Scale Dense Scene Capture and Tracking Instrument @ George Washington University
Proposal #: 13-37899 PI(s): Hahn, James K.; Lee, Taeyoung; Philbeck, John W.; Richmond, Brian G.; Sibley, Gabriel Institution: George Washington University Title: MRI/Dev.: Large-Scale Dense Scene Capture and Tracking Instrument Project Proposed: This project develops a large-scale, dense 3D measurement instrument for capturing dynamic environments. It integrates range-and-color sensing devices such as depth cameras (RGB-D sensors) by designing and developing key technical methodologies to fuse the data received from remote networked sensors. The instrument will collectively cover a large space at a sampling resolution of at least 1 cm, with submillimeter resolution in localized regions. These data are then fused into a single underlying representation. The work involves developing a system that possesses both large-scale and real-time dense capture capabilities. Specifically:
- Experimentally validating perception, planning, and control algorithms of agile mobile robots (particularly those that operate with deformable objects) requires ground-truth representations of those environments.
- Validating computational tools for tether dynamics and control of flexible multibody systems requires capturing those systems within a large environment.
- The study of human motion for biomechanics, physical therapy, and exercise science applications requires accurate capture of dynamically changing, deformable human shapes in a large environment.
- Image-guided surgical procedures require capture of a localized, dense patient anatomical surface registered within a larger surgical environment.
- Human visual perception and navigation require a dense model of the surrounding environment, including objects in motion, advancing the state of eye-movement analysis by enabling fast, automated, and objective coding of the objects people see as they move through the environment.
- The study of foot deformations, enabled by dense shape capture during running and walking on real sediments, will shed light on the evolution of gait and human anatomy and on the biomechanics of barefoot walking and running.
Thus, by facilitating new research, the developed system enables rapid capture and construction of large, dynamic, high-resolution virtual environments that duplicate specific real-world environments, including deformable objects, with unprecedented density of detail.
|