We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has important implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Ellen C. Hildreth is the likely recipient of the following grants.
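The page does not document how the matching algorithm computes its score. As a purely hypothetical illustration (the function name, normalization, and use of a string-similarity ratio are assumptions, not the site's actual method), a name-based match score could be sketched like this:

```python
from difflib import SequenceMatcher

def name_match_score(grant_name: str, researcher_name: str) -> float:
    """Hypothetical example: score two names on a 0-1 similarity scale.

    Normalizes case, commas, and whitespace, then compares the strings
    with a sequence-matching ratio. This is NOT the site's algorithm.
    """
    def norm(s: str) -> str:
        return " ".join(s.lower().replace(",", " ").replace(".", " ").split())

    return SequenceMatcher(None, norm(grant_name), norm(researcher_name)).ratio()

# Identical names score 1.0; partial matches score somewhere in between.
score = name_match_score("Hildreth, Ellen", "Ellen C. Hildreth")
```

A production matcher would likely also weight initials, co-recipient lists, institutions, and award years rather than raw string similarity alone.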
Years | Recipients | Code | Title / Keywords | Matching score
1994 — 1998 | Royden, Constance; Hildreth, Ellen | N/A | RUI: The Analysis of 3-D Motion For Visually-Guided Navigation | 0.915

Abstract (award 9301326, HILDRETH): When an observer moves through the environment, a continually changing visual image appears on the surface of the eye. The motion of features in this image conveys information about the direction and speed of movement of the observer through space, as well as about the three-dimensional motion of other objects in the environment. It is known that from image motion alone, human observers can judge their 3-D direction of translation relative to a stationary scene with high accuracy. The recovery of the observer's movement from image motion becomes far more challenging when the environment contains objects that undergo their own motion through space. This research will investigate the mechanisms by which the human visual system analyzes the motions of the observer and objects in the environment from information available in the changing visual image. Psychophysical experiments will examine the accuracy with which humans judge their 3-D direction of translation when viewing dynamic visual displays that simulate the motion of an observer toward a scene containing moving objects. Further experiments will test the human ability to detect moving objects, measure their 3-D direction of translation relative to the observer, and judge the time-to-collision of the observer with approaching objects. The successful navigation of human observers through complex dynamic scenes requires that these tasks be performed with high accuracy. Computational models will be developed that capture the behavior observed in these experiments. These models will be implemented and tested in a computer vision system that recovers the 3-D motion of a mobile camera and moving objects from a sequence of digitized 2-D images. The results of this work will further our knowledge of how the human visual system uses the analysis of image motion to perform tasks such as navigation through complex scenes. They will also contribute to the development of successful computer vision systems for autonomous navigation and to applications requiring the interpretation of 3-D motion and structure from dynamic imagery in domains such as robotics, medical imaging, and surveillance.