2018 — 2021 |
Otero-Millan, Jorge |
K99 Activity Code Description: To support the initial phase of a Career/Research Transition award program that provides 1-2 years of mentored support for highly motivated, advanced postdoctoral research scientists. R00 Activity Code Description: To support the second phase of a Career/Research Transition award program that provides 1-3 years of independent research support (R00) contingent on securing an independent research position. Award recipients will be expected to compete successfully for independent R01 support from the NIH during the R00 research transition award period. |
Perceptual Stability During Torsional Eye Movements @ Johns Hopkins University
PROJECT SUMMARY/ABSTRACT A central question in vision research is how we perceive a stable world despite continuous retinal motion from eye movements. Understanding these mechanisms is critical to improving diagnosis and treatment of patients suffering from blurred or jumping vision, mislocalization of objects, abnormal tilt and slant perception, loss of balance, and falls. Current models of perceptual stability during eye movements ignore the fact that the eye rotates around three axes. One important mechanism uses the efference copy of the command that moves the eyes to discount the retinal motion or displacement caused by eye movements. Research has focused on the horizontal and vertical dimensions of motion, overlooking the important contribution from the third (torsional) dimension, in which the eyes rotate around the line of sight, as happens, e.g., every time we tilt our head towards the shoulder. Torsional eye position directly contributes to our perception of tilt (clockwise or counterclockwise) and of slant (orientation in the sagittal plane). The main reason for the gap in knowledge about torsion has been the technological challenge of reliably measuring torsion. We overcame this barrier by developing a new method for measuring torsional eye movements noninvasively. This proposal aims to study the perceptual effects and brain mechanisms of the efference copy for torsional eye movements. During the mentored phase, the candidate will use transcranial magnetic stimulation (TMS), combined with measurements of perception of upright and torsional eye movement recordings, to determine the role of particular cortical circuits in taking into account the static torsion that occurs during static head tilts. During the independent phase, the candidate will study how perception is altered around the time of quick torsional eye movements and during the torsional vestibulo-ocular reflex. 
This will enable us to determine whether the brain uses signals in two or three dimensions to compensate for the retinal motion induced by eye movements. These experiments will give us a unique opportunity to provide critical new evidence to modify and expand current models of perceptual stability. By studying torsion, we will be able to discriminate between computational models and brain areas that only account for the two dimensions of the retinal image and those that account for all three dimensions of motion. Ultimately, understanding these mechanisms will provide new diagnostic and therapeutic avenues for people with unsteady vision, spatial disorientation, and falls.
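Noninvasive video-based torsion measurement, as mentioned in the abstract, is typically done by tracking the iris pattern as it rotates around the line of sight. The abstract gives no implementation details, so the following is only an illustrative sketch of that general idea, not the authors' published method: it unwraps an annulus of the iris into polar coordinates and estimates torsion as the circular angular shift that maximizes the cross-correlation with a reference frame. All function names and parameter values are hypothetical.

```python
import numpy as np

def unwrap_iris(image, center, r_inner, r_outer, n_theta=360, n_r=20):
    """Sample a grayscale eye image on a polar grid around the pupil
    center, so that rotation of the iris about the line of sight
    (ocular torsion) becomes a horizontal (angular) translation."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_inner, r_outer, n_r)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]  # shape (n_r, n_theta)

def torsion_deg(reference, current, center, r_inner=30, r_outer=45):
    """Estimate torsion (in degrees) as the circular shift that best
    aligns the unwrapped iris pattern of `current` with `reference`."""
    ref = unwrap_iris(reference, center, r_inner, r_outer)
    cur = unwrap_iris(current, center, r_inner, r_outer)
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    # Circular cross-correlation along the angular axis, via FFT.
    corr = np.fft.ifft(np.fft.fft(cur, axis=1) * np.conj(np.fft.fft(ref, axis=1)),
                       axis=1).real.sum(axis=0)
    shift = int(np.argmax(corr))
    n = ref.shape[1]
    if shift > n // 2:
        shift -= n  # wrap to a signed shift
    return shift * 360.0 / n
```

Given a reference frame with the head upright and a current frame taken during a head tilt, `torsion_deg` returns the angular offset of the iris pattern; real systems additionally handle pupil tracking, eyelid occlusion, and subpixel interpolation.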
|
2021 — 2024 |
Banks, Martin (co-PI) [⬀] Otero-Millan, Jorge |
N/A Activity Code Description: No activity code was retrieved. |
Collaborative Research: HCC: Medium: Deep Learning-Based Tracking of Eyes and Lens Shape From Purkinje Images For Holographic Augmented Reality Glasses @ University of California-Berkeley
This project seeks to develop head-worn Augmented Reality (AR) systems that look and feel like ordinary prescription eyeglasses, and can be worn comfortably all day, with a field of view that matches the wide field of view of today's eyewear. Such future AR glasses will enable vast new capabilities for individuals and groups, integrating computer assistance as 3D enhancements within the user’s surroundings. For example, wearing such AR glasses, an individual will see around them remote individuals as naturally as they now see and interact with nearby real individuals. Virtual personal assistants such as Alexa and Siri may become 3D-embodied within these AR glasses and situationally aware, guiding the wearer around a new airport, or coaching the user in customized physical exercise. This project aims to advance two crucial, synergistic parts of such AR glasses: 1) the see-through display itself and 2) the 3D eye-tracking subsystem. The see-through display needs to be both very compact and have a wide field of view. To achieve these display requirements, the project uses true holographic image generation, and improves the algorithms that generate these holograms by a) concentrating higher image quality in the direction and distance of the user's current gaze, and b) algorithmically steering the "eye box" (the precise location where the eye needs to be to observe the image) to the current location of the eye's pupil opening. In current holographic displays, this viewing eye box is typically less than 1 cubic millimeter, far too small for a practical head-worn system. Therefore, a practical system may need both a precise eye tracking system that locates the pupil opening and a display system that algorithmically steers the holographic image to be viewable at that precise location. 
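The "algorithmically steering the eye box" idea above can be illustrated with the Fourier shift theorem: in a simplified Fourier-holography model, the viewable far-field pattern is the Fourier transform of the field on the spatial light modulator (SLM), so multiplying the SLM field by a linear phase ramp translates the far field, and with it the eye box. The following 1-D numpy sketch is only a toy model of that one principle; the function name and the reduction to one dimension are assumptions, and the real display pipeline is far more involved.

```python
import numpy as np

def steer_eyebox(slm_field, shift_px):
    """Multiply the SLM (hologram) field by a linear phase ramp.

    In a simple Fourier-holography model the far-field pattern is the
    FFT of the SLM field, so by the Fourier shift theorem this ramp
    translates the far field -- and hence the eye box -- by
    `shift_px` frequency bins. Illustrative 1-D sketch only.
    """
    n = slm_field.shape[0]
    x = np.arange(n)
    ramp = np.exp(2j * np.pi * shift_px * x / n)
    return slm_field * ramp
```

For example, a plane-wave SLM field with spatial frequency `k0` reconstructs to a point at frequency bin `k0`; after steering by `s` samples the point lands at bin `k0 + s`.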
The 3D eye tracking system also seeks to determine the direction of the user's gaze, and the distance of the point of gaze from the eye (whether near or far), so that the display system can optimize the generated holographic image for the precise focus of attention. The proposed AR display can render images at variable focal lengths, so it could be used for people with visual accommodation issues, thereby allowing them to participate in AR-supported education and training programs. The device could also have other possible uses in medical (such as better understanding of the human visual system) and training fields.
The two branches of this project, the holographic display and the 3D eye tracker, are closely linked, and each is improved by the other. The 3D eye tracker utilizes an enriched set of signals and sensors (multiple cameras for each eye, and a multiplicity of infrared (IR) LEDs), from which the system extracts the multiple tracking parameters in real time: the horizontal and vertical gaze angles, the distance accommodation, and the 3D position and size of the pupil's opening. The distance accommodation is extracted by analyzing Purkinje reflections of the IR LEDs from the multiple layers in the eye's cornea and lens. A neural network extracts the aforementioned 3D tracking results from the multiple sensors after being trained on a large body of ground-truth data. The training data is generated from multiple human subjects who are exposed, instantaneously, to known patterns on external displays at a range of distances and angles from the eye. Simultaneously with these instantaneous patterns, the subject is also shown images from the near-eye holographic image generator, whose eye-box location and size have been previously optically calibrated. One part of each pattern is shown, instantaneously, on an external display and the other part, at the same instant, on the holographic display. The subject can correctly answer a challenge question only if they have observed both displays simultaneously, which can only occur if the eye is at a precise 3D location and at a precise known gaze angle. The eye tracker will be further improved by integrating its training and calibration with the high-precision (but very bulky) BinoScopic tracker at UC Berkeley, which tracks using precise maps of the user's retina. The holographic image generator uses the real-time data from the 3D eye tracker to generate holograms whose highest image quality is at the part of the image that is currently on the viewer's fovea, and at the distance to which the user is currently accommodated. 
The image quality is improved by a trained neural network whose inputs are images from a camera placed, during training, at the position of the viewer's eye.
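The supervised setup described above (sensor features in, tracking parameters out, fit against ground-truth data) can be sketched in miniature. The abstract specifies a neural network; the toy below substitutes a linear least-squares fit on synthetic data purely to illustrate the regression structure, and every dimension, feature layout, and variable name is a made-up assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: flattened Purkinje-reflection features
# (e.g. glint positions seen by two cameras) mapped to gaze azimuth,
# gaze elevation, and accommodation distance.
n_train, n_feat, n_out = 500, 16, 3

# Synthetic "ground-truth collection": features X and target gaze
# parameters Y, related by a linear map unknown to the fitter.
W_true = rng.normal(size=(n_feat, n_out))
X = rng.normal(size=(n_train, n_feat))
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, n_out))

# "Training": ordinary least squares in place of the neural network.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Inference on a new frame of features.
x_new = rng.normal(size=(1, n_feat))
gaze_params = x_new @ W_hat  # [azimuth, elevation, accommodation]
```

A real system would replace the linear map with the trained network and the synthetic rows of `X` with calibrated measurements from the challenge-question procedure described in the abstract.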
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|