We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Stuart M. Anstis is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
1993 — 1995 | Anstis, Stuart | R01 | Flicker-Augmented Contrast: a New Visual Effect @ University of California San Diego | 0.915

Activity Code Description (R01): To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.

A steady gray test spot looks light gray on a dark surround and dark gray on a light surround, owing to simultaneous brightness induction, perhaps based on lateral inhibition. But a test spot of the same time-averaged luminance that flickers between black and white at 15 Hz looks almost white on a dark surround and almost black on a light surround. The flickering test spot changes its brightness by five to eight times as much as the gray spot. We call this newly discovered phenomenon flicker-augmented contrast (FAC). FAC is found in all the classic demonstrations of simultaneous contrast: Heinemann's disks, McCourt's induced gratings, White's effect, Koffka's rings, and Benary's cross. We shall measure FAC as a function of surround luminance, retinal eccentricity, temporal frequency, and amplitude of the test flicker. Our experiments will distinguish among various models of FAC, based on the idea that the light and dark phases of the flickering test disk each undergo separate brightness induction from the surround before being combined by the visual system. Two models of this combination are considered: linear summation of brightness, and nonlinear "winner-take-all" competitive combination. We propose that spatial increments and decrements are handled by separate Bright and Dark channels, possibly mediated by on- and off-center ganglion cells. These bright and dark signals are then combined in push-pull mode by an opponent output stage. The advantage of this opponent system is that it effectively doubles the limited intrinsic dynamic range of the neural pathways that signal luminance.
2011 — 2015 | Macleod, Donald (co-PI); Nguyen, Truong; Anstis, Stuart | N/A | CIF: Medium: Understanding Quality of 3D Video With Applications in 3D Video Processing and Communications @ University of California-San Diego | 0.915

Activity Code Description: No activity code was retrieved.

Research on 3D image/video perception, in the light of general principles of stereo processing in the human visual system, is being used to derive 3D quality metrics for 3D video applications in order to deliver the best 3D experience.

Human observers can detect differences in depth with high sensitivity but limited precision. Moreover, while the visual system can represent fine details of the 2D image that are carried by high spatial frequency components (even when the image is rapidly changing), it cannot track variations in depth with comparably high resolution in space or time. Thus the representation of stereoscopic depth is restricted both in bandwidth and in bit depth. Because of these limitations, some deviations from accuracy in the representation of depth at the retinal level are perceptually salient, and others less so. Measurements of perceived image fidelity across a range of spatial and temporal profiles for the depth signal are being used to guide the development of optimal video processing techniques, and to allow evaluation of the advantages and limitations of alternative 3D video coding algorithms (such as multiview versus video+depth).

Vision experiments investigate both perceived fidelity and perceived image quality in 3D video generated using a variety of encoding schemes. From those results, quality metrics are developed and integrated into video processing and communications applications. A human-centric disparity estimation and view synthesis algorithm is being developed for video processing and communications applications; it can also be used to improve the performance of object detection, classification, and tracking, and to generate multiple views for autostereoscopic display, which finds applications in 3D-enabled diagnostic medical imaging and surgical systems.
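As an illustration of how a grant-to-researcher matching score like the 0.915 values above might be produced, here is a minimal sketch based on fuzzy name and institution similarity. The site's actual matching algorithm is not described on this page, so everything below (the similarity measure, the field weights, the record layout) is an assumption for illustration only.

```python
# Hypothetical sketch of a grant-to-researcher matching score.
# The real algorithm behind this page is not documented here; the
# similarity measure and weights below are illustrative assumptions.
from difflib import SequenceMatcher


def name_similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_score(researcher: dict, grant: dict,
                w_name: float = 0.7, w_inst: float = 0.3) -> float:
    """Weighted blend of best PI-name match and institution match."""
    best_name = max(
        name_similarity(researcher["name"], pi) for pi in grant["pis"]
    )
    inst = name_similarity(researcher["institution"], grant["institution"])
    return w_name * best_name + w_inst * inst


researcher = {"name": "Anstis, Stuart",
              "institution": "University of California San Diego"}
grant = {"pis": ["Anstis, Stuart"],
         "institution": "University of California San Diego"}

print(round(match_score(researcher, grant), 3))
```

A real linkage system would likely weigh more fields (co-PI lists, topic keywords, active years) and calibrate the weights against human-verified matches; the point of the sketch is only that each table row's score can be read as a weighted similarity between a researcher record and a grant record.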