
Michael S. Landy - US grants
Affiliations: | New York University, New York, NY, United States |
Website:
http://www.cns.nyu.edu/~msl/

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Michael S. Landy is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
1985 — 1988 | Landy, Michael | N/A (Activity Code Description: No activity code was retrieved; click on the grant title for more information.) |
Models of the Processing of Visual Information (Information Science) @ New York University |
0.915 |
1989 — 1991 | Landy, Michael S | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.) |
Fusion and Calibration of Multiple Depth Cues @ New York University
Biological visual systems make use of several depth cues (occlusion, texture, perspective, motion, disparity). We propose methods to combine the results of the modules which compute these cues. We describe an ideal depth observer which combines the separate depth cues into an estimate of absolute depth, weighting cues relative to their estimated reliability. The normative weights assigned to cues should vary with the scene and viewing conditions (e.g., the amount of texture in the scene). An ancillary cue is used to assess the likely performance of various depth modules. We examine the (complicated) mapping between ancillary cues and the weights selected for the rule of combination analytically, by comparison with human psychophysical performance, and through adaptive network simulations. We investigate two types of learning: calibration and depth fusion learning. Calibration translates the output of various modules to veridical depth estimates. Depth fusion learning develops a mapping from ancillary cue values to optimal cue weights. We will develop: (1) psychophysical measurements of the depth combination rule used by human observers when cues are (approximately) in harmony; (2) a software testbed for the simulation and modeling of ideal and psychophysical depth observers; (3) models of the psychophysical observer based on these data and normative ('ideal observer') models; and (4) models of calibration and depth fusion learning. The proposed research will allow us to further understand the use of multiple depth cues by the human visual system. An understanding of the calibration process is immediately applicable to the recalibration that takes place in biological vision when basic parameters change over time, such as interpupillary distance.
The research on fusion learning will shed light on how the human visual system can make the most reliable estimates of depth possible as the visual apparatus changes (through aging and/or disease) so as to alter the relative reliability of the cues. |
1 |
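The reliability-weighted combination this abstract describes is usually formalized as inverse-variance weighting of independent Gaussian cue estimates. A minimal sketch of that rule (the function name, cue values, and units are illustrative assumptions, not taken from the grant):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Fuse depth-cue estimates by inverse-variance (reliability) weighting:
    each cue's weight is proportional to 1/variance, so noisier cues
    contribute less, and the fused estimate is more reliable than any
    single cue."""
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = float(np.dot(weights, np.asarray(estimates, dtype=float)))
    fused_variance = 1.0 / reliabilities.sum()
    return fused, fused_variance

# Texture suggests 10 cm but is noisy; stereo suggests 12 cm and is reliable:
depth, var = combine_cues([10.0, 12.0], [4.0, 1.0])
# the reliable cue dominates: depth = 0.2*10 + 0.8*12 = 11.6, variance = 0.8
```

This is the "ideal depth observer" baseline against which the grant's normative weights (which vary with scene and viewing conditions) would be compared.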
1990 — 1994 | Landy, Michael; Hawken, Michael; Movshon, J. Anthony |
N/A |
Software Development For Neural and Behavioral Research @ New York University
Software will be developed that provides experimental control of stimulus display and data acquisition in neurophysiological and psychophysical investigations of visual processing. This software is being developed for three main reasons: (1) to take advantage of advances in computer technology that facilitate the development of such systems, (2) to incorporate an extensive library of low-level visual display drivers that have been developed at NYU over the past five years, and (3) to incorporate modern design concepts into the software. The software is expected to provide the basic experimental control program for the next 5-10 years for the research group involved in this proposal. In addition, a key design feature of the software system is portability, with the goal of freeing its general structure from dependence on a specific processor architecture or on a particular set of input and output devices. The common theme of the research that will initially use the newly developed software is that it relies upon computer-controlled generation and display of visual patterns on either monochrome or color monitors, along with the collection of stimulus-evoked behavioral and neural responses. Although the initial effort will concentrate on the generation of software for visual experimentation, the design of the system is by no means limited to this area, and adaptation of the general control structures to a wide range of neural and behavioral experiments should be straightforward. |
0.915 |
1992 — 1994 | Landy, Michael S | R01 |
Shape Representation and Multiple Depth Cues @ New York University
DESCRIPTION (Investigator's Abstract): Biological visual systems make use of multiple depth cues including occlusion, texture, perspective, motion parallax, disparity, shading, and so on. The first of three themes that guide the work proposed here is that an ideal depth observer is sensitive to the quality of information available from different depth cues in a particular scene. The second theme is that the ideal depth observer is sensitive also to the logical type of information (in a measurement-theoretic sense) available from different depth cues. The investigators will argue that depth information of different types must be promoted to a common type before being combined, and this process of promotion may explain many observed depth cue interactions. The third theme concerns the veridicality of stimuli (how closely they resemble 'real world' scenes). If an ideal depth observer is sensitive to the quality of information available from different depth cues, then the use of impoverished or distorted depth stimuli can in itself alter the course of depth processing. The experimental work proposed will make use of hardware and software capable of generating highly realistic, controllable viewing conditions involving large displays, observer motion, and accurately modeled scene and surface properties. The experimental methods used are based on perturbation analysis methods that permit analysis of a system that can potentially react to distortions and inconsistencies in stimuli. The proposed research consists of four major tasks. (1) The investigators will test the general applicability of a particular model of cue combination, the linear model, which the investigators have previously demonstrated for combinations of texture, motion and stereo.
This will involve estimation of depth using additional depth cues as well as applications to other tasks including determination of absolute spatial location and 2D feature localization. (2) The investigators will examine whether the human depth combination rule is statistically robust in the sense of discounting cues that indicate depth values discrepant from other cues. (3) Exploration of the time course and limits of cue calibration for absolute depth will be carried out using rich 3D displays and an open-loop pointing task. (4) Experiments will probe the forms of depth representation used by human observers. |
1 |
1996 — 2000 | Landy, Michael S | R01 |
Perception of Depth and Surface Properties @ New York University
DESCRIPTION: The proposed experiments are intended to clarify how humans make use of the various different cues to depth. Three themes guide this work. The first is that an ideal observer is sensitive to the quality of information available from different depth cues in a particular scene. The second is that the ideal observer is sensitive also to the logical type of information available from different depth cues. Depth information of different types must be promoted to a common type before being combined, and this process may explain many depth cue interactions. The third theme concerns the veridicality of stimuli. If an ideal observer is sensitive to the quality of information available from different depth cues then the use of impoverished stimuli can alter the nature of the processing. These three themes lead to a Modified Weak Fusion framework for understanding depth cue combination. The experiments will use highly realistic, controllable viewing conditions involving large displays and accurately modeled scene and surface properties. The proposed research consists of three major tasks: (1) examination of interactions between depth cues in terms of perceived depth, as well as other perceived surface properties such as lightness, to test the MWF model; (2) a number of single-cue depth studies and other control studies; and (3) study of the form of the surface representation used by human observers. |
1 |
2001 — 2003 | Landy, Michael S | R01 |
Depth and Surface Properties: Perception and Action @ New York University
Biological visual systems make use of many different cues for visual judgments. For depth and shape estimation, these include occlusion, texture, perspective, motion parallax, and disparity. The combination of these cues is based on the relative reliabilities of the individual cues, but cannot occur until cues are promoted to a commensurate scale by filling in one or more needed parameters (e.g. the fixation distance for depth estimates, the illuminant color and intensity for estimates of surface color, etc.). These parameters are also estimated using multiple cues (e.g., both retinal and oculomotor cues for the fixation distance). We propose experiments intended to clarify how human observers promote and combine cues. The experimental methods used are based on perturbation analysis, which permits examination of a system that can potentially react to distortions and inconsistencies in stimuli. The proposed research consists of three major tasks. (1) We will examine depth judgments and motor responses in simulated 3-D scenes to determine whether behavior can be well understood by modeling observers as Bayesian decision makers. If so, any difference between visual judgments and motor responses may be due to different weights given to visual cues due to differences in the corresponding risk factors. (2) The notion of cue promotion suggests that there are parameters that observers must estimate along the way to determining scene geometry and surface properties such as color. Again, a Bayesian model will be used to shed light on the process of estimating these internal parameters. (3) We will continue our studies of the form of representation used by human observers for curves and surfaces as well as 3-dimensional motion paths. These studies will inform our understanding of how object shape and motions are determined from sparse and often conflicting visual data. |
1 |
2004 — 2017 | Landy, Michael S | R01 |
Perception and Action: Ideal Observers and Actors @ New York University
DESCRIPTION (provided by applicant): Biological visual systems make use of many different sources of information ("cues") for visual judgments. For depth and shape estimation, for example, these include occlusion, texture, perspective, motion parallax, disparity, shading and contour. The combination of these cues is based on the relative reliabilities of the individual cues, but cannot occur until cues are promoted to a commensurate scale by filling in one or more needed parameters (e.g., the fixation distance and azimuth for depth and slant estimates). These parameters are also estimated using multiple cues (e.g., both retinal and oculomotor cues for the viewing geometry). We propose statistical decision theoretic models for ideal behavior in the visual estimation of scene properties and for movement planning. The ideal observer or actor must take into account measurement uncertainty, the gains and losses associated with different outcomes, and prior information about the current state of the world. We propose experiments intended to clarify how human observers promote and combine cues for vision and for the visual control of action. The experimental methods used are based on perturbation analysis which permits examination of a system that can potentially react to distortions and inconsistencies in the stimuli. The proposed research consists of three major tasks. (1) We will analyze observer behavior relative to predictions of ideal Bayesian decision makers confronted by the same levels of uncertainty in tasks of perceptual decision, reaching and grasping. (2) We will examine cue combination in the service of cue promotion, again with reference to ideal behavior.
(3) We will continue our studies of spatial interpolation performance so as to better understand such aspects of the underlying model as the prior distribution, and the methods used by the observer to be statistically robust (which, in this context, is closely related to the scene segmentation problem). |
1 |
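The ideal Bayesian decision maker invoked in this abstract combines a noisy measurement with a prior, then chooses the action that minimizes posterior expected loss. A numerical sketch under assumed Gaussian noise (all function names, parameter values, and the particular loss function are illustrative, not from the grant):

```python
import numpy as np

def posterior_mean(measurement, sigma_meas, prior_mean, sigma_prior):
    """Gaussian measurement + Gaussian prior: each weighted by reliability."""
    w = (1 / sigma_meas**2) / (1 / sigma_meas**2 + 1 / sigma_prior**2)
    return w * measurement + (1 - w) * prior_mean

def bayes_action(measurement, sigma_meas, prior_mean, sigma_prior, loss):
    """Choose the action minimizing posterior expected loss
    (Monte Carlo posterior samples, grid search over actions)."""
    mu = posterior_mean(measurement, sigma_meas, prior_mean, sigma_prior)
    sigma_post = (1 / sigma_meas**2 + 1 / sigma_prior**2) ** -0.5
    samples = np.random.default_rng(0).normal(mu, sigma_post, 10_000)
    actions = np.linspace(mu - 3 * sigma_post, mu + 3 * sigma_post, 201)
    expected_loss = [np.mean(loss(a, samples)) for a in actions]
    return actions[int(np.argmin(expected_loss))]

# Under squared-error loss the optimal action is the posterior mean (1.0 here).
# An asymmetric loss would shift the chosen action away from the perceptual
# estimate, one way visual judgments and motor responses can come to differ.
action = bayes_action(measurement=2.0, sigma_meas=1.0, prior_mean=0.0,
                      sigma_prior=1.0, loss=lambda a, s: (a - s) ** 2)
```

Swapping in different loss functions for perceptual versus motor tasks is one way to model the abstract's "risk factor" differences between judgment and action.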
2005 — 2008 | Landy, Michael S | R01 |
Visual Perception and Coding of Texture @ New York University
DESCRIPTION (provided by applicant): The experiments in this proposal share two basic themes. The first is that visual texture stimuli may be used to probe the initial visual coding of images and image sequences. The second theme is to relate all of our work to a basic 4-stage, multiple channel model, the "back-pocket model of texture segregation." Each channel consists of (1) an initial linear spatial filter, (2) a nonlinearity, and (3) a second linear filter. These channels feed (4) a combination rule or linking hypothesis that yields a response or prediction of performance in a given task. Our work attempts to confirm this model, characterize its mechanisms, analyze its capabilities, and relate it to the underlying physiology. This proposal has three major aims. The first aim concerns the coding and appearance of natural textures. We investigate whether human texture coding is matched to the statistics of natural textures and the degree to which observers can estimate physical characteristics of textures independent of viewing conditions. The second aim investigates how the back-pocket model channels are used for the identification of texture-defined objects and includes further investigations of the structure and selectivity of back-pocket model channels. The third aim is to investigate the cortical implementation of texture or 2nd-order channels using fMRI to measure responses to and adaptation to texture modulations. |
1 |
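The filter-rectify-filter cascade of the "back-pocket model" described above can be sketched in one dimension: a linear filter alone cannot recover the slow contrast envelope of a fine texture, but it emerges after the intermediate nonlinearity. A minimal illustration (filter choices and signal parameters are my assumptions, not the grant's):

```python
import numpy as np

def backpocket_channel(row, f1, f2):
    """One channel of the four-stage back-pocket model sketch:
    (1) first-stage linear filter, (2) pointwise nonlinearity
    (rectification), (3) second-stage linear filter at a coarser scale.
    Stage (4), the combination/decision rule, would pool such channels."""
    stage1 = np.convolve(row, f1, mode="same")    # fine-scale linear filtering
    stage2 = np.abs(stage1)                       # static nonlinearity
    return np.convolve(stage2, f2, mode="same")   # coarse-scale smoothing

# A contrast-modulated texture: the slow envelope is largely invisible to a
# purely linear filter but is recovered after rectification.
x = np.arange(256)
carrier = np.sign(np.sin(2 * np.pi * x / 8))        # fine texture (period 8)
envelope = 0.5 * (1 + np.sin(2 * np.pi * x / 128))  # slow contrast modulation
f1 = np.array([1.0, -1.0])                          # crude fine-scale edge filter
f2 = np.ones(32) / 32                               # crude coarse-scale average
response = backpocket_channel(carrier * envelope, f1, f2)  # tracks the envelope
```

Such "second-order" channel responses are exactly what the proposal's fMRI aim would probe with texture-modulation stimuli.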
2014 — 2017 | Landy, Michael | N/A |
@ New York University
A fundamental question in perceptual and cognitive science concerns how decisions about incoming sensory information unfold over time. The influx of available sensory information must be balanced across time with the need for quickness versus accuracy in making a decision. Sensory systems are flexible in response to changing context, resulting in several dynamic aspects of perception at multiple time scales, including the trade-off between speed and accuracy in decisions or actions, adaptation for repeated sensory stimuli, and long-term recalibration in response to a consistent change in stimulation or error. This proposal attempts to advance theoretical understanding of these processes by unifying disparate threads in modern perceptual theory while developing models of the dynamics of sensory integration and decision-making. The research could ultimately have implications for the design of virtual reality systems, lighting and sound systems, visual displays, and artificial vision and sound processing systems. |
0.915 |
2015 — 2019 | Landy, Michael S | P30 (Activity Code Description: To support shared resources and facilities for categorical research by a number of investigators from different disciplines who provide a multidisciplinary approach to a joint research effort or from the same discipline who focus on a common research problem. The core grant is integrated with the center's component projects or program projects, though funded independently from them. This support, by providing more accessible resources, is expected to assure a greater productivity than from the separate projects and program projects.) |
Core Grant: Visual Displays Module @ New York University
Abstract: The generation of complex and accurately controlled visual stimuli is a major concern for most investigators in the Vision Core. We would all like to generate and control more complex stimuli with less programming effort. The Visual Displays Module supports research within NYU and in dozens of labs world-wide by maintaining and developing software for presenting visual stimuli using advanced graphics techniques. We propose to enhance and extend the MGL package, developed locally for this purpose, allowing it to interface with a wider range of behavioral and physiological response measurement devices and to be used on a wider range of platforms, including different computer operating systems, tablet devices and over the web. The Visual Displays Module is the most widely used module in the Core, and will have a moderate or extensive impact on the research of 17 of the 18 members of the Core; three of these are junior investigators, 13 are supported by NEI, and 9 hold qualifying NEI grants. In addition, the resources from this module have been and will continue to be widely used by vision scientists outside NYU. |
1 |
2019 — 2021 | Landy, Michael S | R01 |
Multisensory Cue Integration: Theory, Behavior and Implementation @ New York University
Project Summary: To optimally estimate a property of the environment such as object size, location or orientation, one should use all available sensory information and combine it with prior information, i.e., a probability distribution across possible world states, reflecting knowledge of scenes one is likely to encounter. Sensory input typically arises from multiple sensory modalities, and is uncertain due to physical and neural noise. How are these sources of information combined? An ideal observer will combine all sources of information, taking into account the reliability of each source. In addition, such an observer needs to consider alternative causes of discrepancies between sources of sensory information. Do two sources disagree so much that one should conclude they derive from different objects, and therefore have separate causes in the environment? Or, does a discrepancy indicate that one or both sources of information (e.g., sense modalities) have become uncalibrated? Many studies define "optimal" cue integration as maximizing the reliability of the combined-cue estimate, which is generally consistent with human behavior. Do observers have access to the resulting reliability estimate to determine one's confidence in this estimate, perhaps to inform subsequent behavior? What computation does the brain use to solve these problems and how are these computations implemented? We propose research aimed to answer these questions. In our first aim we propose to develop biologically realistic models of how such computations are implemented, i.e., testable neural-network models of optimal behavior for sensory estimation, causal inference, recalibration and confidence. Second, we propose a series of experiments in an area that has been little studied in the framework of optimal cue integration: the combination of visual, tactile and proprioceptive inputs for localization.
These experiments test whether humans perform optimal integration and recalibration of multisensory cues and priors under unclear causal structures in scenarios that are more complex than typically studied (i.e., involving dynamics, context effects, etc.) and thus more similar to the real world. These studies are important and innovative on their own. In addition, they will also provide the foundation for Aim 3, in which we will probe the implementation of cue combination, influence of priors, causal inference, recalibration and confidence in the human brain using fMRI. Together, the experimental data from Aims 2 & 3 will be used to test the models from Aim 1. These studies will shed light on the way in which multisensory stimuli are encoded to form a coherent percept, the information considered when perceptual decisions are made, and how vision is used to guide us in an ever-changing world. These experiments on normal humans will provide a starting point for understanding multisensory perception and perceptual adaptation in individuals in which these systems are compromised by conditions that impact sensory input (e.g., amblyopia, AMD, stroke). |
1 |
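The causal-inference question raised in this abstract (do two discrepant cues share one cause, or two?) is commonly modeled by comparing a common-cause and a separate-causes hypothesis under Bayes' rule. A numerical sketch under assumed Gaussian noise (function names, noise parameters, prior width, and grid bounds are all illustrative assumptions, not taken from the grant):

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_common_cause(x_vis, x_touch, sigma_vis, sigma_touch,
                   sigma_prior=10.0, p_common=0.5):
    """Posterior probability that a visual and a tactile location measurement
    were generated by one object rather than two, computed by numerically
    marginalizing over candidate locations under each causal hypothesis."""
    s = np.linspace(-50.0, 50.0, 2001)        # grid of candidate locations
    ds = s[1] - s[0]
    prior = gauss(s, 0.0, sigma_prior)
    # Common cause: a single location s produced both measurements.
    like_common = np.sum(gauss(x_vis, s, sigma_vis) *
                         gauss(x_touch, s, sigma_touch) * prior) * ds
    # Separate causes: each measurement came from its own location.
    like_indep = (np.sum(gauss(x_vis, s, sigma_vis) * prior) * ds *
                  np.sum(gauss(x_touch, s, sigma_touch) * prior) * ds)
    post = p_common * like_common
    return post / (post + (1 - p_common) * like_indep)

# Nearby measurements favor one object; widely discrepant ones favor two:
near = p_common_cause(0.0, 1.0, sigma_vis=1.0, sigma_touch=1.0)  # high
far = p_common_cause(0.0, 8.0, sigma_vis=1.0, sigma_touch=1.0)   # near zero
```

Only when the common-cause hypothesis wins should the observer fully integrate the cues; a low posterior instead signals separate objects, or miscalibration to be corrected, which is the recalibration question the proposal pairs with causal inference.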