1992 — 1999
Knill, David C
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Perceptual Constraints For Image Understanding in Humans @ University of Minnesota Twin Cities
When one looks at a scene, one is immediately able to classify the edges in the scene, perceiving that this edge is a shadow boundary, that edge is a crease in a surface, and so on. If we consider the visual system to be working to interpret scene attributes from image data, we would say that the visual system is very good at determining the physical causes of the contours to which edges project in an image. Moreover, the visual system uses information provided by the configuration of contours in an image to help interpret scene attributes such as the shapes of surfaces. The problem of contour interpretation, which includes both of these functions, appears quite difficult when one considers that the information provided in an image for both functions is locally ambiguous. The approach to the problem adopted by many computer vision researchers is to specify a set of constraints on contour interpretation which, when taken together, define a unique solution. A similar approach can be used to organize research into human perceptual processing of contours. Within this approach, one considers the visual system as implicitly enforcing a set of constraints in its interpretation of contours. The constraints come from two sources: the image data itself and the natural structure of the environment. The present proposal aims to analyze the constraints of both types which are potentially available to the visual system for contour interpretation and to investigate psychophysically the nature of the constraints actually used by the visual system. The research will focus on constraints on two types of contour which have received relatively little attention from vision researchers: reflectance contours, which project from discontinuities in surface reflectance, and shadow contours, which project from discontinuities in illumination in a scene. Also considered will be occluding edges, due to their importance in determining scene structure.
An important aspect of the research will be the use of 3D rendering techniques to generate naturalistic images for experimental stimuli. This will allow the manipulation of independent variables in both the 3D scene domain and the 2D image domain. The ability to control the 3D structure of scenes is necessary for the investigation of what natural constraints on edge structure are assumed by the visual system since these are specified in the scene domain, not in the image domain. The results of the investigation will provide a deeper understanding of the information available in images for contour interpretation and the ways in which this information is used by the human visual system. More broadly, the research relates to the issue of how much knowledge of environmental structure and of the image formation process is incorporated into visual system processing.
1994
Knill, David C
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Perceptual Constraints For Image Understanding @ University of Pennsylvania
When one looks at a scene, one is immediately able to classify the edges in the scene, perceiving that this edge is a shadow boundary, that edge is a crease in a surface, and so on. If we consider the visual system to be working to interpret scene attributes from image data, we would say that the visual system is very good at determining the physical causes of the contours to which edges project in an image. Moreover, the visual system uses information provided by the configuration of contours in an image to help interpret scene attributes such as the shapes of surfaces. The problem of contour interpretation, which includes both of these functions, appears quite difficult when one considers that the information provided in an image for both functions is locally ambiguous. The approach to the problem adopted by many computer vision researchers is to specify a set of constraints on contour interpretation which, when taken together, define a unique solution. A similar approach can be used to organize research into human perceptual processing of contours. Within this approach, one considers the visual system as implicitly enforcing a set of constraints in its interpretation of contours. The constraints come from two sources: the image data itself and the natural structure of the environment. The present proposal aims to analyze the constraints of both types which are potentially available to the visual system for contour interpretation and to investigate psychophysically the nature of the constraints actually used by the visual system. The research will focus on constraints on two types of contour which have received relatively little attention from vision researchers: reflectance contours, which project from discontinuities in surface reflectance, and shadow contours, which project from discontinuities in illumination in a scene. Also considered will be occluding edges, due to their importance in determining scene structure.
An important aspect of the research will be the use of 3D rendering techniques to generate naturalistic images for experimental stimuli. This will allow the manipulation of independent variables in both the 3D scene domain and the 2D image domain. The ability to control the 3D structure of scenes is necessary for the investigation of what natural constraints on edge structure are assumed by the visual system since these are specified in the scene domain, not in the image domain. The results of the investigation will provide a deeper understanding of the information available in images for contour interpretation and the ways in which this information is used by the human visual system. More broadly, the research relates to the issue of how much knowledge of environmental structure and of the image formation process is incorporated into visual system processing.
2000 — 2004
Knill, David C
T32 — Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas.
Training in Visual Science @ University of Rochester
2001 — 2009
Knill, David C
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Visual Computations For Motor Control @ University of Rochester
DESCRIPTION (provided by applicant): The goal of the proposed research is to understand how 3-dimensional (3D) visual information, both about target objects and about the moving hand, is used to plan and control goal-directed hand movements. The research focuses on what depth cues the visuomotor system uses for planning and online control and how it integrates those cues. Several theoretical considerations inform the work. First, different motor behaviors admit different solutions to the problem of mapping visual information to motor behavior. We therefore address the question of whether the brain uses a common visual representation of objects or relies on different task-specific strategies for planning and controlling different types of hand movements such as hand transport and hand rotation (e.g. during grasping movements). We will measure how subjects weight binocular disparity and texture/figural cues about object layout in a scene for controlling both components of hand movements. As a probe into the modularity of visuomotor computations, we will use haptic feedback to adapt subjects' cue weights in one task and measure transfer of adaptation effects between tasks. Second, the relative contribution of different cues to motor control depends on both the reliability of the information provided by the cues and the time course with which the brain processes them. We will study how changes in cue reliability affect how the brain uses the information provided by depth cues for both planning and online control. We will also measure the time course of processing binocular disparity and texture/figural cues as they contribute to motor control using a perturbation technique developed in the previous funding period. 
In order to derive a deeper understanding of how cue uncertainty and timing constraints interact to determine human visuomotor performance, we will supplement the experimental studies with computational work applying methods from optimal filtering (optimal statistical estimation over time) and control.
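The reliability-weighted cue combination underlying this aim can be sketched in a few lines. This is an illustrative model only, not the applicants' implementation; the function name and the numbers are hypothetical, and it assumes independent Gaussian cue noise, under which inverse-variance weighting is the statistically optimal linear combination.

```python
import numpy as np

def fuse_cues(estimates, variances):
    """Reliability-weighted (inverse-variance) fusion of independent cues.

    Weights are proportional to reliability (1 / variance), the
    statistically optimal linear combination under independent Gaussian noise.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = float(weights @ estimates)
    fused_variance = float(1.0 / reliabilities.sum())
    return fused, weights, fused_variance

# Hypothetical numbers: surface slant (deg) signalled by a disparity cue
# (variance 4) and a texture cue (variance 16); the more reliable
# disparity cue gets four times the weight of the texture cue.
fused, weights, fused_var = fuse_cues([30.0, 40.0], [4.0, 16.0])
```

Under these assumptions, changing a cue's reliability directly shifts its weight in the fused estimate, which is the kind of behavioral signature the proposed reliability manipulations are designed to measure.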
2005 — 2013
Knill, David C
T32 — Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas.
Training Grant in Vision Science @ University of Rochester
Twenty-one faculty of the Center for Visual Science (CVS) at the University of Rochester request renewal of support for a pre-doctoral and postdoctoral training program that emphasizes two broadly defined areas of vision research: research into central visual processing using psychophysical, physiological, and computational approaches, and research in physiological optics using advanced optical techniques to study basic questions about retinal processing and to support translational research on eye disease. Training is interdisciplinary, drawing particularly on the unique technical and intellectual resources of the Center. It covers a broad range of basic and clinical problems in vision, but emphasizes approaches that link visual performance to underlying neural mechanisms. We request support each year for six pre-doctoral trainees, who will generally enter the program through Brain and Cognitive Science, Computer Science, Neuroscience, Biomedical Engineering, or the Institute of Optics. Students take core courses plus advanced seminars in visual science, augmented by courses from the department through which they entered the program. They attend regular colloquia, research meetings and the biannual CVS Symposium and Fall Vision Meeting. Concurrently with course work, students complete research projects in CVS preceptor labs. We request support each year for one postdoctoral fellow. Postdoctoral training has a heavy emphasis on research. The training grant will be used especially to draw talented scientists from other areas into vision research. We are also requesting stipends for eight summer undergraduate research fellows to participate in an ongoing program that we have developed to introduce students to research in vision science and recruit students into graduate training in visual science.
2007 — 2011
Knill, David C
R01 — Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Bayesian Computations in Human 3D Visual Perception @ University of Rochester
DESCRIPTION (provided by applicant): The goal of the proposed research is to understand how the human visual system resolves the inherent geometric ambiguities associated with most visual cues to depth. The brain can resolve cue ambiguity in two ways: (1) by applying prior knowledge of ecological constraints on the underlying scene variables (e.g. that figures tend to be symmetric) and (2) by cooperatively using the information from other sensory cues to disambiguate their values. The first two principal aims focus on the first part of the problem. They are shaped by the observation that much of the statistical structure that makes monocular cues to depth informative is categorical in nature: motions are rigid or not, figures are symmetric or not, textures are homogeneous or not, etc. We will study how the visual system combines information from multiple cues to disambiguate which of several possible prior constraints to use when interpreting a cue. Casting the problem within a Bayesian framework provides a formal system for modeling robust cue integration, which allows the visual system to deal effectively with large conflicts between sensory cues. We will perform experiments to test the Bayesian model against other models of robust cue integration. The model also provides a framework for characterizing how the brain adapts its internal models of the prior statistics that make monocular cues informative. We will study how human observers use the information obtained by combining multiple cues to adapt these internal models and how this impacts how they integrate cues to estimate surface orientation and shape. The final principal aim tests whether and how the brain uses non-visual information (haptic/kinesthetic) derived from active movement and exploration of objects to disambiguate scene properties on which visual cues depend.
The research will focus on three monocular visual cues about surface orientation and shape (figure shape, texture and motion) and how the brain combines these cues with stereoscopic cues. The psychophysics is motivated by and will be coupled with computational modeling of ideal Bayesian models for visual cue integration, learning and multi-modal cue integration. The results of the proposed research will elucidate the types of statistical inferences that are built into the neural computations underlying visual depth perception and define the limits of these computations. This will ultimately direct and constrain future studies of the neural mechanisms underlying vision.
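The mixture-of-priors idea described in this abstract (e.g. figures are symmetric or not) can be sketched as Bayesian model averaging over candidate ecological regimes. The sketch below is a minimal one-dimensional Gaussian illustration, not the proposal's actual model; the function names and all numbers are hypothetical.

```python
import numpy as np

def gauss(x, mu, var):
    """One-dimensional Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_prior_estimate(cue, cue_var, prior_means, prior_vars, prior_probs):
    """Estimate a scene variable under a mixture of categorical priors.

    Each component stands for one ecological regime (e.g. 'figure is
    symmetric' = narrow prior, 'generic figure' = broad prior). The
    posterior over regimes weighs each regime's prior probability by the
    marginal likelihood of the cue under that regime; the final estimate
    model-averages the regime-conditional posterior means.
    """
    post, means = [], []
    for m, v, p in zip(prior_means, prior_vars, prior_probs):
        post.append(p * gauss(cue, m, v + cue_var))  # marginal likelihood x prior
        # Conjugate Gaussian posterior mean for this regime
        means.append((m / v + cue / cue_var) / (1 / v + 1 / cue_var))
    post = np.array(post) / np.sum(post)
    return float(post @ np.array(means)), post
```

Because regime selection depends on the data, a cue consistent with the narrow "symmetric" regime is interpreted under that strong constraint, while a discrepant cue is automatically reinterpreted under the broad regime, which is one way to capture robust integration under large cue conflicts.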
2012
Knill, David C
R13 — Activity Code Description: To support recipient sponsored and directed international, national or regional meetings, conferences and workshops.
CVS Symposium: Computational Foundations of Perception and Action @ University of Rochester
DESCRIPTION (provided by applicant): A fundamental goal of neuroscience is to understand how the central nervous system transforms sensory signals into behavior. Central to this effort is understanding the computational problems posed to the CNS and what computational strategies the CNS employs to solve them. A description of the mapping from sensory signals to neuronal activity in an area of visual cortex, for example, is only one piece of an explanatory model of neural function in that area. A full understanding of neural function requires understanding the computations in which the neurons are embedded and how their behavior is driven by and relates to the computations being performed. The past decade has seen a marked growth in computational studies of sensory, perceptual and sensori-motor processing. What is most striking about this work is the converging application of common conceptual tools to understand everything from neural coding to decision-making. The 28th Symposium of the Center for Visual Science will bring together computational, neurophysiological and psychophysical researchers who study the computational foundations of problems in sensory and perceptual processing, ranging from low-level sensory coding to higher-level aspects of perception and action such as cue integration, decision-making and sensorimotor control. The goal of the workshop is to provide a forum for investigating the common foundational computational principles that underlie the many seemingly different functions of sensory systems (and where they differ) and to discuss how to link computational theories to underlying mechanisms to gain a deeper understanding of perceptual behavior. With this in mind, we have invited speakers who bring together in some way computational and experimental approaches. Sessions will focus on five topics: sensory coding, multi-sensory integration, sensori-motor control, decision making, and perceptual learning and memory.
Because some of the most exciting work in computational neuroscience is being done by young investigators, we have included in the speaker list a number of promising early-career speakers. We believe that representing their voices along with those of more established leaders in the field will bring energy and new ideas into the discussion. We will also provide an opportunity for students and post-docs to present their work in poster sessions and will make competitive travel fellowships available to the best of the students and post-docs who wish to attend and present their work. PUBLIC HEALTH RELEVANCE: The past decade has seen a marked growth in computational studies of sensory, perceptual and sensori-motor processing. What is most striking about this work is the converging application of common conceptual tools to understand everything from neural coding to decision-making. The 28th Symposium of the Center for Visual Science will bring together computational, neurophysiological and psychophysical researchers who study the computational foundations of problems in sensory and perceptual processing, ranging from low-level sensory coding to higher-level aspects of perception and action such as cue integration, decision-making and sensorimotor control.
2013
Knill, David C
R21 — Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Kinesthetic Influences On Visual Motion Perception in Normal and Older Adults @ University of Rochester
DESCRIPTION (provided by applicant): How the brain integrates kinesthetic information about self-generated movements with other sensory signals caused by those movements is largely unknown. While there is a substantial and growing body of research on how the brain integrates multiple sensory signals generated by objects and events in the world, much less is known about how the brain integrates kinesthetic and visual motion signals. Even less is known about how the interactions between kinesthesis and vision change with age. The current proposal addresses these gaps in our understanding, specifically aiming to elucidate how kinesthetic signals generated by one's hand motion influence visual motion processing and how those interactions change with age - a question of clinical significance because of the known age-related deficits in visual motion processing. The first aim focuses on an aspect of multisensory integration that is often overlooked - how the brain determines whether or not, or how strongly, to couple signals from different modalities (most current research focuses on how the brain weights different signals when they are perfectly coupled). We will measure how subjects adapt their inter-modal coupling to changes in signal reliability and compare subjects' performance to that of optimal Bayesian models that are parameterized by estimates of individual subjects' sensory uncertainty. The models provide a tool for testing the hypothesis that aging leads to changes in multimodal integration mechanisms themselves, by allowing us to discount the effects of changes in unimodal signal uncertainty on older subjects' behavior. The second aim will study whether and how the brain uses kinesthetic signals to support and enhance early visual processing and how this changes with age. 
In one set of experiments, we will test the hypothesis that predictive signals associated with kinesthesis enhance the detectability of congruent visual motion signals and measure the tuning of this enhancement to conflicts between the signals. Another set of experiments will test a strong version of the interaction hypothesis: that kinesthesis can by itself be sufficient to generate visual motion percepts. Here, we will expand on a phenomenon discovered in our preliminary studies: that many subjects report seeing visual motion embedded in a white noise field optically collocated with their moving hand. To quantify the strength of the generated motion percepts, we will experimentally determine the real visual motions that perceptually match the reported phantom motions. We will further explore this kinesthetic enhancement of visual processing to determine whether the underlying interactions between kinesthesis and visual motion processing are multiplicative or additive. A final set of experiments will test the hypothesis that the brain uses kinesthetic signals to aid in motion segmentation, both by enhancing the motion signal from a moving target when the hand moves the target and by suppressing the background when the hand moves it. We will measure age-related changes for each of these three forms of interaction between kinesthesis and vision, matching signal uncertainty for young and older subjects to isolate changes that result from age-related changes in multisensory integration mechanisms.
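The first aim's coupling question (deciding whether, and how strongly, to bind kinesthetic and visual motion signals) is often framed as Bayesian causal inference. The sketch below is a generic illustration of that framing, not the applicants' model; the function, its parameters, and the uniform independent-cause assumption are all hypothetical.

```python
import numpy as np

def gauss(x, mu, var):
    """One-dimensional Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def couple_signals(x_vis, x_kin, var_vis, var_kin, p_common=0.5, signal_range=20.0):
    """Decide how strongly to couple visual and kinesthetic motion signals.

    The posterior probability of a common cause falls off with the
    discrepancy between the signals; the visual estimate model-averages
    the fused (coupled) and vision-alone (uncoupled) interpretations.
    """
    # Likelihood of the observed discrepancy if both signals share one cause
    like_common = gauss(x_vis - x_kin, 0.0, var_vis + var_kin)
    # If the causes are independent, assume the signals fall anywhere in range
    like_indep = 1.0 / signal_range
    p_c = p_common * like_common / (
        p_common * like_common + (1 - p_common) * like_indep)
    # Reliability-weighted fusion applies only under the common-cause model
    fused = (x_vis / var_vis + x_kin / var_kin) / (1 / var_vis + 1 / var_kin)
    return float(p_c * fused + (1 - p_c) * x_vis), float(p_c)
```

In this sketch, near-congruent signals are strongly coupled and the kinesthetic signal pulls the visual estimate, while widely discrepant signals are effectively decoupled, matching the intuition that the brain should not bind signals unlikely to share a cause.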