1985 — 1989 |
Todd, James |
N/A |
The Computational Analysis of Three-Dimensional Form From Shading and Texture |
0.599 |
1989 — 1994 |
Todd, James |
N/A |
Visual Perception and Cognition of Smoothly Curved Surfaces
This research will investigate the fundamental mechanisms by which human observers perceive and remember the 3D structures of smoothly curved surfaces. The project will address two basic questions: first, how are objects in 3-space represented mentally; and second, which properties of optical structure determine this knowledge perceptually? It is clear from our perceptual experience that visual images on the retina provide sufficient information for us to adequately perceive the 3D structure of the environment; yet it is equally clear upon closer reflection that the properties of visual images seem to have little in common with the properties of real objects. Real objects exist in 3D space and are composed of tangible substances such as earth, metal, or flesh, while an image of an object is confined to a 2D projection surface and consists of nothing more than flickering patterns of light. Although the problem of how human observers are able to deal with this seemingly incommensurate mapping between objects and images is an ancient one, research in this area has been given new impetus in recent years by attempts to develop artificial visual systems for robots and prosthetic devices for the blind. This project will attempt to facilitate these efforts by providing a more detailed understanding of how similar problems are solved by the human visual system. In order to model rigorously the processes of 3D form perception, it is first necessary to define precisely what those processes accomplish for us. Most previous investigations in this area have assumed that each visible surface point is encoded perceptually in terms of its metric depth relative to the point of observation. This research, in contrast, is designed to explore a much wider variety of potential representations.
It builds on earlier findings that perceptual judgments of metric depth are surprisingly inaccurate, and it stems from the hypothesis, supported by those results, that visual knowledge of 3D structure may often involve a more abstract form of representation in which an observed surface is perceptually encoded in terms of its nonmetric properties (e.g., those involving ordinal or nominal relations). A variety of experiments, using both natural and computer-generated images, will test this hypothesis. They will ask observers to make various types of judgments that call for different levels of knowledge about 3D structure. Other experiments will identify some of the specific computational mechanisms through which perceptual representations of 3D form are generated from optical information, and will measure the stability of these mechanisms over a wide range of viewing conditions.
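The metric/nonmetric distinction above can be made concrete with a small sketch. This is purely illustrative (the function names and the monotone-distortion example are mine, not the grant's): a metric encoding stores each point's depth directly, while an ordinal encoding stores only which of each pair of points is nearer, so it is unaffected by any systematic compression of perceived depth.

```python
# Illustrative sketch only: contrasting a metric depth representation with the
# nonmetric (ordinal) encoding the abstract hypothesizes. Names are hypothetical.

def metric_encoding(depths):
    """Store each visible point's depth relative to the observer."""
    return list(depths)

def ordinal_encoding(depths):
    """Store only pairwise depth order: 1, -1, or 0 for each pair (i, j)."""
    n = len(depths)
    return {(i, j): (depths[i] > depths[j]) - (depths[i] < depths[j])
            for i in range(n) for j in range(i + 1, n)}

# A monotone distortion of perceived depth (e.g., systematic compression)
# changes the metric encoding but leaves the ordinal encoding intact,
# consistent with inaccurate metric judgments alongside reliable ordinal ones.
true_depths = [1.0, 3.0, 2.0, 5.0]
compressed = [d ** 0.5 for d in true_depths]

assert metric_encoding(true_depths) != metric_encoding(compressed)
assert ordinal_encoding(true_depths) == ordinal_encoding(compressed)
```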
|
0.599 |
1996 — 2000 |
Todd, James (co-PI); Lindsey, Delwin |
N/A |
Effects of Spatiotemporal Pooling On Perceived Motion @ Ohio State University Research Foundation
9514522 LINDSEY Objects moving within a human's field of view stimulate many neurons, which differ both in the region of the visual field they represent and in the local characteristic of the moving object, such as its color, orientation, speed, or depth, that each signals. Human perception of objects in motion suggests that the visual system somehow unifies the responses of these many neurons into a single perceptual whole, using a process known as pooling. This process is spatiotemporal in nature: it extends across neurons that respond to different properties of the object and to different regions of visual space, and it combines responses from the recent past with those occurring in the present. This research is concerned with spatiotemporal pooling by the human visual system and is motivated by a three-stage, quantitative model of motion perception. The stages of the model are Detection, Integration, and Decision. The first two stages simulate the acquisition and collation of motion information from low-level motion sensors analogous to those thought to exist in humans. The Decision stage simulates a neural network designed to compute an appropriate velocity from the pooled responses of the low-level motion sensors. The model is designed to resolve not only the motion of a single object moving against a background, but also that of two overlapping transparent objects moving at different velocities relative to one another. This research will involve four series of psychophysical experiments on human subjects. The research will flesh out various functional aspects of the pooling process in humans and of the computer simulation of these processes.
The first series will determine the spatiotemporal ranges over which pooling occurs and will determine whether motion pooling is "hard-wired" or whether the visual system has the capacity to adjust its pooling parameters flexibly when additional image information is present in the visual scene. The second series of experiments will determine how motion information from many different potential sources, e.g., spatial frequency, orientation, color, disparity, is pooled and represented in velocity space. The third and fourth series of experiments will determine how finely or coarsely image velocity information is represented in velocity space and will determine the robustness of object velocity resolution in the presence of noise. This research will enhance our current understanding of the processes underlying motion perception in humans. The research may be of ultimate utility in robotics, in the design and construction of visual prosthetics, and in the clinical assessment of visual dysfunction.
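The three-stage scheme named in the abstract (Detection, Integration, Decision) can be caricatured in a few lines. This is a minimal sketch under my own simplifying assumptions, not the grant's quantitative model: local sensors report dot displacements, the responses are pooled into a coarse velocity-space histogram, and the decision stage reads out the best-supported velocities, with two read-outs standing in for motion transparency.

```python
# Hypothetical sketch of a Detection / Integration / Decision pipeline;
# the actual model in the proposal is quantitative and far more detailed.
from collections import Counter

def detect(pairs):
    """Detection: each local sensor reports a (dx, dy) velocity for one
    dot's displacement between two frames."""
    return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in pairs]

def integrate(responses, quantum=1):
    """Integration: pool sensor responses into a coarse velocity-space histogram."""
    return Counter((round(dx / quantum), round(dy / quantum))
                   for dx, dy in responses)

def decide(histogram, n=1):
    """Decision: read out the n best-supported velocities
    (n=2 stands in for two overlapping transparent motions)."""
    return [v for v, _ in histogram.most_common(n)]

# Two transparent surfaces: half the dots drift rightward, half upward.
pairs = [((0, 0), (1, 0)), ((2, 3), (3, 3)),   # rightward dots
         ((5, 5), (5, 6)), ((7, 1), (7, 2))]   # upward dots
votes = integrate(detect(pairs))
assert set(decide(votes, n=2)) == {(1, 0), (0, 1)}
```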
|
0.728 |
1998 — 2001 |
Todd, James T |
R01: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Perceptual Representation of 3D Surfaces
DESCRIPTION (Applicant's Abstract): The research described in this proposal is designed to measure the precision and accuracy of observers' judgments of various surface properties and relations that could potentially be used for the perceptual representation of 3D shape. Observers will be asked to estimate the distances between visible surface points, and to compare their local orientations and curvatures. The proposed research has four major goals. One is to measure how different attributes of surface structure are perceptually scaled, and the extent to which their representations are consistent with one another. A second is to determine how higher order properties of differential structure, such as orientation and curvature, are perceptually parameterized into component dimensions. Can these components be selected arbitrarily, or are there privileged coordinate systems? A third goal of the research is to investigate the qualitative aspects of perceived surface structure. What defines identifiable parts or features on smoothly curved surfaces, and to what extent are they viewpoint invariant? Finally, additional experiments will also be performed to investigate how perceptual knowledge of spatial relations is used to guide motor actions, such as reaching to a target.
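The surface properties observers are asked to judge (distances between points, local orientations, local curvatures) correspond to zeroth-, first-, and second-order differential structure. As a purely illustrative sketch, using finite differences on a 1-D height profile rather than the rendered 3D surfaces of the actual experiments, these three levels can be computed as:

```python
# Illustrative only: three orders of local surface structure for a
# discrete height profile z(x). Function names are mine, not the proposal's.

def depth_difference(z, i, j):
    """Zeroth-order: relative depth between two surface points."""
    return z[j] - z[i]

def slant(z, i, dx=1.0):
    """First-order (orientation): central-difference surface slope at point i."""
    return (z[i + 1] - z[i - 1]) / (2 * dx)

def curvature_sign(z, i, dx=1.0):
    """Second-order: sign of the discrete second derivative
    (+1 concave, -1 convex, 0 locally flat)."""
    d2 = (z[i + 1] - 2 * z[i] + z[i - 1]) / dx ** 2
    return (d2 > 0) - (d2 < 0)

z = [0.0, 1.0, 1.5, 1.0, 0.0]          # a simple bump
assert depth_difference(z, 0, 2) == 1.5
assert slant(z, 1) == 0.75
assert curvature_sign(z, 2) == -1      # the bump's peak is convex
```

Note that an observer could in principle encode any one of these levels accurately while misjudging the others, which is exactly the question of perceptual scaling and consistency the proposal raises.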
|
0.936 |
2000 — 2003 |
Todd, James; Lindsey, Delwin (co-PI) |
N/A |
The Detection and Segmentation of Image Motion
A fundamental problem for both human and machine is to detect and identify moving patterns from sensory information that is often contaminated by noise. Previous research has shown that one way of enhancing signal to noise sensitivity is to sum the outputs of many different motion detectors in different locations of the visual field. An important difficulty with this approach, however, is how to decide which local motion detectors should be summated. Whereas previous studies have examined the case of simple translatory motion where all elements in a pattern move at the same velocity, our research will consider more complicated motions in which it is possible for patterns to rotate or expand over time. We will also investigate the detection of camouflaged objects whose motions are confined to a limited region of the visual field. A series of psychophysical experiments will be performed to isolate the basic mechanisms by which the human visual system is able to cope with these situations. The basic paradigm of these studies is to present a pattern of moving dots with a superimposed pattern of scintillating dots. Observers are required to judge some basic aspect of the moving pattern, such as its shape or its direction of movement. The amount of scintillating noise is manipulated to determine the limits of human performance for each attribute to be judged. Based on the results of these experiments we will attempt to develop a computational model that can simulate the performance of human observers on a wide variety of different tasks.
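The stimulus paradigm described above can be sketched in code. This is a hypothetical reconstruction under my own assumptions (parameter names and values are illustrative): signal dots move with a common pattern, here a rigid rotation about the display center, while noise dots "scintillate" by being replotted at random positions on every frame.

```python
# Illustrative random-dot stimulus: rotating signal dots embedded in
# scintillating noise. All names and parameters are hypothetical.
import math
import random

def make_frames(n_signal, n_noise, omega=0.1, n_frames=3, seed=0):
    """Return n_frames lists of (x, y) dot positions in a [-1, 1]^2 display.
    The first n_signal dots rotate rigidly by omega radians per frame;
    the remaining n_noise dots are replotted at random each frame."""
    rng = random.Random(seed)
    dots = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
            for _ in range(n_signal + n_noise)]
    frames = [dots]
    c, s = math.cos(omega), math.sin(omega)
    for _ in range(n_frames - 1):
        nxt = []
        for k, (x, y) in enumerate(frames[-1]):
            if k < n_signal:  # signal: rigid rotation about the origin
                nxt.append((c * x - s * y, s * x + c * y))
            else:             # noise: fresh random position ("scintillation")
                nxt.append((rng.uniform(-1, 1), rng.uniform(-1, 1)))
        frames.append(nxt)
    return frames

frames = make_frames(n_signal=8, n_noise=4)
# Rigid rotation preserves each signal dot's distance from the center.
r0 = [math.hypot(x, y) for x, y in frames[0][:8]]
r1 = [math.hypot(x, y) for x, y in frames[1][:8]]
assert all(abs(a - b) < 1e-9 for a, b in zip(r0, r1))
```

Varying the ratio of noise to signal dots then traces out the performance limits the abstract describes.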
|
0.915 |
2006 — 2011 |
Todd, James |
N/A |
The Perception of 3D Shape From Texture @ Ohio State University Research Foundation
Our eyes present to us a world in three dimensions, and yet the information coming in through the eyes is not three-dimensional. The retina that sits at the back of each eyeball is effectively a two-dimensional sheet of receptor cells. The light that is reflected or emitted from three-dimensional objects in the world is actually transduced as two-dimensional patterns on the retina. So how do we perceive the third dimension of depth on the basis of these two-dimensional patterns? This is a classic question in the area of visual perception, and many decades of research have resulted in a list of depth cues that our visual systems use to "reconstruct" the third dimension. What we are still trying to understand is precisely how the visual system utilizes each cue to perform this feat of perception.
With support of the National Science Foundation, Dr. Todd will closely examine the nature of one visual depth cue called "optical texture". To illustrate, imagine looking closely at a dimpled golf ball. The visual appearance of the dimples is warped by the curvature of the ball, and our visual systems can use this warping as a source of information about its roundness in all three dimensions. Dr. Todd has developed a computational model of how the visual system calculates the shape of an object based on its textural patterning, and the present research project will investigate whether his model is an accurate characterization of how the human visual system uses optical texture as a depth cue. Experiments will be conducted to precisely test how human participants use optical texture, and the results from these experiments will be compared against the predictions of the model. Besides advancing our knowledge of human visual perception, these results may inform the design of more robust and effective algorithms in machine vision and computer graphics, and more functional prosthetic devices for the blind.
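The texture-warping idea in the golf-ball example can be illustrated with a textbook case; this is not Dr. Todd's model, just a standard sketch of the foreshortening cue it builds on. Under orthographic projection, a circular texture element on a slanted surface projects to an ellipse whose minor-to-major axis ratio equals the cosine of the slant, so the slant can be recovered by inverting that foreshortening.

```python
# Illustrative foreshortening cue (a standard textbook relation, not the
# computational model described in the abstract).
import math

def projected_aspect_ratio(slant_deg):
    """Minor/major axis ratio of a circular texel's image when the surface
    is slanted slant_deg away from frontoparallel (orthographic view)."""
    return math.cos(math.radians(slant_deg))

def estimate_slant(aspect_ratio):
    """Invert the foreshortening to estimate surface slant in degrees."""
    return math.degrees(math.acos(aspect_ratio))

# Recovering slant from the measured texel compression is exact in this toy case.
for true_slant in (0.0, 30.0, 60.0):
    recovered = estimate_slant(projected_aspect_ratio(true_slant))
    assert abs(recovered - true_slant) < 1e-9
```

Real texture models must of course cope with texels that are not circular and not uniform, which is where the hard perceptual questions begin.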
|
0.728 |
2010 — 2015 |
Todd, James |
N/A |
The Perceptual Identification and Representation of Image Contours
It has long been recognized that a convincing pictorial representation of an object can sometimes be achieved by drawing just a few salient contours in an image. This phenomenon is really quite remarkable, given that a line drawing effectively strips away almost all of the variations in color and shading that are ordinarily available in natural scenes. Somehow the artists who create such drawings are able to capture the essential information for perceptual recognition with just a few simple strokes. Although a well-structured line drawing is easily interpreted by human observers, the ability to create these drawings can require considerable artistic skill. Indeed, despite almost a half century of research in the field of computer vision, there are no existing algorithms that can duplicate the performance of a competent human artist. In this project, Dr. James Todd and his students at the Ohio State University will investigate how human observers perceptually identify different types of image contours, such as shadows, corners, or occlusion boundaries. The group will also examine which contours in an image are perceptually most important for creating pictorial representations of objects. The stimuli in these studies will include drawings by artists with varying amounts of training, who will be asked to produce line drawings of objects with known 3D structures. The drawings will be ranked by human observers to assess their relative perceptual effectiveness. The contours in the drawings will also be compared with different aspects of the depicted surface geometry in order to determine which specific aspects of a surface are most important for its pictorial depiction.
A better understanding of how human observers perceptually determine the 3D shapes of surfaces from 2D image data has many possible applications, including the design of more robust and effective algorithms in machine vision, improved techniques for 3D visualization in computer graphics and design, and the potential development of more functional prosthetic devices for the blind. This work may also have a significant impact on how students are taught to draw in art or design courses.
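One of the contour types the project distinguishes, the occluding (rim) contour, has a simple geometric definition that can be checked in code. This sketch is mine, not the project's method: for a smooth surface viewed along the z axis, a point lies on the occluding contour exactly where its surface normal is perpendicular to the line of sight.

```python
# Illustrative test for occluding-contour points (my sketch, assuming
# orthographic viewing along +z; not an algorithm from the project).
import math

def is_occluding(normal, view=(0.0, 0.0, 1.0), tol=1e-6):
    """A surface point lies on the occluding contour when its outward
    normal is orthogonal to the viewing direction (n . v = 0)."""
    dot = sum(n * v for n, v in zip(normal, view))
    return abs(dot) < tol

# On a unit sphere the outward normal at a point equals the point itself,
# so the rim seen from +z is the equator (z = 0).
equator_point = (math.cos(0.3), math.sin(0.3), 0.0)
pole_point = (0.0, 0.0, 1.0)
assert is_occluding(equator_point)
assert not is_occluding(pole_point)
```

Shadow and corner contours have no such view-dependent definition, which is part of why telling the contour types apart from the image alone is hard.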
|
0.915 |