1997 — 1998
Zucker, Steven |
SGER: Intermediate-Level Structural Categories From Visual Complexity Analysis
This award funds an initial exploration of a theoretical basis for visual image structure, to aid in supporting queries to image archives. Digital image archives are proliferating, which raises the question of how to retrieve specific images from them. However, while queries are naturally posed in high-level, functional terms, the realizations of such queries are based on low-level image operators. An intermediate-level theory of visual structure is necessary to close this gap, which is precisely what this research addresses. Based on an interpretation of edge elements as tangent estimates, the theory is derived from geometric measure and complexity theory. Viewed from below, it provides an organization for edge elements according to dimension: they are either bound into extended (1-dimensional) groups, gathered into (2-dimensional) structural classes that abstract different types of texture, or isolated as (0-dimensional) orientation discontinuities. Viewed from above, the representation induces equivalence classes of structure, such as bounding contours, texture flows for hair and grass patterns, and "T"-junctions for points of occlusion. Thus queries involving forests can be differentiated from those involving fur patterns or turbulent water, and multiple, overlapping objects can be separated into components. Furthermore, images and drawings can be segmented from pages of text.
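The abstract's grouping of edge elements by dimension can be made concrete with a small sketch. This is purely illustrative and not the award's actual method: each edge element is a tangent estimate (x, y, theta), and the eigenvalue spread of the covariance of nearby element positions separates isolated points (0-D), curve-like groups (1-D), and area-filling texture (2-D). The function name, neighborhood radius, and eigenvalue-ratio threshold are all hypothetical choices.

```python
import math

def local_dimension(edgels, idx, radius=2.0):
    """Classify the neighborhood of edgel `idx` as 0-, 1-, or 2-dimensional.
    `edgels` holds (x, y, theta) tangent estimates; theta is unused here,
    since position structure alone illustrates the dimensional split."""
    x0, y0, _ = edgels[idx]
    nbrs = [(x, y) for (x, y, _t) in edgels
            if (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2]
    n = len(nbrs)
    if n < 3:
        return 0                      # isolated: junction/discontinuity candidate
    mx = sum(x for x, _ in nbrs) / n
    my = sum(y for _, y in nbrs) / n
    # 2x2 covariance of neighbor positions
    sxx = sum((x - mx) ** 2 for x, _ in nbrs) / n
    syy = sum((y - my) ** 2 for _, y in nbrs) / n
    sxy = sum((x - mx) * (y - my) for x, y in nbrs) / n
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    big, small = tr / 2 + disc, tr / 2 - disc   # eigenvalues, big >= small
    if small / max(big, 1e-12) < 0.1:
        return 1                      # elongated spread: 1-D curve-like group
    return 2                          # isotropic spread: 2-D texture class
```

For a row of collinear elements the neighborhood covariance is rank-one and the classifier reports a 1-D group; for a dense patch it reports 2-D texture.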
1997 — 2000
Zeller, Michael (co-PI); Zucker, Steven; Updegrove, Daniel; Jaffe, C. Carl
High Performance Connections For Collaborative Research At Yale
This award is made under the high performance connections portion of NCRI's "Connections to the Internet" announcement, NSF 96-64. It provides partial support for the installation and operation of an OC-3 ATM connection to their Internet Service Provider and access to the Very High Performance Backbone Network Service (vBNS). Applications include:
- Computational Neuroscience
- Center for Advanced Instructional Media
- Physics Collaborations

Collaborating with:
- Brookhaven National Laboratory
- Fermilab
- Los Alamos National Laboratory
- University of California at San Francisco

The award provides partial support of the project for two years.
2008 — 2011
Zucker, Steven |
Collaborative Research: High Performance Neural Computing
The investigators propose to: a) develop tools for electrophysiologically realistic simulations of large areas of mammalian cortex using modern computers with many thousands of (heterogeneous or homogeneous) processors; b) use genetic programming techniques to evolve models of primary and secondary visual cortical areas that solve difficult image-processing tasks, namely image segmentation; and c) understand the structure of the computations performed by the brain (that is, its computational primitives) and discover the level of biological detail necessary and sufficient for these computations.
A distinguishing trait of the proposed approach is that physiological realism is not the goal; it will be attempted only to the extent that it is needed for understanding the neural computation and for solving complex information-processing tasks. That is, functional performance will be the means of bridging gaps in existing knowledge. The resulting cortical models thus fall between traditional (and oversimplified) artificial neural networks and biomedically inspired cellular and molecular descriptions.
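The evolutionary search described above can be sketched in a few lines. This is a minimal genetic-algorithm loop of the general kind the proposal names, not its actual machinery: all names and parameter values are hypothetical, and a real run would score a cortical model on an image-segmentation task rather than a toy fitness function.

```python
import random

def evolve(fitness, dim=4, pop=30, gens=40, sigma=0.3, seed=0):
    """Evolve a population of parameter vectors by truncation selection,
    one-point crossover, and Gaussian mutation; lower fitness is better."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fitness)
        parents = ranked[: pop // 4]                     # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)                  # one-point crossover
            children.append([g + rng.gauss(0, sigma) for g in a[:cut] + b[cut:]])
        population = parents + children
        sigma *= 0.92                                    # anneal the mutation scale
    return min(population, key=fitness)
```

With a toy fitness such as squared distance to a target vector, forty elitist generations steadily shrink the error; the elitism guarantees the best candidate never gets worse.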
2011 — 2015
Zucker, Steven |
US-German Collaboration: Towards a Neural Theory of 3D Shape Perception
How the brain estimates the 3D shape of objects in our surroundings remains one of the most significant challenges in visual neuroscience. The information provided by the retina is fundamentally ambiguous, because many different combinations of 3D shape, illumination and surface reflectance are consistent with any given image. Despite this ambiguity, the visual system is extremely adept at estimating 3D shape across a wide range of viewing conditions, something that no extant machine vision system can do. The long-term goal of the project is to develop a computational model in neural terms to explain how 3D shape is estimated in the primate visual system. It will build upon the responses of cells early in visual cortex (V1) and develop models of how they can be organized into mid-level configurations that specify 3D shape properties. Importantly, the project will also measure human perception of 3D shape in a series of psychophysical experiments designed to test specific predictions, bringing together the complementary expertise of Roland W. Fleming (Giessen University: human perception, psychophysics) and Steven W. Zucker (Yale University: computational vision, computational neuroscience). The results should provide a deeper understanding of visual circuit properties in the ventral processing stream; they should provide models for 3D computer vision and graphics; and they may pave the way for the development of rehabilitation strategies for patients with visual deficits.
The basic approach starts with populations of neurons tuned to different orientations and seeks to understand how these provide basic information about local shape properties according to the principles of differential geometry. Specifically, when 3D surfaces are projected onto the retina, the distorted gradients of shading and texture lead to highly structured patterns of local image orientation, or orientation fields, which can be inferred via circuits involving long-range horizontal connections. The investigators seek to derive formal models showing how these networks can be organized to infer 3D surface properties. The specific approach involves four stages: (i) modeling how the visual system obtains clean and reliable orientation fields from the outputs of model V1 cells through lateral interactions and feedback; (ii) establishing how local measurements are grouped into specific "mid-level" configurations to support the recovery of 3D shape properties (modeling V2 to V4); (iii) modeling how these low- and mid-level 2D measurements can be mapped into representations of 3D shape properties (V4 to IT); and (iv) modeling how grouping and global constraints can convert these shape estimates into global shape reconstructions (again V4 to IT). Targeted psychophysical experiments will complement all of the modeling and test specific predictions from it. The resulting stimuli will support next-generation neurophysiological experiments. Although the above stages define a working strategy, dependencies among these stages should also provide a model of the feedforward/feedback projections that link different areas of cortex. The ultimate goal is a model that can correctly predict the errors, the successes, and the limits of human shape perception.
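The "orientation fields" in stage (i) can be illustrated with the standard structure-tensor estimate of dominant local orientation. This sketch is a conventional stand-in for a model V1 population, not the project's circuit model; the windowing, the function name, and the use of the structure tensor rather than actual V1 filters are simplifying assumptions.

```python
import math

def orientation_field(img, win=1):
    """Per-pixel orientation of least intensity change (the isophote
    direction), from the 2x2 structure tensor of image gradients."""
    h, w = len(img), len(img[0])
    # central-difference gradients with clamped borders
    gx = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    gy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    theta = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            jxx = jyy = jxy = 0.0
            for dy in range(-win, win + 1):        # accumulate tensor over a window
                for dx in range(-win, win + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    jxx += gx[yy][xx] ** 2
                    jyy += gy[yy][xx] ** 2
                    jxy += gx[yy][xx] * gy[yy][xx]
            # gradient direction, rotated 90 degrees to the isophote orientation
            theta[y][x] = 0.5 * math.atan2(2.0 * jxy, jxx - jyy) + math.pi / 2
    return theta
```

On a vertical grating the recovered orientation is vertical (pi/2); shading and texture gradients on a curved surface produce the smoothly varying fields the text refers to.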
This project is jointly funded by Collaborative Research in Computational Neuroscience and the Office of International Science and Engineering. A companion project is being funded by the German Ministry of Education and Research (BMBF).
2013 — 2015
Zucker, Steven |
EAGER: Collaborative Research: Non-Local Cortical Computation and Enhanced Learning With Astrocytes
The brain is composed of two major cell types: neurons and glial cells. Glial cells are traditionally regarded as the brain's supportive cells. However, many lines of work over the past decade have documented that glial cells may also participate in complex neural processes and thereby constitute an integral element of higher cognitive functions such as working memory, learning, and sleep. Other lines of work have shown that human astrocytes are larger and structurally more complex than astrocytes in the rodent brain. In support of this concept, transplantation of human glial cells into mice generated mice that were faster learners and performed better on memory tests. However, existing computational modeling techniques employed for understanding the processes involved in learning and memory do not include glial cells. The aims of the proposed research are to: 1) develop computational modeling techniques that incorporate glial cells; 2) use these novel modeling techniques to make predictions regarding the role of glial cells in learning and memory; 3) test the predictions using a combination of patch clamping and Ca2+ imaging; and 4) use the data collected to continuously refine the modeling techniques. The broader impact of this proposal will be to further the scientific understanding of underappreciated yet essential substrates of learning and memory. Including glial cells alongside neurons in modeling approaches also carries the hope of increasing the computational power and processing capabilities of adaptive learning technology, and of improving the performance of bio-integrated prostheses for individuals with impaired learning or other debilitating neurological disorders.
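A toy example of the kind of model aim 1 calls for, with equations and parameters that are entirely hypothetical and not taken from the proposal: a fast firing-rate unit coupled to a slow astrocyte-like variable that multiplicatively modulates synaptic gain, so sustained activity slowly potentiates the very synapse that drives it.

```python
def simulate(steps=5000, dt=0.01, tau_n=0.1, tau_a=5.0, alpha=0.5, drive=1.0):
    """Euler integration of a two-variable neuron-astrocyte toy model:
      r -- firing rate, relaxes quickly toward (synaptic gain) * drive
      a -- astrocyte activation, a slow Ca2+-like integral of the rate
    The astrocyte feeds back by scaling the synaptic gain (1 + alpha * a)."""
    r, a = 0.0, 0.0
    trace = []
    for _ in range(steps):
        gain = 1.0 + alpha * a                  # glial modulation of the synapse
        r += dt * (-r + gain * drive) / tau_n   # fast neural dynamics
        a += dt * (-a + r) / tau_a              # slow glial dynamics
        trace.append((r, a))
    return trace
```

With these parameters the pair settles at the fixed point r = a = 2: the rate first jumps to its glia-free value of 1, then roughly doubles over tens of seconds as the astrocyte variable builds up, the separation of timescales being the point of the sketch.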
2018 — 2022
Zucker, Steven |
CRCNS Research Proposal: Collaborative Research: New Dimensions of Visual Cortical Organization
The visual system of the mouse is now widely studied as a model for developmental neurobiology, as well as for the understanding of human disease, because it can be probed with the most powerful modern genetic and optical tools. This project aims to discover how neurons in the visual cortex of the mouse allow it to see well, by measuring how the cortex represents ecologically relevant properties of the visual world. Quantitative studies of neurons in the mouse's primary visual cortex to date reveal only very poor vision, but the animals' behavior indicates that mice can see much better than that -- they avoid predators and catch crickets in the wild. To understand mouse vision, the investigators will study responses to novel, mathematically tractable stimuli resembling the flow of images across the retina as the mouse moves through a field of grass. Studies based on these new stimuli indicate that most V1 neurons respond reliably to fine details of the visual scene. A mathematical understanding of how the brain takes in the visual world should have real implications for how we see, and should have great benefits for artificial vision by computers and robots. Bringing these ideas into the classroom will provide the foundation for new technologies, and will expose students to both real and artificial vision systems.
Analyses of the brain's visual function are limited by the stimuli used to probe it. Conventional quantitative approaches to understanding biological vision have been based on models with linear kernels, in which only the output might be subject to a nonlinearity, all derived from responses of neurons in the brain to gratings at a range of spatial frequencies. This analysis fails to capture relevant features of natural images, which cannot be constrained to linearity. The goal of this project is to probe the mouse visual system beyond the linear range but below the barrier posed by the complexity of arbitrary natural images. The investigators have identified an intermediate stimulus class -- visual flow patterns -- that formally approximates important features of natural visual scenes, resembling what an animal would see when running through grass. Flow patterns have a rich geometry that is mathematically tractable. This project will develop such stimuli and test them on awake, behaving mice while recording the resultant neural activity in the visual cortex. Studying the mouse opens up the possibility of applying the entire range of powerful modern neuroscience tools -- genetic, optical, and electrophysiological. Visual responses will be analyzed using a novel variety of machine learning algorithms, which will allow the investigators to model the possible neural circuits and then test predictions from those model circuits. Such an understanding of the brain will inform both primate vision and the next generation of artificially intelligent algorithms which, as a result, should benefit from being more "brain-like."
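A minimal sketch of how a flow-pattern stimulus of this general kind might be generated (the field theta(x, y) = 2*pi*k*x and every parameter are illustrative choices, not the investigators' actual stimulus family): short oriented segments are scattered at random positions and aligned with a smooth orientation field, so the ensemble resembles grass bending coherently.

```python
import math
import random

def flow_stimulus(n=500, k=0.15, seg_len=0.04, seed=1):
    """Scatter n short segments in the unit square, each oriented along a
    smoothly varying flow field theta(x, y) = 2*pi*k*x. Returns a list of
    ((x1, y1), (x2, y2)) endpoint pairs, ready for any plotting library."""
    rng = random.Random(seed)
    segments = []
    for _ in range(n):
        x, y = rng.random(), rng.random()      # element position in [0, 1)^2
        theta = 2 * math.pi * k * x            # orientation drifts smoothly with x
        dx = 0.5 * seg_len * math.cos(theta)
        dy = 0.5 * seg_len * math.sin(theta)
        segments.append(((x - dx, y - dy), (x + dx, y + dy)))
    return segments
```

Because the orientation varies smoothly rather than randomly, nearby elements are nearly parallel; the "rich geometry" the abstract highlights is precisely the differentiable structure of such a field.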
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.