We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, David K. Warland is the likely recipient of the following grants.
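The page does not describe the matching algorithm itself; as a toy illustration of how a name-similarity score in [0, 1] might be computed, here is a minimal sketch using only Python's standard library. The normalization rule and the scoring function are assumptions for illustration, not the site's actual method:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Convert 'Last, First' to 'first last' and collapse whitespace."""
    if "," in name:
        last, first = name.split(",", 1)
        name = f"{first} {last}"
    return " ".join(name.lower().split())

def match_score(researcher_name: str, grant_pi_name: str) -> float:
    """Return a similarity score in [0, 1] between a researcher's name
    and a PI name listed on a grant (toy string comparison only)."""
    return SequenceMatcher(None, normalize(researcher_name),
                           normalize(grant_pi_name)).ratio()

# Two renderings of the same name score 1.0; unrelated names score low.
score = match_score("Warland, David", "David Warland")  # score == 1.0
```

A production matcher would also weigh institution, topic, and award-year fields rather than names alone, which is presumably why the scores below are probabilities rather than exact matches.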
Years | Recipients | Code | Title / Keywords | Matching score
2006 — 2007 |
Warland, David |
N/A |
SGER Collaborative Research: Hierarchical Models of Time-Varying Natural Images @ University of California-Davis
Title: Collaborative Research: Hierarchical Models of Time-Varying Natural Images
PIs: Bruno Olshausen and David Warland
The long-term goal of this research is to develop a computational model of visual perception that achieves the same degree of robust intelligence exhibited in biological vision systems. The proposed research will advance the state of the art in the analysis of time-varying images by building models that capture the robust intelligence of the mammalian visual system. These models will allow the invariant structure (form, shape) to be modeled independently of its variations (position, size, rotation) and will be composed of multiple layers that capture progressively more complex forms of scene structure in addition to modeling its transformations. Mathematically, these multi-layer models have a powerful bilinear form and their detailed structure is learned from natural time-varying images using the principles of sparse and efficient coding.
The early measurements and models of natural image structure have had a profound impact on a wide variety of disciplines including visual neuroscience (e.g. predictions of receptive field properties of retinal ganglion cells and cortical simple cells in visual cortex) and image processing (e.g. wavelets, multi-scale representations, image denoising). The approach taken by this project extends this interdisciplinary work by learning higher-order scene structure from sequences of natural time-varying images. Given the evolutionary pressures on the visual cortex to process time-varying images efficiently, it is plausible that the computations performed by the cortex can be understood in part from the constraints imposed by efficient representation. Modeling the higher order structure will also advance the development of practical image processing algorithms by finding good representations of the scene for the image-processing task at hand. Completion of the specific goals of this project will provide new generative models of time-varying image formation and tools with which to analyze the statistics of natural scenes.
Most image processing problems are greatly simplified by finding a good representation of the data. As a result, this research has practical applications for deriving improved means for representing, indexing, and accessing digital content such as 2D images and video. The models developed as part of this project are also broadly applicable to advancing image processing algorithms such as denoising of movies, movie compression, and scene analysis and classification. In addition, these models have a mathematical form that makes them generally applicable to research areas other than vision, such as analysis of auditory signals, dynamic routing of network signals, and general data mining of complex data sets.
|
0.915 |
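The abstract above describes multi-layer models with a bilinear form, in which invariant structure (form, shape) and its variations (position, size, rotation) are represented by separate factors that interact multiplicatively. A minimal numpy sketch of such a bilinear generative model, with arbitrary dimensions and a random basis chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions and basis are arbitrary, for illustration only.
P = 64   # pixels in an image patch
K = 8    # "form" coefficients (what is in the scene)
M = 4    # "variation" coefficients (where / how it appears)

# Bilinear model: the patch is linear in the form coefficients y and
# linear in the variation coefficients z, which interact multiplicatively:
#   x[p] = sum over k, m of W[p, k, m] * y[k] * z[m]
W = rng.standard_normal((P, K, M))

def render(y: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Synthesize a patch from form coefficients y and variation coefficients z."""
    return np.einsum("pkm,k,m->p", W, y, z)

y = rng.standard_normal(K)
z = rng.standard_normal(M)
x = render(y, z)

# Bilinearity: scaling either factor scales the rendered patch.
assert np.allclose(render(2 * y, z), 2 * x)
assert np.allclose(render(y, 3 * z), 3 * x)
```

Holding z fixed while varying y changes what appears in the patch; holding y fixed while varying z transforms how it appears, which is the separation of form from variation the abstract describes.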
2007 — 2011 |
Warland, David |
N/A |
RI: Collaborative Research: Hierarchical Models of Time-Varying Natural Images @ University of California-Davis
Abstract
Title: Collaborative Research: Hierarchical Models of Time-Varying Natural Images
PIs: Bruno Olshausen, University of California-Berkeley, and David Warland, University of California-Davis
The goal of this project is to advance the state of the art in image analysis and computer vision by building models that capture the robust intelligence exhibited by the mammalian visual system. The proposed approach is based on modeling the structure of time-varying natural images, and developing model neural systems capable of efficiently representing this structure. This approach will shed light on the underlying neural mechanisms involved in visual perception and will apply these mechanisms to practical problems in image analysis and computer vision.
The models that are to be developed will allow the invariant structure in images (form, shape) to be described independently of its variations (position, size, rotation). The models are composed of multiple layers that capture progressively more complex forms of scene structure in addition to modeling their transformations. Mathematically, these multi-layer models have a bilinear form in which the variables representing shape and form interact multiplicatively with the variables representing position, size or other variations. The parameters of the model are learned from the statistics of time-varying natural images using the principles of sparse and efficient coding.
The early measurements and models of natural image structure have had a profound impact on a wide variety of disciplines including visual neuroscience (e.g. predictions of receptive field properties of retinal ganglion cells and cortical simple cells in visual cortex) and image processing (e.g. wavelets, multi-scale representations, image denoising). The approach outlined in this proposal extends this interdisciplinary work by learning higher-order scene structure from sequences of time-varying natural images. Given the evolutionary pressures on the visual cortex to process time-varying images efficiently, it is plausible that the computations performed by the cortex can be understood in part from the constraints imposed by efficient processing. Modeling the higher order structure will also advance the development of practical image processing algorithms by finding good representations for image-processing tasks such as video search and indexing. Completion of the specific goals described in this proposal will provide (1) mathematical models that can help elucidate the underlying neural mechanisms involved in visual perception and (2) new generative models of time-varying images that better describe their structure.
The explosion of digital images and video has created a national priority of providing better tools for tasks such as object recognition and search, navigation, surveillance, and image analysis. The models developed as part of this proposal are broadly applicable to these tasks. Results from this research program will be integrated into a new neural computation course at UC Berkeley, presented at national multi-disciplinary conferences, and published in a timely manner in leading peer-reviewed journals. The proposed research is open to participation by both graduate and undergraduate students, and the PI will advise Ph.D. students in both neuroscience and engineering as part of this project.
URL: http://redwood.berkeley.edu/wiki/NSF_Funded_Research
|
0.915 |
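Both abstracts state that the model parameters are learned from natural time-varying images using the principles of sparse and efficient coding. A minimal sketch of the inference half of sparse coding, fitting sparse coefficients against a fixed dictionary with iterative shrinkage-thresholding (ISTA), a standard sparse-coding method and not necessarily the one used in this project:

```python
import numpy as np

def ista(x, D, lam=0.05, steps=500):
    """Infer sparse coefficients a minimizing
    0.5 * ||x - D a||^2 + lam * ||a||_1
    by iterative shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = a - D.T @ (D @ a - x) / L          # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary columns
a_true = np.zeros(64)
a_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
x = D @ a_true                                 # signal with a 3-sparse code
a_hat = ista(x, D)
```

In a full sparse coding system, inference like this alternates with dictionary updates so that D itself adapts to the statistics of natural images; the soft-thresholding step is what drives most coefficients to exactly zero, giving the sparse representation.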