2010 — 2014
Fowlkes, Charless
Collaborative Research: Biological Shape Spaces, Transforming Shape Into Knowledge @ University of California-Irvine
This project will develop a framework to represent, analyze and interpret shapes extracted from images, supporting a wide range of biological investigations. The primary objectives are: (1) to develop a mathematical framework and computational tools for the quantification and analysis of shapes; (2) to integrate these computational models with machine learning and statistical inference methods to enable new discoveries, transforming imaging data into biological knowledge; (3) to deliver novel quantitative methodologies for shape analysis that start from a biological premise, rather than a purely geometric one. The aim is thus not only to quantitatively describe shape, but to develop methods for linking morphological variation to its underlying biological causes. To ensure that the project focuses on methods that are most promising to biology with significant breadth of application, model and tool development will be guided and supported by a set of diverse case studies, ranging from the sub-cellular to organismal scales.
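To make objective (1) concrete, the sketch below shows one standard building block of landmark-based shape spaces: a Procrustes shape distance that factors out translation, scale, and rotation before comparing two landmark configurations. This is a minimal illustration in Python under the assumption of 2D landmark data; the function is a generic textbook construction, not code from the project.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Procrustes distance between two landmark configurations X and Y,
    each a (k, 2) array of 2D landmark coordinates."""
    # Remove translation by centering each configuration at the origin.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Remove scale by normalizing each configuration to unit centroid size.
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # Remove rotation: the optimal orthogonal alignment of Y onto X comes
    # from the SVD of their cross-covariance matrix.
    U, _, Vt = np.linalg.svd(Y.T @ X)
    R = U @ Vt
    # The residual difference after alignment is pure shape difference.
    return np.linalg.norm(X - Y @ R)
```

Pairwise distances of this kind are exactly the sort of quantity that objective (2) would then feed into machine learning and statistical inference, e.g. clustering specimens or regressing shape against biological covariates.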
Shape represents a complex and rich source of biological information that is fundamentally linked to underlying mechanisms and function. However, in many biological disciplines shape is still often examined qualitatively, an approach that is time-consuming and prone to human subjectivity. While ad hoc quantitative methods do exist, they are often inaccessible to non-experts and do not easily generalize to a wide variety of problems. The inability of biologists to systematically link shape to genetics, development, environment, function and evolution often precludes advances in biological research spanning diverse spatial and temporal scales, from the movement of molecules within a cell to adaptive changes in organismal morphology. The primary goal of this project is to develop a new suite of widely applicable quantitative methods and tools for the study of biological shape, addressing the significant need for consistent and repeatable analysis of shape data.
2013 — 2018
Fowlkes, Charless
CAREER: Combinatorial Inference and Learning For Fusing Recognition and Perceptual Grouping @ University of California-Irvine
When presented with a novel image, humans typically have little problem providing a consistent interpretation of the scene in terms of contours, surfaces, junctions, and the relations between them. This process of perceptual organization is closely coupled with recognition of familiar shapes and materials. Perceptual organization can aid recognition by reducing the complexity of a cluttered scene to a small number of candidate surfaces while recognition can help resolve ambiguities in grouping based on local image cues. This project is developing a computational framework that fuses top-down information provided by recognition with bottom-up perceptual organization in order to automatically produce a coherent scene interpretation. This research includes (1) identifying local image features that provide cues to grouping and figure-ground, (2) developing libraries of composable detectors that capture the appearance of objects, parts and their spatial relations, and (3) designing models and efficient inference routines that explicitly reason about occlusion and the binding of image regions and contours into object shapes.
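As a rough illustration of thrust (3), the toy Python sketch below fuses top-down and bottom-up cues in a small energy model: recognition supplies per-region label costs, perceptual grouping supplies pairwise affinities, and an iterated-conditional-modes (ICM) loop seeks a consistent figure/ground labeling. The energy, the names, and the solver are simplifying assumptions for exposition; the project's combinatorial inference is far richer than this.

```python
import numpy as np

def fuse_grouping_and_recognition(unary, affinity, n_iters=10):
    """Toy figure/ground labeling fusing top-down and bottom-up cues.
    unary[i, l]: cost, from recognition, of giving region i label l in {0, 1}.
    affinity[i, j]: bottom-up grouping strength; strongly grouped regions
    pay a penalty for taking different labels."""
    n = unary.shape[0]
    labels = unary.argmin(axis=1)             # initialize from recognition alone
    for _ in range(n_iters):                  # ICM: greedy coordinate updates
        for i in range(n):
            costs = unary[i].astype(float)
            for l in (0, 1):
                disagree = labels != l        # regions that would disagree with l
                disagree[i] = False
                costs[l] += affinity[i, disagree].sum()
            labels[i] = int(np.argmin(costs))
    return labels
```

Even in this toy form the coupling described above is visible: with all affinities zero, recognition alone fixes the labels, while strong affinities let grouping overrule ambiguous local detector evidence.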
Integrated models of grouping and recognition have direct significance for expanding the computer vision capabilities of robotics and assistive technologies that must operate in complex, cluttered environments. The framework being developed also has applications in automating biological image analysis, where top-down shape information is useful in resolving noisy local measurements. The computational tools developed by the project, along with dissemination and educational efforts, are aimed at forming an interdisciplinary bridge between biological imaging and cutting-edge computer vision research.
2013 — 2017
Fowlkes, Charless
Collaborative Research: ABI Innovation: Breaking Through the Taxonomic Barrier of the Fossil Pollen Record Using Bioimage Informatics @ University of California-Irvine
The practice of identifying pollen has a large number of scientific applications and is used in fields as diverse as archaeology, biostratigraphy (the dating of rocks), and forensic science. Pollen and spores play a particularly important role in paleontology, because they form the most abundant and extensive record of plant diversity, dating back hundreds of millions of years. However, the most critical hypotheses in plant ecology and evolution (e.g. the assembly of plant communities, speciation and extinction) cannot be fully tested with pollen data due to the extreme difficulty of recognizing species from pollen and spore material. This project develops new methods to probe the shape and fine structural and textural properties of the grains using high-throughput, super-resolution structured illumination microscopy and automated image analysis in order to transform species identification from a subjective, by-eye procedure to a quantitative, computational practice. Since it is not known a priori which morphological features are phylogenetically meaningful, new machine learning techniques are being developed to model pollen images at multiple scales, identify aspects of shape and texture that are statistically informative, and infer their relation to the underlying phylogenetic structure.
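As a hedged illustration of the kind of multiscale image statistics involved, the Python sketch below pools Gaussian-derivative filter responses over a grain image at several scales to form a texture descriptor. The specific filters, scales, and pooling are assumptions chosen for brevity; they stand in for the much richer shape and texture models the project is developing.

```python
import numpy as np
from scipy import ndimage

def multiscale_texture_features(grain_img, sigmas=(1, 2, 4, 8)):
    """Pooled multiscale texture descriptor for a single pollen grain image:
    mean and standard deviation of Gaussian-derivative responses at
    several scales and orientations (axis-aligned, for simplicity)."""
    img = grain_img.astype(float)
    feats = []
    for sigma in sigmas:
        # First- and second-order derivatives along each image axis.
        for order in ((0, 1), (1, 0), (0, 2), (2, 0)):
            resp = ndimage.gaussian_filter(img, sigma=sigma, order=order)
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

Descriptors like this can be fed to any standard classifier, with the statistically informative dimensions identified afterwards by feature selection, in the spirit of discovering which morphological features carry phylogenetic signal.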
The project has the ambitious long-term goal of creating a high-throughput system for analyzing pollen data that incorporates meaningful characterizations of pollen and spore morphology, provides testable hypotheses of biological affinity, and is open and available to the entire scientific community. This will allow researchers to break through the current taxonomic limitations of pollen identification and fundamentally change current practices in the discipline on many levels, from the basic task of identification and counting to the interpretation and use of these data in global climate-vegetation models. The project brings together a diverse, interdisciplinary team including international collaborators at the Smithsonian Tropical Research Institute in Panama and will train graduate and undergraduate students from multiple scientific disciplines and backgrounds in an emerging area of interdisciplinary research. A public outreach component is in development that will include a virtual microscopy web site using images generated by this research to introduce non-experts to the beauty, complexity, and relevance of pollen morphology. Additional information about this project can be found at: http://www.life.illinois.edu/punyasena
2015 — 2017
Digman, Michelle (co-PI); Fowlkes, Charless
R25 Activity Code Description: For support to develop and/or implement a program as it relates to a category in one or more of the areas of education, information, training, technical assistance, coordination, or evaluation.
The BigDIPA: Data Image Processing and Analysis @ University of California-Irvine
DESCRIPTION (provided by applicant): This proposal aims to establish a national short course in Big Data Image Processing & Analysis (BigDIPA) intended to increase the number and overall skills of competent research scientists now encountering large, complex image data sources derived from cutting-edge biological/biomedical research. Extraction of knowledge from these imaging sources requires specialized skills and an interdisciplinary mindset, yet this sector of the Big Data science community is glaringly underappreciated and underserved by effective training opportunities compared to other big data fields such as omics. UC Irvine is ideally suited to host a short course addressing this training deficit, on account of the synergistic colocalization of multiple facilities renowned for the development of numerous advanced imaging techniques and the outstanding instructional environment provided by faculty with collaborative expertise in biological image processing and computer vision, bioinformatics and high-performance computational approaches. Specifically, our BigDIPA proposal assembles an interdisciplinary alliance of faculty experts who can leverage preeminent imaging resource facilities, such as the Laboratory of Fluorescence Dynamics (LFD) and the Beckman Laser Institute, and fuse these with ongoing campus big data initiatives, e.g. UCI's Data Science Initiative, to create a top-rated training course designed for senior graduate students, postdoctoral researchers, faculty and industry scientists from diverse scientific disciplines who have nascent interests and needs to handle big data sources beyond their current level of competency. The course theme focuses on discrete examples drawn from the analysis of complex data acquired with different microscopy imaging modalities employed to investigate dynamics in cellular and tissue processes, including signal transduction networks, development, neuroscience and biomedical applications, dynamics that were heretofore hidden from or inaccessible to standard methods of analysis. Participants will be guided along the complete acquisition-processing-analysis pipeline through exposure to a coherent progression of topics and issues typically encountered when handling big data. We believe this training approach will therefore be attractive to a broad and significant untapped pool of researchers from the biological disciplines, biomedical engineering, systems biology, math, biophysics, computer science, bioinformatics and statistics who possess some, but not all, of the requisite competencies to effectively traverse the BD2K landscape. We have designed the course such that the skills and experience gained by trainees will be transferable to their own research interests. The BigDIPA course format will combine didactic lectures on the theory and foundational frameworks that underpin each step with practical instruction on implementation and hands-on tutorials in image acquisition, large data handling, basic scripting of computational tools, image processing on high-performance computing architectures, as well as feature extraction, evaluation and visualization of results. The course is designed to offer an intense learning experience delivered in a compact time frame, with opportunities to foster interdisciplinary interactions through small team exercises.
Participants will also be encouraged to take advantage of pre-courses (separate and distinct training opportunities not funded by this proposal) that will be coordinated to directly precede our course. This format provides multiple benefits: it offers an efficient mechanism for addressing individual participants' training deficiencies, permitting a more productive experience in the BigDIPA course; it adds no-cost mutual benefits to independent but synergistic programs; and it facilitates recruitment of applicants who are interested but intimidated by a perceived lack of adequate prior training. Beyond providing an intensive on-site training course, all course materials (lecture notes, video lectures and tutorials), tutorial exercises, open-source software resources and sample datasets will be made freely available through online distribution to maximize outreach and encourage additional contributions of curated training resources solicited from the community.
2016 — 2019
Fowlkes, Charless
RI: Small: Building Strong Geometric Priors For Total Scene Understanding @ University of California-Irvine
This project is exploring how capabilities for geometric image understanding can change the way people approach the problem of automatically interpreting the semantic content of individual photos or videos. By developing camera localization algorithms that integrate images with other sources of geo-spatial data, such as 3D models of buildings and maps of urban areas, the project aims to significantly improve the ability of computer vision systems to understand image content. Utilizing strong prior information for scene understanding has a wide range of important practical applications. An assistive robot providing elderly care in a home should leverage knowledge of the appearance and location of objects in its immediate environment while adapting to changes on multiple time scales (a coffee cup sitting on the table moves much more frequently than the table itself). A network of self-driving cars could benefit significantly from dynamically updated urban maps built from the stream of data collected by the cars and other cameras (e.g., adapting behavior to a temporary lane closure that changes typical car and pedestrian traffic patterns). The project involves students in research spanning a range of traditional disciplines and is engaging a wider audience across the UC Irvine campus in understanding and applying these technologies to novel social and scientific applications.
Incorporating geometric context into scene understanding has largely been pursued under very weak prior assumptions on scene geometry and camera pose. This research investigates an alternative approach in which scene priors (including affordances and semantic attributes) are represented in 3D geo-spatial model coordinates rather than in 2D image space. Importantly, this representation allows for direct integration of non-visual data such as GIS maps. The project is developing the appropriate algorithms and datasets to integrate such data, along with a continual stream of images, into a strong, temporally-evolving (4D) scene prior that can improve the accuracy of camera pose estimation, monocular geometry, object detection and semantic segmentation.
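As a minimal sketch of one ingredient, camera localization against geo-spatial data can be posed as a perspective-n-point (PnP) problem once 2D image features are matched to known 3D map coordinates (e.g., building corners from a GIS model). The OpenCV-based formulation below is an illustrative assumption, not the project's system.

```python
import numpy as np
import cv2

def localize_camera(map_points_3d, image_points_2d, K):
    """Recover camera pose from 2D-3D correspondences with robust
    PnP + RANSAC. map_points_3d: (n, 3) geo-referenced points;
    image_points_2d: (n, 2) matched detections; K: 3x3 intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        K.astype(np.float32), None)          # None: assume no lens distortion
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)               # axis-angle -> 3x3 rotation matrix
    return R, tvec, inliers                  # world-to-camera pose + inlier set
```

A pose recovered this way is what lets a 3D scene prior be projected into the image to constrain monocular geometry, detection, and segmentation.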
2018 — 2021
Krichmar, Jeffrey; Fowlkes, Charless
RI: Small: Sparse Predictive Coding For Energy Efficient Visual Navigation in Dynamic Environments @ University of California-Irvine
This project develops efficient machine vision algorithms inspired by the architecture and energetic efficiency of the primate visual system for motion processing. Navigating through a rich, cluttered natural environment while both the observer and the objects in the scene are moving is a difficult problem in machine vision, particularly for real-time processing under power constraints. However, humans and other animals perform these tasks with ease. The nervous system is under tight metabolic constraints, and this leads to incredibly efficient representations of important environmental features, such as the observer's heading, the depth of objects, and the motion of objects. These efficient machine vision algorithms can be applied to a wide range of applications, including augmented reality, assistive robotics, autonomous vehicles, edge processing, and the Internet of Things (IoT). Thus, they could have a transformative economic and societal impact by enabling applications that can operate autonomously over long periods in remote locations.
Inspired by the ability of the nervous system to efficiently encode and appropriately respond to the visual features that make up a dynamic scene, the algorithms use sparse predictive coding techniques to process data streams from cameras. Because the algorithms can be realized in spiking neural networks, in which artificial neurons only send signals when an event occurs, they can run efficiently on low-power neuromorphic systems, computers that support such representations. By employing an architecture inspired by the brain, in which top-down signals from the frontal and parietal cortex predict where objects will be in the future, the system will track objects better and overcome difficulties when objects become hidden from view. These representations are sparse and reduced, leading to energy-efficient processing, less computation, and thus low power consumption. In summary, the machine vision algorithms: (1) increase our understanding of how the brain encodes behaviorally relevant signals in the world, (2) lead to computationally efficient handling of large data streams, and (3) realize power-efficient processing for a wide range of embedded applications.
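As a minimal, non-spiking sketch of the underlying computation, the Python snippet below runs ISTA-style sparse coding: it iteratively reduces a prediction error between the input and a dictionary reconstruction, and soft-thresholds the coefficients so that only a few remain active, the dense analogue of transmitting only occasional spikes. The dictionary, step count, and sparsity weight are illustrative assumptions.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_steps=100):
    """Find a sparse activity vector a whose prediction D @ a explains the
    input x, by minimizing 0.5*||x - D @ a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, ord=2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        err = x - D @ a                      # prediction error, as in predictive coding
        a = a + (D.T @ err) / L              # descend the reconstruction cost
        # Soft threshold: small coefficients go exactly to zero (sparsity).
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a
```

The sparsity is what buys the efficiency claimed above: downstream stages only need to process, store, and transmit the handful of nonzero coefficients.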
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.