2003 — 2008
Liang, Zhi-Pei (co-PI); Mitchell, Tom (co-PI); Murphy, Robert; Faloutsos, Christos (co-PI); Kovacevic, Jelena
Activity Code: N/A
Information Technology Research (ITR): Next-Generation Bio-Molecular Imaging and Information Discovery @ Carnegie-Mellon University
This collaborative project brings together a strong multi-institutional interdisciplinary team of investigators to study and advance the current understanding of cellular and sub-cellular events. Continuing technological advances in fluorescence and atomic-force microscopy allow scientists to observe molecular function, distribution, and interrelationships in living cells. However, a full understanding of tens of thousands of proteins and the complex molecular processes they engage in requires a voluminous amount of image data, which currently must be analyzed by visual inspection. To facilitate such an analysis, researchers from the four participating institutions are focusing on three main research thrusts. First, next-generation intelligent imaging involves information processing at the sensor level to enable high-speed and super-resolution imaging. The goal is to enable biologists to study cellular processes at resolutions in time and space that are not possible with current technologies. The second research thrust is pattern recognition and data mining as applied to bio-molecular image collections. Salient features that characterize the underlying patterns in cells and tissues need to be computed for the vast volumes of images acquired through automated microscopy. Third, a distributed database of bio-molecular images is being created. The merging of pattern-recognition and data-mining tools with new, powerful methods for indexing, data modeling, and collaboration is aimed at creating a unique infrastructure that greatly facilitates image bioinformatics, thus complementing recent revolutionary advances in genomics.
The outcome of this research will be novel information-processing methods for bio-molecular image data. Efficient and effective representation of such data will enable researchers to search and browse through large collections of image and video data and look for similar patterns in such datasets, thus facilitating information discovery. During its five-year duration, this project will develop, test, and deploy a distributed database of bio-molecular image data accessible to researchers around the world. The distributed database will make its impact through large-scale biology, in which the results of a single experiment can be globally correlated with the results from other groups of scientists, thus accelerating discovery of dynamic relationships between structure and function in complex biological systems.
The project will develop new courses, and will facilitate student exchanges, semi-annual meetings, and workshops, benefiting students at all levels. This project will train a new generation of biologists, computer scientists and engineers well versed in the imaging and information-processing sciences at the forefront of next-generation biotechnology. Partnerships will be established with institutions with large populations of students from groups underrepresented in science and engineering, such as the California State Universities at Fresno and San Bernardino and the Universidad Metropolitana in Puerto Rico, for undergraduate recruitment and outreach. An effective mode of outreach for students is to educate their teachers, and the project will offer summer fellowships for elementary, high-school, college, and university teachers.
2005 — 2008
Kovacevic, Jelena
Activity Code: N/A
Frame Toolbox For Bioimaging, Biometrics and Robust Transmission @ Carnegie-Mellon University
Over the past couple of decades, multiresolution techniques have revolutionized signal and image processing. Most multiresolution techniques in use are nonredundant; that is, the underlying mathematical structures are bases. However, many of today's applications require some redundancy in the system.
Redundancy calls for a mathematical structure more general than bases, termed frames. Although the initial work on frames dates back to the 1950s, frames have become popular only recently, mostly due to emerging applications requiring tools that provide redundancy. A fair amount of work has already been done on frames; however, their level of maturity is nowhere near that of wavelets. This is about to change, as a host of applications requires the redundancy offered by frames; the theory needs to follow fast.
This research addresses gaps in current knowledge and solves some of the open questions in frame theory. These relate to the characterization of certain frame classes as well as the construction of a frame toolbox motivated by problems in bioimaging, biometrics, and robust transmission. Developing frame theory to this extent will bring frames to the level of maturity of wavelets and significantly expand the multiresolution toolbox.
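As a concrete illustration of the redundancy that frames provide, the sketch below (an illustrative example, not part of the proposed toolbox) builds the classic three-vector "Mercedes-Benz" tight frame for the plane and reconstructs a signal from its redundant coefficients:

```python
import numpy as np

# The "Mercedes-Benz" frame: 3 unit vectors in R^2, 120 degrees apart.
# It is redundant (3 vectors for a 2-D space) and tight: F^T F = (3/2) I,
# so any x is reconstructed from its 3 frame coefficients by a rescaled
# transpose, and the expansion is robust to losing a single coefficient.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 3 x 2 analysis matrix

x = np.array([1.0, -2.0])
c = F @ x                      # 3 redundant frame coefficients
x_rec = (2.0 / 3.0) * F.T @ c  # tight-frame reconstruction

assert np.allclose(x_rec, x)
```

Because the frame is tight (frame bound 3/2), synthesis is as cheap as for an orthonormal basis; the extra coefficient is exactly the kind of controlled redundancy exploited in robust transmission.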
2006 — 2010 |
Pueschel, Markus [⬀] Kovacevic, Jelena |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Algebraic Signal Processing Theory: Towards Multiresolution Analysis @ Carnegie-Mellon University
Signal processing is the enabler and driving force ("brain in the box") behind many everyday technologies including audio/image/video processing (MP3, JPEG, MPEG), communications (cell phones), medical and bioimaging (MRI, fMRI, CAT/PET scan, high-throughput drug screening), sensor networks (cars, surveillance), security (biometrics for financial industry and securing US borders). As a discipline, signal processing makes heavy use of deep mathematics including complex calculus, linear algebra, stochastics and approximation theory. In fact, many of the above advances were made possible by importing and using novel techniques from mathematics. This research connects signal processing to abstract algebra (a major discipline in mathematics) at a fundamental level. In doing so, a whole new set of mathematical tools becomes available to signal processing. In preliminary work, these tools have already produced novel and efficient processing techniques. The investigators will develop many more and aim to tackle problems that have eluded solution with previous methods.
The platform for the research is a novel approach to signal processing, called algebraic signal processing theory (ASP), which will be further developed in this work. ASP generalizes standard linear signal processing, providing novel notions of filtering, Fourier transforms, and other concepts. ASP captures many existing signal processing methods into one common framework and enables the derivation of new ones in one and higher dimensions, separable and nonseparable, shift-invariant and shift-variant. A major goal in this research is to expand ASP to include general notions of filter banks and multiresolution methods for important applications. At a fundamental level, ASP may provide a more rigorous, axiomatic approach to signal processing and thus impact education. A first course based on ASP will be developed in this project.
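To make the algebraic viewpoint concrete, here is a small numerical sketch (illustrative only, using the standard time model rather than any new ASP construction): filters are polynomials h(S) in the cyclic shift S, so filtering equals circular convolution, and the DFT simultaneously diagonalizes every such filter:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
h = rng.standard_normal(n)   # filter taps
x = rng.standard_normal(n)   # signal

# The cyclic shift matrix S generates the signal model: filters are
# polynomials in S, i.e. elements of the algebra C[s]/(s^n - 1).
S = np.roll(np.eye(n), 1, axis=0)
H = sum(h[k] * np.linalg.matrix_power(S, k) for k in range(n))  # filter = h(S)

# Filtering in this algebra equals circular convolution ...
y = H @ x
assert np.allclose(y, np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x))))

# ... and the DFT is the Fourier transform of the model: it diagonalizes
# every filter h(S), with eigenvalues given by the DFT of the taps h.
W = np.fft.fft(np.eye(n))            # DFT matrix
D = W @ H @ np.linalg.inv(W)
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-9)
assert np.allclose(np.diag(D), np.fft.fft(h))
```

Replacing S (and the polynomial algebra it generates) with a different shift yields different "Fourier transforms", which is the sense in which ASP generalizes the standard theory.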
2006 — 2009 |
Matyjaszewski, Krzysztof (co-PI) [⬀] Leduc, Philip (co-PI) [⬀] Kovacevic, Jelena Anna, Shelley Islam, Mohammad |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
MRI: Acquisition of a Laser Scanning Multi-Photon Confocal Microscope to Investigate Structure and Dynamics of Soft Materials of Biological and Synthetic Origin @ Carnegie-Mellon University
Technical Abstract
This proposal is for the acquisition of a laser scanning Multi-photon Confocal Microscopy Facility (MCMF) that will support a core group of faculty at Carnegie Mellon University (CMU) spanning eight departments and two colleges. The MCMF will include a point-by-point standard and resonant scanning module capable of acquiring images with high spatial and temporal resolutions, a multi-photon system consisting of a Coherent Chameleon XR Ti:Sapphire pulsed laser, and a fluorescence lifetime imaging microscopy (FLIM) module. Acquisition of a laser scanning multi-photon confocal microscope will fill a void in the existing imaging facilities at CMU by providing experimental capabilities that include fluorescence recovery after photobleaching and fluorescence resonance energy transfer, and will have an immediate impact on numerous established and nascent research projects of senior and junior faculty investigating the structure and dynamics of soft materials. For example, the MCMF will allow (a) direct visualization of phase transitions, self-assembly, defect dynamics, and morphology evolution in synthetic soft materials, (b) real-time imaging of cellular and sub-cellular localization of native and synthetic macromolecules related to fundamental biological discovery and disease therapy in biological soft materials, and (c) development of adaptive algorithms for efficient acquisition and analysis of complex biological images. The MCMF will also offer a unique opportunity for integration into classroom instruction and outreach activities by allowing students to gain direct, "hands-on" experience with microscale and smaller systems, including cells, macromolecules and microdevices. Our goal is to use the proposed MCMF to bring together scattered and diverse researchers at CMU and in local industry, who will exchange ideas and expertise while working in close proximity, and to serve as a potent catalyst for nucleating new multi-disciplinary research and education.
Non-technical Abstract
Laser scanning multi-photon confocal microscopes allow for the imaging of microscopic objects and their dynamics deep within a three-dimensional sample with very little photo-damage. As a result, confocal microscopes have become indispensable tools to perform state-of-the-art measurements in soft materials. Visualizing the structure and dynamics of soft materials at the microscopic scale allows for a better understanding of their self-assembly and macroscopic properties. The proposed laser scanning Multiphoton Confocal Microscopy Facility (MCMF) will enable an exciting array of diverse research across the Carnegie Mellon University (CMU) campus. For example, the MCMF will enable the development of better composite materials, increase the understanding of cellular mechanisms related to aging and embryonic development, and improve drug-delivery studies. We also plan to develop a course on Advanced Microscopy for graduate and undergraduate students that would utilize the proposed facility. The highly visual nature of the research and education enabled by the facility will attract and inspire undergraduates and high-school students to high-level science by making complex ideas more tangible. Using the MCMF, we will develop age-appropriate modules intended to communicate concepts of visualization of microscale systems via established outreach programs at CMU that target K-12 students and teachers, particularly in schools with large under-represented groups. The MCMF will also increase ties with industry, other universities and the public.
2008 — 2009
Kovacevic, Jelena
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally preliminary, short-term projects and are non-renewable.
Automated Segmentation of Fluorescence Microscopy Data Sets @ Carnegie-Mellon University
DESCRIPTION (provided by applicant): In recent years, the focus in biological science has shifted to understanding complex systems at the cellular and molecular levels, a task greatly facilitated by fluorescence microscopy. Its success is due in part to the advent of a range of new fluorescent probes used to tag proteins or molecules of interest, including the nontoxic green fluorescent protein (GFP). While fluorescence microscopes permit the collection of large, high-dimensional data sets, their manual processing is inefficient, not reproducible, time-consuming and error-prone, prompting the movement towards automated, efficient and robust processing for high-throughput applications. Segmentation, a fundamental yet very difficult problem in image processing, is often the first processing step following acquisition. While it is always desirable for imaging tasks in biology to be as automated as possible, this is especially critical for segmentation, as it takes human experts anywhere from hours to days to segment by hand. The current segmentation algorithm used in fluorescence microscopy - the watershed algorithm - is not well-suited to this problem. Meanwhile, state-of-the-art segmentation algorithms have only recently begun to be applied to this problem. We will work both on a specific biological problem, the study of the Golgi apparatus, and on other fluorescence microscope data sets provided by our collaborators. Thus: We propose to develop a flexible framework, a family of algorithms and a software toolbox for the automated segmentation of fluorescence microscope images based on multiscale transformations and active contour methods.
We plan on pursuing this goal through the following three specific aims. Specific Aim M: Develop a class of multiscale active contour transformations to efficiently extract those features of the fluorescence microscope data needed for segmentation, and develop a class of energy functionals and a corresponding family of segmentation algorithms that is flexible, modular and has an efficient implementation. Specific Aim D: Develop different algorithmic modules to cater to data-specific issues pertaining to initialization, computation of the forces, topology preservation and multiresolution transformation, and the nature of the data, such as multidimensionality/tissue images, as well as auxiliary modules specific to the application. Specific Aim S: Develop a flexible software platform and a user-friendly GUI to facilitate use by biologists as well as interaction between biologists and algorithm developers. The motivation is for this family of algorithms to be used for segmentation of fluorescence microscope data sets, as these are widely used to study processes at molecular and cellular levels. As segmentation is a typical first step in the analysis of such data sets, robust and automated segmentation algorithms are a must to enable large-scale studies of molecular and cellular processes.
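To illustrate the kind of region-based energy that drives such active-contour methods, the sketch below (a toy stand-in, not the proposed framework) minimizes the data term of a piecewise-constant, Chan-Vese-style energy on a synthetic blob image; the curve-length regularization of the full model is omitted for brevity:

```python
import numpy as np

def two_region_segment(img, n_iter=20):
    """Minimal piecewise-constant (Chan-Vese-style) segmentation.

    Alternates two steps that each decrease the data term of the energy
    E = sum_in (I - c1)^2 + sum_out (I - c2)^2:
      1. fix the mask, update the region means c1, c2;
      2. fix the means, assign each pixel to the closer mean.
    (The curvature/length regularization of the full model is omitted.)
    """
    mask = img > img.mean()          # crude initialization
    for _ in range(n_iter):
        c1 = img[mask].mean()
        c2 = img[~mask].mean()
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask

# Synthetic "fluorescence" image: a bright blob on a noisy background.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img = truth * 1.0 + 0.1 * rng.standard_normal((64, 64))

seg = two_region_segment(img)
accuracy = np.mean(seg == truth)
```

The full active-contour machinery adds the smoothness/topology terms and the multiscale feature extraction that the aims above describe; this sketch only shows the underlying energy-minimization idea.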
2009 — 2010
Kovacevic, Jelena
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally preliminary, short-term projects and are non-renewable.
Algorithms and Image Analysis Software Tool For Automated Recognition and Identif @ Carnegie-Mellon University
DESCRIPTION (provided by applicant): In recent years, biologists and clinicians have gained access to unprecedented amounts of imaging data depicting static and dynamic processes in cells and tissues. While this trove hides answers to a host of important questions, mining it visually, as is typically done, is an enormous and error-prone task that wastes valuable resources. As such, the automation of this processing has become an important area of emerging research. Classification, a standard task in image processing, underlies many problems in medicine and biology, such as recognizing proteins based on their subcellular location patterns, determining developmental stages in Drosophila embryos, recognizing tissues in histology, and diagnosing otitis media. Thus: We propose to develop a flexible, modular and accurate algorithm and a software toolbox to automatically recognize and identify normal and pathological processes occurring in disease and development. A generic classification system computes a set of numerical features describing the data and then separates these features into classes. We propose to first decompose the image using a multiresolution transform, as we postulate that multiresolution subspaces hide valuable information. Each subspace performs a separate classification, giving its vote. The arbiter that reconciles these local votes into a single, global one is the weighting block. It assigns a weight to each subspace based on how reliable its voting has been during training. Based on our preliminary work, we believe this system has great potential for accurate and robust classification (recognition, identification) of normal and pathological processes occurring in disease and development. Specific Aim 1: Develop a classification algorithm based on multiresolution transforms that is flexible, modular and accurate, and has an efficient implementation.
Specific Aim 2: Develop a flexible classification software platform and a user-friendly GUI to facilitate both use by biologists and clinicians and their interaction with algorithm developers. Significance of the Proposed Work: The flexibility and modularity of the proposed system, together with features developed for our three testbeds, will allow for use in a wide range of applications within the broad hierarchy of organ development. The distribution of the software as an open-source ImageJ plugin will allow for its wide use in the biological and medical communities. Innovation of the Proposed Work: The algorithm we propose is flexible, accurate and novel: multiresolution tools offer a window into previously unseen features within a dataset. Each block of the multiresolution classifier will offer a novel contribution: (1) construction of frame families in the multiresolution block, (2) novel features in the feature-extractor block, and (3) multiresolution versions of known classifiers in the classifier block. Moreover, the testbeds we consider do not have an available tool for automated classification. PUBLIC HEALTH RELEVANCE: The motivation is for this algorithm and software toolbox to be available to the biological and medical communities for mining imaging data. As our three testbeds span various scales within the broad hierarchy of organ development, the success of our system will bring advances both in basic research at molecular and cellular levels (Drosophila project) and at tissue and organ levels (histology and otitis media projects).
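The decompose-vote-weight architecture described above can be sketched in a few dozen lines. Everything here is illustrative, not the proposed toolbox: the one-level Haar transform stands in for the multiresolution block, a nearest-class-mean rule stands in for each subspace classifier, and the weighting block scores each subband by its training accuracy:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform: the four subbands (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return [ll, lh, hl, hh]

def features(img):
    # one feature per subband: mean absolute coefficient ("texture energy")
    return [np.mean(np.abs(b)) for b in haar_subbands(img)]

class SubbandVotingClassifier:
    """Nearest-class-mean rule per subband; the weighting block combines the
    subband votes, weighting each by its accuracy on the training set."""
    def fit(self, imgs, labels):
        X = np.array([features(im) for im in imgs])            # (n, 4)
        y = np.array(labels)
        self.classes = np.unique(y)
        self.means = np.array([[X[y == c, j].mean() for c in self.classes]
                               for j in range(X.shape[1])])    # (4, n_classes)
        preds = np.array([[self.classes[np.argmin(np.abs(self.means[j] - x[j]))]
                           for j in range(X.shape[1])] for x in X])
        self.weights = (preds == y[:, None]).mean(axis=0)      # (4,)
        return self

    def predict(self, img):
        x = features(img)
        votes = {c: 0.0 for c in self.classes}
        for j, w in enumerate(self.weights):
            c = self.classes[np.argmin(np.abs(self.means[j] - x[j]))]
            votes[c] += w
        return max(votes, key=votes.get)

# toy data: "smooth" (class 0) vs "textured" (class 1) 16x16 patches
rng = np.random.default_rng(2)
smooth = [np.full((16, 16), rng.uniform(0, 1)) + 0.01 * rng.standard_normal((16, 16))
          for _ in range(20)]
textured = [rng.standard_normal((16, 16)) for _ in range(20)]
clf = SubbandVotingClassifier().fit(smooth + textured, [0] * 20 + [1] * 20)
pred = clf.predict(rng.standard_normal((16, 16)))
```

In this toy problem the detail subbands separate the classes cleanly and earn high weights, while the less reliable lowpass subband is down-weighted, which is exactly the role the abstract assigns to the weighting block.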
2010 — 2014 |
Fickus, Matthew Kovacevic, Jelena |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
CIF: Small: Theory of Multiresolution Classification With Bases and Frames @ Carnegie-Mellon University
Recent advances in imaging of biological systems at all scales, from molecular and cellular up to organ levels, have given biologists and clinicians opportunities to observe processes and interactions at a never-before-seen level, leading to the collection of huge amounts of high-dimensional data. As a result, the visual inspection of these data sets, already error-prone, nonreproducible and subjective, has become impractical as well. There is thus an acute need for the development of systems that both automate this analysis and mine interactions not visible to the human eye. The task of classification has been at the heart of several of the group's projects in the past few years, including the determination of developmental stages in fly embryos, the recognition of H&E-stained tissue types in stem-cell teratomas, and the diagnosis of otitis media. As an accurate and efficient algorithm for automated classification would be of great use to biologists and clinicians, a multiresolution (MR) classification algorithm was developed and, in each of the problems, consistent trends emerged: 1. MR classification always performed better than the non-MR version; 2. Redundant MR transforms (frames) always performed better than nonredundant ones (bases). This consistency across data sets and applications indicates that MR has the power to make a significant impact on biomedical image classification performance. The investigators thus study MR classification to gain a fundamental understanding of its underpinnings, in particular, the following two questions: 1. When/why does MR classification work? 2. When/why does MR frame classification work? These questions are approached by setting up a measure-theoretic theory of classification as a mathematically rigorous framework within which to pose and investigate real-world classification problems.
2010
Kovacevic, Jelena
R41 Activity Code Description: To support cooperative R&D projects between small business concerns and research institutions, limited in time and amount, to establish the technical merit and feasibility of ideas that have potential for commercialization. Awards are made to small business concerns only.
Dx Ear: An Automated Tool For Diagnosis of Otitis Media @ Blue Belt Technologies, Inc.
DESCRIPTION (provided by applicant): Otitis media is a general term for middle-ear inflammation that is classified clinically as either acute otitis media (AOM) or otitis media with effusion (OME). AOM represents a bacterial superinfection of the middle-ear fluid and OME a sterile effusion that tends to subside spontaneously. Antibiotics are generally beneficial only for AOM. Accurate diagnosis of AOM, as well as distinction from both OME and no effusion (NOE), requires considerable training. AOM is the most common infection for which antimicrobial agents are prescribed for children in the US. By age seven, 93 percent of children will have experienced one or more episodes of otitis media. AOM results in significant social burden and indirect costs due to time lost from school and work. Estimated direct costs of AOM in 1995 were $1.96 billion and indirect costs were estimated to be $1.02 billion, with a total of 20 million prescriptions for antimicrobials related to otitis media. Given these considerations, our goal is to: Develop a software tool to classify images into one of three stringent clinical diagnostic categories (AOM/OME/NOE), and validate the algorithm on tympanic membrane (TM) images. We have assembled a strong multidisciplinary team that can successfully develop an automated diagnostic algorithm in this Phase-I program. We have (1) gathered a team of nationally recognized otoscopists with substantial clinical and research experience in the context of AOM clinical trials; (2) studied the predictive value of diagnostic findings in discriminating AOM from OME from NOE; (3) acquired a large number of TM images from children; and (4) involved an internationally recognized expert in developing algorithms in all areas of image analysis and processing.
In the planned Phase-II, we will use the algorithm developed in the Phase-I program and incorporate it into a user-friendly and marketable digital otoscope-software platform that can be used at the point of care by clinicians to improve the care of children with this frequently occurring condition. This will be followed by a clinical trial evaluating its immediate impact on clinical care and, in particular, on utilization of antimicrobials. Our main goal will be to develop an accurate automated algorithm for classifying the three diagnostic categories (AOM/OME/NOE). We aim to achieve an overall accuracy of 95 percent by applying a newly developed classification algorithm. This will include applying state-of-the-art classification methods as well as segmentation algorithms for automated, robust diagnosis and classification of the three diagnostic categories (AOM/OME/NOE). We propose to achieve this through the following two specific aims: Specific Aim 1: Develop a robust and accurate diagnostic algorithm that can classify TM digital images into 1 of 3 stringent diagnostic categories (AOM/OME/NOE). Specific Aim 2: Validate the algorithm on a dataset that includes over 2000 TM images collected in a recently completed NIAID-sponsored clinical trial. PUBLIC HEALTH RELEVANCE: AOM is the most common infection for which antimicrobial agents are prescribed in children in the US. By age seven, 93 percent of children will have experienced one or more episodes of otitis media. AOM results in significant social burden and indirect costs due to time lost from school and work. Estimated direct costs of AOM in 1995 were $1.96 billion and indirect costs were estimated to be $1.02 billion, with a total of 20 million prescriptions for antimicrobials related to otitis media.
Developing an automated and accurate software tool to help classify otitis media images into one of three stringent clinical categories would have a great impact both on clinical care and on reducing unnecessary antibiotic prescriptions in the US.
2011 — 2015 |
Bielak, Jacobo (co-PI) [⬀] Garrett, James (co-PI) [⬀] Garrett, James (co-PI) [⬀] Kovacevic, Jelena |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Indirect Bridge Health Monitoring Using Moving Vehicles @ Carnegie-Mellon University
The objective of this research is to provide accurate, rapid, nearly continuous, and cost-effective assessments of a large population of bridges using data collected from a set of vehicles equipped with sensors able to capture the dynamic interaction between the vehicles and the bridge. This grant provides funding for the development of a new approach for assessing the health of bridges that uses vehicles with on-board sensors to collect condition information about the bridges over which they travel. The dynamic characteristics of a bridge are affected by damage in the form of cracks, corrosion, and frozen bearings. The first premise of this research is that these changes will be detectable from the dynamic responses collected from a large number of vehicles travelling over the bridge. A second premise is that the type, location, and extent of the damage on the bridge can be classified from those same responses. The new approach will use multiresolution (MR) signal processing and pattern recognition algorithms to detect and classify bridge damage.
If successful, the results of this research will be beneficial to bridge authorities by leading to the development of a new indirect assessment method for monitoring the health of a large number of bridges using the same instrumented vehicles. The method will have a significant economic impact by providing an efficient and more cost-effective way to improve the management of the overall structural condition of bridges. The results of this research will also lead to new signal-processing algorithms for analyzing the signals collected from vehicles. In addition, knowledge gained during the project will be useful for determining the applicability of this approach to different types of structures. The experience and the insights provided by this research project, which will directly involve two PhD graduate students and a number of undergraduate students, will be transferred into courses and demonstrations both at Carnegie Mellon University and the University of Pittsburgh.
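As a toy illustration of how MR signal processing could flag damage in vehicle-borne measurements (the signal model below is invented for the example and is not taken from the project), a local defect that injects a high-frequency transient into the response shows up as excess energy in the finest-scale wavelet details:

```python
import numpy as np

def detail_energy(sig):
    """Energy of the finest-scale Haar detail coefficients of a 1-D signal."""
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)
    return float(np.sum(d ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024, endpoint=False)

# "Healthy" response: smooth low-frequency vibration plus sensor noise.
healthy = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal(t.size)

# "Damaged" response: the same vibration with a short high-frequency
# transient, a stand-in for the signature a local defect might induce.
damaged = healthy.copy()
damaged[500:530] += 0.5 * np.sin(2 * np.pi * 200 * t[500:530])

e_h = detail_energy(healthy)   # baseline feature
e_d = detail_energy(damaged)   # elevated by the transient
```

A full system would compute such features at many scales and feed them to a pattern-recognition stage; this sketch only shows why a multiresolution feature can separate the two conditions when a raw amplitude comparison cannot.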
2014 — 2017 |
Kovacevic, Jelena Sandryhaila, Aliaksei (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
CIF: Small: Multiresolution Analysis of Graphs @ Carnegie-Mellon University
Datasets that are collected in engineering, social, commercial and other domains are becoming increasingly large, complex and irregular in structure. There is an urgent need for the development of methods that formalize and automate the analysis of such data and are capable of extracting valuable information. Recently, a theoretical framework called signal processing on graphs has emerged as a new approach to analyzing data with irregular structure; it extends fundamental signal processing concepts to data residing on arbitrary graphs and formulates data analysis problems as standard signal processing tasks. Moreover, as data often needs to be analyzed at multiple levels of detail, the investigators develop the fundamentals of multiresolution analysis on graphs, extending the theory of discrete signal processing on graphs.
This research extends relevant signal processing concepts that are critical to the development of the multiresolution analysis, including signal translation, scaling, and sampling, to general graphs. The generalized definition is consistent with the classical multiresolution theory for time data and images; it also addresses additional challenges presented by structures and properties of graphs. Additionally, this research involves techniques and devices for the application and implementation of multiresolution analysis methods to data on graphs. The investigators also develop a set of general, theoretical methodologies that can later be instantiated and applied to datasets of different origin, nature and structure.
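The basic machinery such graph methods build on can be sketched as follows (an illustrative example on a ring graph, not the project's constructions): the Laplacian eigenbasis plays the role of the Fourier basis, and filtering amounts to weighting the spectral components:

```python
import numpy as np

# Graph signal processing on a small ring graph: the graph Fourier basis is
# the eigenbasis of the graph Laplacian, and "low-pass filtering" keeps the
# components on the smoothest (small-eigenvalue) eigenvectors.
n = 16
A = np.zeros((n, n))
for i in range(n):                       # ring: each node linked to a neighbor
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
lam, U = np.linalg.eigh(L)               # eigenvalues act as graph frequencies

rng = np.random.default_rng(4)
smooth = np.cos(2 * np.pi * np.arange(n) / n)      # slowly varying on the ring
signal = smooth + 0.3 * rng.standard_normal(n)     # noisy observation

x_hat = U.T @ signal                     # graph Fourier transform
h = (lam <= lam[4]).astype(float)        # ideal low-pass frequency response
denoised = U @ (h * x_hat)               # filter and inverse GFT

err_noisy = np.linalg.norm(signal - smooth)
err_denoised = np.linalg.norm(denoised - smooth)
```

Since the smooth component lies in the kept band, filtering removes only noise energy here; the multiresolution question studied in this project is how to organize such spectral decompositions into nested levels of detail on arbitrary graphs.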
2015
Kovacevic, Jelena
R13 Activity Code Description: To support recipient-sponsored and -directed international, national, or regional meetings, conferences, and workshops.
IEEE International Symposium On Biomedical Imaging (ISBI) 2015 @ Institute of Electrical and Electronics Engineers
DESCRIPTION (provided by applicant): The project provides NIH student travel and tutorial support to graduate students and Ph.D. candidates in computational bioimaging or closely related areas to attend and participate in the 2015 ISBI conference, organized jointly by the IEEE Signal Processing Society (SPS) and the Engineering in Medicine and Biology Society (EMBS) in New York, NY, April 16-19, 2015. The objective of ISBI is to bring together researchers with interests in the mathematical and computational aspects of biomedical imaging, with a focus on addressing problems of significance to the development and application of imaging systems across the spatial scale, from microscopy to whole-body imaging. Topics include physical, biological and statistical modeling, image formation and reconstruction, computational image analysis, statistical image analysis, visualization, and image quality assessment. The focus emphasizes methodologies that have the potential to be applicable to multiple imaging modalities and to imaging at different scales. Audiences at ISBI are involved in biomedical imaging research and development, whether in academic institutions, government laboratories, or R&D departments of private companies. Publication quality: ISBI, like other IEEE SPS and EMBS conferences, requires submission and review of a 4-page short paper. These detailed submissions provide reviewers the opportunity to thoroughly evaluate the novelty and potential impact of the proposed computational or modeling methodology. IEEE anticipates that the primary impact of this grant program will be increased student and fellow attendance at both the main conference and the tutorials. By offering to cover a portion of attendees' travel expenses, we allow students to demonstrate to their mentors or departmental administration that the cost-to-benefit ratio for attending ISBI will be extremely favorable. The benefits can largely be summarized as exposure and education.
ISBI provides opportunities for exposure to many more areas of research than those to which one is generally exposed at one's home institution, and it provides exposure to many of the world leaders in the field through tutorials, plenary, oral, and poster presentations, lunches with leaders, and individual discussions. Through the presentation of a research paper, young investigators will expose their research to others for critical evaluation and dissemination, helping them make connections leading to research collaborations and new paths for career advancement.
2016 — 2019 |
Kovacevic, Jelena Singh, Aarti (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
CIF: Medium: Signal Representation, Sampling and Recovery On Graphs @ Carnegie-Mellon University
Datasets that are collected in physical and engineering applications, as well as social, biomolecular, commercial, security, and many other domains, are becoming larger and more complex. In many cases, such data is analyzed manually or using methods that extract only superficial information and can lead to subjective and non-reproducible conclusions. There is thus an urgent need for the development of methodologies that formalize analysis of complex data. Graphs provide a natural formalism to capture complex interactions that govern the structure of the data in many applications. However, a rigorous framework for signal and data processing on graphs has been lacking. This proposal aims to develop the fundamentals of signal representation, sampling and recovery on graphs.
Signal and data processing has been the focus of the principal investigators' work for many years. In this project, the team will develop a mathematically rigorous framework for signal processing on graphs that offers a new paradigm for the analysis of high-dimensional data with complex, non-regular structure. By extending fundamental signal processing concepts such as filtering, Fourier and wavelet analysis to data residing on general graphs, the framework will offer principled solutions to a number of data analysis problems, such as data compression, recovery, localization, detection, and others. Specifically, the team will 1) develop efficient succinct representations for signals on graphs, 2) design efficient strategies that leverage the graph structure for sampling signals on graphs, and 3) develop near-optimal and computationally efficient estimators for recovering graph signals from samples.
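The flavor of aims 2) and 3) can be illustrated with a toy example (the path graph and sample set below are chosen arbitrarily for illustration): a signal bandlimited to the first k Laplacian eigenvectors is recovered exactly from k well-placed node samples by solving a small least-squares problem:

```python
import numpy as np

# Sampling and recovery of a bandlimited graph signal: if the signal lies in
# the span of the first k Laplacian eigenvectors, its values on a suitable
# set of k nodes determine it exactly.
n, k = 20, 4
rng = np.random.default_rng(5)
A = np.zeros((n, n))
for i in range(n - 1):                    # path graph 0-1-2-...-19
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
Uk = U[:, :k]                             # low-graph-frequency subspace

x = Uk @ rng.standard_normal(k)           # a k-bandlimited graph signal

nodes = np.array([0, 6, 12, 19])          # sampled vertices (one valid choice)
coeff, *_ = np.linalg.lstsq(Uk[nodes, :], x[nodes], rcond=None)
x_rec = Uk @ coeff                        # interpolate back to all vertices

assert np.allclose(x_rec, x)
```

The research questions begin where this sketch ends: which sample sets make recovery stable, how to choose them efficiently on large irregular graphs, and how to recover signals that are only approximately bandlimited or observed in noise.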