1994 — 1997
Lafferty, John; Sleator, Daniel
Grammatical Trigrams: A New Approach to Statistical Language Modeling @ Carnegie-Mellon University
IRI-9314969. This project aims to retain the simplicity of the classical statistical trigram model of language while augmenting it with syntactic and semantic constraints on word use, giving the new grammatical trigram model an advantage over the purely stochastic model. The research uses the concepts of probabilistic link grammars, incorporating trigrams into a unified framework for modeling long-distance grammatical dependencies in computationally efficient ways. The proposed methods are expected to have greater predictive power than current methods as measured by entropy, and to integrate finite-state automata models and new statistical estimation algorithms on modern powerful machines, resulting in improved speech recognition, translation, and understanding systems.
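The trigram baseline that the project builds on is simple to sketch: maximum-likelihood trigram estimates interpolated with lower-order models. The Python snippet below is a minimal illustration with invented interpolation weights and a toy corpus; it is the stochastic baseline, not the project's grammatical trigram model.

```python
from collections import defaultdict

def train_trigram(tokens):
    """Count unigrams, bigrams, and trigrams from a token stream."""
    uni, bi, tri = defaultdict(int), defaultdict(int), defaultdict(int)
    padded = ["<s>", "<s>"] + tokens + ["</s>"]
    for i in range(2, len(padded)):
        w1, w2, w3 = padded[i - 2], padded[i - 1], padded[i]
        uni[w3] += 1
        bi[(w2, w3)] += 1
        tri[(w1, w2, w3)] += 1
    return uni, bi, tri

def prob(w1, w2, w3, uni, bi, tri, total, lambdas=(0.6, 0.3, 0.1)):
    """Interpolated trigram probability P(w3 | w1, w2); weights are made up."""
    l3, l2, l1 = lambdas
    p3 = tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0
    p2 = bi[(w2, w3)] / uni[w2] if uni[w2] else 0.0
    p1 = uni[w3] / total
    return l3 * p3 + l2 * p2 + l1 * p1

tokens = "the dog chased the cat and the cat ran".split()
uni, bi, tri = train_trigram(tokens)
total = sum(uni.values())
print(prob("the", "cat", "ran", uni, bi, tri, total))
```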
1998 — 2001
Lafferty, John; Bryant, Randal (co-PI)
Graphical Structures For Coding and Verification @ Carnegie-Mellon University
Algorithms on graphical structures play a central role in both communications technology and formal verification. Minimal trellises are graphical representations of error-correcting codes that have emerged as a unifying framework for understanding and manipulating codes of all types. Ordered binary decision diagrams and their variants are graph-based data structures for representing Boolean functions that have found widespread use in formal verification for a range of problems, including circuit checking, logic synthesis and test generation. This project builds on the close correspondence that has recently been established between the code trellis and binary decision diagram, and investigates the transfer of ideas between these previously disparate fields. The fundamental challenge that confronts both uses of graphical methods is the same: devise techniques to combat the exponential blowup in the size of the graph. The research is interdisciplinary and can be expected to have a broad range of applications, both within coding and verification and in such areas as artificial intelligence, database search, and combinatorial optimization.
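The compactness question can be made concrete with a toy example: the n-bit even-parity function has a truth table of size 2^n but a reduced ordered BDD with only 2n internal nodes. The sketch below builds that diagram with a hash-based unique table; it is a simplified illustration, not the project's trellis/BDD machinery.

```python
# Toy reduced OBDD for the n-bit even-parity function.
# A node is (var_index, low_child, high_child); terminals are True/False.
# Hash-consing through a unique table yields the canonical reduced form.

def mk(var, lo, hi, table):
    if lo == hi:                      # redundant test: skip the node
        return lo
    key = (var, lo, hi)
    if key not in table:
        table[key] = key              # unique-table lookup gives canonicity
    return table[key]

def parity_bdd(n):
    """Return the root of a reduced OBDD for even parity of n bits."""
    table = {}
    even, odd = True, False           # terminals: outcome given parity so far
    # Build bottom-up; each level needs only an "even" and an "odd" state.
    for var in reversed(range(n)):
        even, odd = (mk(var, even, odd, table),
                     mk(var, odd, even, table))
    return even, table

root, table = parity_bdd(20)
# A truth table would need 2**20 rows; the reduced OBDD has 2 nodes per level.
print(len(table), "internal nodes for 20 variables")
```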
1998 — 2006
Lafferty, John; Carbonell, Jaime (co-PI); Yang, Yiming; Nyberg, Eric (co-PI)
KDI: Universal Information Access: Translingual Retrieval, Summarization, Tracking, Detection and Validation @ Carnegie-Mellon University
This is a three-year standard award. The ultimate goal of the Universal Information Access project is the full democratization of information and knowledge access, by removing -- or greatly lowering -- educational, linguistic and socio-economic barriers to effective information access and use. Progress towards this goal requires us to address the following challenges: (1) Translingual information retrieval, in order to access documents across language barriers and across same-language jargon barriers, (2) Multi-level summarization, customized to the user's profile and information needs, (3) Automated hierarchical categorization, via high-dimensionality statistical learning methods, (4) Detection and tracking of new topics and events of interest to each user as they unfold, and (5) Information validation as a function of source reliability and inter-source consistency. These capabilities will be integrated seamlessly into an information navigator's workstation, using a common underlying object model and a user-centric interface for visualization and management of information. These methods will be evaluated both with respect to quantitative metrics and with respect to user feedback from realistic tasks. Universal information access requires more than search engines and web browsing. For instance, much useful information may exist in languages other than English, or may come from sources of unknown reliability. Moreover, rapid analysis of information requires customized summarization, anti-redundancy filters, and hierarchical organization. Advances in these areas are beneficial to all disciplines that must cope with large volumes of rapidly growing information, such as scientific research, crisis management, international business, and education. The proposed research, in addition to its clear impact on democratizing information access, should provide significant advances in: Information Retrieval, Machine Learning, Digital Libraries, and user-centered Information Management.
2000 — 2003
Lafferty, John; Blum, Manuel (co-PI); Blelloch, Guy; Sleator, Daniel (co-PI); Blum, Avrim (co-PI)
ITR: Algorithms: From Theory to Application @ Carnegie-Mellon University
With the explosion in connectivity of computers and in the size of data sets available for analysis, mathematically sophisticated algorithms are becoming increasingly crucial for a wide range of real-world applications. Unfortunately, it often takes many years for an algorithm to make it from theory into applications. In fact, the trend has been for different areas to develop their own algorithms independently, with the result that similar techniques are reinvented many times in different contexts, and radically new approaches that require an algorithmic level of abstraction take a long time to make it into applications. The intellectual core of this proposal is to create a coordinated effort in "Algorithms from Theory to Practice" that connects the basic development of fundamental algorithms and data structures to their many disparate uses. This work will address critical needs by connecting relevant algorithms to application areas, by exposing and tackling important issues that are common to multiple applications, and by developing fundamentally new approaches to solving key problems via the connections made.
This proposal aims to provide impact at a number of different levels. At the lowest level are specific research projects that target key application domains. These include algorithms for mesh generation with applications to scientific simulations and graphics, algorithms for indexing and searching needed for a number of data analysis tasks, and protocols that connect machine learning with cryptography to produce a fundamentally new way for people to securely authenticate to their computers. At a higher level, this proposal will create a center to which researchers in application areas can come to build connections and integrate algorithmic techniques and principles into their own projects. At the highest level, this proposal will create tools to improve the process of moving algorithms from theory to applications more broadly. As one example, for the course "Algorithms in the Real World" run by PI Blelloch, a set of web pages has already been developed detailing how algorithms are used in various applications and what turn out to be the crucial issues involved. A new, extensible version of this database would provide support for theoreticians, practitioners, and educators. We hope the end result will be both a faster pipeline from algorithm design to application and improved sharing of algorithmic techniques across application areas. In addition, we expect the students supported by this effort to fulfill the highest-level goals of this project by becoming the next generation of vertically integrated algorithm researchers.
The PIs each have a strong track record in algorithms, both theoretical and applied. Guy Blelloch is the developer of the NESL parallel programming language, as well as fast parallel algorithms for a number of core problems. Avrim Blum is known for his work in machine learning and approximation algorithms, and is the developer of the Graphplan planning algorithm, used as the basis of many AI planning systems. Manuel Blum is a winner of the ACM Turing Award for his work in the foundations of computational complexity theory and its applications to cryptography and program checking. John Lafferty is known for his work in language modeling and information retrieval, and is co-developer (along with PI Sleator) of the Link Grammar natural-language parser. Daniel Sleator is a winner of this year's ACM Kanellakis "Theory and Practice" Award for the development of the splay tree data structure, and has more recently been developing algorithms for natural language applications.
2001 — 2007
Lafferty, John; Blum, Lenore (co-PI); Blelloch, Guy; Sleator, Daniel (co-PI); Ravi, Ramamoorthi (co-PI)
ITR/SY+IM+AP: Center for Applied Algorithms @ Carnegie-Mellon University
Algorithms are the basic procedures by which computers solve problems. With the explosion in the use and connectivity of computers, and in the sizes of the data sets being used, the performance of algorithms is becoming increasingly important. Being able to solve a problem ten times faster, for example, could mean designing a drug next year instead of several years later, or reducing the cost of developing a new space structure by allowing faster and more extensive computer simulations. Over the past 30 years there have been significant advances in the basic theory of algorithms. These advances have led to a "core knowledge" concerning algorithms and algorithmic techniques that has now been applied across an amazing diversity of fields and applications---surely more broadly than calculus is now applied.
The problem, however, is that there is a large gap between ongoing theoretical research and the current use of algorithms in applications. It often takes more than ten years for the core ideas in a new algorithm to make it into an application, and ongoing theoretical research often does not properly address the needs of the applications. The purpose of the Center is to bridge this gap so that efficient and effective algorithms can be deployed more rapidly. This will be achieved through (1) a set of Problem-Oriented Explorations (PROBEs), (2) an extensive set of web resources on algorithms, and (3) educational activities, including workshops for educating teachers. The PROBEs will bring together algorithm designers and domain experts to rapidly deploy new algorithmic ideas within a specific domain.
2002 — 2008
Dill, Ken; Lafferty, John; Liberman, Mark (co-PI); Joshi, Aravind; Pereira, Fernando
ITR: Language, Learning, and Modeling Biological Sequences @ University of Pennsylvania
EIA-0205456. PI: Joshi, Aravind K., University of Pennsylvania.
Recent significant advances in natural language processing such as the integration of grammatical and probabilistic machine-learning techniques have not been exploited for modeling biological sequences. These new techniques are highly relevant to the biological domain because they support the integration of sequence features at several scales, from dependencies between successive items through dependencies involving complex structures to overall sequence statistics. Hence, the major goals to be pursued are: (1) Development of new techniques for integrating grammatical and probabilistic information, in particular, integration and evaluation of grammatical, probabilistic, and approximate counting methods for fold prediction in secondary and tertiary structures of biomolecules. (2) Development and evaluation of probabilistic exponential models for gene finding, in particular genes for apicoplast-targeted proteins in eukaryotic human pathogens of the phylum `Apicomplexa'.
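For goal (1), a much simplified stand-in for grammatical fold prediction is the classical Nussinov dynamic program, which maximizes the number of complementary base pairs in an RNA sequence; a stochastic grammar would replace the unit pair scores below with log-probabilities. The sequence and minimum loop length here are invented for illustration.

```python
def nussinov(seq, min_loop=3):
    """Max base-pairing DP (Nussinov): a toy stand-in for grammar-based folding."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                         # position i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                   # bifurcation into two folds
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # expect a few pairs in this toy hairpin
```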
This research is highly interdisciplinary, involving the disciplines of computer science, biology and linguistics. It will have a significant impact on the modeling of biological sequences. It will also provide a wonderful opportunity to train new researchers to carry out this interdisciplinary research, thus contributing to science and mathematical education and human resource development.
The proposed research arose out of many discussions that took place at a landmark workshop on `Language Modeling of Biological Data' held at the University of Pennsylvania in February 2001.
2003 — 2006
Lafferty, John; Blum, Avrim (co-PI)
ITR: Machine Learning from Labeled and Unlabeled Data @ Carnegie-Mellon University
This project investigates the basic question of how unlabeled data can be most effectively used together with labeled data in machine learning. The goals of this work are three-fold. First, the research aims to achieve a fundamental understanding of this problem, including new methods for reasoning about the kind of information unlabeled data can provide. Second, this research explores new algorithms for using large amounts of unlabeled data together with small amounts of labeled data and background knowledge, in order to achieve performance that greatly exceeds what is available today using only labeled data and more traditional methods. The approaches used by the investigators include graph algorithms and random fields, Monte Carlo sampling, and spectral methods: closely connected areas of computer science that have found application in computer vision but have yet to be fully exploited in machine learning. Finally, targeted applications, including text analysis, image classification, and intrusion detection for computer security, will be investigated to validate the theoretical principles that are developed, to explore algorithms, and to suggest new directions for investigation.
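A minimal sketch of the graph-and-random-field approach mentioned above, assuming the harmonic-function formulation in which the soft labels of unlabeled nodes solve f_u = L_uu^{-1} W_ul y_l on the graph Laplacian L = D - W; the toy chain graph is invented for illustration.

```python
import numpy as np

def harmonic_labels(W, labeled_idx, y_labeled):
    """Semi-supervised soft labels via the harmonic solution on a graph.

    W: symmetric nonnegative affinity matrix (n x n).
    labeled_idx: indices of labeled nodes; y_labeled: their 0/1 labels.
    Returns the unlabeled indices and their soft labels in [0, 1].
    """
    n = W.shape[0]
    u = np.setdiff1d(np.arange(n), labeled_idx)
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
    L_uu = L[np.ix_(u, u)]
    W_ul = W[np.ix_(u, labeled_idx)]
    f_u = np.linalg.solve(L_uu, W_ul @ y_labeled)  # harmonic solution
    return u, f_u

# Toy chain of 5 nodes: node 0 labeled 1, node 4 labeled 0, rest unlabeled.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
u, f_u = harmonic_labels(W, np.array([0, 4]), np.array([1.0, 0.0]))
print(dict(zip(u, f_u.round(2))))   # soft labels interpolate along the chain
```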
The broader impact of this research will be to help enable new technologies to use the volumes of data that are being collected in so many new domains, and on such a great scale. Advances in our understanding of the possibilities for, and fundamental limits to, combining labeled and unlabeled data have the potential to impact many scientific fields, allowing researchers to more easily use the vast quantities of data that are available but not necessarily annotated for their own specific needs. They may also ultimately influence the future data collection initiatives that our society chooses to invest in.
2004 — 2008
Lafferty, John
ITR: Collaborative Research: (ACS+NHS)-(DMC+SOC): Machine Learning for Sequences and Structured Data: Tools for Non-Experts @ Carnegie-Mellon University
Sequential and graph-structured data arise naturally in a wide variety of scientific, engineering, and intelligence problems, such as handwriting and speech recognition, text mining, gene finding, and network analysis. While researchers have recently made significant progress on machine learning methods for processing structured data, these methods are much less accessible to scientists, engineers, and analysts than the better understood statistical learning techniques of classification and regression.
This project is researching methods to advance the state of the art in machine learning for structured data, building on recent work in conditional random fields and weighted transducers. The project is also developing a software toolkit to make the results of these advances accessible to researchers working in a wide range of disciplines and application domains. The toolkit will enable users to define, train, and apply models for structured data without requiring advanced expertise in machine learning. The functionality of the toolkit will include methods for specifying features relevant to an application, automatically selecting the most relevant features, adjusting parameters to optimize suitable training objectives, and combining models that pertain to different facets of an application.
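A hedged sketch of the central computation a linear-chain conditional random field requires: scoring a tag sequence and computing the log-partition function by the forward recursion. The score matrices below are random placeholders; this is not the project toolkit's API.

```python
import numpy as np

def crf_log_likelihood(emissions, transitions, tags):
    """Log-likelihood of a tag sequence under a linear-chain CRF.

    emissions: (T, K) per-position scores; transitions: (K, K) scores.
    The forward recursion computes log Z in O(T * K^2).
    """
    T, K = emissions.shape
    # Unnormalized score of the given path.
    score = emissions[0, tags[0]]
    for t in range(1, T):
        score += transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    # Forward pass for the log-partition function.
    alpha = emissions[0].copy()
    for t in range(1, T):
        # log-sum-exp over the previous tag, for each current tag
        m = alpha[:, None] + transitions + emissions[t][None, :]
        alpha = np.logaddexp.reduce(m, axis=0)
    log_Z = np.logaddexp.reduce(alpha)
    return score - log_Z

rng = np.random.default_rng(0)
em, tr = rng.normal(size=(5, 3)), rng.normal(size=(3, 3))
print(crf_log_likelihood(em, tr, [0, 1, 1, 2, 0]))   # always <= 0
```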
The software, which will be freely distributed, will be tested with selected users in several application domains, and be carefully documented. The project will thus provide the scientific and engineering community with the first generally usable tool for learning from structured data, serving a role that is parallel to that of the more standard tools for classification and regression that are already widely used.
2006 — 2010
Lafferty, John; Wasserman, Larry (co-PI); Lee, Ann
MSPA-MCS: Nonparametric Learning in High Dimensions @ Carnegie-Mellon University
Prop ID: DMS-0625879
PI: Lafferty, John D.
Institution: Carnegie-Mellon University
Title: MSPA-MCS: Nonparametric Learning in High Dimensions
Abstract:
The research in this proposal lies at the boundary of statistics and machine learning, with the underlying theme of nonparametric inference for high-dimensional data. Nonparametric inference refers to statistical methods that learn from data without imposing strong assumptions. The project will develop the mathematical foundations of learning sparse functions in high-dimensional data, and will also develop scalable, practical algorithms that address the statistical and computational curses of dimensionality. The project will rigorously develop the idea that it is possible to overcome these curses if, hidden in the high-dimensional problem, there is low-dimensional structure. The focus of the project will be on five technical aims: (1) develop practical methods for high-dimensional nonparametric regression; (2) develop theory for learning when the dimension increases with sample size; (3) develop theory that incorporates computational costs into statistical risk; (4) develop methods for sparse, highly structured models; and (5) develop methods for data with a low intrinsic dimensionality. These aims target the advancement of both statistical theory and computer science, and the interdisciplinary team for the project includes a statistician (Wasserman), a computer scientist (Lafferty), and a physicist who is now in a statistics department (Lee).
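Aim (1) can be illustrated with a toy version of sparse backfitting for additive models: each coordinate's component function is fit to partial residuals with a kernel smoother and then soft-thresholded, so that irrelevant coordinates are driven exactly to zero. The smoother, bandwidth, and threshold below are invented stand-ins, not the project's estimators.

```python
import numpy as np

def smooth(x, r, bandwidth=0.3):
    """Nadaraya-Watson smoother of residuals r against coordinate x."""
    d = (x[:, None] - x[None, :]) / bandwidth
    K = np.exp(-0.5 * d ** 2)
    return (K @ r) / K.sum(axis=1)

def sparse_backfit(X, y, lam=0.1, iters=20):
    """Toy sparse additive regression: backfitting plus soft thresholding."""
    n, p = X.shape
    f = np.zeros((n, p))
    for _ in range(iters):
        for j in range(p):
            r = y - y.mean() - f.sum(axis=1) + f[:, j]   # partial residual
            g = smooth(X[:, j], r)
            g -= g.mean()
            norm = np.sqrt(np.mean(g ** 2))
            shrink = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
            f[:, j] = shrink * g                         # weak components vanish
    return f

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 10))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
f = sparse_backfit(X, y)
active = [j for j in range(10) if np.abs(f[:, j]).max() > 1e-8]
print("selected components:", active)    # ideally [0, 1]
```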
2007 — 2012
Lafferty, John; Miller, Gary; Lee, Tai Sing (co-PI)
Collaborative Research: Spectral Graph Theory and Its Applications @ Carnegie-Mellon University
Spectral graph theory, or algebraic graph theory as it is also known, is the study of the relationship between the eigenvalues and eigenvectors of graphs and their combinatorial properties. Random walks on graphs, expander graphs, clustering, and several other combinatorial aspects of graphs are intimately connected to their spectral properties. Recent approaches to the analysis of high-dimensional data have exploited the fundamental eigenvectors of the data. These data sets are large and ever increasing, requiring "real-time" accurate responses to the given queries. This creates the need for very fast algorithms that also provide strict theoretical guarantees on their output. Spectral techniques have been applied to image processing, both by computers and in the primary visual cortex of monkeys. A critical component of all these applications is algorithms with efficiency and accuracy guarantees for solving the associated linear systems and finding their fundamental eigenvectors.
A multidisciplinary team consisting of theoretical computer scientists, a machine learning scientist, and a neuroscientist will develop and apply spectral graph theory to applications ranging from data mining to clustering and image processing. Enabling technology development will include: (1) linear-work or O(m log m)-work algorithms that run in poly-logarithmic parallel time for computing extreme eigenvalues and generalized eigenvalues of diagonally dominant matrices, including Laplacian matrices, as well as algorithms of similar complexity for solving the related linear systems; and (2) better estimates for Fiedler values and generalized Fiedler values. Application development will include: (1) improvements in spectral image segmentation; (2) the use of generalized eigenvalues in data mining and image segmentation to combine multiple sources of information; (3) the use of preconditioners for approximate inference in graphical models; and (4) combining insights into the problem of image segmentation gained from spectral algorithms with knowledge gained from recent experiments on the visual system of monkeys, to better understand how the primary visual cortex functions.
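The core partitioning idea behind these applications is compact enough to sketch: compute the eigenvector of the second-smallest eigenvalue of the graph Laplacian (the Fiedler vector) and split vertices by its sign. The dense eigensolver below is a stand-in for the fast solvers the project targets; the toy graph is invented.

```python
import numpy as np

def fiedler_partition(W):
    """Split a graph in two using the Fiedler vector of its Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    fiedler = vecs[:, 1]              # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0               # sign pattern gives the two clusters

# Two 3-node cliques joined by a single weak edge.
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[a, b] = W[b, a] = 1.0
print(fiedler_partition(W))   # expect nodes 0-2 separated from nodes 3-5
```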
2011 — 2013
Lafferty, John; Wasserman, Larry (co-PI); Liu, Han
III: Small: Nonparametric Structure Learning for Complex Scientific Datasets @ Johns Hopkins University
The project brings together an interdisciplinary team of researchers from Johns Hopkins University, Carnegie Mellon University, and the University of Chicago to develop methods, theory and algorithms for discovering hidden structure from complex scientific datasets, without making strong a priori assumptions. The outcomes include practical models and provably correct algorithms that can help scientists to conduct sophisticated data analysis. The application areas include genomics, cognitive neuroscience, climate science, astrophysics, and language processing.
The project has five aims: (i) Nonparametric structure learning in high dimensions: In a standard structure learning problem, observations of a random vector X are available and the goal is to estimate the structure of the distribution of X. When the dimension is large, nonparametric structure learning becomes challenging. The project develops new methods and establishes theoretical guarantees for this problem; (ii) Nonparametric conditional structure learning: In many applications, it is of interest to estimate the structure of a high-dimensional random vector X conditional on another random vector Z. Nonparametric methods for estimating the structure of X given Z are being developed, building on recent approaches to graph-valued and manifold-valued regression developed by the investigators; (iii) Regularization parameter selection: Most structure learning algorithms have at least one tuning parameter that controls the bias-variance tradeoff. Classical methods for selecting tuning parameters are not suitable for complex nonparametric structure learning problems. The project explores stability-based approaches for regularization selection; (iv) Parallel and online nonparametric learning: Handling large-scale data is a bottleneck of many nonparametric methods. The project develops parallel and online techniques to extend nonparametric learning algorithms to large scale problems; (v) Minimax theory for nonparametric structure learning problems: Minimax theory characterizes the performance limits for learning algorithms. Few theoretical results are known for complex, high-dimensional nonparametric structure learning. The project develops new minimax theory in this setting. The results of this project will be disseminated through publications in scientific journals and major conferences, and through free distribution of software that implements the nonparametric structure learning algorithms resulting from this research.
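As a concrete instance of aim (i), the sketch below follows a nonparanormal-style recipe: rank-transform each variable to Gaussian scores, then estimate a sparse inverse covariance whose nonzeros define the graph. It assumes scikit-learn's GraphicalLasso for the Gaussian step; the regularization level and toy data are invented.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

def nonparanormal_graph(X, alpha=0.05):
    """Toy nonparametric graph estimate: rank-Gaussianize each variable
    (a nonparanormal-style transform), then run the graphical lasso."""
    n, p = X.shape
    Z = np.empty_like(X, dtype=float)
    for j in range(p):
        u = (rankdata(X[:, j]) - 0.5) / n   # empirical CDF values in (0, 1)
        Z[:, j] = norm.ppf(u)               # map ranks to Gaussian scores
    model = GraphicalLasso(alpha=alpha).fit(Z)
    return np.abs(model.precision_) > 1e-4  # nonzeros = estimated edges

rng = np.random.default_rng(2)
x0 = rng.normal(size=500)
X = np.column_stack([x0,
                     np.exp(x0) + 0.3 * rng.normal(size=500),  # depends on x0
                     rng.normal(size=500)])                    # independent
print(nonparanormal_graph(X).astype(int))  # expect an edge only between 0 and 1
```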
The broader impacts of the project include: creation of powerful data analysis techniques and software for a wide range of scientists and engineers to analyze and understand more complex scientific data; increased collaboration and interdisciplinary interactions between researchers at multiple institutions (Johns Hopkins University, Carnegie Mellon University, and the University of Chicago); and broad dissemination of the results of this research in different scientific communities. Additional information about the project can be found at: http://www.cs.jhu.edu/~hanliu/nsf116730.html.
2015 — 2018
Lafferty, John
Constrained Statistical Estimation and Inference: Theory, Algorithms and Applications
This project lies at the boundary of statistics and machine learning. The underlying theme is to exploit constraints that are present in complex scientific data analysis problems, but that have not been thoroughly studied in traditional approaches. The project will explore theory, algorithms, and applications of statistical procedures, with constraints imposed on the storage, runtime, shape, energy or physics of the estimators and applications. The overall goal of the research is to develop theory and tools that can help scientists to conduct more effective data analysis.
Many statistical methods are purely "data driven" and only place smoothness or regularity restrictions on the underlying model. In particular, classical statistical theory studies estimators without regard to their computational requirements. In modern data analysis settings, including astronomy, cloud computing, and embedded devices, computational demands are often central. The project will develop minimax theory and algorithms for nonparametric estimation and detection problems under constraints on storage, computation, and energy. Other constraints to be studied include shape restrictions such as convexity and monotonicity for high dimensional data. The project will also investigate the incorporation of physical constraints through the use of PDEs and models of physical dynamics and mechanics, focusing on both algorithms and theoretical bounds.
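Monotonicity, one of the shape restrictions named above, already illustrates how constraints change estimation: the least-squares fit under a nondecreasing constraint is given by the classical pool-adjacent-violators algorithm (PAVA), sketched below on made-up data. This is a standard textbook method, not a result of this project.

```python
import numpy as np

def pava(y):
    """Pool adjacent violators: least-squares fit under a monotone constraint."""
    y = np.asarray(y, dtype=float)
    blocks = [[v] for v in y]                 # blocks of pooled values
    i = 0
    while i < len(blocks) - 1:
        if np.mean(blocks[i]) > np.mean(blocks[i + 1]):   # violation found
            blocks[i] += blocks.pop(i + 1)    # pool the two blocks
            i = max(i - 1, 0)                 # re-check backwards
        else:
            i += 1
    return np.concatenate([[np.mean(b)] * len(b) for b in blocks])

y = [1.0, 3.0, 2.0, 4.0, 3.5, 5.0]
print(pava(y))   # nondecreasing fit: pools (3, 2) and (4, 3.5)
```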
2016 — 2021
Weare, Jonathan; Barber, Rina (co-PI); Anitescu, Mihai; Stein, Michael; Lafferty, John
RTG: Computational and Applied Mathematics in Statistical Science
This Research Training Group (RTG) project supports creation of a dynamic, interactive, and vertically integrated community of students and researchers working together in computational and applied mathematics and statistics. The activity recognizes the ways in which applied mathematics and statistics are becoming increasingly integrated. For example, mechanistic models for physical problems that reflect underlying physical laws are being combined with data-driven approaches in which statistical inference and optimization play key roles. These developments are transforming research agendas throughout statistics and applied mathematics, with fundamental problems in analyzing data leading to new areas of mathematical and statistical research. A result is a growing need to train the next generation of statisticians and computational and applied mathematicians in new ways, to confront data-centric problems in the natural and social sciences.
The research and educational activities of the project lie at the interface of statistics, computation, and applied mathematics. The research includes investigations in chemistry and molecular dynamics, climate science, computational neuroscience, convex and nonlinear optimization, machine learning, and statistical genetics. The research team is made up of a diverse group of twelve faculty, including researchers at the Toyota Technological Institute at Chicago and Argonne National Laboratory. The RTG is centered on vertically integrated research experiences for students, and includes innovations in both undergraduate and graduate education. These include the formation of working groups of students and postdocs to provide an interactive environment where students can actively explore innovations in computation, mathematics, and statistics in a broad range of disciplines. Postdocs will assume leadership roles in mentoring graduate students and advanced undergraduates. Participants in the RTG will receive an educational experience that provides them with strong preparation for positions in industry, government, and academia, with an ability to adopt approaches to problem solving that are drawn from across the computational, mathematical, and statistical sciences.
2018 — 2021
Turk-Browne, Nicholas; Clark, Damon (co-PI); Lafferty, John; Brock, Jeffrey (co-PI)
TRIPODS+X:RES: Investigations at the Interface of Data Science and Neuroscience
This project will build a transformative bridge between data science and neuroscience. These two young fields are driving cutting-edge progress in the technology, education, and healthcare sectors, but their shared foundations and deep synergies have yet to be exploited in an integrated way - a new discipline of "data neuroscience." This integration will benefit both fields: Neuroscience is producing massive amounts of data at all levels, from synapses and cells to networks and behavior. Data science is needed to make sense of these data, both in terms of developing sophisticated analysis techniques and devising formal, mathematically rigorous theories. At the same time, models in data science involving AI and machine learning can draw insights from neuroscience, as the brain is a prodigious learner and the ultimate benchmark for intelligent behavior. Beyond fundamental scientific gains in both fields, the project will produce additional outcomes, including: new collaborations between universities, accessible workshops, graduate training, integration of undergraduate curricula in data science and neuroscience, research opportunities for undergraduates that help prepare them for the STEM workforce, academic-industry partnerships, and enhanced high-performance computing infrastructure.
The overarching theme of this project is to develop a two-way channel between data science and neuroscience. In one direction, the project will investigate how computational principles from data science can be leveraged to advance theory and make sense of empirical findings at different levels of neuroscience, from cellular measurements in fruit flies to whole-brain functional imaging in humans. In the reverse direction, the project will view the processes and mechanisms of vision and cognition underlying these findings as a source for new statistical and mathematical frameworks for data analysis. Research will focus on four related objectives: (1) Distributed processing: reconciling work on communication constraints and parallelization in machine learning with the cellular neuroscience of motion perception to develop models of distributed estimation; (2) Data representation: examining how our understanding of the different ways that the brain stores information can inform statistically and computationally efficient learning algorithms in the framework of exponential family embeddings and variational inference; (3) Attentional filtering: incorporating the cognitive concept of selective attention into machine learning as a low-dimensional trace through a high-dimensional input space, with the resulting models used to reconstruct human subjective experience from brain imaging data; (4) Memory capacity: leveraging cognitive studies and natural memory architectures to inform approaches for reducing/sharing memory in artificial learning algorithms. The inherently cross-disciplinary nature of the project will provide novel theoretical and methodological perspectives on both data science and neuroscience, with the goal of enabling rapid, foundational discoveries that will accelerate future research in these fields.
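Objective (3) can be loosely illustrated with standard scaled dot-product attention, in which a low-dimensional query softly selects from many high-dimensional inputs; the toy cue-matching setup below is invented for illustration and is not the project's model.

```python
import numpy as np

def attention_filter(query, keys, values, temperature=1.0):
    """Soft selection: a query re-weights many inputs into one readout.

    A toy version of attention as a filter: the high-dimensional input
    (values) is collapsed to a single vector guided by query-key match.
    """
    scores = keys @ query / (temperature * np.sqrt(len(query)))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over input locations
    return weights, weights @ values

rng = np.random.default_rng(3)
keys = rng.normal(size=(50, 8))       # 50 input locations, 8-dim features
values = rng.normal(size=(50, 8))
query = keys[17] + 0.1 * rng.normal(size=8)   # cue resembling location 17
w, readout = attention_filter(query, keys, values)
print("most attended location:", w.argmax())  # expect 17
```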
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2023
Lafferty, John
Generative Models for Complex Data: Inference, Sensing, and Repair
The research in this project lies at the boundary of statistics and machine learning, and is focused on studying new families of statistical models. A generative model is an algorithm that transforms random inputs into synthesized data to mimic data found in a naturally occurring dataset, such as a database of images. The research will explore theory, algorithms, and applications of generative models to gain insight into phenomena observed in practice but poorly understood in terms of mathematical principles. The work will also pursue new applications of generative models in computational neuroscience, at scales from the cellular level to the macro level of human cognition. Anticipated outcomes of the research include development of software that implements new methodology, training of graduate students across traditional disciplines, and the introduction of modern statistics and machine learning to undergraduates through research projects based on this work.
The technical objectives of the project include four interrelated aims. First is to investigate the statistical properties of variational programs that are widely used in deep learning, and to develop new approaches to building generative models for novel data types. The second aim is to explore new algorithms to solve inverse problems based on generative models. Third, a new form of robust estimation will be studied where a model is corrupted after it has been constructed on data. Model repair is motivated from the fact that increasingly large statistical models, including neural networks, are being embedded in systems that may be subject to failure. Finally, the project will develop applications of generative modeling and inversion algorithms for modeling brain imaging data, including the use of simultaneous recordings in different modalities.
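The inverse-problem aim can be sketched under simple assumptions: fix a random two-layer generator, observe underdetermined linear measurements of one of its outputs, and recover the signal by gradient descent on the latent code. Every dimension and matrix below is invented for illustration, and the nonconvex search may only find a local optimum.

```python
import numpy as np

rng = np.random.default_rng(4)
k, h, n, m = 5, 32, 100, 30              # latent, hidden, signal, measurement dims
W1 = rng.normal(size=(h, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, h)) / np.sqrt(h)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # underdetermined sensing matrix (m < n)

def G(z):
    """Fixed random two-layer generator mapping latent code to signal."""
    return W2 @ np.tanh(W1 @ z)

z_true = rng.normal(size=k)
y = A @ G(z_true)                         # compressed measurements, no noise

# Recover by gradient descent on the latent code z (hand-coded chain rule
# for the loss 0.5 * ||A G(z) - y||^2).
z = np.zeros(k)
for _ in range(2000):
    a = np.tanh(W1 @ z)
    r = A @ (W2 @ a) - y                  # residual in measurement space
    grad = W1.T @ ((W2.T @ (A.T @ r)) * (1 - a ** 2))
    z -= 0.1 * grad

err = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
print("relative reconstruction error:", err)
```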
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.