1999 — 2005 |
Ye, Qiang; Li, Ren-Cang |
Career: Fast and Accurate Computations of Applied Eigenproblems @ University of Kentucky Research Foundation
Eigenproblems appear ubiquitously across applied science and engineering, and their efficient computation has become an integral part of related research and teaching. The research component of this proposal concerns numerical linear algebra problems with significant impact on applications. Although existing general-purpose software libraries such as LINPACK, EISPACK, and LAPACK solve many common eigenvalue and singular value problems quite efficiently, many problems from applied science and engineering, such as primary eigenvector computation for object recognition in image processing, can be solved much more efficiently if their intrinsic characteristics, mostly ignored by general-purpose libraries, are properly incorporated. This project demonstrates that point of view with a preliminary and promising study that combines numerical linear algebra techniques with characteristics of image data to attack a huge eigenvalue (singular value) problem for a large set of high-resolution images. The research objective is to exploit in depth the structural properties that applied problems inherit from their application background and, consequently, to develop accurate and efficient algorithms. Research into relative perturbation theory and high relative accuracy methods for matrix computations has been extremely active in the last ten years, and exciting advances are being made. The results could have a significant impact on research in other disciplines that previously used conventional algorithms for eigenproblem computations. The education component of this project implements a set of ideas and concrete projects, including the exploitation of new information technologies for more effective teaching, such as reducing the time students spend taking notes, keeping in constant touch with struggling students, and bringing mathematical material alive in the classroom.
The project is also concerned with modifying the curriculum of the existing two-semester numerical linear algebra courses to include applied problem lectures that ultimately will lead to a new course on solving applied computational problems.
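The primary-eigenvector computation mentioned above can be illustrated with a minimal sketch: power iteration applied to X^T X extracts the dominant singular vector of a data matrix without ever forming X^T X explicitly. The matrix below is a random stand-in for a set of flattened images (with a strong common component added so the leading direction is well separated), not an actual image dataset, and the sketch is generic, not the project's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a matrix of flattened images: random pixels plus a strong
# common (mean-image) component, so the leading direction is well separated.
X = rng.standard_normal((200, 400)) + 1.0

def leading_singular_vector(X, iters=200):
    """Power iteration on X^T X: returns the dominant singular pair of X
    without ever forming X^T X explicitly."""
    v = rng.standard_normal(X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = X.T @ (X @ v)          # one application of X^T X
        v = w / np.linalg.norm(w)
    sigma = np.linalg.norm(X @ v)  # corresponding singular value
    return sigma, v

sigma, v = leading_singular_vector(X)
```

Each iteration costs only two matrix-vector products, which is what makes such structure-exploiting approaches viable for large, high-resolution image sets.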
|
2001 — 2005 |
Ye, Qiang |
Preconditioned Krylov Subspace Algorithms For Computing Eigenvalues of Large Matrices @ University of Kentucky Research Foundation
Proposal #0098133 | Ye, Qiang | U of Kentucky
This project will develop preconditioned Krylov subspace methods, with analysis, for computing a few eigenvalues of a large generalized eigenvalue problem Ax = lambda Bx, and will study their robust implementation, with the long-term goal of developing a specialized package for public distribution. The resulting algorithms should inherit desirable characteristics of existing Krylov subspace methods while extending their capability for efficient preconditioning. The feasibility of this objective has been demonstrated by a preliminary study that led to an algorithm of this type for computing the smallest eigenvalue of a symmetric definite problem. In this project, that preliminary work will be strengthened and its ideas further developed and generalized to produce algorithms capable of delivering extreme as well as interior eigenvalues, for symmetric and nonsymmetric problems alike.
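A minimal numpy sketch of one iteration of this general type (an illustration, not the project's actual algorithm): at each step a Rayleigh-Ritz projection is applied to the span of the current iterate and a preconditioned residual, so only products with A, B, and a preconditioner M are needed, never an exact shifted solve inside the loop. The test matrices and the demonstration choice M ≈ A^{-1} below are assumptions for illustration; in practice M would be a cheap approximate inverse such as an incomplete factorization.

```python
import numpy as np

def smallest_eig_pencil(A, B, M, x0, iters=60):
    """Inverse-free preconditioned iteration sketch for the smallest
    eigenvalue of A x = lambda B x (A, B symmetric, B positive definite).
    Each step does Rayleigh-Ritz on span{x, M @ residual}; M is only a
    preconditioner, applied by multiplication."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = (x @ A @ x) / (x @ B @ x)          # Rayleigh quotient
        w = M @ (A @ x - rho * (B @ x))          # preconditioned residual
        Z = np.linalg.qr(np.column_stack([x, w]))[0]
        Ar, Br = Z.T @ A @ Z, Z.T @ B @ Z        # 2x2 projected pencil
        L = np.linalg.cholesky(Br)
        Li = np.linalg.inv(L)
        vals, vecs = np.linalg.eigh(Li @ Ar @ Li.T)
        x = Z @ (Li.T @ vecs[:, 0])              # Ritz vector for smallest value
        x /= np.linalg.norm(x)
    return (x @ A @ x) / (x @ B @ x), x
```

The two-dimensional projected problem is solved exactly at each step, so the method never inverts a shifted matrix, which is the capability the abstract highlights.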
This project builds on the PI's research expertise and contributions over the past decade to significantly advance the state of the art in numerical methods for large matrix eigenvalue problems. Unifying several existing ideas and concepts and bringing in new approaches, the resulting methods would be ideal topics for classroom learning and thesis research. Moreover, the preliminary study indicates that they would be well suited to black-box implementation and thus have the potential to reach a broader application community.
|
2004 — 2008 |
Ye, Qiang |
Computing Interior Eigenvalues of Large Matrices by Preconditioned Krylov Subspace Methods @ University of Kentucky Research Foundation
The investigator will develop preconditioned Krylov subspace methods, with analysis, for computing a few interior eigenvalues of large-scale matrix eigenvalue problems, and will also develop black-box implementations for public distribution. In previous work, the PI developed a method of this type for computing an extreme (smallest or largest) eigenvalue of symmetric problems, which has been implemented in library-quality software called EIGIFP. The investigator proposes to generalize the existing method to compute interior eigenvalues of symmetric and nonsymmetric matrix problems. The resulting algorithms not only inherit desirable characteristics of existing Krylov subspace methods, but also allow convergence acceleration through the use of a preconditioner (or approximate inverse) rather than the inverse of a shifted matrix.
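For contrast, the classical baseline the proposed methods aim to improve on is shift-invert iteration, which converges to the eigenvalue nearest a chosen shift but requires an exact solve with the shifted matrix at every step. A minimal numpy sketch of that baseline (a textbook method, with illustrative test data, not the project's algorithm):

```python
import numpy as np

def interior_eig_shift_invert(A, sigma, iters=100):
    """Classical shift-invert iteration for a symmetric matrix A: converges
    to the eigenvalue closest to the shift sigma. Note the exact linear
    solve with (A - sigma I) in every step."""
    n = A.shape[0]
    rng = np.random.default_rng(1)
    x = rng.standard_normal(n)
    S = A - sigma * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(S, x)   # the expensive exact shifted solve
        x /= np.linalg.norm(x)
    return x @ A @ x                # Rayleigh quotient ~ eigenvalue near sigma
```

The point of the proposal is precisely to replace the `np.linalg.solve` call with multiplication by a cheap preconditioner while retaining convergence to the targeted interior eigenvalue.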
Computation of interior eigenvalues of large matrices arises in many important applications, such as electromagnetic field simulations in cavities for particle accelerator models and the Anderson model of localization in quantum mechanics. In spite of the tremendous progress made in developing iterative methods and software packages for large-scale eigenvalue problems, these applications pose a significant challenge. The proposed work will advance the theory, algorithms, and software toolboxes for the computation of interior eigenvalues. It aims to generate maximal impact in applications by developing an efficient and publicly accessible software package that can be easily used by non-expert users.
|
2009 — 2013 |
Ye, Qiang |
High Relative Accuracy Iterative Algorithms For Large Scale Matrix Eigenvalue Problems With Applications @ University of Kentucky Research Foundation
Eigenvalue computation is a fundamental problem in numerical linear algebra. While there are numerous established algorithms, they do not in general guarantee computing the smaller eigenvalues to high relative accuracy in floating-point arithmetic. Indeed, the relative errors of the smaller eigenvalues computed by conventional algorithms are proportional to the condition number of the matrix. Research over the last two decades has identified several classes of special matrices whose eigenvalues/eigenvectors can be computed to high relative accuracy (i.e., independently of the condition number). All existing high relative accuracy algorithms, however, appear to concern only small (dense) matrices. Yet large matrices are often inherently ill-conditioned, and it is typically a few of the smaller eigenvalues that are of interest. The goal of this proposal is to study high relative accuracy iterative algorithms for computing a few eigenvalues/eigenvectors of a large symmetric positive definite matrix. The investigator will develop algorithms for matrices arising in important applications, such as discretization of differential operators and dimensionality reduction in machine learning, where special structure/properties of the matrices may be exploited to achieve higher accuracy.
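The accuracy barrier described above can be seen on a classic example. A conventional backward-stable eigensolver carries an absolute error of roughly eps * ||A||, so the relative error in the smallest eigenvalue is roughly eps * cond(A); for the Hilbert matrix below that bound exceeds 1, meaning the computed smallest eigenvalue may have no correct digits at all. (The matrix is a standard illustration, not one of the application matrices from the proposal.)

```python
import numpy as np

n = 12
# Hilbert matrix: symmetric positive definite and extremely ill-conditioned.
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

lam = np.linalg.eigvalsh(H)
cond = lam[-1] / abs(lam[0])
# Conventional algorithms deliver absolute errors ~ eps * ||H||, so the
# relative error of the smallest eigenvalue can be ~ eps * cond(H).
bound = np.finfo(float).eps * cond
```

Here `bound` is of order 1, which is exactly why high relative accuracy requires exploiting structure rather than relying on conventional backward stability.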
Spectral (eigenvalue) analysis is a widely used mathematical tool in science and engineering. Eigenvalues of differential operators describe the natural vibration frequencies of mechanical structures. Dimensionality reduction of high-dimensional data in machine learning often leads to the minimization of certain quadratic functionals, whose solutions are eigenvectors. Solving such eigenvalue problems to high relative accuracy poses a challenge to existing algorithms. The proposed work will therefore not only advance the theoretical foundations and algorithm development for large-scale eigenvalue problems, but also contribute to the general areas of applied science/engineering and machine learning by significantly improving the accuracy of the algorithms used in these applications.
|
2013 — 2017 |
Ye, Qiang |
Accurate and Efficient Algorithms For Computing Exponentials of Large Matrices With Applications @ University of Kentucky Research Foundation
The matrix exponential is an important linear algebra tool with a wide range of applications. Its efficient computation is a classical numerical linear algebra problem of considerable importance to many fields. This research project concerns numerical algorithms for computing exponentials of large matrices. The main objectives are: (1) to develop efficient preconditioning techniques for computing the product of the exponential of a matrix with a vector, and (2) to develop accurate and efficient algorithms for computing selected entries of the exponential of an essentially nonnegative matrix. The proposed research will advance the theory and algorithms for matrix exponentials in the setting of iterative methods for large-scale problems. It will systematically address the problems of preconditioning and entrywise relative accuracy, which are critically important in certain applications. The resulting algorithms will improve on existing ones in computational efficiency and/or accuracy. At the conclusion of this project, robust MATLAB implementations of the algorithms developed will be made publicly available.
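Objective (1) concerns products exp(A) v rather than the full exponential. The standard iterative building block is the Arnoldi (Krylov) approximation sketched below in plain numpy, with the small projected exponential computed by simple scaling-and-squaring; this is a generic textbook sketch, not the preconditioned algorithms proposed here, and the truncation parameters are illustrative.

```python
import numpy as np

def expm_taylor(H, terms=30):
    """Dense matrix exponential by scaling-and-squaring with a truncated
    Taylor series (adequate for the small matrices in this sketch)."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(H, 1), 1e-16)))) + 1)
    X = H / 2**s
    E = np.eye(H.shape[0])
    T = np.eye(H.shape[0])
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(s):          # undo the scaling by repeated squaring
        E = E @ E
    return E

def expm_krylov(A, v, m=30):
    """Arnoldi approximation of exp(A) v: project A onto the Krylov
    subspace K_m(A, v) and exponentiate the small Hessenberg matrix."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    Hm = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            Hm[i, j] = V[:, i] @ w
            w = w - Hm[i, j] * V[:, i]
        Hm[j + 1, j] = np.linalg.norm(w)
        if Hm[j + 1, j] < 1e-12:            # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / Hm[j + 1, j]
    E = expm_taylor(Hm[:m, :m])
    return beta * (V[:, :m] @ E[:, 0])      # exp(A) v ~ beta * V_m exp(H_m) e_1
```

Only matrix-vector products with A are needed, which is what makes the approach, and hence preconditioning for it, relevant to large-scale problems.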
The algorithms proposed in this project will provide new computational tools that are sufficiently efficient and/or accurate to meet the challenges posed by many large-scale application problems. A fully developed, efficient preconditioning technique would significantly advance the state of the art in solving large-scale initial value problems, which are used to model a large number of practical problems in science and engineering. The proposed algorithms for accurately computing selected entries of the exponential of a large essentially nonnegative matrix would remove a numerical accuracy issue that poses a significant challenge to traditional algorithms. The need for entrywise accurate computation arises in continuous-time Markov chain models, where the entries represent transition probabilities, and in large complex networks, where the entries define network properties such as connectivity. Thus, the new algorithms will be applicable to a wide range of problems involving continuous-time Markov chains or complex networks, including problems from genetics, sociology, neurology, homeland security, and biological, social, telecommunication, and computer networks.
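For the continuous-time Markov chain application, entrywise accuracy can be illustrated by uniformization, a classical technique that writes exp(Qt) as a sum of nonnegative terms so that no subtractive cancellation occurs. This is a textbook method shown here for context, not the project's algorithms, and the truncation tolerance is illustrative.

```python
import numpy as np

def ctmc_transition(Q, t, tol=1e-12):
    """Uniformization: P = exp(Q t) for a CTMC generator Q (off-diagonal
    entries >= 0, rows summing to 0). Every term in the series is
    nonnegative, so small transition probabilities keep entrywise
    relative accuracy."""
    q = max(-Q.diagonal().min(), 1e-16)   # uniformization rate
    Pbar = np.eye(Q.shape[0]) + Q / q     # a stochastic matrix
    # P = sum_k e^{-qt} (qt)^k / k! * Pbar^k, truncated once the
    # accumulated Poisson weight is within tol of 1.
    weight = np.exp(-q * t)
    term = np.eye(Q.shape[0])
    P = weight * term
    k, acc = 0, weight
    while acc < 1.0 - tol and k < 1000:
        k += 1
        term = term @ Pbar
        weight *= q * t / k
        P += weight * term
        acc += weight
    return P
```

Because the entries of P are probabilities, preserving their entrywise accuracy (rather than only a normwise bound) is exactly the property the proposal targets for large matrices.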
|
2013 — 2017 |
Ye, Qiang |
Collaborative Research: CDS&E-MSS: Robust Algorithms For Interpolation and Extrapolation in Manifold Learning @ University of Kentucky Research Foundation
The objective of this proposal is to develop robust algorithms for reconstructing or synthesizing highly structured high-dimensional data from a low-dimensional representation learned from a training dataset, i.e., the interpolation and extrapolation problems in manifold learning. The project will address the elusive issue of computing a usually not-well-defined low-dimensional parametrization in the setting of various interpolation and extrapolation problems for manifold learning, emphasizing the notion of physically meaningful parametrizations. It will develop innovative computational methodology for flexibly learning a low-dimensional parametrization together with other physically important variables in unsupervised, semi-supervised, and especially active learning settings; for learning and synthesis of dynamic data; and for manifold extrapolation based on transfer learning. The project also includes the development of a publicly available software package to disseminate the research results and promote applications of nonlinear dimension reduction methodology to real-world problems.
The discoveries from this research are expected to impact a wide range of application areas. Computing compact representations of high-dimensional data is a very challenging statistical learning problem, and manifold learning has become a very active research field aimed at discovering hidden structure from the statistical and geometric regularity inherent in many high-dimensional datasets. Reconstruction and synthesis of high-dimensional data in the context of interpolation and extrapolation will have significant applications in image and video processing, computer vision, video surveillance for homeland security, computational biology, and scientific visualization. The proposed theoretical tools and computational methods promise to significantly expand the applicability and functionality of existing and new manifold learning methods, thus advancing the state of the art in nonlinear dimension reduction research. The proposed research lies at the interface of applied mathematics, computational science, and machine learning applications, and provides an ideal setting for research cross-fertilization and collaboration, as well as for training graduate students in interdisciplinary research.
|
2016 — 2019 |
Ye, Qiang |
Accurate Preconditioning For Computing Eigenvalues of Large and Extremely Ill-Conditioned Matrices @ University of Kentucky Research Foundation
Computation of eigenvalues of large matrices arises in a wide range of scientific and engineering applications, including, for example, the page ranking used by the Google search engine. Large-scale eigenvalue problems are often inherently ill-conditioned, meaning that their eigenvalues differ vastly in magnitude. This poses a significant challenge to existing eigenvalue algorithms: the smaller computed eigenvalues may have poor accuracy, caused by roundoff errors in computer arithmetic. This project will develop new algorithms to address this numerical difficulty. The research results will have applications in a variety of problems where extreme ill-conditioning arises. In particular, a notably ill-conditioned problem is the biharmonic differential operator, which has been used in the modeling and design of rigid elastic structures such as beams, plates, and solids, in the construction of multivariate splines, and in geometric modeling and computer graphics. A discrete version of the biharmonic operator has also found applications in circuits, image processing, mesh deformation, and manifold learning. With discretized biharmonic operators easily becoming extremely ill-conditioned, this research will resolve the numerical accuracy issues of existing algorithms for these applications.
Computing the smaller eigenvalues of large and extremely ill-conditioned matrices is an important and intellectually challenging task. Indeed, the effect of ill-conditioning on accuracy is often regarded as an unsolvable problem attributable to the formulation of the eigenvalue problem itself. While recent research has shown that this may be mitigated by exploiting the structure of the matrices, the main objective of this project is an innovative use of preconditioning as a new general methodology for resolving the accuracy issue caused by ill-conditioning. We will develop new methods that combine preconditioning with accurate structured inversion to accurately compute the smaller eigenvalues of an extremely ill-conditioned matrix. As an application, we will also study various discretization schemes and derive suitable structured preconditioners for biharmonic differential operators.
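The severity of the ill-conditioning is easy to quantify in one dimension. For the squared discrete Dirichlet Laplacian, a pentadiagonal biharmonic discretization with interior stencil [1, -4, 6, -4, 1], the condition number grows like O(n^4), so doubling the grid multiplies it by roughly 16. A minimal numpy check (the particular discretization is an illustrative choice, not one of the schemes the project will study):

```python
import numpy as np

def biharmonic_1d(n):
    """Squared 1-D Dirichlet Laplacian: a pentadiagonal discrete
    biharmonic operator with interior stencil [1, -4, 6, -4, 1]."""
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return T @ T

# Condition number grows like O(n^4): even modest grids are already
# extremely ill-conditioned.
c20 = np.linalg.cond(biharmonic_1d(20))
c40 = np.linalg.cond(biharmonic_1d(40))
```

At realistic grid sizes the condition number quickly passes 1/eps, which is exactly the regime where conventional algorithms lose all relative accuracy in the smaller eigenvalues.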
|
2022 — 2025 |
Ye, Qiang |
Robust Preconditioned Gradient Descent Algorithms For Deep Learning @ University of Kentucky Research Foundation
Deep learning is at the forefront of research in artificial intelligence and machine learning, impacting a variety of applications in data science such as computer vision, speech recognition, natural language processing, and bioinformatics. A key challenge in deep neural network learning is model optimization, which is used for network training. However, traditional optimization algorithms are not directly applicable, primarily due to the high complexity and nonlinearity of deep neural networks. The goal of this project is to develop novel robust optimization algorithms that can effectively address these difficulties and more efficiently train deep learning models in practice. The project also involves the application of this work to the translation of equivalent chemical representations used in drug design, as well as to Bayesian inference for uncertainty quantification. As part of this project, graduate and undergraduate students will be trained in deep learning research, and software will be developed and made freely available.

This project includes the development of two new classes of optimization algorithms that are built on the frameworks of traditional preconditioning and conjugate gradient methods but incorporate ideas from successful specialized deep learning optimizers such as normalization methods and momentum methods. Specifically, the project will develop a new class of preconditioning methods as a widely applicable alternative to normalization methods, and a new class of adaptive momentum methods as a robust alternative to fixed-momentum methods. Related convergence theory will be established, and the new methods will be adapted to state-of-the-art neural network architectures such as transformers and graph neural networks.
The novel algorithms developed in this project aim to bring some of the most fruitful ideas in numerical analysis to the advancement of neural network optimization.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
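The flavor of combining a preconditioner with momentum can be sketched on a badly scaled quadratic, where a diagonal preconditioner equalizes the curvatures before the momentum update is applied. The algorithm and hyperparameters below are illustrative assumptions, not the project's actual methods.

```python
import numpy as np

def preconditioned_momentum_gd(grad, precond, x0, steps=400, lr=0.3, beta=0.9):
    """Gradient descent with heavy-ball momentum applied to a
    preconditioned gradient. Here precond is an elementwise approximate
    inverse-Hessian diagonal; lr and beta are illustrative values."""
    x = x0.copy()
    v = np.zeros_like(x)                    # momentum buffer
    for _ in range(steps):
        v = beta * v + precond * grad(x)    # momentum on preconditioned gradient
        x = x - lr * v
    return x

# Badly scaled quadratic f(x) = 0.5 * sum(d_i * x_i^2): plain gradient
# descent would need a step size ~ 1/max(d), but the diagonal
# preconditioner makes every coordinate equally easy.
d = np.logspace(0, 6, 10)
x_min = preconditioned_momentum_gd(lambda x: d * x, 1.0 / d, np.ones(10))
```

Deep network losses are neither quadratic nor have an available Hessian diagonal, which is why the project builds its preconditioners from normalization-style running statistics instead.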
|