Qiang Ye, Ph.D. - US grants
Affiliations: University of Pittsburgh, Pittsburgh, PA, United States
Area: synaptic plasticity, learning, neural models

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Qiang Ye is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
1999 — 2005 | Ye, Qiang; Li, Ren-Cang | N/A (no activity code was retrieved) | Career: Fast and Accurate Computations of Applied Eigenproblems @ University of Kentucky Research Foundation. Eigenproblems appear ubiquitously all across applied science and | 0.955 |
2001 — 2005 | Ye, Qiang | N/A | Preconditioned Krylov Subspace Algorithms For Computing Eigenvalues of Large Matrices @ University of Kentucky Research Foundation. Proposal #0098133 | 0.955 |
2004 — 2008 | Ye, Qiang | N/A | Computing Interior Eigenvalues of Large Matrices by Preconditioned Krylov Subspace Methods @ University of Kentucky Research Foundation. The investigator will develop preconditioned Krylov subspace methods, with supporting analysis, for computing a few interior eigenvalues of large-scale matrix eigenvalue problems, and will also develop black-box implementations for public distribution. In previous work of the PI, a method of this type was developed for computing extreme (smallest or largest) eigenvalues of symmetric problems, and was implemented in the library-quality software EIGIFP. The investigator proposes a generalization of the existing method to interior eigenvalues of symmetric and nonsymmetric matrix problems. The resulting algorithms not only inherit desirable characteristics of existing Krylov subspace methods, but also allow convergence acceleration through the use of a preconditioner (or approximate inverse) rather than the inverse of a shifted matrix. | 0.955 |
2009 — 2013 | Ye, Qiang | N/A | @ University of Kentucky Research Foundation. Eigenvalue computation is a fundamental problem in numerical | 0.955 |
2013 — 2017 | Ye, Qiang | N/A | Accurate and Efficient Algorithms For Computing Exponentials of Large Matrices With Applications @ University of Kentucky Research Foundation. The matrix exponential is an important linear algebra tool with a wide range of applications. Its efficient computation is a classical numerical linear algebra problem of considerable importance to many fields. This research project is concerned with numerical algorithms for computing exponentials of large matrices. The main objectives are: (1) to develop efficient preconditioning techniques for computing the product of the exponential of a matrix with a vector, and (2) to develop accurate and efficient algorithms to compute selected entries of the exponential of an essentially nonnegative matrix. The proposed research will advance theory and algorithms for matrix exponentials in the setting of iterative methods for large-scale problems. It will systematically address the problems of preconditioning and entrywise relative accuracy that are critically important in certain applications. The resulting algorithms will improve on existing ones in computational efficiency and/or accuracy. At the conclusion of this project, robust MATLAB implementations of the algorithms developed will be made publicly available. | 0.955 |
2013 — 2017 | Ye, Qiang | N/A | @ University of Kentucky Research Foundation. The objective of this proposal is to develop robust algorithms for reconstructing or synthesizing highly structured high-dimensional data from a low-dimensional representation learned from a training dataset, i.e., the interpolation and extrapolation problems in manifold learning. The project will address the elusive issue of computing a low-dimensional parametrization, usually not well defined, in the setting of various interpolation and extrapolation problems for manifold learning, emphasizing the notion of physically meaningful parametrizations. It will develop innovative computational methodology for flexibly learning a low-dimensional parametrization together with other physically important variables in the context of unsupervised, semi-supervised, and especially active learning settings; for learning and synthesis of dynamic data; and for manifold extrapolation based on transfer learning. Included in the project is the development of a publicly available software package that will disseminate the research results and promote applications of nonlinear dimension reduction methodology to real-world problems. | 0.955 |
2016 — 2019 | Ye, Qiang | N/A | Accurate Preconditioning For Computing Eigenvalues of Large and Extremely Ill-Conditioned Matrices @ University of Kentucky Research Foundation. Computations of eigenvalues of large matrices arise in a wide range of scientific and engineering applications, including, for example, page ranking in the Google search engine. Large-scale eigenvalue problems are often inherently ill-conditioned, meaning that their eigenvalues differ vastly in magnitude. This poses a significant challenge to existing eigenvalue algorithms in that the smaller computed eigenvalues may have poor accuracy, caused by roundoff errors in computer arithmetic. This project will develop new algorithms to address this numerical difficulty. The research results will have applications in a variety of problems where extreme ill-conditioning arises. In particular, a notable ill-conditioned problem is the biharmonic differential operator, which has been used in the modeling and design of rigid elastic structures such as beams, plates, and solids, in constructions of multivariate splines, and in geometric modeling and computer graphics. A discrete version of the biharmonic operator has also found applications in circuits, image processing, mesh deformation, and manifold learning. With discretized biharmonic operators easily becoming extremely ill-conditioned, this research will resolve the numerical accuracy issues of existing algorithms for these applications. | 0.955 |
2022 — 2025 | Ye, Qiang | N/A | Robust Preconditioned Gradient Descent Algorithms For Deep Learning @ University of Kentucky Research Foundation. Deep learning is at the forefront of research in artificial intelligence and machine learning, impacting a variety of applications in data science such as computer vision, speech recognition, natural language processing, and bioinformatics. A key challenge in deep neural network learning is model optimization, which is used for network training. However, traditional optimization algorithms are not applicable, primarily due to the high complexity and nonlinearity of deep neural networks. The goal of this project is to develop novel robust optimization algorithms that can effectively address these difficulties and can more efficiently train deep learning models in practice. The project also involves the application of this work to the translation of equivalent chemical representations used in drug design, as well as Bayesian inference for uncertainty quantification. As part of this project, graduate and undergraduate students will be trained in deep learning research, and software will be developed and made freely available.<br/><br/>This project includes the development of two new classes of optimization algorithms that are built on the frameworks of traditional preconditioning and conjugate gradient methods but incorporate ideas from successful specialized deep learning optimizers such as normalization methods and momentum methods. Specifically, the project will develop a new class of preconditioning methods as a widely applicable alternative to normalization methods, and a new class of adaptive momentum methods as a robust alternative to fixed-momentum methods. Related convergence theory will be established, and the new methods will be adapted to state-of-the-art neural network architectures such as transformers and graph neural networks. The novel algorithms developed in this project aim to bring some of the most fruitful ideas in numerical analysis to the advancement of neural network optimization.<br/><br/>This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria. | 0.955 |
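Several of the eigenvalue grants above revolve around one idea: computing a few eigenvalues of a large sparse matrix using a preconditioner (an approximate inverse) rather than the inverse of a shifted matrix. A minimal sketch of that idea using SciPy's LOBPCG solver on a toy 1-D discrete Laplacian; this is illustrative only, and is not the PI's EIGIFP software or the methods proposed in these grants.

```python
# Sketch: smallest eigenvalues of a large sparse symmetric matrix via a
# preconditioned eigensolver (SciPy's LOBPCG), which accepts an approximate
# inverse M instead of requiring a factorization of a shifted matrix.
# Toy problem: 1-D discrete Laplacian, whose eigenvalues are known exactly.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

# Jacobi (diagonal) preconditioner as a stand-in for a real approximate inverse
M = sp.diags(1.0 / A.diagonal())

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))                # initial block of 3 vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)

# Exact eigenvalues of the 1-D Laplacian: 4 sin^2(k*pi / (2(n+1)))
exact = 4 * np.sin(np.arange(1, 4) * np.pi / (2 * (n + 1))) ** 2
print(np.allclose(np.sort(vals), exact, atol=1e-5))
```

No shifted linear systems are solved here: the preconditioner is applied only as a matrix-vector product, which is what makes this family of methods attractive for matrices too large to factor.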
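The 2013 — 2017 matrix-exponential grant concerns computing the action of exp(A) on a vector for a large matrix A without ever forming exp(A), which is dense even when A is sparse. A hedged sketch of that task using SciPy's standard `expm_multiply` routine (a library function, not the algorithms proposed in the grant):

```python
# Sketch: evaluating w = exp(A) @ v for a sparse matrix A without forming the
# dense matrix exponential, via SciPy's expm_multiply. Illustrative only.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

n = 200
# Heat-equation generator (second-difference matrix); its off-diagonal entries
# are nonnegative, so it is "essentially nonnegative" as in the grant abstract
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
v = np.ones(n)

w = expm_multiply(A, v)            # action of exp(A) on v; exp(A) never formed

# Cross-check against the dense exponential, affordable at this small size
w_dense = expm(A.toarray()) @ v
print(np.allclose(w, w_dense, rtol=1e-7, atol=1e-9))
```

At realistic scales only the sparse action is affordable; the dense `expm` call exists here purely as a correctness check.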
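The 2022 — 2025 grant combines two classical ingredients, preconditioning and momentum, for training deep networks. A toy NumPy sketch of gradient descent with a fixed diagonal preconditioner and momentum on an ill-conditioned quadratic; the step size, momentum coefficient, and objective are illustrative choices of mine, not the grant's proposed methods.

```python
# Sketch: gradient descent with a diagonal preconditioner and momentum on a
# badly scaled quadratic f(x) = 0.5 * x^T diag(d) x, minimized at x = 0.
# Plain gradient descent converges slowly here; preconditioning rescales the
# curvature and momentum accelerates the remaining directions.
import numpy as np

d = np.array([1.0, 100.0])           # ill-conditioned Hessian diagonal
grad = lambda x: d * x               # gradient of the quadratic

def precond_momentum_gd(x, lr=0.9, beta=0.5, steps=100):
    P = 1.0 / d                      # diagonal preconditioner ~ inverse Hessian
    m = np.zeros_like(x)
    for _ in range(steps):
        m = beta * m + P * grad(x)   # momentum on the preconditioned gradient
        x = x - lr * m
    return x

x0 = np.array([1.0, 1.0])
x_star = precond_momentum_gd(x0)
print(np.linalg.norm(x_star))        # near 0: converged to the minimizer
```

In deep learning the Hessian is unavailable, so practical methods of this flavor estimate the preconditioner adaptively; the fixed diagonal here only illustrates the mechanics.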