2012 — 2016
Yan, Yonghong; Shi, Weidong (co-PI); Shah, Shishir
II-New: Collaborative Research: Image Processing Cloud (IPC): A Domain-Specific Cloud Computing Infrastructure for Research and Education
The image processing domain is experiencing rapid growth in both data size and algorithm complexity, which demands large amounts of computing and storage power. However, modern computer architectures have evolved to be extraordinarily complex, and frequently become a challenge rather than a help for researchers and educators who work on image processing technologies.
To bridge the gap between complicated modern architectures and complex applications, and to support research and education in image processing, the PIs are developing an integrated image processing research environment within a computing Cloud infrastructure. This infrastructure includes: 1) an open image processing computing Cloud that supports researchers and students in conducting image processing research, sharing knowledge and research results, and disseminating educational materials among three universities: Prairie View A&M University, University of Houston, and University of Delaware; 2) a high-level domain-specific language designed to provide an abstract and productive programming model for image processing applications; 3) a general compiler optimization framework capable of tuning image processing applications at various levels, from high-level representations to low-level transformations, in the Cloud environment.
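As a rough illustration of what a high-level domain-specific language for image processing buys its users, the sketch below shows a hypothetical pipeline abstraction: the programmer composes per-pixel stages declaratively, and the framework fuses them into a single traversal so no intermediate image is materialized per stage. All names here are invented for illustration; this is not the IPC project's actual language or API.

```python
class Pipeline:
    """Compose per-pixel stages; fuse them into one traversal (illustrative only)."""

    def __init__(self):
        self.stages = []  # each stage is a function: pixel value -> pixel value

    def stage(self, fn):
        self.stages.append(fn)
        return self  # allow chaining, DSL-style

    def run(self, image):
        # Stage fusion: apply every stage to each pixel in a single pass,
        # instead of allocating one intermediate image per stage.
        out = []
        for row in image:
            new_row = []
            for px in row:
                for fn in self.stages:
                    px = fn(px)
                new_row.append(px)
            out.append(new_row)
        return out

# Example: brighten, then threshold, a tiny grayscale image.
img = [[10, 200], [90, 160]]
p = (Pipeline()
     .stage(lambda v: min(v + 50, 255))          # brighten, clamp at 255
     .stage(lambda v: 255 if v >= 128 else 0))   # binarize
result = p.run(img)  # [[0, 255], [255, 255]]
```

A compiler behind such an interface could apply the same fusion (plus tiling and parallelization) automatically, which is the kind of high-level-to-low-level tuning the abstract describes.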
2014 — 2017
Ross, Robert; Kimpe, Dries; Yan, Yonghong; Chapman, Barbara; Chen, Yong
SHF: Medium: Compute on Data Path: Combating Data Movement in High Performance Computing
High performance computing (HPC) enabled simulation has been widely considered a third pillar of science, along with theory and experimentation, and is a strategic tool in many aspects of scientific discovery and innovation. HPC simulations, however, have become highly data intensive in recent years: data acquisition and generation have become much cheaper, newer high-resolution multi-model scientific discovery produces and requires more data, and the recognition that useful insight can be mined from large amounts of data has grown substantially.
This project combats the increasingly critical data movement challenge in high performance computing. It studies the feasibility of a new Compute on Data Path methodology that is expected to improve the performance and energy efficiency of HPC. The methodology models both computations and data as objects, with a data model that encapsulates and binds them; it fuses data motion and computation by leveraging the programming model and compiler, and it develops an object-based store and runtime that enable computations along the data path pipeline. In recent years, a proliferation of advanced HPC architectures has emerged, including multi- and many-core systems, co-processors and accelerators, and heterogeneous computing platforms. Software solutions that address the critical data movement challenge, however, have significantly lagged behind. This project has the potential to advance both the understanding and the software solutions, further unleashing the power of HPC-enabled simulation.
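The core idea of binding computation to data objects can be sketched very simply: instead of shipping raw data to the consumer, the operation runs where the data lives, and only the small result crosses the data path. The class and method names below are invented for illustration and are not the project's actual runtime interface.

```python
class ActiveObject:
    """A data object with a computation bound to its write path (illustrative sketch)."""

    def __init__(self, reduce_fn, init):
        self.reduce_fn = reduce_fn  # computation fused onto the data path
        self.value = init           # running result kept at the data's home

    def write(self, chunk):
        # Rather than storing raw chunks for later movement, fold each
        # incoming chunk into the bound computation as it arrives.
        for x in chunk:
            self.value = self.reduce_fn(self.value, x)

    def read(self):
        # Only the (small) reduced result moves, not the raw data.
        return self.value

# Example: a running maximum computed in-transit over two chunks.
obj = ActiveObject(max, float("-inf"))
obj.write([3, 9, 1])
obj.write([7, 12, 5])
obj.read()  # 12
```

In a real system the reduction would execute on a storage server or I/O node; the sketch only shows the encapsulate-and-bind pattern the abstract describes.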
2014 — 2019
Yan, Yonghong
SHF: Small: Collaborative Research: Application-Aware Energy Modeling and Power Management for Parallel and High Performance Computing
One of the critical challenges in scaling out current and future high performance computing (HPC) and enterprise computing systems is the requirement that their power envelope remain comparable to that of today's systems. This project addresses this "power wall" challenge from the system software aspect by developing application-aware methodologies for energy modeling and power management. The project optimizes system efficiency by tuning performance and energy consumption to resonate with application runtime behavior while staying below the system power envelope. It develops user interfaces, new compiler models, and runtime tuning techniques to manage the tradeoffs between performance and energy consumption. The approach enables cooperative, application-specific control of energy consumption among hardware, system software, and applications. The investigations and solutions deepen understanding of application power usage and guide users toward customized performance and energy consumption behavior.
This collaborative project integrates the development, education, and outreach efforts of collaborating university partners and is well positioned to have a substantial impact on both the HPC research community and hardware designers and vendors. All findings are published in peer-reviewed conferences and journals, while source code and results are available through a project web site. This work addresses the need for energy efficiency improvements in large-scale systems in support of high-end simulations used to design pharmaceuticals and aircraft, model global warming scenarios, and more. The proposed techniques influence the design of future HPC and enterprise computing systems from industry and government. The project engages and trains graduate and undergraduate students, including underrepresented minority students, in the areas of energy-efficient computing, parallel and high performance computing, and computer architecture and systems. The open source evaluation platforms are used in teaching related coursework in graduate and undergraduate classes.
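One way application-aware power management can work in practice is per-phase frequency selection: in memory-bound phases the CPU mostly waits on memory, so the runtime can lower the clock frequency with little performance loss, saving energy. The phase model, numbers, and function names below are invented for illustration; they are not the project's actual tuning policy.

```python
def choose_frequency(mem_intensity, freqs):
    """Map a phase's memory intensity in [0, 1] to a frequency level.

    High memory intensity -> low frequency (the CPU is stalled on memory anyway);
    low memory intensity  -> high frequency (the phase is compute-bound).
    Illustrative policy only.
    """
    idx = int(mem_intensity * (len(freqs) - 1))
    return freqs[len(freqs) - 1 - idx]

FREQS_GHZ = [1.2, 1.8, 2.4, 3.0]  # hypothetical available frequency levels (P-states)

# Compute-bound phase: run at full speed.
choose_frequency(0.0, FREQS_GHZ)  # 3.0
# Fully memory-bound phase: drop to the lowest level to save energy.
choose_frequency(1.0, FREQS_GHZ)  # 1.2
```

A real runtime would estimate memory intensity from hardware performance counters (e.g. stall cycles) and apply the chosen level through the OS frequency-scaling interface; the sketch only shows the application-aware mapping itself.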
2017 — 2022
Yan, Yonghong
CAREER: Programming the Existing and Emerging Memory Systems for Extreme-Scale Parallel Performance
High performance computing (HPC) focuses on using numerical models to simulate complex science and engineering phenomena, such as galaxies, weather and climate, molecular interactions, electric power grids, and aircraft in flight. Over the next decade the goal is to build HPC parallel systems capable of extreme-scale performance (one exaflop, or 10^18 operations per second) and of processing exabytes (10^18 bytes) of data. However, one of the biggest challenges to achieving extreme-scale performance is what is known as the hardware memory wall: the growing gap between the speed of computation performed by the CPU and the speed at which memory systems can supply data to the CPU (roughly 100x slower). The low performance efficiency of modern HPC systems (below 60% on average, and as low as 5%) manifests the memory wall's impact, since a huge number of computation cycles are wasted waiting for input data to arrive. It is therefore critical to create effective software solutions that achieve the computation potential of the hardware and improve the efficiency and usability of existing and future computing systems. Such solutions will significantly benefit a broad range of disciplines that use parallel computers to solve scientific and engineering problems, and will accelerate scientific discovery and problem solving to improve society's quality of life. This CAREER project develops innovative software techniques to address the programming and performance challenges of existing and emerging memory systems: 1) a portable abstract machine model for programming, compiling, and executing parallel applications; 2) a new programming interface and model for data mapping, movement, and consistency; and 3) machine-aware compilation and data-aware scheduling techniques to realize an asynchronous task-flow execution model that hides the latency of data movement.
The project addresses the memory wall challenge by developing a memory-centric programming paradigm that helps achieve extreme-scale performance of parallel applications with minimal impairment to programmability. For education, the project involves a broader community, starting at the high school level, in the areas of HPC and computer science.
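The asynchronous task-flow idea can be sketched as a dependency graph in which data-movement tasks and compute tasks are both first-class nodes: a computation starts as soon as its inputs have arrived, so movement latency is hidden behind other ready work. The scheduler below is a minimal, invented illustration, not the project's actual runtime.

```python
from collections import deque

def run_task_flow(tasks, deps):
    """Execute tasks in dependency order (simple topological scheduler).

    tasks: {name: callable}; deps: {name: [prerequisite names]}.
    Returns the order in which tasks actually ran. Illustrative sketch:
    a real runtime would run independent ready tasks concurrently.
    """
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, pres in deps.items():
        for p in pres:
            succ[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()          # run the task (a data move or a computation)
        order.append(t)
        for s in succ[t]:   # retire: mark dependents one step closer to ready
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

# Example: two independent data moves can proceed in any order;
# the compute task waits only until both of its inputs have arrived.
log = []
order = run_task_flow(
    {"move_A": lambda: log.append("A arrived"),
     "move_B": lambda: log.append("B arrived"),
     "compute": lambda: log.append("computed")},
    {"compute": ["move_A", "move_B"]},
)
```

With data-aware scheduling, the runtime would additionally prioritize tasks whose inputs are already local, which is what lets computation overlap with in-flight data movement.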