1988 — 1990
Hwu, Wen-Mei
Research Initiation: Integrating Compiler Technologies and Parallel Microarchitectures For High Performance Micro System Design @ University of Illinois At Urbana-Champaign
The goal of this project is to derive effective approaches for integrating compiler, microarchitecture, and circuit design techniques to achieve high performance. At the compiler level, the focus is to combine register allocation/assignment and code scheduling algorithms to exploit architectural parallelism. At the microarchitecture level, the objective is to investigate how various forms of parallelism (such as pipelining and multiple instructions per cycle) can be used by the compiler to improve performance. At the circuit level, the intent is to apply novel memory and logic design techniques to efficiently support the parallelism in the microarchitecture. The principal investigator is a junior faculty member who has done good research as a Ph.D. student. This research project has good ideas and it is likely that new results will be obtained in an important research area. Funding is highly recommended.

1993 — 1996
Yew, Pen-Chung; Hwu, Wen-Mei; Bruner, John
Improving the Performance of Scalable Shared-Memory Multiprocessors @ University of Illinois At Urbana-Champaign
Sophisticated performance measurement and simulation tools developed on the Cedar multiprocessor system during the last four years are being used to study several key architectural and compiler issues that can enhance the performance of scalable shared memory multiprocessors. These issues include memory latency reduction and hiding strategies, data synchronization requirements for loop-level parallelism, and hierarchical network design. The study of these issues involves the hardware-assisted collection of empirical data on Cedar and the use of simulation. The information thus obtained could lead to the design of next-generation systems that, compared to present-day systems, provide higher sustained performance across a broader range of applications.

1993 — 1995
Hwu, Wen-Mei
Speculative and Predicated Execution Support For Instruction-Level Parallel Processing @ University of Illinois At Urbana-Champaign
Speculative execution and predicated execution are two important sources of parallelism for VLIW and superscalar processors. Speculative execution tentatively executes instructions before knowing that their execution is required. Predicated execution merges multiple possible execution paths into a single path so that the hardware can simultaneously process multiple paths. Both methods allow the compiler to extract program parallelism in the presence of conditional branches. With superscalar and VLIW designs becoming increasingly popular in the microprocessor industry, these methods have become essential for future high performance microprocessors to achieve their performance goals. This project addresses three critical issues involved in incorporating speculative execution and predicated execution into future superscalar and VLIW microprocessor systems. First, the design complexity of increasing levels of architecture support for speculative execution and predicated execution is being studied. Second, compiler optimizers and schedulers that exploit each level of the architecture support are being developed. Third, an integrated approach is being defined to coordinate speculative execution and predicated execution to best improve program execution performance. The objective is to provide architecture expertise and compiler prototypes required for the microprocessor industry to understand the cost and effectiveness of each level of hardware support.
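As a minimal illustration (not part of the project itself), the branch-removal idea behind predicated execution can be sketched in C: an `if` is replaced by computing both outcomes and selecting one under a predicate, which compilers typically lower to a conditional-move or predicated instruction. The function names here are hypothetical.

```c
#include <assert.h>

/* Branchy version: the hardware must predict the conditional branch. */
int abs_branch(int x) {
    if (x < 0)
        return -x;
    return x;
}

/* If-converted version: the "taken" path is computed unconditionally
 * and a predicate selects the result, so there is no branch to
 * mispredict.  Compilers emit this as a conditional move (e.g. CMOV)
 * or a predicated instruction on architectures that support it. */
int abs_predicated(int x) {
    int p = (x < 0);   /* predicate */
    int t = -x;        /* speculatively computed alternative path */
    return p ? t : x;  /* select under the predicate */
}
```

Both functions compute the same result; the predicated form trades extra computed work for the removal of a hard-to-predict branch.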

1994
Hwu, Wen-Mei
21st Annual International Symposium On Computer Architecture, Chicago, Illinois @ University of Illinois At Urbana-Champaign
This is an attendance and travel grant for the 21st Annual International Symposium on Computer Architecture on April 18-21 in Chicago, Illinois. It is co-sponsored by the Association for Computing Machinery and the IEEE. The Symposium, through a combination of invited talks, panel sessions, tutorials, workshops and refereed paper presentations, continues to serve the various needs of the computer architecture research community. It is an important conference in the computer architecture area. This grant provided support to help twenty graduate students attend the symposium.

1998 — 2000
Hwu, Wen-Mei
A New Approach to Accurate and Efficient Pointer Analysis For Large C and Object-Oriented Programs @ University of Illinois At Urbana-Champaign
Pointer analysis has become one of the most critical components of modern C and C++ compilers. In Java, it is also critical to derive alias relations between object references to enable aggressive optimizations. The objective of this project is to develop an accurate and efficient pointer analysis framework based on the alias-pair approach. The key ideas are: (1) a new multirelation alias graph representation to allow accurate derivation of transitive alias relations, (2) a new interprocedural analysis framework based on the multirelation alias graph to derive accurate parameter and global alias information at very low cost, (3) a new context sensitive function-level analysis framework based on Static Single Assignment (SSA) forms to achieve high accuracy at low cost, (4) an iterative application of (2) and (3) to approach the accuracy of context sensitive analysis at much lower cost, and (5) a distinction between maybe and definite aliases to make effective use of run-time disambiguation support in future high performance instruction set architectures. Theoretical results and empirical experiences based on a prototype compiler implementation and large C, C++, and Java input programs will be published to advance the state of the art of pointer analysis.
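The maybe/definite alias distinction in (5) can be illustrated with a small C sketch (hypothetical function names, not from the project): when two pointers may alias, a store through one forces a reload through the other, while a proven no-alias fact, asserted manually here with C99 `restrict`, licenses exactly the optimization a pointer analysis aims to enable automatically.

```c
#include <assert.h>

/* Without alias information the compiler must re-read *a after the
 * store through b, because a and b may point to the same object.
 * The result therefore depends on whether the pointers alias. */
int sum_may_alias(int *a, int *b) {
    *a = 1;
    *b = 2;
    return *a + *b;   /* *a is 2 (not 1) if a == b */
}

/* With C99 `restrict` the programmer asserts that a and b never
 * alias -- the kind of "definitely not aliased" fact a pointer
 * analysis tries to prove.  The compiler may then keep *a in a
 * register and fold the sum to a constant. */
int sum_no_alias(int * restrict a, int * restrict b) {
    *a = 1;
    *b = 2;
    return *a + *b;   /* always 3 under the no-alias assumption */
}
```

Calling `sum_may_alias(&x, &x)` yields 4 while `sum_may_alias(&x, &y)` yields 3, which is why a compiler armed only with a "maybe alias" answer must generate the conservative reload.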

2000 — 2004
Sanders, William; Iyer, Ravishankar (co-PI); Hwu, Wen-Mei; Lumetta, Steven (co-PI)
Itr: Experimental Validation of Large-Scale Networked Software Systems @ University of Illinois At Urbana-Champaign
Large-scale networked software systems are hard to design, and even more difficult to validate. Validation of such systems is increasingly important, since they are increasingly called on to perform critical functions. This validation difficulty stems from the inherent complexity of these systems, which are often designed to adapt to variable workloads and operating conditions at the process, node, and network levels. Incorrect operation during periods of dynamic adaptation can lead to unpredictable and potentially hazardous consequences. In order to ensure that such systems operate correctly in critical environments, one must perform validations to confirm that they will function reliably in the presence of faults/failures, have predictable performance, and will continue to operate when intrusions occur. Validation of multiple behavior dimensions (e.g., reliability/availability, performance, and survivability) is also critical. This research will develop the theory, methodology, and tools necessary to experimentally validate the reliability/availability, performance, and survivability of large-scale networked software systems. The intention is to develop a comprehensive framework for experimentally validating large-scale networked software systems. Taken as a whole, this work will provide a sound and fundamental approach to validation of networked software and applications.

2006 — 2011
Sanders, William; Iyer, Ravishankar (co-PI); Hwu, Wen-Mei; Nahrstedt, Klara (co-PI)
Cri-a Configurable, Application-Aware, High-Performance Platform For Trustworthy Computing @ University of Illinois At Urbana-Champaign
This project develops a laboratory to support application-aware hardware for trustworthy computing. It investigates new application-aware methods that provide customized, application-specified levels of trust through an integrated approach: re-programmable hardware and novel compiler techniques extract security and reliability properties, supported by a configurable OS and middleware. The work enables ground-breaking experimental research in creating large-scale, demonstrably trustworthy cluster computing platforms for on-demand/utility computing and/or adaptive enterprise computing. The infrastructure augments a cluster of computers, each with hardware and software support that allows certain application functions to be executed in silicon. The facility supports innovative research in new software that takes advantage of the reconfigurable logic available in the Trusted ILLIAC system, a validation platform considered the cornerstone for quantitative assessment of alternative designs and solutions. To explore customized trust models via an integrated approach involving compiler, hardware, OS, and middleware, the cluster architecture includes programmable hardware in which many designs can be tested or optimized for applications without the cost of new chips. The Trusted ILLIAC supports a rich set of research projects that span online hardware-software assessment, efficient programming environments for heterogeneous multiprocessor systems, software bug detection, hardware validation, configurable trust-providing mechanisms, automated fault management, online model-based adaptation strategies, middleware support for trustworthiness, application-based placement detectors, and smart card utilization.
Broader Impact: Trusted ILLIAC represents a fundamental change in how computing is accomplished (i.e., direct representation of tasks in silicon), enabling that paradigm by merging the new architecture with existing cluster and operating system functionality. In the field of trustworthiness, it provides customizable computing technology to the broader community of students, researchers, and institutions, enabling the creation of integrated trustworthy computing testbeds. The infrastructure benefits technology-transfer efforts from research to real-world environments, enabling collaborations with government and industry developers to determine how trustworthy hardware can assist and how software stacks can be integrated into products.

2007 — 2017
Gropp, William (co-PI); Pennington, Robert (co-PI); Hwu, Wen-Mei; Snir, Marc (co-PI); Seidel, Edward; Dunning, Thomas; Kramer, William; Beldica, Cristina
Leadership Class Scientific and Engineering Computing: Breaking Through the Limits @ University of Illinois At Urbana-Champaign
Award 0725070; University of Illinois at Urbana-Champaign; PI: Thomas H. Dunning. In this project, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign will provide a new class of computing capability to the research community, opening up new possibilities in science and engineering. It will provide the capability for researchers to tackle much larger and more complex research challenges across a wide spectrum of domains. NCSA will acquire, deploy, and operate a very large, architecturally coherent, innovative, leadership-class, high-performance computational resource, to be known as Blue Waters, for the science and engineering research community. This system will be sited at the University of Illinois at Urbana-Champaign (UIUC), where it will be operated by NCSA and its partners in the Great Lakes Consortium for Petascale Computing (GLC). Initially, large allocations of resources on the new system will be awarded by NSF through a separate peer-reviewed competition. As research activities are identified in this way, the GLC will form Petascale Application Collaboration Teams to provide collaborative consulting services to each activity. The Blue Waters project also includes education and outreach programs that will target pre-college, undergraduate, graduate, and post-graduate levels. The Girls Engaged in Mathematics and Science (GEMS) and Southeastern University and College Coalition for Engineering Education (SUCCEED) programs will be augmented with materials related to petascale computing.
Graduate education will be enhanced by the establishment of a Virtual School of Computational Science and Engineering that brings together the faculty at each of the universities in the Committee on Institutional Cooperation (CIC) as well as Iowa State University, Louisiana State University, and the University of North Carolina, to create courses that focus on petascale computing and petascale-enabled science and engineering. The Virtual School will explore new instructional technologies and create courses, curricula, and certificate programs that are tailored to science and engineering students, and will also sponsor workshops, conferences, summer schools, and seminars. The project will include an annual series of workshops targeted at the developers of simulation packages and aspiring application developers. In addition, the project will include two industrial partnership activities. The first, the Industry Partners in Petascale Engagement (IPIPE) program, will provide industrial partners with a first look at the technological and scientific developments that flow from the petascale program. The second, an Independent Software Vendor (ISV) Application Scalability Forum, will promote collaborations between Consortium members, ISVs, and the industrial end-user community. This award will permit investigators across the country to conduct innovative research in a number of areas, including:
- the development of structure in the early cosmos;
- the physics of supernovae, gamma-ray bursters, binary black-hole systems, and collisions between neutron stars;
- the first-principles design of catalysts, pharmaceuticals, and other molecular materials for specificity and efficiency;
- the mechanisms of reactions involving large bio-molecules and bio-molecular assemblages, such as enzymes, ribosomes, and cellular membranes, and the assembly of capsids;
- the interaction of very short laser pulse trains with polyatomic molecules;
- nonlinear interactions between cloud systems, weather systems, and the Earth's climate;
- the detailed structure of, and the nature of intermittency in, stratified and unstratified, rotating and non-rotating turbulence in classical and magnetic fluids; and
- the exploration of the internal structure of the Earth using high-resolution, broad-band, global seismic inversions.

The broader impacts of this award include: provisioning of unique infrastructure for research and education; extensive efforts to accelerate education and training in the use of high-performance computation in science; training in petascale computing techniques; promoting an exchange of information between academia and industry about the applications of petascale computing; and broadening participation in computational science through NCSA's GEMS program, designed to encourage middle-school girls to consider mathematics-oriented and science-oriented careers.

2016 — 2021
Hudson, Matthew (co-PI); White, Bryan (co-PI); Hwu, Wen-Mei; Robinson, Gene (co-PI); Iyer, Ravishankar
I/Ucrc: Computing and Genomics-An Essential Partnership For Biology Breakthroughs @ University of Illinois At Urbana-Champaign
The application of genomics across the life sciences industries is currently challenged by an inadequate ability to generate, interpret, and apply genomic data quickly and accurately for a wide variety of applications. Major innovations in the applicability, timeliness, efficiency, and accuracy of computational genomic methods are needed, and these innovations will develop best when an interdisciplinary team of scientists, engineers, and physicians from academia and industry, spanning computer systems, health care/pharmaceuticals, and life sciences, works together. The University of Illinois at Urbana-Champaign (UIUC) and the Mayo Clinic are building on their longstanding collaboration to form the Center for Computational Biotechnology and Genomic Medicine (CCBGM), which will bring together their excellence in computing, genomic biology, and patient-specific individualized medicine. Working closely with industry, the CCBGM's multidisciplinary teams will use the power of computational genomics to address pressing societal issues, such as enabling patient-specific cancer treatment, understanding and modifying microbial communities in diverse environments related to human health and agriculture, and supporting humanity's rapidly expanding need for food by improving the efficiency of plant and animal agriculture. The CCBGM will leverage UIUC's long-standing prowess in large-scale parallel systems, big data analytics, and hardware and software system design, to develop new technologies that enable future genomic breakthroughs. A key element of the Center's vision is to advance breakthroughs at the interface of biology and computing to transform health-care delivery while enhancing efforts that focus on the health science needs of underrepresented minorities.
The CCBGM will bring together an interdisciplinary team to address the colossal genomic data challenge. Academia/industry partnerships will enhance research, education, and entrepreneurship while performing important technology transfer. The Center will achieve transformational computing innovations on three fronts. (1) It will innovate computing and data management to deal with issues of scaling to the ever-growing volume, velocity, and variety of genomic data. It will concentrate initially on scaling the computation of epistatic interactions (interactions between two or more genes or DNA variants) in genome-wide association study data, generating lists of genomic features that are maximally predictive of phenotypes, and information-compression algorithms for genomic data storage and transfer. (2) It will revolutionize the generation of actionable intelligence from multimodal structured and unstructured data, to generate knowledge from big data. The emphasis will be on the processing and integration of genomic and multi-omic data, and on the merging of unstructured phenotypic data with information from curated data sources (e.g., electronic medical records, annotation databases). The integration of these diverse data types will improve discovery research, predictive genomics, diagnostics, prognostics, and theranostics. Application areas include targeted cancer therapy, pharmacogenomics, crop improvement, and predictive microbiome analysis. (3) It will achieve systems innovation by designing computer systems specially suited for computational genomics, providing unprecedented speed and energy efficiency while preserving the accuracy of the analytics. The systems will be used to quantify and improve the accuracy of detecting genomic variation and, more generally, to optimize computing architectures for the execution of genome analysis workflows.
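To make the scaling issue in (1) concrete, here is a toy C sketch (hypothetical names, not CCBGM code) of the counting kernel underlying a pairwise epistasis scan: each SNP pair yields a per-phenotype 3x3 joint-genotype contingency table, and the number of pairs to test grows quadratically with the number of SNPs, which is the core computational burden the abstract targets.

```c
#include <assert.h>
#include <stddef.h>

/* For one pair of SNPs (genotypes coded 0/1/2) and a binary
 * phenotype, fill a per-class 3x3 joint-genotype contingency
 * table -- the raw counts behind a pairwise epistasis test. */
void joint_counts(const int *snp_i, const int *snp_j,
                  const int *phenotype, size_t n_samples,
                  long counts[2][3][3]) {
    for (int c = 0; c < 2; c++)
        for (int g1 = 0; g1 < 3; g1++)
            for (int g2 = 0; g2 < 3; g2++)
                counts[c][g1][g2] = 0;
    for (size_t s = 0; s < n_samples; s++)
        counts[phenotype[s]][snp_i[s]][snp_j[s]]++;
}

/* Number of SNP pairs to scan: n*(n-1)/2, i.e. quadratic growth,
 * which is why scaling this computation is a stated goal. */
size_t num_pairs(size_t n_snps) {
    return n_snps * (n_snps - 1) / 2;
}
```

Even a modest 500,000-SNP genotyping array implies over 10^11 pairs, each requiring a table like the one above, motivating the Center's emphasis on parallel systems and specialized hardware.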

2020 — 2021
Iyer, Ravishankar; Hwu, Wen-Mei; Nahrstedt, Klara (co-PI); Kramer, William; Xu, Tianyin
Pposs: Planning: Inflight Analytics to Control Large-Scale Heterogeneous Systems @ University of Illinois At Urbana-Champaign
The goal of this project is to fundamentally reinvent the design of the system, from hardware to application, using fast, novel inflight analytics to control and optimize large-scale heterogeneous computer systems to meet the performance and resiliency requirements of emerging applications such as data mining, artificial intelligence, and individualized medicine. Towards that goal, advanced machine-learning (ML) methods along with domain knowledge will be employed to support real-time system-state estimation and decision-making, including resource management, congestion/failure detection and mitigation, preemptive intrusion detection, and configuration management. Innovations across the system stack will be needed to achieve optimal results by taking full advantage of contextual information collected from multiple layers of the system and adapting rapidly to the deployment environment, workloads, and application requirements. ML-driven inflight analytics methods, developed in this effort, will be demonstrated on a heterogeneous "rack-scale" computing system, with the ultimate future objective of scaling up the framework to a warehouse-scale computing system.
The project will be organized around the following research activities. (i) Work with noisy and incomplete telemetry data (e.g., hardware telemetry, OS-level logs, and application-level traces) available from monitors across the system stack to perform system-state estimation (e.g., resource utilization). Telemetry data are often noisy and inconsistent in terms of semantics, modalities, and time granularities, making systems only partially observable. Bayesian deep-learning models will be developed to accurately capture system states and cope with data noise and incompleteness. (ii) Design models and algorithms for practical inflight analytics that make decisions (e.g., on scheduling or failure mitigation) based on the estimated system state to enhance system performance, reliability, and security. Such a framework will consist of an ensemble of interdependent ML models based on partially observable Markov decision processes (POMDPs) augmented with domain knowledge (e.g., interconnect topology) and trained in real time. (iii) Synthesize hardware accelerators for fast, low-cost inflight analytics. Toward that end, a compiler and a runtime framework will be developed that take high-level declarative probabilistic programs (i.e., the POMDPs), automatically compile them onto accelerators, and plan their execution across heterogeneous hardware (FPGAs, ASICs, and CPUs/GPUs). (iv) Assess the trustworthiness of inflight analytics. For that, a trust-assessment framework will be created to evaluate resiliency to failures and attacks due to residual imperfections of heterogeneous components, input uncertainty, and the use of stochastic ML algorithms. While in the planning stage, this project will focus on design of inflight analytics in the context of rack-scale systems. The methods and algorithms developed will be useful in helping smaller-scale sites with limited resources manage their systems more efficiently.
Students involved in this project will have a rare opportunity to participate in the design of heterogeneous ML-driven systems with broad applicability. The integration of ML methods and algorithms into real systems can be attractive to a diverse range of individuals, including underrepresented minority students. The goal is to raise awareness of scientific and engineering challenges in design and deployment of next-generation computing systems to support emerging applications.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.