1995 — 1999
Ferrante, Jeanne (co-PI); Carter, Larry
Hierarchical Tiling: a Framework For Multi-Level Parallelism and Locality @ University of California-San Diego
To achieve maximum benefit from tiling, compiler optimization must exercise more control over data movement and storage assignment than is commonly done. Hierarchical tiling takes responsibility for several phases of compilation and code improvement that are traditionally done separately, such as scalar replacement, register allocation, generation of message-passing calls, and storage mapping. It uses the mechanisms of explicitly naming and copying data to control the movement of data up and down the memory hierarchy and to exploit all levels of parallelism. Its effectiveness as a systematic framework for hand-crafting highly optimized code has been tested on scientific applications for the IBM SP1 system. This project will extend the research in compiler optimization by: (1) developing a parameterized machine model that captures the architectural information needed to guide hierarchical tiling; (2) studying interactions between tiling at various levels of granularity; (3) incorporating hierarchical tiling into the SUIF toolset; (4) extending the work to disk storage, explicit I/O, and implicitly paged virtual memory; and (5) validating the approach on various programs on different parallel machines.
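The tiling-with-explicit-copies mechanism described above can be illustrated with a small hand-written sketch (Python for readability; this is an illustration of the technique, not output of the project's framework, and the tile size is a stand-in for the parameter a machine model would choose per memory level):

```python
def matmul_tiled(A, B, n, tile=4):
    """Tiled n x n matrix multiply over lists of lists."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            # Explicitly name and copy the current tile of A, mimicking
            # the mechanism that stages data at one level of the memory
            # hierarchy (registers, cache, or local memory).
            a_tile = [row[kk:kk + tile] for row in A[ii:ii + tile]]
            for jj in range(0, n, tile):
                for i in range(len(a_tile)):
                    for k in range(len(a_tile[i])):
                        a_ik = a_tile[i][k]  # scalar replacement of A[ii+i][kk+k]
                        for j in range(jj, min(jj + tile, n)):
                            C[ii + i][j] += a_ik * B[kk + k][j]
    return C
```

In the hierarchical scheme, the same pattern is applied recursively, with one tile size per level of the hierarchy.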
1998 — 2002
Ferrante, Jeanne; Carter, Larry
Coordinated Restructuring of Programs and Storage @ University of California-San Diego
Tiling is a well-known optimization technique that has been used to obtain orders-of-magnitude performance improvements on certain types of computer programs -- ones with nicely structured loops and regular memory accesses. This research will extend tiling and related optimization techniques to a larger class of programs. The methods employed to carry out these goals are: (1) Collect a corpus of scientific applications, including unstructured programs and programs with irregular memory accesses. (2) Build a high-level analysis tool to study this corpus and to provide information that can be used to guide program transformations. (3) Develop more powerful transformations to extend tiling to this corpus. These transformations will be embodied in a source-to-source program restructurer. (4) Build an architecturally-driven guidance system to guide the choice of transformations. This system will model the multitiered parallelism and memory hierarchies of modern computers. (5) Evaluate the new transformations compared to hand-optimized versions of the applications. The projected impact of the work is the development of compiler technology that will automatically improve the performance of a broad class of computer programs for scientific applications.
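For programs with irregular memory accesses, one established technique in this spirit is the inspector/executor pattern: a runtime inspector reorders iterations so that nearby iterations touch nearby data, and an executor then runs them tile by tile. The sketch below is a generic illustration of that idea, not the restructurer proposed here:

```python
def inspector(index, tile=4):
    """Group the iterations of y[i] += w[i] * x[index[i]] into tiles,
    ordered by the x entries they touch, so each tile has locality."""
    order = sorted(range(len(index)), key=lambda i: index[i])
    return [order[t:t + tile] for t in range(0, len(order), tile)]

def executor(x, index, w, tiles):
    """Run the irregular loop tile by tile in the inspected order."""
    y = [0.0] * len(index)
    for tile_iters in tiles:
        for i in tile_iters:  # consecutive i's now read nearby x entries
            y[i] += w[i] * x[index[i]]
    return y
```

The inspector's cost is paid once at runtime and amortized over the many times the loop executes with the same access pattern.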
2003 — 2007
Ferrante, Jeanne; Carter, Larry; Casanova, Henri (co-PI)
Software: Autonomous Scheduling On Large Distributed Systems @ University of California-San Diego
Advances in network and middleware technologies have brought computing with many widely-distributed and heterogeneous resources to the forefront, both in the context of Grid Computing and of Internet Computing. These large distributed platforms allow scientists to solve problems at an unprecedented scale and/or at greatly reduced cost. The high level goal of this work is to further the development of software methodologies and algorithms to enable scientists, engineers and others to use large heterogeneous distributed systems.
Application domains that can readily benefit from such platforms are many; they include computational neuroscience, factoring large numbers, genomics, volume rendering, protein docking, and even searching for extra-terrestrial life. These applications are characterized by large numbers of independent tasks, which makes it possible to deploy them on distributed platforms with high network latencies. More specifically, this work assumes that all application data initially resides in a single repository, and that the time required to transfer that data is a significant factor. Efficiently managing the resulting computation is a difficult and challenging problem, given the heterogeneous and typically dynamic attributes of the underlying components. This work pursues autonomous, decentralized scheduling, which allows for adaptivity and scalability, since decisions and changes can be made locally. This approach is particularly effective for scheduling in environments that are heterogeneous, dynamic, and unstructured, such as global and peer-to-peer computing platforms consisting mostly of home PCs.
This research develops a simple yet general computation and communication model for Grid and Internet platforms, and autonomous, decentralized scheduling techniques based on this model. It analyzes the optimality of these techniques in terms of steady-state and overall application performance. Further, it incorporates adaptability and fault tolerance, and evaluates the resulting techniques both in simulation and by running real applications on actual testbeds. Its overall impact on the scientific community is to enable scientists to solve important classes of problems faster and in a more cost-effective fashion.
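A flavor of the steady-state analysis: when independent tasks flow from a single repository to heterogeneous workers over a one-port link, throughput-optimal task rates can be computed greedily by feeding the cheapest-to-reach workers first (the bandwidth-centric principle). This sketch is illustrative; the function name and the single-level star topology are simplifying assumptions:

```python
def steady_state_rates(c, w):
    """Per-worker task rates alpha[i] maximizing total throughput
    under a one-port master link and per-worker compute limits:
        sum(alpha[i] * c[i]) <= 1   (c[i]: time to send one task's data)
        alpha[i] * w[i]      <= 1   (w[i]: time to compute one task)
    """
    alpha = [0.0] * len(c)
    comm_left = 1.0
    # Bandwidth-centric: serve workers in order of communication cost,
    # regardless of their compute speed.
    for i in sorted(range(len(c)), key=lambda i: c[i]):
        alpha[i] = min(1.0 / w[i], comm_left / c[i])
        comm_left -= alpha[i] * c[i]
    return alpha
```

In a multi-level platform, each node can apply the same rule locally to its own children, which is what makes fully decentralized scheduling feasible.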
2003 — 2008
Ferrante, Jeanne; Carter, Larry; Casanova, Henri (co-PI)
Us-France Cooperative Research: Algorithms and Simulations For Scheduling On Large-Scale Distributed Platforms @ University of California-San Diego
0314180 Ferrante
Scheduling computational tasks on a given set of processors is a key issue for high-performance computing. Future computing systems, such as the computational grid, are likely to be widely distributed and strongly heterogeneous. This three-year US-France cooperative research award between the University of California at San Diego, the Ecole Normale Superieure in Lyon, and the French National Institute for Research in Informatics and Applied Mathematics (INRIA) addresses the impact of heterogeneity on the design and analysis of static scheduling techniques for grid-based systems. The project has three major objectives: (1) development of hierarchical, steady-state scheduling algorithms for heterogeneous platforms; (2) adaptation of peer-to-peer strategies for client-server applications; and (3) extension of SIMGRID simulation methodologies and tools. SIMGRID is a discrete-event simulation toolkit that can be used for distributed applications and computing environment topologies. The researchers involved in this project are Jeanne Ferrante, Larry Carter, and Henri Casanova of the University of California at San Diego and the San Diego Supercomputer Center; Eddy Caron and Yves Robert of the Ecole Normale Superieure in Lyon; and Frederic Vivien of INRIA.
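At its core, a toolkit like SIMGRID drives a discrete-event loop over a priority queue of timestamped events. A minimal sketch of such a loop (illustrative only; this is not SIMGRID's actual API):

```python
import heapq

class Simulator:
    """Toy discrete-event engine: pop the earliest event, advance the
    clock to its timestamp, run its action (which may schedule more)."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so equal-time events never compare actions

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)
```

Simulated time jumps directly from event to event, which is what lets such simulators evaluate hours of platform behavior in seconds of wall-clock time.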
This award represents the US side of a joint proposal to NSF and INRIA. NSF provides funds for visits to France by US investigators and students. They will participate in joint research and a concluding workshop at the end of the third year. INRIA supports the visits of French researchers to the United States. The joint activities take advantage of combined US-French expertise in models and algorithm techniques for scheduling on large-scale distributed, grid-based systems. The project advances NSF's priority area - cyberinfrastructure research and development - which will enable collaboration among scientists and engineers across disciplines and national boundaries.