1993 — 1996
Tick, Evan; Malony, Allen
Parallel Performance Visualization @ University of Oregon Eugene
The Parallel Performance Visualization (PARASEER) project applies the metaphor of "visual abstraction," proven successful in scientific visualization, to the problem of visualizing parallel performance information. A formal methodology is developed for mapping parallel performance data to visual displays, based on a theory of "performance behavior abstraction" and "performance views." The methodology is implemented as a high-level representational framework that describes how conceptual performance characteristics embodied in the performance abstractions are rendered in the performance views, independent of graphics technology. The ultimate goal of the PARASEER project is to explore new parallel performance visualization techniques and to evaluate their effectiveness in real parallel performance problem domains. In addition to the formal framework for performance visualization design, PARASEER will take two approaches to developing performance display graphics. The first builds interfaces to existing data visualization software to provide a flexible environment for performance display prototyping, leveraging the tools' capabilities for handling large data sets, their support for distributed processing, and their extensible visualization features. The second builds on graphical user interface software to provide more programmatic performance visualization support.
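The central idea of a render-independent "performance view" can be made concrete with a small sketch. Everything below is a hypothetical illustration, not a PARASEER interface: a view is simply a mapping from raw trace events to display-ready data that any graphics layer could then render.

```python
# Illustrative sketch only: mapping raw trace events to an abstract
# "performance view". Names (TraceEvent, utilization_view) are hypothetical.
from dataclasses import dataclass

@dataclass
class TraceEvent:
    task: int      # process/thread id
    name: str      # routine name
    start: float   # seconds
    stop: float    # seconds

def utilization_view(events, n_tasks):
    """A render-independent 'view': per-task busy time, ready for any display layer."""
    busy = [0.0] * n_tasks
    for ev in events:
        busy[ev.task] += ev.stop - ev.start
    return busy

events = [TraceEvent(0, "solve", 0.0, 2.0), TraceEvent(1, "solve", 0.0, 1.5)]
print(utilization_view(events, 2))  # [2.0, 1.5]
```

The same event list could feed other views (a Gantt chart, a communication matrix) without changing the underlying data, which is the separation the methodology above describes.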
1994 — 2001
Malony, Allen
Nsf Young Investigator Award @ University of Oregon Eugene
Malony 9457530 This NYI proposal addresses a problem at the heart of the HPC software crisis: the performance evaluation and optimization of software developed for high-performance, parallel computer systems. The central theme of the research is that the challenge of maximizing the productivity of HPC software developers can best be addressed by an aggressive integration of sophisticated tools for program and performance analysis into an HPC programming environment. There are four research focus areas that will be actively pursued with this grant: parallel performance visualization; performance prediction and extrapolation; knowledge-based automation of performance diagnosis and tuning; and language-based parallel program analysis environments. The plan for each area is to establish the fundamental principles and techniques, build prototype tools for evaluation, integrate the tools into program development systems, and test the benefits of the research in real HPC application environments.
1994 — 1995
Lynch, Michael; Haydock, Roger (co-PI); Conery, John; Cuny, Janice (co-PI); Malony, Allen
Problem-Specific Programming Environments For Computational Science: Instrumentation Acquisition and Development @ University of Oregon Eugene
9413532 Conery This award is for the purchase of a parallel computer to do research in Problem-Specific Programming Environments (PSPE). These environments will be built to use domain-specific information in order to aid scientists developing application programs for that area. The three PSPEs to be experimented with on this parallel computer are: computation of the electronic structure of superconductors; simulation of mutations over generations; and constraint-based computations in molecular evolution. All of these domains are computation-intensive, needing the power of a parallel computer.
1995 — 2001
Toomey, Douglas; Cuny, Janice (co-PI); Malony, Allen
Development and Application of a Problem Specific Parallel Programming Environment For Marine Seismic Tomography @ University of Oregon Eugene
Toomey 9522531 This research, a cooperative effort between seismologists and computer scientists at the University of Oregon, aims to develop a high performance computing (HPC) environment for marine seismic tomography and to test that environment by applying it to existing delay-time data from the East Pacific Rise (EPR) at 9 degrees 30 minutes north. This project will develop a problem-specific programming environment (PSPE) for seismologists, initially designed around an existing tomographic code. The environment will extend infrastructure (already developed by the PIs) that supports "models of observability" for uniform tool interaction at the programming language level. The extensions will raise the model abstractions to the application level, giving the seismologist a familiar environment for interacting with development tools. The environment will be tested first by applying it to delay-time data from the EPR. This data has already been analyzed in publications addressing the velocity and attenuation structure of axial magma chambers, but efforts to image the P-wave velocity structure have focused only on the upper crustal section. Significant performance improvements in the code will, for the first time, allow a simultaneous analysis of the complete data set, resulting in an unprecedented image of the full crustal structure.
1995 — 2000
Driscoll, Michael; Pancake, Cherri; Landau, Rubin; Malony, Allen; Cuny, Janice (co-PI); Daasch, W. Robert (co-PI); Otto, Steve; Burnett, Margaret; Reynales, E. Tad
Mra: Network-Based Training and Access to Hpc Using Nero-- the Network For Engineering and Research in Oregon @ Oregon State University
9523629 Pancake This NSF Metacenter Regional Alliance (MRA) will link the Oregon Joint Graduate School of Engineering to the San Diego Supercomputer Center. The objective is to improve access to high performance computing (HPC) for engineers and scientists in the Pacific Northwest. Facilitating this collaborative work will be a high-speed fiber optic network, the Network for Engineering and Research in Oregon (NERO), which was first deployed in 1994. The proposed MRA will exploit that infrastructure to expand the HPC user community through distributed access to HPC platforms, to tools and environments supporting parallel programming, to online training materials and example applications, and to key human resources via several types of remote collaborative sessions. The MRA will exploit the NERO infrastructure to provide: a distributed, network-based repository of information on HPC tools and environments; network-based, interactive training materials and example applications developed specifically for non-computer scientists; network-wide interactive broadcasts of seminars, remote user group meetings, and interactive consulting sessions, both real-time and as after-the-fact replay; desktop videoconferencing, shared whiteboards, and a shared file system for MRA collaboration over the NERO wide area network; high-speed network access to parallel computing platforms within Oregon; dedicated network access to the San Diego Supercomputer Center; and a framework for collaboration with other national and regional metacenters.
1996 — 1998
Bothun, Gregory (co-PI); Toomey, Douglas (co-PI); Humphreys, Eugene (co-PI); Cuny, Janice; Malony, Allen
Ari: Collaborative Research Between Geological Sciences, Astrophysics, and Computer Science: Infrastructure Support For a Visualization Laboratory @ University of Oregon Eugene
9601802 Cuny, Janice E. Humphreys, Eugene D. University of Oregon Academic Research Infrastructure: Collaborative Research Between Geological Sciences, Astrophysics, and Computer Science: Infrastructure Support for a Visualization Laboratory This Academic Research Infrastructure award supports the development of a high-speed computational, networking, and graphics facility. The research projects supported by the facility include: 1. geophysical studies of mid-ocean ridges; 2. kinematic and dynamic modeling of the deformation of the western United States lithosphere; 3. geological and environmental fluid mechanics; 4. characterization of fault rupture and the recurrence behavior of large earthquakes; and 5. retrieving and processing observational astrophysical data by representing it as a virtual N-dimensional universe.
2003 — 2007
Nunnally, Ray; Tucker, Don (co-PI); Posner, Michael (co-PI); Conery, John (co-PI); Malony, Allen
Acquisition of the Oregon Iconic Grid For Integrated Cognitive Neuroscience Informatics and Computation @ University of Oregon Eugene
Future progress in cognitive neuroscience research will rely increasingly on the application of systems for high-performance computation and high-volume data management to address the challenges of integrated neuroimaging, multi-modality sensor fusion, and cognitive modeling. With a Major Research Instrumentation award from the National Science Foundation, the University of Oregon will establish the Integrated COgnitive Neuroscience, Informatics, and Computation (ICONIC) Grid, composed of parallel computing clusters, large-scale data servers, workstations, and interactive visualization devices. Connected by a high-bandwidth campus network linking the Department of Psychology, the Center for Neuroimaging, the Neuroinformatics Center, the Department of Computer and Information Science, and the Computational Science Institute, the ICONIC Grid will enhance Oregon's excellence in cognitive neuroscience with needed computing power to solve neuroimaging problems of tissue/feature segmentation, dense-array EEG source localization, multi-modal MRI integration, and functional components analysis. The ICONIC Grid will be organized as a distributed computing environment to promote grid-style collaboration among cognitive neuroscience research groups. Computer science research in high-performance parallel and distributed computing, scientific databases, informatics, and interactive visualization will enhance the ICONIC Grid for highly productive use as a computational science tool.
The interchange between cognitive neuroscience and computational science is now important at both theoretical and empirical levels. For several decades, cognitive psychology has drawn from concepts of cybernetics and information processing in the development of models of human mental function. However, it is in the integration of psychological with neural evidence that the methodological demands for computational advances have become particularly intense. Many investigators in cognitive neuroscience now recognize the limitations of individual brain imaging methods, such as in the temporal or spatial resolution, or practical implementation of the technology. The result is an increasing demand for integrated imaging and analysis, in which convergent methods are brought to bear on a particular issue of brain mechanisms.
The University of Oregon began the decade with a bold Brain, Biology, and Machine Initiative (BBMI) to promote interdisciplinary research between neuroscience, cognitive science, molecular biology, genomics, and computational science. The establishment of the Center for Neuroimaging, which houses a new Siemens Allegra 3-Tesla fMRI machine, and of the Neuroinformatics Center were Oregon's first steps towards integrative cognitive neuroscience. The ICONIC Grid is the next critical piece of the puzzle, providing an essential resource for further advances in cognitive neuroscience research, collaboration, education, and outreach.
The broader impact of the ICONIC Grid will be important for the University's educational goals, for minority recruitment and retention, and for extending advances in computation to medical advances in society. With on-campus access to both advanced imaging facilities and the computational and visualization infrastructure that processes and presents the experimental data, students in Psychology will be exposed to a state-of-the-art problem-solving environment for cognitive neuroscience education. New Psychology curricula are planned to provide students with training in the use of such tools. Similarly, the CIS department's academic objectives in parallel and distributed computing, computational science, networking, human-computer interaction, and visualization will benefit greatly from hands-on access to parallel cluster and distributed grid technology.
2004 — 2008
Malony, Allen
St-Hec: Collaborative Research: Scalable, Interoperable Tools to Support Autonomic Optimization of High-End Applications @ University of Oregon Eugene
Extremely large scale systems offer a new challenge to application designers. Current software development techniques do not scale well in execution efficiency on these systems or, more importantly, in the amount of time the programmer spends writing, debugging, and tuning the software. To realize extreme-scale computing, we must increase programmer productivity. To that end, we require three advances in the programming paradigm for these systems. First, the application programmer must interact with the development environment at a level higher than processes or execution threads. Tools must support these interaction modes and more abstract application views. Second, system monitoring functions must exist to provide feedback to the application programmer on overall system performance. Monitoring an extreme-scale system must include some degree of automation and must be able to infer overall performance from a small set of monitoring points. Feedback must be compressed to highlight performance issues at the high abstraction level the programmer requires. Finally, many low-level optimization decisions must be automated by incorporating a new generation of compiler optimizations targeting global program behavior, and these must be intimately integrated with the monitoring system. These advances are described collectively as autonomic performance optimization.
Our proposed research addresses these requirements by developing new tools and extending current tools to manage large software projects. We will extend our prior work on the TAU framework of performance instrumentation and analysis tools to the scale used by these HEC applications. To do this, we will incorporate a new framework for monitoring representative "skeletons" that can provide information to the programmer about total system performance by using a simpler model that matches the execution profile of the full application. Performance modeling of the skeleton is achieved by placement of profile monitors at strategic points in the system. We will utilize advanced machine learning techniques to determine the placement of these monitor points, as well as to synthesize the resulting large quantity of performance information into the proper form for the application designer. Finally, we will automate some critical low-level design decisions by feeding profile data directly to the compiler and dynamic code translator. The optimizations developed target data layout, data duplication throughout the system, and dynamic data movement. Optimizing data management will decrease average access latency for memory references, reducing congestion on the inter-processor and inter-cluster networks while freeing the programmer from making detailed data placement decisions.
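The idea of a profile monitor placed at a strategic point can be illustrated with a toy sketch. This is a conceptual illustration in plain Python, not the TAU instrumentation API: a decorator accumulates per-function call counts and inclusive time into a shared profile table.

```python
# Toy "profile monitor": a decorator that records call counts and inclusive
# time per instrumented function. Conceptual sketch only, not the TAU API.
import time
from collections import defaultdict
from functools import wraps

profile = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def monitor(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            rec = profile[fn.__name__]
            rec["calls"] += 1
            rec["seconds"] += time.perf_counter() - t0
    return wrapper

@monitor
def kernel(n):
    return sum(i * i for i in range(n))

kernel(1000)
kernel(1000)
print(profile["kernel"]["calls"])  # 2
```

A real system would instrument only a few such points and, as the paragraph above describes, use a model to extrapolate whole-system behavior from them.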
The intellectual merit of this proposal is in the new paradigm of autonomic performance optimization as a framework for the integration of performance methods and tools for HEC systems. The broader impact is both technical and societal. Ultimately, we strive to enhance the computational tool infrastructure used to solve Grand Challenge scientific computing problems. However, we believe this must be done in association with the evolution of large-scale computing to use introspective, autonomic platforms and systems. Our work will enable practitioners to more easily build efficient, scalable applications, to solve very large and complex problems, and to do so more quickly than is currently feasible. The significant increase in the productivity of application writers will not only enhance the development of scientific applications important to our national infrastructure, but will also open HEC to important economic and societal applications where computing is advancing science and technology.
2007 — 2011
Kufrin, Richard; Shende, Sameer (co-PI); Malony, Allen; Nystrom, Nicholas; Moore, Shirley
Sdci Hpc Improvement: High-Productivity Performance Engineering (Tools, Methods, Training) For Nsf Hpc Applications @ University of Oregon Eugene
Intellectual Merit The promise of high-performance computing (HPC) will be realized by science and engineering (S&E) applications executing on scalable HPC computer systems at the high end of their performance range. Performance optimization of S&E application codes will be achieved through a process of performance engineering, where tools for parallel performance measurement, analysis, and tuning are used productively to discover sources of performance inefficiency and remove them. Parallel performance tools research and development has created powerful techniques for performance observation, analysis, and optimization, and produced technology solutions that are portable, interoperable, and scalable. It is now important to transfer successful, robust parallel performance infrastructure to a performance engineering framework, integrated with HPC cyberinfrastructure and directed at documented user requirements for HPC performance problem solving. In addition, if HPC resources are to be maximized, human-centric investments must also be made to help train application developers to be good performance engineers.
Broader Impact This performance software foundation will be complemented by a community-driven education and training initiative to increase human productivity in performance engineering efforts across multiple S&E fields. The proposed project will also create a training program for performance technology and engineering, which will be piloted and refined at the Pittsburgh Supercomputing Center and integrated with the TeraGrid Education, Outreach, and Training (EOT) program over time. This program's objectives will be to educate application developers and students in sound performance evaluation methods, to teach them best practices for engineering high-performance code solutions based on expert tuning strategies, and to train them to use the performance tools effectively. The project will develop training materials and infrastructure for distributed access, as well as institute a series of tutorials and "bring your own code" workshops that will be offered in-person and over the AccessGrid. In addition, application engagement will be an important component of this activity. The project will work with undergraduate and graduate students directly in performance analysis of S&E applications, and with developers of leading large-scale applications to integrate performance engineering in their projects. A performance repository containing detailed characterization data for a broad set of applications and platforms will be created and made available for use across all HPC centers for performance data mining. Project success will be measured by three metrics: the improvements in application performance achieved on high-impact S&E applications, the increased performance competency of application developers across S&E domains, and the acceptance and ubiquity of the performance infrastructure among the NSF Track 1 and Track 2 centers.
2010 — 2013
Guenza, Marina (co-PI); Tucker, Don (co-PI); Conery, John (co-PI); Malony, Allen; Lockery, Shawn (co-PI)
Mri-R2: Acquisition of An Applied Computational Instrument @ University of Oregon Eugene
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Building on the success of a previous MRI-funded project, an interdisciplinary group of computer scientists, psychologists, biologists, chemists, and physicists at the University of Oregon is acquiring a large-scale computational resource, the Applied Computational Instrument for Scientific Synthesis (ACISS), to support continued cutting-edge scientific research in these areas. The ACISS hardware will consist of general-purpose multicore computing nodes, high-performance computing nodes augmented with GPGPU acceleration, a 400 TB storage system, high-bandwidth networking infrastructure, and additional computing resources that will be incorporated into an existing visualization lab in the Department of Computer and Information Science. A key part of the proposed infrastructure is the unique opportunity to manage ACISS as a computational science cloud.
The ACISS infrastructure will expand the scope of current projects in the areas of software tools for performance measurement; programming environments and languages for describing and executing complex simulations and scientific workflows; and new algorithms for multiple sequence alignment and phylogenetic inference, and will support new projects in the domain sciences. Research projects that will benefit include: a) modeling neural networks in C. elegans to better understand the neural mechanisms responsible for chemotaxis and klinotaxis, and investigation of the evolution of genes involved in development and their role in speciation and phenotypic variation; b) development of neuroinformatic techniques used in brain imaging and analysis, integrating structural information from fMRI and other sources with EEG data; c) molecular modeling research, including the definition of new techniques for meso-scale modeling and applying computational methods to understand phase transitions and nitrogen fixation; d) astrophysical simulations of turbulent plasma flows that influence the early stages of planet formation.
The ACISS infrastructure will provide the computational resources necessary for future multidisciplinary research. ACISS will establish a novel paradigm for computational science research and practice. The experience gained in early adoption of the ACISS cloud computing technologies will allow us to more rapidly apply this knowledge to create new scientific workflows, more productive research collaborations, and enhanced multidisciplinary education programs. Farther reaching, ACISS can be seen as a model for translational computational science, in which ACISS-based services function as cyber-incubators where new workflows for scientific research are prototyped.
2012 — 2016
Dominguez, Jose; Bothun, Gregory (co-PI); Espy, Kimberly; Rejaie, Reza; Malony, Allen
Cc-Nie Network Infrastructure: Bridging Open Networks For Scientific Applications and Innovation (Bonsai) @ University of Oregon Eugene
The availability and growing use of high-performance heterogeneous computing and storage components by scientists across university campuses has led to the realization that the general-purpose campus network offers neither the capacity nor the capabilities necessary for data-intensive research projects. This project designs, builds, and maintains a new network at the University of Oregon (UO) campus called Bridging Open Networks for Scientific Applications and Innovation (BONSAI). BONSAI is designed as a high-performance science network providing high end-to-end throughput and unique capabilities between five interconnected UO facilities, as well as computing resources located at other institutions throughout Internet2.
This project primarily focuses on the following five major tasks: (1) creating a Science DMZ platform among major computing facilities across UO; (2) adding a new 10 Gbps network circuit between the UO and Internet2; (3) implementing and operating Software-Defined Networking (SDN) technologies throughout the network; (4) promoting the development of IPv6- and service-aware scientific applications; and (5) socializing the use of the UO's membership in the InCommon federation.
BONSAI also serves as a testbed for experimental research on new networking technologies (e.g., SDN) and facilitates support for data-intensive applications (e.g., visualization), including their translation for educational purposes. This project will directly impact teaching and training opportunities for undergraduates, graduate students, and postdoctoral researchers by providing access to advanced computing infrastructure and networking capabilities. Finally, the project will significantly advance the broader access and dissemination of UO research results for advancing scientific and technological understanding.
2012 — 2016
Shende, Sameer (co-PI); Malony, Allen
Si2-Ssi: Collaborative Research: a Glass Box Approach to Enabling Open, Deep Interactions in the Hpc Toolchain @ University of Oregon Eugene
Parallel computing has entered the mainstream with increasingly large multicore processors and powerful accelerator devices. These compute engines, coupled with tighter integration of faster interconnection fabrics, are drivers for the next-generation high end computing (HEC) machines. However, the computing potential of HEC machines is delivered only through productive parallel program development and efficient parallel execution. This project enables application developers to improve performance on future HEC machines for their scientific and engineering processes. This project challenges the current model for parallel application development via "black box" tools and services. Instead, the project offers an open, transparent software infrastructure -- a Glass Box system -- for creating and tuning large-scale, parallel applications. "Opening up" the tools and services used to create and evaluate peta- and exa-scale codes involves developing interfaces and methods that make tool-internal information available to new performance management services that improve developer productivity and code efficiency.
The project will explore the information that can be shared "across the software stack". Methods will be developed for analyzing program information, performance data, and tool knowledge. The resulting Glass Box system will allow developers to better assess the performance of their parallel codes. Tool creators can use the performance data to create new analysis and optimization techniques. System developers can also better manage multicore and machine resources at runtime, using JIT compilation and binary code editing to exploit the evolving hardware. Working with the "Keeneland" NSF Track II machine and our industry partners, the project will create new performance monitoring tools, compiler methods, and system-level resource management techniques. The effort is driven by the large-scale codes running on today's petascale machines. Its broader impact is derived from the interactions with technology developers and application scientists as well as from its base in three universities with diverse student populations.
2015 — 2019
Shende, Sameer; Malony, Allen
Si2-Ssi: Collaborative Research: a Software Infrastructure For Mpi Performance Engineering: Integrating Mvapich and Tau Via the Mpi Tools Interface @ University of Oregon Eugene
Message-Passing Interface (MPI) continues to dominate the supercomputing landscape, being the primary parallel programming model of choice. A large variety of scientific applications in use today are based on MPI. On the current and next-generation High-End Computing (HEC) systems, it is essential to understand the interaction between time-critical applications and the underlying MPI implementations in order to better optimize them for both scalability and performance. Current users of HEC systems develop their applications with high-performance MPI implementations, but analyze and fine-tune the behavior using standalone performance tools. Essentially, each software component views the other as a black box, with little sharing of information or access to capabilities that might be useful in optimization strategies. Lack of a standardized interface that allows interaction between the profiling tool and the MPI library has been a big impediment. The newly introduced MPI_T interface in the MPI-3 standard provides a simple mechanism that allows MPI implementers to expose variables representing configuration parameters or performance measurements from within the implementation for the benefit of tools, tuning frameworks, and other support libraries. However, few performance analysis and tuning tools take advantage of the MPI_T interface and none do so to dynamically optimize at execution time. This research and development effort aims to build a software infrastructure for MPI performance engineering using the new MPI_T interface.
With the adoption of MPI_T in the MPI standard, it is now possible to take positive steps to realize close interaction and integration between MPI libraries and performance tools. This research, undertaken by a team of computer scientists from OSU and UO representing the open source MVAPICH and TAU projects, aims to create an open source integrated software infrastructure built on the MPI_T interface, which defines the API for interaction and information interchange to enable fine-grained performance optimizations for HPC applications. The challenges addressed by the project include: 1) enhancing existing support for MPI_T in MVAPICH to expose a richer set of performance and control variables; 2) redesigning TAU to take advantage of the new MPI_T variables exposed by MVAPICH; 3) extending and enhancing TAU and MVAPICH with the ability to generate recommendations and performance engineering reports; 4) proposing fundamental design changes to make MPI libraries like MVAPICH "reconfigurable" at runtime; and 5) adding support to MVAPICH and TAU for interactive performance engineering sessions. The framework will be validated on a variety of HPC benchmarks and applications. The integrated middleware and tools will be made publicly available to the community. The research will have a significant impact on enabling optimizations of HPC applications that have previously been difficult to provide. As a result, it will contribute to deriving "best practice" guidelines for running on next-generation Multi-Petaflop and Exascale systems. The research directions and their solutions will be used in the curriculum of the PIs to train undergraduate and graduate students.
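MPI_T itself is a C-level API (its query and read routines are defined in the MPI-3 standard); the self-contained Python mock below only illustrates the interaction pattern the paragraph describes: a library exposes named performance and control variables, and a tool reads the former to adjust the latter at runtime. All class and variable names here are hypothetical stand-ins, not MVAPICH or TAU interfaces.

```python
# Conceptual mock of MPI_T-style introspection: performance variables (pvars)
# are read-only measurements; control variables (cvars) are tunable knobs.
class MockMPILibrary:
    def __init__(self):
        self.pvars = {"unexpected_msgq_length": 0}   # performance variable
        self.cvars = {"eager_threshold": 8192}       # control variable

    def read_pvar(self, name):
        return self.pvars[name]

    def write_cvar(self, name, value):
        self.cvars[name] = value

def autotune(lib):
    # Tool-side policy: if many unexpected messages are queuing up,
    # shrink the eager threshold (an illustrative, hypothetical rule).
    if lib.read_pvar("unexpected_msgq_length") > 100:
        lib.write_cvar("eager_threshold", 1024)

lib = MockMPILibrary()
lib.pvars["unexpected_msgq_length"] = 500  # simulate a congested run
autotune(lib)
print(lib.cvars["eager_threshold"])  # 1024
```

This read-then-reconfigure loop is the "interactive performance engineering session" idea in miniature: the tool never needs the library's internals, only its exported variables.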
2017 — 2018
Huck, Kevin; Malony, Allen
Collaborative Research: Phylanx: Python Based Array Processing in Hpx @ University of Oregon Eugene
The availability and size of data sets has increased significantly over the course of the past decade. Enabling the analysis of large data sets on High Performance Computing (HPC) resources while minimizing time- and energy-to-solution requires incorporating static and runtime information to determine the best possible layout of the large data arrays used by an application, minimizing data movement. The goal of this proposal is to deliver Phylanx, a general purpose framework supporting a variety of data science, machine learning, and statistically oriented applications. Phylanx is designed such that a user's code will be able to perform efficiently on current and future architectures as long as the runtime system is maintained. This greatly reduces the maintenance burden and will increase the productivity of domain scientists. Phylanx lays a solid foundation for technology transfer from academia to industry and fills the gap between academic innovation and commercial application, by creating a software layer that industrial partners can feel confident relying upon. Phylanx is a scalable, array-based and distributed framework targeting HPC systems using HPX, a dynamic asynchronous task-based parallel runtime system. The dataflow-style capabilities exposed by HPX guarantee the preservation of all data-dependencies even for complex distributed workflows. This project overcomes some of the limitations of existing Big Data solutions such as Hadoop, Spark, and Flink by providing users the ability to: implement NumPy-styled expression graphs using Python or C/C++, optimize these graphs for optimal data layout, distribution, tiling, and minimal communication overheads, and evaluate those graphs with high efficiency on a runtime interpreter targeting distributed HPC systems.
Additionally, Phylanx uses greedy sub-modular techniques on the expression tree to provide a mathematically provable guarantee of optimal performance in machine learning domains and in data placement problems. The platform will provide implementations of 6 benchmarks which have been selected for their domain specificity in text, image, and graph applications.
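The deferred-evaluation idea behind a NumPy-styled expression graph can be sketched in a few lines. This toy is illustrative only and is not the Phylanx API: operators build a graph of nodes instead of computing immediately, and evaluation happens in a separate pass where a real runtime could first optimize layout, tiling, and placement.

```python
# Minimal deferred-evaluation expression graph (illustrative toy, not Phylanx).
class Node:
    def __add__(self, other): return Op("add", self, other)
    def __mul__(self, other): return Op("mul", self, other)

class Const(Node):
    def __init__(self, value): self.value = value
    def evaluate(self): return self.value

class Op(Node):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def evaluate(self):
        # Evaluation walks the graph; a real interpreter would schedule
        # these subtrees as parallel tasks and choose data placement first.
        a, b = self.left.evaluate(), self.right.evaluate()
        return a + b if self.op == "add" else a * b

# Building `expr` performs no arithmetic; evaluate() runs the whole graph.
expr = (Const(2) + Const(3)) * Const(4)
print(expr.evaluate())  # 20
```

Because the complete graph exists before any arithmetic runs, the sub-modular placement techniques mentioned above have a whole-program view to optimize over, which eager NumPy-style execution cannot offer.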