1983 — 1984
Zeigler, Bernard
Computer Research Equipment (Computer Science)
1983 — 1985
Zeigler, Bernard
Theory of Discrete Event Systems: Distributed Simulation of Multilevel Models (Computer Research)
1984 — 1987
Zeigler, Bernard
Distributed Simulation of Hierarchical, Multicomponent Models
1987 — 1990
Zeigler, Bernard
Intelligent Simulation Environment for Advanced Computer Architectures
During the first year of this research project, the primary objective was to develop a simulation environment for modelling and design that would facilitate construction of variant families of models and variable-structure models. Although a number of application areas for such an environment were considered, the research team focused on computer network and computer architecture design and simulation. This year, work will continue by further specializing the simulation environment to support design, modelling, and simulation of advanced computer architectures.
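The abstract does not define "variable-structure models"; in the modeling and simulation literature the term refers to models whose set of components can change during a run. As a loose illustration only (every class and method name below is invented for this sketch, not taken from the project), a minimal Python rendering of the idea:

```python
# Illustrative sketch only: a "variable structure" model is one whose
# component set can change mid-simulation. All names are hypothetical.

class Component:
    """A trivially simple component: advances its own state each step."""
    def __init__(self, name):
        self.name = name
        self.state = 0

    def step(self):
        self.state += 1

class VariableStructureModel:
    """Holds a set of components that may be added or removed mid-run."""
    def __init__(self):
        self.components = {}

    def add(self, component):
        self.components[component.name] = component

    def remove(self, name):
        self.components.pop(name, None)

    def step(self):
        # Structural changes take effect on the next step automatically,
        # because each step iterates over the current component set.
        for c in list(self.components.values()):
            c.step()

model = VariableStructureModel()
model.add(Component("cpu"))
model.step()
model.add(Component("network"))   # structural change mid-simulation
model.remove("cpu")
model.step()
print(sorted(model.components))   # ['network']
```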
1987 — 1989
Zeigler, Bernard
Variant Families of Hierarchical Discrete Event Models: Distributed Simulation
1988 — 1989
Zeigler, Bernard; Rozenblit, Jerzy
Computer Equipment Research Proposal
A Lisp workstation will be provided to researchers in the Department of Computer Engineering at the University of Arizona. The equipment is provided under the Instrumentation Grants for Research in Computer and Information Science and Engineering program, and will be used for research combining simulation, artificial intelligence, and system design.
1993 — 1998
Sanders, William; Zeigler, Bernard; Rozenblit, Jerzy; Ball, George; Marefat, Michael
Massively Parallel Simulation of Large Scale, High Resolution Ecosystem Models
The Grand Challenge Application Groups competition provides one mechanism for the support of multidisciplinary teams of scientists and engineers to meet the goals of the High Performance Computing and Communications (HPCC) Initiative in Fiscal Year 1993. The ideal proposal provided not only the opportunity to achieve significant progress on (1) a fundamental problem in science or engineering whose solution could be advanced by applying high performance computing techniques and resources, or (2) enabling technologies which facilitate those advances, but also significant interactions between scientific and computational activities, usually involving mathematical, computer, or computational scientists, that would have impact on high performance computational activities beyond the specific scientific or engineering problem area(s) or discipline being studied. Three awards under this competition are described below.

9318163 Bielak: The main objective of the proposed research is to develop and demonstrate the capability for predicting, by computer simulation, the ground motion of large basins during strong earthquakes, and to use this capability to study the seismic response of the Greater Los Angeles Basin. The proposed research seeks to:
1. Develop three-dimensional models of large-scale, heterogeneous basins that take into account earthquake source, propagation path, and site conditions;
2. Develop nonlinear models for sedimentary basins that experience sufficiently strong ground motion;
3. Develop unstructured mesh methods and associated fast parallel solvers, enabling the study of much larger basins;
4. Develop software tools for the automatic mapping of the computations associated with large unstructured mesh problems onto parallel computers;
5. Characterize the computation and communication requirements of unstructured mesh problems, and make a set of recommendations for the design of future parallel systems.
While the proposed work is motivated by an interest in gaining a better understanding of strong seismic motion in large basins, the algorithms and software tools developed will be applicable to a wide range of applications that require unstructured meshes. This award is being supported by the Advanced Research Projects Agency as well as NSF programs in engineering, atmospheric, and computer sciences.

9318183 Davis: The investigators will study the application of high performance parallel computing to a class of scientifically important and computationally demanding problems in remote sensing and land cover dynamics, including generating improved fine spatial resolution data for the global carbon cycle, hydrological modeling, and global ecological responses to climate changes and human activity. The research is collaborative, including scientists from the University of Maryland, University of Indiana, University of New Hampshire, and NASA's Goddard Space Flight Center. The award will combine research on:
- new analysis procedures for remotely sensed data;
- the integration of multispectral, multiresolution, and multitemporal image data sets into a unified global data structure based on hierarchical data structures (i.e., quadtrees);
- the utilization of these hierarchical, parallel data structures for the representation of spatial data (maps and products developed from image analysis), and the development of a spatial database system with a sophisticated query language that scientists can use to control the application of biophysical models to global data sets;
- run-time support for constructing scalable and parallel solutions to problems involving the manipulation of irregular data structures such as quadtrees;
- parallel I/O, especially novel methods for mapping large arrays and quadtrees onto parallel disks and disk systems, and for accessing them using low-overhead bulk transfers.
The development work will be conducted on a 32-processor Connection Machine CM-5, installed at the University of Maryland, and on an IBM SP1 which the investigators propose to obtain as part of the program. This award is being supported by the Advanced Research Projects Agency as well as NSF programs in geological, biological, and computer sciences.

9318145 Messina: This multidisciplinary project will investigate and develop strategies for efficient implementation of I/O-intensive applications in computational science and engineering. Scalable parallel I/O approaches will be pursued by a team of computer scientists and applications scientists who will work together to:
* characterize the I/O behavior of specific application programs running on large massively parallel computers;
* abstract and define I/O models (templates);
* define application-level methodologies for efficient parallel I/O;
* implement and test application-level I/O tools on large-scale computers.
The Pablo performance analysis environment will provide the foundation for the performance instrumentation and analysis. The application programs are already fully operational on advanced architecture systems, and their authors are all co-investigators in this project. The principal computers used will be the Intel Touchstone Delta and Paragon systems at Caltech, each with over 500 computational nodes. Five application areas will be included: fluid dynamics, chemistry, astronomy, neuroscience, and modelling of materials-processing plasmas. The parallel programs for these applications cover a range of I/O patterns and volumes, and the techniques developed in this project will be relevant to a broad spectrum of engineering and science applications. In addition, by overcoming their current I/O limitations, the specific applications targeted in this award will achieve significant new science and engineering results. By the end of the project, sustained teraFLOPS computers will become available; the project will devise and implement general methods for scalable I/O using today's advanced computers and immediately apply those methods to carry out unprecedented applications in several fields.
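The region quadtree named in the Davis award is a standard hierarchical structure for spatial data: each node covers a square block of the image and is stored as a single value if the block is uniform, otherwise it is split into four quadrants. A minimal generic sketch of that idea (illustrative only, not the project's parallel data structure):

```python
# Generic region quadtree over a square 2^k x 2^k binary grid.
# Illustrative only; the project's actual (parallel) structures
# are far more elaborate.

def build(grid, x, y, size):
    """Return a leaf value if the region is uniform, else four subtrees."""
    first = grid[y][x]
    if all(grid[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                      # uniform block: one stored value
    half = size // 2
    return [build(grid, x,        y,        half),   # NW quadrant
            build(grid, x + half, y,        half),   # NE quadrant
            build(grid, x,        y + half, half),   # SW quadrant
            build(grid, x + half, y + half, half)]   # SE quadrant

grid = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 0, 0]]
print(build(grid, 0, 0, 4))
# [0, 1, [0, 1, 1, 1], 0]  -- uniform quadrants collapse to single leaves
```

Uniform regions collapse to single leaves, which is what makes quadtrees attractive for storing multiresolution image data compactly.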
1999 — 2003
Zeigler, Bernard; Schlichting, Richard; Sen, Suvrajeet; Ciarallo, Frank; Sarjoughian, Hessam
Next Generation Software: A Simulation Platform for Experimentation and Evaluation of Distributed-Computing Systems (SPEED-CS)
EIA-9975050, Suvrajeet Sen, University of Arizona
The proposed project seeks funding to develop a workbench that will facilitate rapid composition, evaluation, modification, and validation of distributed embedded systems. Today's technology requires considerable time and effort in programming and verifying the performance of embedded systems. One of the major goals of this project is to develop a methodology through which software modules used in simulations can be directly exported into the embedded environment. The workbench will allow users to compose applications and simulations, run experiments, and build performance models for distributed computing systems. These tools will make it possible for users to develop applications in a "plug-and-play" manner, reducing the time required for software development while increasing its reliability. The project team is truly multidisciplinary, with expertise in communications systems, embedded systems, simulation, and distributed systems modeling and performance analysis. In addition, the new technology will be developed through a partnership between researchers at the University of Arizona and Modular Mining Systems Inc. (MMSI), a leading software vendor in the mining industry. Finally, a start-up manufacturing company will also use the software to verify the generality of our approach.
2000 — 2001
Zeigler, Bernard
Scalable Enterprise Systems: Discrete Event System Specification (DEVS) as a Formal Modeling Framework for Scalable Enterprise Design - Case Study: Model-Driven Data Management
This research will investigate a theoretical foundation for a major responsibility of enterprise systems: ensuring that the right information about the enterprise is available to decision makers at the right time. The Internet and e-commerce are rapidly creating an environment in which businesses can usefully be likened to organisms that must tap into the relevant features of their environments to make rapid life-or-death decisions, where even what is relevant is continually in flux. The primary focus of this project is to formulate an approach to incorporating flexibility in connecting decision makers (the brain) to dynamically relevant data sources (the internal and external environment) to support time-critical decisions. Within this framework, the research will study the application of model-driven data acquisition, filtering, and attention mechanisms to achieve flexible sensor-to-decision-maker connectivity.
Enterprise resource planning (ERP) systems are a response to the realization that manufacturing control systems cannot function in isolation from other major enterprise functions. The objectives of ERP systems should also include standardization of principal functional modules to minimize customization and enhance reusability. The ultimate result of this research would be a framework for scalable enterprise system development that does not constrain the continued development of the framework as would a pure software code-based approach. Based on a mathematical formalism, the modules developed would not be bound to any one technology but would be amenable to transition to scalable network infrastructures as they evolve. The development of a model-based framework for goal-driven data delivery could ultimately guide the design of scalable data management as well as other standardized modules in flexible corporate enterprise systems.
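DEVS itself is Zeigler's published formalism: an atomic model is specified by a state set together with external and internal transition functions, an output function, and a time-advance function. The Python sketch below is a minimal generic rendering of those pieces for a single-job processor; it is illustrative only (the elapsed-time argument of the external transition is omitted for brevity, and none of this is the project's code):

```python
# Minimal generic DEVS-style atomic model: a processor that accepts a job
# (external event), works on it for a fixed service time, then emits it
# (output followed by internal transition). Illustrative sketch only.

INFINITY = float("inf")

class Processor:
    SERVICE_TIME = 2.0

    def __init__(self):
        self.job = None              # S: the current job, or None if idle

    def time_advance(self):          # ta(s): time until next internal event
        return self.SERVICE_TIME if self.job is not None else INFINITY

    def ext_transition(self, job):   # delta_ext: react to an input event
        if self.job is None:         # ignore arrivals while busy (a choice)
            self.job = job

    def output(self):                # lambda(s): emitted just before delta_int
        return self.job

    def int_transition(self):        # delta_int: after emitting, go idle
        self.job = None

# A hand-driven trace standing in for a simulator:
p = Processor()
p.ext_transition("job-1")
assert p.time_advance() == 2.0       # busy: internal event due in 2.0 units
print(p.output())                    # "job-1" is emitted...
p.int_transition()                   # ...and the processor returns to idle
assert p.time_advance() == INFINITY  # idle: no internal event scheduled
```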
2001 — 2004
Zeigler, Bernard; Sarjoughian, Hessam (co-PI)
Scalable Enterprise Systems Phase II: Discrete Event System Specification (DEVS) as a Formal Modeling and Simulation Framework for Scalable Enterprise Design
This Scalable Enterprise Systems Phase II project will develop the Discrete Event System Specification (DEVS) formal framework for scalable enterprise design and extend earlier-developed DEVS-based modeling and simulation environments to support several real-world test cases. As the Internet expands toward 1 billion nodes, forming a highly interconnected and computationally powerful medium, and companies increase specialization and horizontal layered organization, new complexity and dynamics are emerging. Scalability, the ability to avoid performance degradation and system breakdown as the scale of activity greatly increases, is one of the urgent global problems that needs to be addressed. This research will seek to enhance scalability at three inter-related levels of abstraction: the Enterprise Architecture level, the Information Technology Infrastructure level, and the Modeling and Simulation level. Earlier research developed a theoretical foundation for architecting a major responsibility of enterprise systems: ensuring that the right information about the enterprise is available to decision makers at the right time. Having extended the DEVS formalism to express time-critical behaviors in enterprise data management, the researchers propose to implement the extended DEVS functionality by suitably extending the distributed real-time execution environment previously developed in NSF-sponsored research. This environment will be tested by two diverse applications: a small-scale but complete and real factory automation test bed, and a large-scale web-hosting service for e-business.
The Integrated Manufacturing Technology Initiative (IMTI), sponsored by the primary governmental funding agencies (NIST, DOE, NSF, and DARPA), states that modeling and simulation are emerging as key technologies to support manufacturing in the 21st century. This research will attempt to fill in some of the gap between the current state of the art and the IMTI vision of the future, in which enterprise processes, equipment, and systems are linked via a robust communications infrastructure that delivers the right information at the right time, and integrated enterprise management systems ensure that decisions are made in real time and on the basis of enterprise-wide impact. Achieving scalability in the M&S and IT infrastructure will enable a wide array of M&S studies and implementations, as well as supporting the scalability of the future M&S-based networked, extended, and distributed enterprise systems envisioned by IMTI.
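The abstract leaves the internals of the distributed real-time execution environment unspecified. At the core of any discrete event simulator, however, is a next-event loop that jumps the clock from one scheduled event to the next rather than advancing in fixed steps. A minimal generic sketch of that standard technique (not the project's engine; the job/arrival scenario is invented):

```python
# Generic next-event simulation loop using a priority queue of
# (time, sequence, action) entries. Standard textbook technique.
import heapq
import itertools

events = []                      # the future event list
counter = itertools.count()      # tie-breaker for simultaneous events

def schedule(time, action):
    heapq.heappush(events, (time, next(counter), action))

def run(until):
    """Pop events in time order until the horizon; return the final clock."""
    clock = 0.0
    while events and events[0][0] <= until:
        clock, _, action = heapq.heappop(events)
        action(clock)            # an action may schedule further events
    return clock

def arrival(t):
    print(f"t={t:.1f}: job arrives, done at t={t + 2.0:.1f}")
    schedule(t + 2.0, departure)

def departure(t):
    print(f"t={t:.1f}: job departs")

schedule(0.5, arrival)
schedule(1.0, arrival)
run(until=10.0)
```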
2001 — 2003
Zeigler, Bernard; Sarjoughian, Hessam (co-PI)
Workshop on Ultra-Large Networks: Challenges and New Research Directions for Modeling and Simulation
We propose to hold a two-day workshop in Tucson, Arizona during fall 2001 on emerging ultra-large networks, such as the Internet. Hosted by the Arizona Center for Integrative Modeling and Simulation, the meeting will bring together some of the world's leading researchers in the networking area to meet with counterparts with expertise in modeling and simulation of networks and of systems more generally. The workshop will charge these researchers with identifying the unknowns of ultra-large networks and new directions of research in modeling and simulation that can address these unknowns.
The Internet is increasing in connectivity, toward 1 billion nodes in 2005, and in node capability, providing a highly interconnected and computationally powerful medium. Such a globally and ubiquitously dispersed network will provide a new frontier for new kinds of educational, commerce, and entertainment activities. However, many issues arise in the emergence of such a large, highly decentralized collection of interacting parts. The increased connectivity and capability create new complexity and dynamics that we are only on the verge of appreciating. Difficulties in dealing with large-scale software systems are well documented in a recent report by the National Research Council. Techniques that work for small software systems fail markedly when the scale is increased a million-fold. Computer-based modeling and simulation (M&S) methodology is required to address these issues, since the scale is well beyond what analytical tools alone can handle and there is limited ability to do controlled experiments on the always-on Internet. Traditional M&S approaches have focused on the micro-level components rather than the macro-level integration of these components. However, with the advent of ultra-large scale systems such as the Internet of the future, it is necessary to develop M&S approaches for understanding the behaviors of very large interconnected networks with very few loci of control and many interacting and varied sources of input and service demand.
The results of this workshop are expected to be a set of specific findings on gaps in our knowledge of the behavior of ultra-large networks and how to deal with their design, management, and control. Participants may assess whether current approaches can be evolved to deal with the large increases in scale or whether different, revolutionary paradigms are required. Participants will address the need for new techniques and approaches for building models of ultra-large networks and developing simulation environments for studying their behaviors. Suggestions for borrowing points of view from other areas, such as complex adaptive systems and the basic theory of modeling and simulation, will be encouraged. The proceedings will be compiled in a form that will provide a usable and significant guide for new NSF funding initiatives for future network infrastructure research.
2003 — 2004
Zeigler, Bernard; Hariri, Salim (co-PI); Sarjoughian, Hessam (co-PI)
Workshop on Modeling and Simulation for Design of Large Software-Intensive Systems: Challenges and New Research Directions; Tucson, Arizona; October 2003
Fueled by Moore's law of exponentially expanding computational and networking infrastructure, we are witnessing a trend toward ever-larger software structures driving business, science, and military systems on such infrastructure. Unfortunately, the science of system design has lagged behind in guiding the development of such software-intensive systems.
Many issues arise in the design of such large, highly decentralized collections of interacting parts. The increased connectivity and capability create new complexity that is difficult to control and dynamics that are difficult to predict.
Computer-based modeling and simulation (M&S) methodology is required to address these issues, since the scale is well beyond what analytical tools alone can handle and there is limited ability to do controlled experiments. Traditional M&S approaches have focused on the micro-level components rather than the macro-level integration of these components. However, large software-intensive systems demand new M&S approaches for understanding the dynamic behaviors of very large interconnected networks with very few loci of control and many interacting components.
The goal of the proposed workshop is to explore directions for a science of M&S-based design for large software-intensive systems. To do this, researchers in the theory and formalisms of M&S will be brought together with researchers in software development concepts and methodologies. Among the software elements to be considered for their contribution to a science of design are:
o Spiral development, a normative, flexible, risk-driven process model that is used to guide multiple stakeholders through concurrent engineering of software-intensive systems.
o Formal methods, including the possibility of "lightweight" variants that allow for inclusion of informal elements, trading rigor for expressibility.
o Architectural principles that provide uniform structures with known properties to organize the complexity of large systems. Architectural styles, design patterns, and Unified Modeling Language constructs provide instances of such principles.
2004 — 2008
Zeigler, Bernard; Yeh, Tian-Chyi Jim; Hariri, Salim (co-PI)
Collaborative Research: SEI (EAR): Adaptive Fusion of Stochastic Information for Imaging Fractured Vadose Zones
A stochastic information fusion methodology will be developed to assimilate electrical resistivity tomography, high-frequency ground penetrating radar, mid-range-frequency radar, pneumatic/gas tracer tomography, and hydraulic/tracer tomography in order to image fractures, characterize hydrogeophysical properties, and monitor natural processes in the vadose zone. The information technology research will develop: (1) mechanisms and algorithms for fusion of large data volumes; (2) parallel adaptive computational engines supporting parallel adaptive algorithms and multi-physics/multi-model computations; (3) adaptive runtime mechanisms for proactive and reactive runtime adaptation and optimization of geophysical and hydrological models of the subsurface; and (4) technologies and infrastructure for remote (pervasive) and collaborative access to computational capabilities for monitoring subsurface processes through interactive visualization tools.
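The abstract does not spell out the fusion algorithms. The simplest instance of fusing stochastic information is combining two independent, unbiased estimates of the same quantity by inverse-variance weighting, which the hypothetical Python sketch below illustrates; the project's actual tomographic fusion is of course far more elaborate, and the numbers here are made up:

```python
# Smallest possible example of stochastic information fusion:
# two independent, noisy estimates of one quantity are combined by
# inverse-variance weighting. Illustrative only; values are invented.

def fuse(m1, var1, m2, var2):
    """Minimum-variance combination of two independent estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)        # always <= min(var1, var2)
    return mean, var

# E.g., a radar-derived and a hydraulic-test-derived estimate of the
# same subsurface property (hypothetical numbers):
mean, var = fuse(3.2, 0.25, 2.8, 1.0)
print(mean, var)                 # 3.12 0.2 -- the fused estimate is sharper
```

The fused variance 1/(1/var1 + 1/var2) is always smaller than either input variance, which is the basic payoff of assimilating multiple noisy data sources.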
The combination of the stochastic fusion approach and information technology can lead to a new level of capability for both hydrologists and geophysicists, enabling them to "see" into the earth at greater depths and resolutions than is possible today. Furthermore, the new computing strategies will make high-resolution, large-scale hydrological and geophysical modeling feasible for the private sector and for scientists and engineers who are unable to access supercomputers; that is, it is an effective paradigm for technology transfer.