1987 — 1993
Doyle, John
Activity Code: N/A
Pyia: Synthesis of Robust Control System @ California Institute of Technology
Something of a revolution has occurred in control theory in the past 10 years, involving a renewed focus on the critical role that uncertainty in modeling physical systems plays in controller design. Although this revolution has included the introduction of significant new mathematics into the field (e.g., structured singular values (SSV or mu), H-infinity optimal control, mu-synthesis), it is most noteworthy for the impact it is having on practical applications. There have finally emerged systematic, quantitative methods for the design of robust control systems: systems that deliver performance in the presence of substantial uncertainty about the physical system being controlled. The focus of this research effort will be on further extensions to the mathematics, particularly mu-synthesis. This effort will complement the other research in this area at Caltech, which is focused on applications, nonlinear control, and system identification.
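For reference (standard definitions in robust control, stated here for the reader rather than quoted from the award), the H-infinity norm of a transfer matrix G and the structured singular value of a matrix M with respect to an uncertainty structure Delta are

    \|G\|_\infty = \sup_\omega \bar{\sigma}\big(G(j\omega)\big),
    \qquad
    \mu_{\boldsymbol{\Delta}}(M) = \Big(\min\big\{\bar{\sigma}(\Delta) : \Delta \in \boldsymbol{\Delta},\ \det(I - M\Delta) = 0\big\}\Big)^{-1},

with \mu defined to be 0 when no admissible \Delta makes I - M\Delta singular. Robust stability against the structured uncertainty then amounts to \sup_\omega \mu_{\boldsymbol{\Delta}}(M(j\omega)) < 1, which is the quantity that mu-synthesis tries to minimize over controllers.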
1989 — 1990
Man, Guy; Doyle, John
Activity Code: N/A
3rd Annual Conference On Aerospace Computational Control, to Be Held in Oxnard, Ca On August 28-30, 1989 @ California Institute of Technology
This award supports a conference which assesses the status of and identifies research needs and opportunities in the field of aerospace computational control. Many of the topics addressed at the meeting have a direct bearing on research supported by the Dynamic Systems and Control Program. Specifically, multibody simulation formulation, articulated multibody component representation, parallel computation for multibody dynamics, model reduction, symbolic manipulation, and control design and analysis are included in the conference topics. Support of this proposal satisfies, in part, several key NSF goals including strengthening cooperation between industry and university researchers, and aiding in the transfer of technology from the research level to the users. It also represents cooperation between government agencies on research of mutual interest to the three agencies involved.
1992 — 1993
Man, Guy; Doyle, John
Activity Code: N/A
Fifth Annual Workshop On Aerospace Computational Control; Santa Barbara, CA @ California Institute of Technology
This project deals with the organization of the Fifth Annual Workshop on Aerospace Computational Control. The major objectives of this workshop are to provide NSF and NASA an assessment of current computational control research and development; to identify future needs in important research areas, such as computer-aided control design and multibody simulation tools which can be built with the combined efforts of government, industrial and university personnel; and to strengthen the link between tool developers and end-users. Another important goal of the workshop is to strengthen cooperation between industry and university researchers, thereby aiding in the transfer of technology from the research level to the end-users. The issues discussed at the workshop include multibody dynamics, computational techniques, model reduction, real-time simulation, symbolic manipulation and concurrent processing.
1994 — 1996
McEliece, Robert; Morari, Manfred (co-PI); Goodman, Rodney (co-PI); Doyle, John; Murray, Richard (co-PI)
Activity Code: N/A
Renovation of Research Facilities For Communications and Automatic Control @ California Institute of Technology
This NSF award will be utilized to renovate approximately 5,600 square feet of research space in the Steele Laboratory of Electrical Sciences on the campus of the California Institute of Technology. Steele Lab is approximately 30 years old, and the renovation will retrofit the facility to the changed emphasis of Cal Tech electrical engineering research programs. In addition, it will consolidate researchers into one facility to work together on interdisciplinary projects in communications and automatic control. All seven project faculty have joined the faculty in the last ten years. Specific renovation activities include construction of a unique high bay area to house a flexible structure experiment as well as a distillation column of sufficient size to mimic actual industrial problems. A local area network will be established to increase researcher interaction and to overcome the inefficiencies associated with use of the campus backbone system which is approaching maximum capacity. The project will have an important impact on faculty growth and student enrollment, including minorities and women, in the competitive area of communications and automatic control.
1998 — 2002
Ortiz, Michael (co-PI); Antonsson, Erik (co-PI); Marsden, Jerrold (co-PI); Doyle, John; Schroder, Peter
Activity Code: N/A
Opaal: Integrated Design, Modeling, and Simulation @ California Institute of Technology
Integrated Design, Modeling, and Simulation
At the highest level, the process of engineering design is: given a desired functionality, synthesize, refine, and describe a device which robustly exhibits the desired function. To build computational environments which support the designer and engineer in this task --- as the complexity of engineering continues to grow --- requires new, integrated modeling, simulation, visualization, and design environments. To be truly predictive they must be mathematically well-founded, scale gracefully to large problem sizes, and reliably model physical behavior across a wide range of scales. The research team spans computer science, mathematics, mechanical engineering, control and dynamical systems, and engineering design, and will develop wholly new representations and algorithms capable of addressing these issues in concert. The core integrating principle they employ is the mathematics of "multiresolution," i.e., descriptions at different levels of resolution, for geometry, numerical solvers, mechanics, and conceptual design. The technical approach is based on the use of subdivision geometry, coupled with thin shell equation dynamics. Integrating automatic model reduction techniques --- encompassing geometry, topology, and mechanics --- based on rigorous mathematics and dynamics principles allows them to accurately and efficiently model the behavior of complex assemblies across a wide range of physical scales from microscopic to gross behavior. Fluidly and accurately moving across many physical scales makes these representations and algorithms highly suitable for engineering design. Coupling them with set-based design methods allows designers and engineers to rapidly explore large design spaces.
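As a toy illustration of the subdivision idea invoked above (the project itself targets subdivision surfaces and thin-shell mechanics; the curve case below is only the simplest analogue), one level of Chaikin corner-cutting refines a coarse polyline, and repeating it builds exactly the kind of multiresolution hierarchy the abstract describes:

    # Toy sketch, not project code: Chaikin corner-cutting, the simplest example of
    # subdivision refinement behind multiresolution geometry.
    def chaikin_step(points):
        """Each edge (p, q) of an open polyline contributes 3/4 p + 1/4 q and 1/4 p + 3/4 q."""
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        return refined

    coarse = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0), (4.0, 3.0)]
    fine = coarse
    for _ in range(3):        # each level roughly doubles the number of points
        fine = chaikin_step(fine)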
The objective of the proposed research is to significantly advance modeling, simulation, and design environments as needed in, for example, the aircraft and automobile industries (among many others). Current industry practice is based on separately developed, often incompatible computational modules for geometric modeling, physical simulation, and analysis. With the increasing complexity of modern engineering, this approach incurs ever larger overhead and ultimately yields results whose physical accuracy is questionable. Reconsidering the mathematical, physical, and computer science foundations of such systems from scratch, employing an integrated team of researchers and students, can radically advance the state of the art in engineering design. Cross-fertilization among the team members is already yielding a new approach to simulating the behavior of thin sheets of metal, removing a long-standing problem in mechanics simulation, with the potential to revolutionize the use of computer simulations in this area. To ensure the relevance of their work to real industrial practice, the researchers are collaborating closely with the aircraft industry and vendors of computer modeling applications. Postdoctoral scholars, graduate students and undergraduates are integrated into this cross-disciplinary program. A new generation of researchers and practitioners is being educated who are equally at ease with questions of mathematics, computer science, and mechanical engineering.
Funding for this activity will be provided by the Division of Mathematical Sciences, the MPS Office of Multidisciplinary Activities, and by DARPA.
1998 — 1999
Doyle, John
Activity Code: N/A
U.S.-Japan Joint Seminar: Robust Virtual Engineering @ California Institute of Technology
9726331 Doyle This award supports the participation of American scientists and engineers in a U.S.-Japan seminar on Robust Virtual Engineering, to be held in Pasadena, California, from October 5-10, 1998. The co-organizers are Professors John Doyle of the California Institute of Technology and Hidenori Kimura of the University of Tokyo in Japan. The seminar will undertake a study of Robust Virtual Engineering. The objective is to focus on constructing a theoretical basis for virtual engineering and complex systems. The use of "virtual engineering" suggests both a broader and more focused concept than "modeling and simulation." It also makes explicit the role of analysis, insight, understanding, design, and decision-making. The seminar will focus on physics-based complex systems. Physicists typically do not develop their models so that they can be incorporated naturally as elements of a larger system. Engineers typically design systems using methods which are cautious and robust, but without relying on detailed physical descriptions of the components. The session will focus on encouraging the incorporation of more detailed physical descriptions at just the right time and in just the right place for a much more realistic picture of the ultimate limitations associated with the performance of the system. It is expected that the interactions between the engineers and physicists should assist in better planning for the future of virtual engineering and complex systems. The exchange of ideas and data with Japanese experts in this field will enable U.S. participants to advance their own work, and will set the stage for future collaborative projects. Seminar organizers have made a special effort to involve younger scientists. The proceedings of the meeting will be made available on the World Wide Web.
2000 — 2006
Kimble, H.; Kitaev, Alexei (co-PI); Preskill, John; Schulman, Leonard (co-PI); Scherer, Axel (co-PI); Doyle, John
Activity Code: N/A
Itr: Institute For Quantum Information @ California Institute of Technology
EIA-0086038 Kimble, H.J. California Institute of Technology
Title: Information Technology Research: Institute for Quantum Information
An interdisciplinary team of researchers in Physics, Applied Physics, Electrical Engineering and Computer Science is establishing an Institute for Quantum Information (IQI) to facilitate the investigation of quantum information science and to provide new capabilities in the revolutionary field of quantum computing. To this end, efforts are being made to develop new algorithms for the manipulation, processing, and distribution of quantum information (including information capacities of communication channels, reliable schemes for distributed computation, and efficient quantum error correcting codes). Investigations of physical systems for the implementation of quantum computation and communication, as well as coherent nanotechnology, principally by way of theoretical models and analysis, are being performed. The team is also pursuing techniques to develop active control of quantum effects in nanoscale integrated circuits, involving systematic approaches to the suppression of unwanted quantum effects via on-chip feedback networks and methods for stabilizing and exploiting emergent quantum behaviors in the context of analog/hybrid VLSI.
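A textbook reference point for the "efficient quantum error correcting codes" mentioned above (a standard example, not one of the IQI's own constructions) is the three-qubit bit-flip code, which stores one logical qubit redundantly:

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle \;\longmapsto\; \alpha|000\rangle + \beta|111\rangle .

A single bit flip on any one physical qubit is located by measuring the parity operators Z_1 Z_2 and Z_2 Z_3 (the error syndrome), which identify the flipped qubit without disturbing the amplitudes \alpha and \beta, and is undone by applying an X gate to that qubit.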
2001 — 2004
Low, Steven; Doyle, John
Activity Code: N/A
Itr/Si(Cise): Optimal and Robust Tcp Congestion Control @ California Institute of Technology
The Internet is undergoing an overhaul unprecedented in size, diversity, and reach, with profound impact on all aspects of our scientific, social, economic and political life through the integration of networks of communication, transportation, entertainment, utilities, and finance. The stability and robustness of this vital infrastructure demand a rigorous theory to understand the current protocols and evolve them to meet emerging challenges. We propose to develop such a theory for TCP congestion control, and use it to drastically improve the stability, robustness and optimality of the current protocols. A key insight is to view congestion control as a distributed asynchronous computation to maximize aggregate source utility over the Internet; different TCP and active queue management (AQM) schemes correspond to different utility functions and different algorithms to maximize them. Our research has two components. First, we will develop a new theoretical model of TCP congestion control based on duality in optimization and multivariate robust control. The theory will clarify the role of source algorithms, such as Tahoe, Reno and Vegas, and active queue management, such as DropTail, RED and REM, in the control of networks and establish performance limits of the current protocols; it will explain the effect on stability when delay, topology, capacity, and load scale up; and it will provide conditions under which the feedback stability of TCP/AQM algorithms is invariant to these effects. Indeed, such a theory is already emerging from our recent work. Even in its currently preliminary stage, it already provides a fundamental understanding of some widely observed performance and fairness behavior of the current protocols, and uncovers new and surprising stability problems. For example, it shows that the current protocols become unstable and exhibit bifurcation when network capacity increases. Moreover, maintaining stability as capacity scales up arbitrarily imposes severe constraints on how sources adjust their rates (TCP) and what congestion information is fed back (AQM). The current protocol does not satisfy the condition for such stability invariance, and hence may be ill suited for future networks where, pulled by application demand and pushed by technological advances, the capacity will be large. The second component of our research is the design of practical TCP and AQM protocols based on the theory, and the development of prototypes and experiments to demonstrate their effectiveness. We will use the theory to identify the sources of instability in the current protocols when delay, network size, capacity, and traffic load scale up. We will design both enhancements that incrementally evolve the current protocols, and drastically new protocols that have the strongly robust stability property promised by theory. As a concrete application of our algorithms, we will apply them to improve TCP performance over wireless links, both because they are ubiquitous and because they are likely to remain the most important bottlenecks in future networks.
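The "aggregate source utility" view above has a standard mathematical form (reproduced here for the reader; the notation is generic rather than taken from the proposal). With x_s the sending rate of source s and c_l the capacity of link l, congestion control solves

    \max_{x \ge 0} \ \sum_s U_s(x_s)
    \qquad \text{subject to} \qquad
    \sum_{s :\, l \in \mathrm{route}(s)} x_s \;\le\; c_l \quad \text{for every link } l .

Duality gives the decentralized interpretation: each link updates a congestion price from its own excess load, p_l(t+1) = \big[p_l(t) + \gamma\,(y_l(t) - c_l)\big]^+, and each source reacts only to the sum of prices along its route, x_s(t) = {U_s'}^{-1}\!\big(\sum_{l \in \mathrm{route}(s)} p_l(t)\big). Different TCP variants and AQM schemes then correspond to different utility functions U_s and different price dynamics, which is the correspondence the abstract refers to.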
2002 — 2005
Low, Steven; Newman, Harvey (co-PI); Doyle, John; Bunn, Julian (co-PI)
Activity Code: N/A
Sti: Multi-Gbps Tcp: Data Intensive Networks For Science & Engineering @ California Institute of Technology
The project will develop and deploy "Multi-Gbps TCP" to enable Terascale data transfers across shared networks for data intensive science and engineering. This builds on a foundation of several years of theoretical development and in-depth simulations at Caltech. The testing, optimization and deployment will be accomplished using the regional, national and transoceanic research and production networks available to (and in some cases co-managed and operated by) the Caltech High Energy Physics (HEP) group together with their partners in Abilene, DataTAG, the Starlight-CERN-SURFNet "wavelength triangle", TeraGrid, AMPATH, CANARIE, CalREN and Pacific Light Rail. Work with IETF, ISOC and GGF, and submission of appropriate RFCs and documents during the course of the project, will ensure that the new techniques developed are consistent extensions of existing standards, and available to the worldwide research community.
The goal of this project is to develop practical TCP and AQM (Active Queue Management) schemes, with loss recovery tuning, that maintain stability, high utilization and negligible loss and delay as the network increases in capacity, size, and load. We aim to validate and deploy these techniques on HEP networks, the TeraGrid and other US backbones including Abilene, ESnet, CalREN, and MREN.
It is well-known that the additive-increase-multiplicative-decrease strategy with which TCP probes available capacity performs poorly at large window sizes due to serious dynamic and equilibrium problems. These problems must be solved to scale TCP to the high bandwidth regime. The proponents have developed a mathematical theory that provides a fundamental understanding of the current protocols and exposes their stability problems in high latency, high capacity environments and derived a new class of TCP / AQM algorithms that solve these problems. The central thrust of this work is to compensate for delay, capacity, and routing using the correct scaling. Fortunately, the network structure is such that there is sufficient information to allow the right scaling to be applied in a distributed and decentralized manner, while remaining compliant with the current protocol. These new algorithms, with TCP recovery tuning, will automatically rescale the parameters so as to maintain stability and optimize performance as capacity, delay, routing or loads change.
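The claim above that AIMD "performs poorly at large window sizes" can be made concrete with the standard Reno-style relation between the average congestion window W (in packets) and the packet loss probability p (a textbook estimate, quoted here as illustration rather than from the proposal):

    W \;\approx\; \sqrt{\tfrac{3}{2p}}
    \qquad\Longleftrightarrow\qquad
    p \;\approx\; \tfrac{3}{2W^{2}} .

Sustaining 10 Gb/s with 1500-byte packets over a 100 ms round-trip time requires W of roughly 83,000 packets, hence a loss probability on the order of 2 x 10^-10, i.e. roughly one congestion event every hour and a half: an unrealistically stringent requirement, and exactly the scaling problem the new algorithms are designed to remove.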
In ns-2 simulations, the new algorithms have achieved 98% utilization with an equilibrium queue of less than 100 packets (0.08 ms queuing delay). In this project, we will implement and demonstrate these algorithms in the global HEP research and production networks. In addition, work will occur both with the standards bodies, to help drive emerging standards, and with Grid software developers, to deploy them.
The project will leverage the theoretical work on TCP/AQM of Low and Doyle, the leadership and experience of Bunn and Newman in the development and operation of the international network for HEP (and the broader scientific community) over the last 20 years, and the extensive infrastructure of HEP networks to make a quick impact. The project's success will not only benefit directly the HEP community, but will influence research methods in several fields, by enabling the effective use of Grids, globally distributed Terascale computing, and distributed Petascale databases for the first time.
NSF recognizes that transfers of data files of 10^18 bytes (exabytes) are important. This project addresses the issue of how to do this in real operations.
2003 — 2008
Doyle, John; Chandy, K. Mani
Activity Code: N/A
Itr Collab: Theory and Software Infrastructure For a Scalable Systems Biology @ California Institute of Technology
Project Summary: The now well-known vision and challenge in post-genomics biology is to make the entire process of research scalable to large networks using high-throughput techniques and large-scale computation. Computational biology and bioinformatics have focused attention on the need for sophisticated methods for handling large databases and tools for modeling and simulating complex networks. Not as widely recognized is that the scalability of the more subtle processes of drawing meaningful and reliable scientific, medical, and biological inferences from the wealth of data and computation is equally important and requires the development of fundamentally new theory and software.
The research objective of this project is to develop the theoretical foundation and information technology infrastructure necessary to accelerate progress in systems biology, with concrete demonstrations on a variety of biological experiments. This ambitious goal requires augmenting bioinformatics and current modeling and simulation approaches with greater understanding of the organizational principles underlying network complexity, including connections with molecular details, and exploiting this understanding to advance mainstream experimental biology. Building on recent breakthroughs in theory and scalable algorithms for systematic robustness analysis and model (in)validation of nonlinear network models with uncertain rate constants, the project maps out a research path that will (1) develop the necessary rigorous and practical mathematical theory; (2) embody it in a software environment that supports the complex iterative processes involved in going from raw data to modeling, analysis, and inference, with tight feedback to experimentation and modeling throughout; and (3) apply the theory and software to specific experimental studies in biology as a way of grounding the entire endeavor.
The intellectual merit combines immediate practical impact and conceptual depth. Automating and computationally augmenting scientific and mathematical inference from noisy and incomplete data for uncertain models has long been an elusive goal. Achieving it in the context of complex biological systems is for the first time both a necessity and an achievable goal. To do this, data and modeling assertions and questions must be described in a common framework that is biologically natural, yet can be stored, manipulated, shared, and ultimately turned over to powerful algorithms for resolution. Our objective is to create tools which make it possible to systematically answer questions such as: Is a proposed model consistent with experimental data? If so, is it robust to additional perturbations that are plausible but untested? Are different models at multiple scales of resolution consistent? What is the most promising experiment to refute or confirm a model? Traditionally, such network-level questions that arise naturally in biology have been considered computationally intractable, since they are typically stochastic, nonlinear, nonequilibrium, uncertain, involve multiple scales, and hybrid (mixing continuous and discrete mathematics), limiting approaches to heuristic and brute-force methods, or to extreme simplification. Recently this situation changed profoundly, based on new methods developed by the research team and their collaborators. A crucial insight is that evolution favors high robustness to uncertain environments and components, yet allows severe fragility to novel perturbations, and this "robust yet fragile" feature must be exploited explicitly in scalable algorithmic approaches.
The broader impact lies in the synergistic links this work forges with similar challenges that exist throughout science and technology, such as the Internet, aerospace systems design, materials science, multiscale physics, stochastic multiscale chemistry, and disturbance ecology. The theoretical foundations build broadly on robust control theory, dynamical systems, numerical analysis, operator theory, real algebraic geometry, computational complexity theory, duality and optimization, and semi-definite programming. The results will be made accessible to the broadest possible audience, both with representative and challenging experimental biology and the connections with other examples of complex systems. The preliminary progress already made by this team is striking and has been applied to understanding, for example, the robustness of complex control systems, the performance of internet protocols, and bacterial chemotaxis and stress response. The work is creating new mathematics and algorithms, beginning to appear in the highest-impact journals, and concretely demonstrating that this research can help experimental biologists. Diversity and breadth appear at every level. In the research group of the lead PI (Doyle), 6 of 11 graduate students and 2 of 4 postdoctoral scholars are women, and the group includes broad racial and ethnic diversity. The other 5 co-PIs are from a broad spectrum of disciplines and diverse but elite academic institutions, 3 are women, and all PIs have strong and very concrete commitments to integrative, multidisciplinary research, diversity, educational innovation, and outreach at every level including K-12. The team members are frequent featured speakers at integrative conferences and in interdisciplinary colloquia at premier universities, and speakers and organizers of workshops and short courses in systems biology. This program both directly involves leading mainstream biology, and has broad contact with it through additional collaborations, creating conduits to broad dissemination of the research results in biology. The team's algorithms and software infrastructure are becoming de facto standard tools empowering research in multiple disciplines, and forming a solid foundation upon which this program builds.
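One concrete instance of the semidefinite-programming machinery listed above (stated generically here; the project's own formulations are richer) is the sum-of-squares certificate. A polynomial p(x) is guaranteed nonnegative whenever it can be written as

    p(x) \;=\; z(x)^{\mathsf T}\, Q\, z(x), \qquad Q \succeq 0,

with z(x) a vector of monomials; searching for such a Q is a semidefinite program. For example, p(x) = x^4 - 4x^3 + 6x^2 - 4x + 1 admits the certificate p(x) = (x^2 - 2x + 1)^2, which proves p(x) \ge 0 for all x. The same mechanism, suitably extended, yields computational proofs that a network model is inconsistent with data or robust to parameter variation.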
2003 — 2009
Newman, Harvey; Doyle, John; Psaltis, Demetri; Low, Steven; Yip, Steven
Activity Code: N/A
Ri: Wide-Area-Network in a Laboratory @ California Institute of Technology
The development of robust and stable ultrascale networking, at 100 Gbps and higher speeds in the wide area, is critical to support the new generation of ultrascale computing and petabyte to exabyte datasets that promise to drive discoveries in fundamental and applied sciences of the next decade. Continued advances in computing, communication, and storage technologies, combined with the development of national and global Grid systems, hold the promise of providing the required capacities and an effective environment for computing and science. A key challenge to overcome is that some of the current network control and resource sharing algorithms cannot scale to this regime. The goal of the Caltech FAST Project is to develop theories and algorithms for the future ultrascale networks, implement and demonstrate them in state-of-the-art testbeds, and deploy them in communities that have a clear and urgent need today.
A critical component of this effort is an experimental infrastructure on which to develop, test, and demonstrate new algorithms at both high speed and large delay. It is impossible to conduct this research on network simulators or in the high-performance wide-area networks (WANs) available today. The proposed project will build such a facility: WAN in Laboratory.
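The reason "both high speed and large delay" must be present in one facility is the bandwidth-delay product, the amount of data a protocol must keep in flight (the numbers below are an illustrative calculation, not figures from the proposal):

    \mathrm{BDP} = \text{rate} \times \text{RTT}, \qquad
    10~\text{Gb/s} \times 200~\text{ms} = 2\times 10^{9}~\text{bits} = 250~\text{MB} \approx 1.7\times 10^{5}~\text{packets of 1500 bytes}.

Short-link emulations cannot exercise a congestion controller at this operating point, and production networks cannot be deliberately driven into instability, which is the gap WAN in Lab fills.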
WAN in Lab will consist of four major building blocks: high speed servers, programmable routers, electronic crossconnects, and WDM equipment including long-haul fibers with associated optical amplifiers, dispersion compensation modules and optical multiplexers and demultiplexers.
This "wind tunnel" of networking has five features that combine to make it a truly unique facility in the world: 1. It is literally a WAN, not an emulation, that provides both high speed and large distance, critical for developing protocols for ultrascale networking. 2. It can be easily configured into logical networks of different topologies, link speeds and delays, and can stay cutting-edge by incorporating latest advances in optical technologies, routers, or servers, as need arises and funds become available. 3. WAN in Lab will have built-in passive monitoring facilities that will complement end-to-end monitoring tools such as Web100. These are critical to resolve many performance problems that arise only in live networks but that are hard to debug from just end-to-end measurements. 4. It is a network that can break and is suitable for risky experiments, possibly including those that involve reprogramming the routers. 5. It will be integrated with global high performance research networks and will be a shared resource for the networking community.
Broader impacts: WAN in Lab will be part of the Federated Emulab led by the University of Utah's Jay Lepreau and will be managed on a uniform software platform. It complements nicely the existing facility in Emulab, which focuses on large scale (>200 servers) but low speed (100 Mbps network interface cards). Participation in Federated Emulab makes WAN in Lab instantly available to the Emulab user community and useful to those projects that need both high capacity and large distance, as FAST does. This facility is expected to serve many more projects than the few that motivated it.
WAN in Lab is a key element in the overall process of research/in-lab development/experimental networks for field trials/production networks. We are engaged in this process on an ongoing basis to get the next generation(s) of protocols into production, in the service of science and engineering. We have been working with the HENP community and their partners to achieve the transition to production. We are partnering closely with SURFnet, Starlight and in the future UKlight to provide a global 10 Gbps UltraLight testbed. These activities and WAN in Lab complement and leverage each other.
2003 — 2007
Mabuchi, Hideo; Doyle, John
Activity Code: N/A
Complexity and Robustness in Quantum and Biomolecular Information Processing Systems @ California Institute of Technology
This is an interdisciplinary project of theoretical research on the design of quantum and biomolecular information processing systems. Specifically, the rate at which increases in the size/complexity of an information processing system can lead to improvements in robustness against environmental perturbations and systematic implementation errors is being investigated, starting with the utilization of model reduction methods from control theory to develop general methods for scrutinizing the robustness of large but finite quantum and biomolecular systems, and proceeding to detailed analyses of the efficiency of various known error-correction schemes. The overall goal is to be able to elucidate fundamental limits on the degree of redundancy required to achieve a given level of robustness. An integral part of this research is the development of a new course on multi-scale design of information processing systems.
2004
Doyle, John C
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Continued Support and Development of Sbml @ California Institute of Technology
DESCRIPTION (provided by applicant): Molecular biologists are increasingly realizing that computational modeling of sub-cellular mechanisms is crucial for making sense of human biology and advancing medical research in the context of the vast quantities of complex experimental data that are now being collected. The biology community needs agreed-upon information standards if models are to be shared, evaluated and developed co-operatively. The Systems Biology Markup Language (SBML) is becoming a de facto standard exchange format for representing formal qualitative and quantitative models of systems of biochemical reactions. The long-term objectives of this proposal are to support and further develop SBML and related software infrastructure for the benefit of the computational biology community. The specific aims are the following: 1. Work with the community of software developers and users to (a) continue evolving SBML and extending the language into SBML Level 3 with features developers have been requesting, and (b) formalize the SBML language development process and transfer its control to the community. 2. Maintain and extend existing software libraries for working with SBML with facilities for (a) reading and writing SBML Level 3, (b) reading model definitions in other formats including CellML, and (c) transforming model definitions into other formats such as CellML, MATLAB, Mathematica, HTML, and others. 3. Produce documentation, including a "best practices" guide to working with SBML, and perform training functions to help educate software developers and modelers in the use of SBML and the software tools available for working with the language.
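As a small illustration of the software infrastructure described above, reading an SBML model with the existing libSBML Python bindings takes only a few lines (a minimal sketch assuming the python-libsbml package is installed; the file name is hypothetical):

    import libsbml

    doc = libsbml.readSBMLFromFile("model.xml")    # hypothetical example file
    if doc.getNumErrors() > 0:
        doc.printErrors()                          # report parse or consistency problems
    else:
        model = doc.getModel()
        species = [model.getSpecies(i).getId() for i in range(model.getNumSpecies())]
        reactions = [model.getReaction(i).getId() for i in range(model.getNumReactions())]
        print("species:  ", species)
        print("reactions:", reactions)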
2006 — 2009
Doyle, John C
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Mathematical Foundations For Nonlinear, Stochastic & Hybrid Biochemical Networks @ California Institute of Technology
DESCRIPTION (provided by applicant): This proposal is for continued development and enhancement of existing and successful theory and software infrastructure developed at Caltech for the systems biology community, and builds on experience with SBML and engineering software. (1) Next-generation, multiscale, deterministic/stochastic simulation software. Computational models in biology are continually growing in complexity and size. Their accurate and effective simulation requires new algorithms and new software. Collaboration between the PI Doyle and Drs. Petzold (UCSB, creator of DASSL) and Dan Gillespie (creator of the stochastic simulation algorithm) has led to the development of combined deterministic/stochastic simulation algorithms that are much more efficient than existing stochastic algorithms, and can automatically determine the appropriate scale for different subsystems of a model. This program will support Dr. Gillespie's continued research and the implementation of new algorithms in production-quality open-source software modules that will be made widely available. (2) Extension of SOSTOOLS. Recent Caltech research has developed mathematics for analyzing models, supporting conclusions such as "this model cannot explain the data for any set of plausible parameters" and "this model is robust as parameters are varied." The theory builds on advances in several areas, including robust control and dynamical systems theory, computational complexity, real semi-algebraic geometry, semidefinite programming, and duality. The result of this work has been a new class of scalable algorithms for model analysis and (in)validation and iterative experimentation for large-scale, stochastic, nonlinear, nonequilibrium, hybrid (containing both continuous and discrete mathematics) networks with multiple time and spatial scales. The recent progress is implemented in SOSTOOLS, an open-source (GPL) MATLAB toolbox. This program will enhance and extend SOSTOOLS to exploit biology-specific structure, treat stochastic models to complement simulation, make connections with Savageau's S-system formalism, perform model (in)validation from data, and develop provably correct model reduction for nonlinear biochemical models. (3) Integration of stochastic simulation and SOS analysis. Analysis of complex stochastic biochemical networks will require a blend of simulation, SOS analysis and model reduction, and this program will create an integrated suite of software tools with rigorous theoretical foundations.
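For concreteness, the stochastic simulation algorithm credited to Dan Gillespie above is, in its simplest "direct method" form, only a few lines of code. The sketch below (an illustration under simplifying assumptions, not the proposed production software) simulates a birth-death reaction pair, 0 -> X at rate k1 and X -> 0 at rate k2 per molecule:

    import math
    import random

    def ssa_birth_death(k1=10.0, k2=0.1, x0=0, t_end=100.0, seed=1):
        """Gillespie direct method for 0 -> X (propensity k1) and X -> 0 (propensity k2 * x)."""
        random.seed(seed)
        t, x = 0.0, x0
        trajectory = [(t, x)]
        while t < t_end:
            a1, a2 = k1, k2 * x                   # reaction propensities
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += random.expovariate(a0)           # exponentially distributed waiting time
            x += 1 if random.random() * a0 < a1 else -1   # pick a reaction in proportion to propensity
            trajectory.append((t, x))
        return trajectory

    traj = ssa_birth_death()    # the copy number should fluctuate around k1 / k2 = 100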
2006 — 2007
Low, Steven; Doyle, John
Activity Code: N/A
Collaborative Research: Nets-Nbd: Optimization and Games in Inter-Domain Routing @ California Institute of Technology
0520349 0520318
Inter-domain routing can be viewed as a means to implement economic relationships among competing ISPs. Examination of the technical details of the BGP routing protocol alone is insufficient to understand the current behavior, or predict future requirements, of global connectivity without taking into account the dynamics of the economic relationships it implements.
A full account of the shortcomings of BGP, and proposals to modify it, must be developed rigorously with a clear understanding of the kinds of economic relationships and service models that it can and cannot implement well. The goal of this project is to develop a theoretical framework together with experimental capability to understand, predict, and design the interplay between economics and technologies that implement Internet connectivity. The principal investigators' (PIs) unique angle is an optimization perspective on inter-domain routing, in which profit maximization, physical connectivity, and AS pricing interact with routing decisions, peering structure, resource constraints, and traffic matrix. The PIs will develop mathematical models that capture the interactions between routing and economics, characterize the basic structures of BGP equilibrium and dynamics, discover and verify these properties in the operational Internet, and derive practical design and operation guidelines and algorithms.
Broader Impact: Through this research, the PIs will educate a new generation of network researchers with not only strong practical skills, but also the ability and the habit of applying rigorous mathematical techniques to solve a wide range of engineering problems. They can make a unique and broad impact on both academia and industry.
2008 — 2011
Doyle, John
Activity Code: N/A
Planning Future Research in Network Science and Engineering @ California Institute of Technology
Proposal Number: 0850898 PI: John Doyle Institution: University of Pennsylvania
The top-level goal of this project is to advance the developing relationship between "network science" and "network design" by contributing to the development of a cross-cutting research agenda in these areas. For this purpose, workshops and tutorials will be organized. Critical issues, open questions, and research challenges in the topic areas of (i) the state of network science, (ii) the relationship between network science, other theories, and network design, and (iii) identifying and enabling new research methodologies and tools for the field will be carefully examined. The potential of this project is not only to connect diverse and previously fragmented mathematical and application domains, but also to integrate education and research based on the theoretical insights themselves, creating layered real and virtual organizations with greater scope and evolvability. The success of this project should be both immediate and long-lasting, from practical network architecture design in all areas of engineering, to providing a much-needed conceptual framework for systems and synthetic biology, to radically transforming and unifying the teaching of systems and complexity.
2009 — 2013
Low, Steven; Murray, Richard (co-PI); Chandy, K. Mani; Doyle, John; Candes, Emmanuel
Activity Code: N/A
Netse: Large: a Theory of Network Architecture @ California Institute of Technology
This project is developing a theoretical foundation for the design of network architecture, which is essential to understanding highly evolved, organized, and complex networks, inspired by and with application to technological, biological, ecological, and social networks, and with strong connections to real-world data. Architecture involves the most universal, high-level, and persistent elements of organization, usually defined in terms of protocols, the rules that facilitate interaction between the diverse components of a network. Network technology promises to provide unprecedented levels of performance, efficiency, sustainability, and robustness in almost all technological, natural, and social systems. The "robust yet fragile" (RYF) feature of complex systems is ubiquitous, and a theoretical framework to manage the complexity/fragility spirals of our future infrastructures is critical.
The intellectual merit of this project is both theoretical and practical. A theory of architecture is crucial for the design of future networks and is at the heart of sustainability. Comparison of architectures from biology, ecology, and technology has identified a variety of common characteristic organizational structures. These observations will form the basis for a mathematical theory of architecture.
The broader impact of this project is through the application of network architecture design to multiple areas of engineering and through the development of a unified approach to teaching systems and complexity. The project involves a diverse collection of researchers, including women and underrepresented minorities. Research results are disseminated not only through domain-specific journals, but also in the broad-interest high-impact literature (e.g. Science, Nature, PNAS).
2014 — 2017
Doyle, John; Ho, Tracey
Activity Code: N/A
Nets: Small: Collaborative Research: Dynamic Forwarding and Caching For Data-Centric Networks: Theory and Algorithms @ California Institute of Technology
Two fundamental trends in networking are clearly visible. First, the bulk of network traffic today, and of its projected enormous growth, consists mainly of content disseminated to multiple users. Second, network content is accessed increasingly in mobile wireless environments with dynamic and unreliable channel conditions. Traditional network protocols, designed originally for point-to-point communication over static wired networks, are fundamentally ill suited for such scenarios. Motivated by these trends, this project will develop dynamic and distributed algorithms which can fully exploit network resources (both bandwidth and storage) for efficient and robust content dissemination under changing network conditions.
This project builds on recent active research efforts in data-centric networking, which places information content, rather than source-destination pairs, at the center of the network architecture. While there have been a number of significant results in data-centric networking research, the central problem of the joint design and optimization of dynamic caching and forwarding algorithms has yet to be thoroughly studied. This project will study the fundamental limits of caching and forwarding, as well as the design of practical and robust algorithms for optimizing the use of bandwidth and storage in data-centric content delivery. Unlike many existing works on centralized algorithms for static caching, this project will develop scalable, distributed, dynamic algorithms that can address large-scale caching and forwarding under changing content, user demands and network conditions.
To achieve this goal, the project will take two complementary approaches. The first approach is based on a stochastic model for distributed caching and forwarding recently developed by the PIs. This approach significantly expands on classical backpressure-based routing techniques to incorporate caching within a unified framework, leading to new algorithms which maximize user demand rate satisfied by the network. The second approach is based on a flow-based distributed convex optimization framework, in which content-specific routing and caching are carried out on a distributed node-by-node basis to minimize a global cost objective such as delay.
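The backpressure idea that the first approach expands on can be stated compactly; the sketch below shows only the classical max-weight forwarding rule (a simplified illustration, not the project's joint forwarding-and-caching algorithm), with hypothetical node and content names:

    # Classical backpressure rule: on each link, forward the content/commodity with
    # the largest positive queue-backlog differential. Toy sketch, illustration only.
    def backpressure_schedule(queues, links):
        """queues: {node: {commodity: backlog}}; links: iterable of (u, v) node pairs."""
        schedule = {}
        for u, v in links:
            # weight of commodity c on link (u, v) is Q_u[c] - Q_v[c]
            best = max(queues[u], key=lambda c: queues[u][c] - queues[v].get(c, 0), default=None)
            if best is not None and queues[u][best] - queues[v].get(best, 0) > 0:
                schedule[(u, v)] = best        # serve one unit of 'best' over (u, v)
        return schedule

    queues = {"a": {"video": 5, "web": 2}, "b": {"video": 1, "web": 4}}
    print(backpressure_schedule(queues, [("a", "b")]))   # -> {('a', 'b'): 'video'}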
This project addresses both practical and theoretical issues, and consists of the following main thrusts: (1) the design of jointly optimal forwarding and caching algorithms for minimizing delay; (2) the design of scalable, robust, hierarchical dynamic caching and forwarding algorithms which operate with dynamically adjusted name resolutions; (3) the development of algorithms which combine dynamic caching and forwarding with congestion control for fairness and enhanced performance; (4) the exploration of coding techniques in storage and transmission for obtaining practical advantages in performance and reliability, as well as for enabling the study of fundamental performance limits in caching and forwarding; (5) the development of low-complexity, dynamic forwarding and caching algorithms delivering lower user delay and greater resilience to multi-user interference and channel fading in mobile wireless environments; and (6) development of algorithms for querying and caching which lead to optimal decision making in a sensing context.
The broader significance of this work will include: (1) direct and long-term impact on network architectures for big data applications used in national security, commercial enterprise, scientific exploration and research, health services, and other important social projects; (2) impact on undergraduate and graduate education, with particular emphasis on involving female and minority students, through a planned course segment on 'theory and algorithms for data-centric networking' and active hands-on projects involving testbed investigation and validation; (3) enhancement of infrastructure for research and education through active partnering with other university departments, government research institutions and industry; and (4) broad dissemination to enhance scientific and technological understanding by participation in multi-disciplinary conferences and workshops, and exposure to broader media.
2015 — 2017
Doyle, John; Hucka, Michael
Activity Code: N/A
Eager: Cataloging Software Using a Semantic-Based Approach For Software Discovery and Characterization @ California Institute of Technology
When scientists need to find software for a task, they often use ad hoc methods such as asking their colleagues, searching the web, or using whatever others appear to be using. This unsystematic approach often results in unproductive effort and poorer scientific results because scientists cannot find the software that is best for their needs. If a comprehensive software index were available it would help enable more efficient use of existing software resources and tools and contribute to more efficient and effective investment in the scientific research itself.
Further, software is created and evolves too rapidly for humans to monitor; thus creation of the index should be automated. Also, while today's social-coding movement is putting more software into open-source repositories, thus offering greater opportunities to find and characterize software automatically, a significant obstacle remains: the lack of automated indexing methods that can produce results acceptable to humans. This proposed EAGER project seeks to investigate the use of innovative computing techniques for automating the creation of an index that can be used by scientists to find the software they need. Their test system (CASICS - Comprehensive and Automated Software Inventory Creation System) will be tested with an initial set of users and made available for public use.
In this project, the researchers will (1) extend an existing software ontology; (2) develop methods for source code analysis using the ontology; (3) adapt a repository crawler to apply the methods to projects in SourceForge and GitHub; (4) implement a browsing and search interface to the database of results; and (5) augment the search facility to use semantic similarity via the ontology. They will use the prototype system to explore variants of the code analysis methods, select the best, and assess the performance of inferring characteristics of software found in the repositories. For automated index creation to be feasible, software discovery and characterization algorithms need to improve, so that the index is complete and organized more meaningfully.
This project will explore the hypothesis that a deeper underlying knowledge structure, coupled with appropriate feature extraction and classification algorithms, can improve classification performance compared to past approaches. The use of ontologies to assist source code analysis has been explored in other work, but it has not been applied as proposed here. The project is expected to extend the state of the art in source code analysis and categorization.
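As a point of comparison for the ontology-based approach described above, the sketch below shows the kind of shallow bag-of-words baseline it is meant to improve on: TF-IDF vectors plus cosine similarity against hand-written category descriptions (everything here, including the categories and the repository text, is hypothetical and purely illustrative):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    categories = {
        "sequence alignment": "align dna rna protein sequences blast mapping reads genome",
        "image analysis": "microscopy image segmentation cell counting pixels staining",
        "network simulation": "packet network protocol simulator topology traffic latency",
    }
    repo_readme = "Fast short-read aligner that maps DNA sequencing reads to a reference genome."

    vectorizer = TfidfVectorizer()
    category_matrix = vectorizer.fit_transform(list(categories.values()))
    query_vector = vectorizer.transform([repo_readme])
    scores = cosine_similarity(query_vector, category_matrix).ravel()
    best = max(zip(scores, categories), key=lambda pair: pair[0])[1]
    print(best)   # -> "sequence alignment" for this toy input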
The central question in this project is whether the ontology-based descriptions of software produced by the classification methods can compare with human labeling. To address this, and in addition to the intellectual merit of the classification approach described above, the researchers have also proposed unique methods for empirically evaluating the results of the classification, as follows: (a) assess the overlap between the output of their methods with the classifications already present in GitHub and SourceForge - the two software catalogs they propose to mine, (b) develop a test system to perform double-blind evaluation with human judges on the software that is classified and (c) compare the search in CASICS to Google using a set of benchmark search queries that users in astronomy and systems biology would issue for various scenarios, identified through a survey.
2017 — 2020
Doyle, John
Activity Code: N/A
Ncs-Fo: Collaborative Research: Integrative Foundations For Interactions of Complex Neural and Neuro-Inspired Systems With Realistic Environments @ California Institute of Technology
This project will integrate the capabilities of deep learning networks into a biologically inspired architecture for sensorimotor control that can be used to design more robust platforms for complex engineered systems. Studies of the flexibility and contextual aspects of sensorimotor planning and control will extend existing paradigms for human-robot interactions and serve as the foundation for creating personal assistants that are able to operate in natural settings. Growing understanding of how these layered architectures are organized in the brain to produce highly robust, flexible, and efficient behavior will have many applications to rapidly evolving technologies in complex environments, including the Internet of Things, autonomous transportation, and sustainable energy networks.
The nervous system is a layered architecture, seamlessly integrating high-level planning with fast lower level sensing, reflex, and action in a way that this project aims to both understand more deeply and mimic in advanced technology. The central goal is to develop a theoretical framework for layered architectures that takes into account both system level functional requirements and hardware constraints. There are both striking commonalities and significant differences between biology and technology in using layered architectures for active feedback control. The most salient and universal hardware constraints are tradeoffs between speed, accuracy, and costs (to build, operate, and maintain), and successful architectures cleverly combine diverse components to create systems that are both fast and accurate, despite being built from parts that are not. Recent progress has made it possible to integrate realistic features and constraints for sensorimotor coordination in a coherent and rigorous way using worst-case L-infinity bounded uncertainty models from robust control, but much remains to explore to realize its potential in neuroscience and neuro-inspired engineering. Another application of this framework will be to software defined networking, which explicitly separates data forwarding and data control, and provides an interface through which network applications (such as traffic engineering, congestion control and caching) can programmatically control the network. This makes it a potential "killer app" for a theory of integrated planning/reflex layering; research collaborators will be eager to deploy new protocols on testbeds and at scale.
2021
Doyle, John C; Lois, Carlos (co-PI); Lubenov, Evgueniy Vassilev; Siapas, Athanassios
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Stability and Robustness of Hippocampal Representations of Space @ California Institute of Technology
PROJECT SUMMARY How does the brain balance the need to preserve prior knowledge with the necessity to continuously learn new information? The tradeoff between stability and plasticity is inherent in both biological and artificial learning systems constrained by finite resources and capacity. The hippocampus is a brain region critical for memory formation and spatial learning, which can provide a powerful experimental system for characterizing this tradeoff. The role of the hippocampus in spatial cognition is supported by the finding that pyramidal neurons in this area (place cells) fire in specific locations in an environment (place fields). The population of place cells active in an environment is believed to form a neural representation or cognitive map of that environment. Spatial learning is critical for survival and involves two competing constraints: representations of space must be plastic to enable fast learning of new environments and changes in behavioral contingencies, and stable over time to enable recognition of familiar environments, reliable navigation, and leveraging of previous learning. How do these competing constraints affect the stability of place fields across time? The experimental characterization of the long-term stability of spatial representations in the hippocampus has been challenging as it requires tracking the activity of multiple place cells across extended periods of time (days to weeks). We propose to use novel approaches in large-scale electrophysiology and imaging in behaving rodents to characterize which neurons change their spatial tuning and how these changes depend on behavior. Furthermore, we will use recordings and circuit perturbations to characterize the activity patterns that predict changes in tuning stability. Our analysis will be carried out in the context of a theoretical framework for understanding the interplay between plasticity and stability of hippocampal representations. Characterizing the evolution of neural representations is of fundamental importance in understanding how information is maintained across brain circuits and how such maintenance is perturbed in brain disorders.
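As a cartoon of the quantities described above (illustrative assumptions only, not the proposed analysis pipeline), a place field is often summarized as a Gaussian tuning curve over position, and across-day stability as the correlation of a cell's tuning curves between sessions:

    # Toy sketch: idealized Gaussian place field on a 1-D track and a simple
    # across-session stability measure (tuning-curve correlation).
    import numpy as np

    def place_field(positions, center, width=10.0, peak_rate=20.0):
        """Firing rate (Hz) of an idealized place cell along a linear track (cm)."""
        return peak_rate * np.exp(-(positions - center) ** 2 / (2 * width ** 2))

    track = np.linspace(0, 200, 201)                 # 2 m track, 1 cm bins
    day1 = place_field(track, center=80.0)
    day2 = place_field(track, center=85.0)           # field shifted by 5 cm on day 2
    stability = np.corrcoef(day1, day2)[0, 1]        # close to 1.0 => stable field
    print(f"tuning-curve correlation: {stability:.2f}")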