1995 — 2001
Kennedy, Ken; Symes, William (co-PI); Burrus, C. Sidney (co-PI); Zwaenepoel, Willy; Vardi, Moshe
N/A. Activity Code Description: No activity code was retrieved; click on the grant title for more information.
CISE Research Infrastructure: Multi-Processor Cluster Computing (a Research Infrastructure Proposal) @ William Marsh Rice University
9502791 Zwaenepoel
This award provides support for the acquisition of a cluster of shared-memory multiprocessor nodes interconnected by a high-speed network to support research in parallel programming systems, and in algorithms and applications from a wide variety of disciplines. The infrastructure will be used as a research vehicle for computer systems designers as well as computational scientists. Research projects to be supported include work on compilers and runtime systems for multiprocessor clusters, high-performance I/O, and performance visualization, as well as algorithms and applications in mixed-integer programming, molecular dynamics, reservoir simulation, genetic linkage analysis, and seismic modeling. The infrastructure will also be used in new graduate and undergraduate courses in multidisciplinary design optimization and parallel computing.
1997 — 1998
Vardi, Moshe
Workshop On Application of Tree Automata in Rewriting, Logic and Programming in Frankfurt, Germany, October 20-24, 1997 @ William Marsh Rice University
This award provides travel support for 10 U.S. computer scientists to attend a workshop on the applications of tree automata held in Dagstuhl, Germany. The purpose of the workshop is to bring together researchers from several communities (automated deduction, term rewriting, computer-aided verification, and compiler construction) who have been utilizing tree automata, in an attempt at cross-fertilization of ideas among the disciplines and at forging new research directions.
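As a concrete, entirely illustrative sketch of the kind of object the workshop centers on, the snippet below runs a deterministic bottom-up tree automaton over Boolean-expression trees; the alphabet, states, and transition table are hypothetical toy choices, not anything from the workshop itself.

```python
# Deterministic bottom-up tree automaton over Boolean-expression trees.
# Trees are nested tuples ("and", l, r), ("or", l, r), ("not", c),
# or the leaf symbols "0"/"1".  States are the booleans False/True.

# Transition table: (symbol, tuple of child states) -> state.
DELTA = {
    ("0", ()): False,
    ("1", ()): True,
    ("not", (False,)): True,
    ("not", (True,)): False,
}
for a in (False, True):
    for b in (False, True):
        DELTA[("and", (a, b))] = a and b
        DELTA[("or", (a, b))] = a or b

def run(tree):
    """Return the state the automaton reaches at the root."""
    if isinstance(tree, str):                    # leaf symbol
        return DELTA[(tree, ())]
    op, *children = tree
    return DELTA[(op, tuple(run(c) for c in children))]

def accepts(tree):
    """Accepting states: {True}."""
    return run(tree) is True

print(accepts(("and", "1", ("not", "0"))))       # True
```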
1997 — 2001
Vardi, Moshe
Variable-Confined Logics in Finite-Model Theory @ William Marsh Rice University
Finite-model theory is the study of first-order logic and its extensions on classes of finite structures. In recent years, several extensions of first-order logic have been investigated in the context of finite-model theory. Fixpoint logic and the finite-variable infinitary logic IL have turned out to be of particular importance. The study of fixpoint logic generated interactions with both database theory and complexity theory, while the finite-variable infinitary logic IL proved to be a useful tool for analyzing the expressive power of fixpoint logic. In addition to being a proper extension of fixpoint logic, IL enjoys a game-theoretic characterization and possesses interesting structural properties, such as the 0-1 law. This project studies variable-confined logics, that is, logics with a fixed finite number of variables. This is motivated by the fact that the number of variables is considered a logical resource in descriptive complexity theory and also has computational implications in the evaluation of database queries and in program verification. Various aspects of variable-confined first-order, fixpoint, and infinitary logics are studied, ranging from expressive power and algorithms for deciding truth and validity to asymptotic probabilities.
2000 — 2005
Felleisen, Matthias (co-PI); Vardi, Moshe; Barland, Ian
Educational Innovation: "Collaborative Research:" Integrating Logic in the Computer Science Curriculum @ William Marsh Rice University
Institution: William Marsh Rice University
Proposal Number: EIA 0086264
PI: Moshe Y. Vardi
Title: Collaborative Research: Integrating Logic in the Computer Science Curriculum
This CISE Educational Innovation (EI) proposal requests funds to develop a series of modules that seamlessly integrate logic and logic-based software tools into existing, widely taught computer science (CS) courses. Few undergraduate computer science curricula prepare students adequately in logic. The typical student sees a few weeks of truth tables and propositional logic in a discrete mathematics course. These modules would allow CS departments to easily modify their curricula to rectify this situation. By supplying modules, complete with lecture notes, presentations, problem sets, and tools, the investigators hope to facilitate curricular treatment of applied logic at all levels of college education, particularly in CS departments with scarce resources. This project has the potential to have a major impact on the way that CS is taught.
2000 — 2008
Vardi, Moshe
Developing Linear-Time Model-Checking Technology @ William Marsh Rice University
Proposal Number: CCR-9988322
PI's Name: Vardi, Moshe Y.
Institution: Rice University
Title: Developing Linear-Time Model-Checking Technology
Model checking is a key technology in formal verification. As it is generally practiced today, model checking is basically an efficient symbolic search procedure based on fixed-point computation that checks whether a model of the design satisfies given CTL specifications. In spite of the phenomenal success of CTL-based model checking, CTL suffers from several fundamental limitations as a specification language, all stemming from the fact that CTL is a branching-time formalism: the language is unintuitive and hard to use, it does not lend itself to compositional reasoning, and it is fundamentally incompatible with dynamic validation. In contrast, the linear-time framework is expressive and intuitive, supports compositional reasoning and semi-formal verification, and is amenable to combining enumerative and symbolic search methods. Surprisingly, the linear-time framework, even though it preceded CTL and does not suffer from its weaknesses, has not enjoyed the same intensive research focus as CTL, resulting in a much less mature technology. This project builds upon the theoretical foundations of the linear-time framework that have been laid over the last 20 years, and focuses on generating a body of knowledge that industry can convert into a robust technology.
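The "symbolic search procedure based on fixed-point computation" mentioned above can be illustrated on an explicit state set (real tools operate on symbolic BDD representations); the states, transitions, and target labeling below are a hypothetical toy example, not data from the project.

```python
def ef(states, trans, target):
    """Least fixed point of Z = target ∪ pre(Z): the set of states from
    which some state in `target` is reachable (the CTL property EF)."""
    z = set(target)
    while True:
        # pre(Z): states with at least one successor already in Z
        pre = {s for s in states if any(t in z for t in trans.get(s, ()))}
        if pre <= z:          # fixed point reached
            return z
        z |= pre

# Hypothetical toy Kripke structure.
states = {0, 1, 2, 3}
trans = {0: [1], 1: [2], 2: [2], 3: [0]}
print(sorted(ef(states, trans, {2})))   # [0, 1, 2, 3]: every state reaches 2
```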
2000 — 2005
Vardi, Moshe
Collaborative Research: Constraint Satisfaction, Database Query Evaluation, and Information Integration @ William Marsh Rice University
This interdisciplinary research is carried out in a collaboration between Professors Phokion Kolaitis at the University of California, Santa Cruz and Moshe Vardi at Rice University. Constraint satisfaction and conjunctive query evaluation are two fundamental and ubiquitous problems in artificial intelligence and database systems, respectively. Since these problems are known to be computationally intractable in the worst case, researchers in artificial intelligence and database systems have sought to discover tractable cases of constraint satisfaction and conjunctive query evaluation, as well as to design heuristic algorithms for solving these problems. Thus far, however, these studies have been carried out largely in parallel, with little or no interaction between the two areas. Recent research has established that there are strong and exact connections between constraint satisfaction and conjunctive query evaluation. The main goal of this project is to further explore the connections between constraint satisfaction and conjunctive query evaluation, and to examine the applicability of techniques developed for each of these problems to the other problem. To this effect, a novel game-theoretic framework is being developed to unify seemingly unrelated results, identify additional tractable cases of these two problems, and design heuristics for the general case. Moreover, algorithms and heuristics developed by the artificial intelligence community are being evaluated with regard to their efficacy in conjunctive query evaluation, and vice versa. Results obtained in the course of this project will enhance the interaction between artificial intelligence and database systems, and will advance knowledge transfer between these two areas.
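The "strong and exact connections" referred to above rest on the fact that constraint satisfaction and conjunctive-query evaluation can both be cast as the homomorphism problem between two relational structures. A minimal brute-force sketch, using a hypothetical toy instance (3-coloring a triangle):

```python
from itertools import product

def homomorphism_exists(vars_, dom, constraints):
    """Brute-force search for an assignment of values in `dom` to `vars_`
    satisfying every constraint, given as (scope, allowed tuples).
    CSP solving and conjunctive-query evaluation are both instances of
    this one problem."""
    for assign in product(dom, repeat=len(vars_)):
        env = dict(zip(vars_, assign))
        if all(tuple(env[v] for v in scope) in allowed
               for scope, allowed in constraints):
            return env          # a satisfying assignment / homomorphism
    return None

# Hypothetical instance: 3-color a triangle (adjacent vertices differ).
diff = {(a, b) for a in range(3) for b in range(3) if a != b}
triangle = [(("x", "y"), diff), (("y", "z"), diff), (("x", "z"), diff)]
print(homomorphism_exists(["x", "y", "z"], range(3), triangle) is not None)  # True
```

The same call, viewed database-side, evaluates the Boolean conjunctive query E(x,y) ∧ E(y,z) ∧ E(x,z) over the relation `diff`.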
2000 — 2004
Osherson, Daniel; Cook, William; Ensor, Katherine (co-PI); Vardi, Moshe; Hartley, Peter (co-PI)
Constructing Probability Models For Large Corpora of Well-Informed But Probabilistically Incoherent Judgments @ William Marsh Rice University
This interdisciplinary team (Psychology, Computational and Applied Mathematics, Statistics, Economics, Computer Science) has the goal of overcoming weaknesses in the inference engines of expert systems, decision-support systems, and knowledge-discovery systems, which often operate in environments with uncertain characteristics. Current techniques rely on Bayesian theory and do not perform well in situations in which conditional independence cannot be guaranteed, or in which the probabilities provided by experts may not be sound. Since inferences based on probability calculations offer the best guarantee of sensible assessments of chance, efficient schemes have been developed for computing probabilities over complex event spaces. Underlying all such algorithms is a "probability model," i.e., a representation of the chances of various combinations of events. In turn, probability models are constructed from an initial set of facts about uncertainty in the environment. These facts can sometimes be extracted from databases, using relative frequency as probability. Often, however, the needed probabilities must be obtained from an expert, who responds intuitively. Reliance upon experts raises the specter of incoherence, i.e., judgments that cannot be reconciled with any probability model at all. Indeed, maintaining coherence across a large set of judgments is both computationally and psychologically taxing, and seldom achieved. Incoherent judgment on the part of a single judge is compounded when it is desired to integrate the opinions of several judges. To exploit potentially incoherent and inconsistent judgments, special optimization algorithms are used to construct a compact probability model that best approximates all the judgments in play. The algorithms are tested by applying them to a body of expert opinion in some complex domain. Development of the algorithms will facilitate the automatic construction of artificial expert systems.
Whenever a body of expert judgment can be assembled, the algorithms can be applied to create a compact representation of the collective wisdom of the judges. The results of the theoretical research on the probability models will be applied to the analysis of air quality policy in Houston. A large probabilistic database will be established by culling measurements from air quality control stations around the city, expert judgments in environmental science and medicine, and the output of econometric and environmental models of the region. The project has the potential to have a significant intellectual impact in the probability, applied learning, and data mining research communities, and also to provide a useful tool to environmental researchers and Houston decision-makers.
http://www.ruf.rice.edu/~osherson
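The optimization step described above, fitting a coherent probability model to incoherent judgments, can be sketched as a least-squares fit over a tiny event space. The event names and judged values below are hypothetical, and the naive numeric gradient descent stands in for the project's specialized optimization algorithms.

```python
import math

# Atoms of a tiny event space over two binary events A and B.
ATOMS = [(0, 0), (0, 1), (1, 0), (1, 1)]
EVENTS = {                       # event name -> atoms it contains
    "A":   {(1, 0), (1, 1)},
    "B":   {(0, 1), (1, 1)},
    "A&B": {(1, 1)},
}
# Hypothetical incoherent judgments: coherence would force
# P(A&B) >= P(A) + P(B) - 1 = 0.4, but the judge said 0.2.
JUDGED = {"A": 0.7, "B": 0.7, "A&B": 0.2}

def probs(theta):
    """Softmax parametrization keeps the model a genuine distribution."""
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [x / s for x in z]

def loss(theta):
    """Squared error between the model's event probabilities and the judgments."""
    p = dict(zip(ATOMS, probs(theta)))
    return sum((sum(p[a] for a in EVENTS[e]) - JUDGED[e]) ** 2
               for e in JUDGED)

def fit(steps=2000, lr=0.5, eps=1e-6):
    """Plain gradient descent with central-difference gradients."""
    theta = [0.0] * len(ATOMS)
    for _ in range(steps):
        grad = []
        for i in range(len(theta)):
            up, dn = theta[:], theta[:]
            up[i] += eps
            dn[i] -= eps
            grad.append((loss(up) - loss(dn)) / (2 * eps))
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return dict(zip(ATOMS, probs(theta)))

model = fit()                    # coherent by construction
fitted = {e: sum(model[a] for a in EVENTS[e]) for e in JUDGED}
```

The fitted model is a genuine probability distribution that approximates the incoherent judgments as closely as the laws of probability permit.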
2002 — 2005
Kennedy, Ken; Johnson, Don (co-PI); Zwaenepoel, Willy (co-PI); Vardi, Moshe; Mellor-Crummey, John (co-PI)
MRI: Acquisition of CITI Terascale Cluster (CTC) @ William Marsh Rice University
0216467
Moshe Y. Vardi
Don H. Johnson; Ken W. Kennedy; John M. Mellor-Crummey; Willy Zwaenepoel
MRI: Acquisition of CITI Terascale Cluster (CTC)
This proposal aims to support research on scalability to thousands of processors, a goal that requires access to experimental computational facilities of sufficient size to establish that solutions will scale to large systems. A high-performance computational cluster with a peak performance of approximately one teraflop, supporting both compute- and data-intensive science and engineering, will enable researchers to make fundamental advances in diverse areas such as biochemistry, biology, chemistry, computational mathematics, computer science and engineering, earth science, economics, physics, political science, and psychology. Experiments planned include:
a. Scalability of compiler techniques for systems with hundreds of processors and deep memory and communication hierarchies;
b. Development, simulation, and testing of scalable Web services on hundreds of processors;
c. Simulations of ad hoc multihop wireless networks scaling to thousands of nodes;
d. Scalable algorithms for Monte-Carlo studies of the physics of heavy ion collisions;
e. Design and evaluation of scalable optimization algorithms based on component frameworks;
f. Extraction and analysis of data on hundreds of millions of international events, to better predict and understand international conflicts (extending the Kansas Data System); and
g. Scalability tests and practical application of new algorithms for modeling and simulation of biomolecular interactions using several thousand flexibility parameters.
By integrating the equipment into the existing curriculum, the educational impact is expected to be large, going beyond a course in parallel programming. Several programs addressing diversity are already in place.
2003 — 2007
Vardi, Moshe
Automata-Theoretic Approach to Design Verification @ William Marsh Rice University
The objective of the proposed research is the development of algorithmic methods for design verification. Today's rapid development of complex and safety-critical systems requires reliable verification methods. Formal verification is the study of algorithms and structures applicable to the verification of hardware and software designs. It combines theoretical and experimental aspects. In the last few years, this area has seen a dramatic expansion of activities. Verification tools are incorporated into industrial development of new designs, forming an active and exciting area of research where theory and practice stimulate each other.
The intellectual merit of this project is the application of the automata-theoretic approach to design verification, which uses the theory of automata as a unifying paradigm for design specification, verification, and synthesis. The automata-theoretic perspective considers the relationships between designs and their specifications as relationships between languages. By translating design and logical specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of designs with respect to their specifications can be reduced to questions such as non-emptiness and containment of automata. The automata-theoretic approach separates the logical and the combinatorial aspects of reasoning about systems. The translation of specifications to automata handles the logic and shifts all the combinatorial difficulties to automata-theoretic problems, yielding clean and asymptotically optimal algorithms. Furthermore, automata are very helpful for implementing temporal-logic based verification methods, and are the key to techniques such as on-the-fly verification that help cope with the "state-explosion" problem. Automata-theoretic methods have been implemented in both academic and industrial automated-verification tools.
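The reduction described above, from correctness questions to automata questions, can be sketched for the non-emptiness check: a design violates a linear-time specification exactly when the product of the design with a Büchi automaton for the negated specification contains a reachable accepting cycle (a "lasso"). The graph below is a hypothetical toy product automaton, not anything from the project.

```python
def reachable(graph, sources):
    """All states reachable from `sources` by graph search."""
    seen, stack = set(sources), list(sources)
    while stack:
        s = stack.pop()
        for t in graph.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def nonempty(graph, init, accepting):
    """Büchi non-emptiness: some accepting state is reachable from the
    initial state and lies on a cycle through itself."""
    for a in accepting & reachable(graph, {init}):
        if a in reachable(graph, set(graph.get(a, ()))):
            return True          # reachable accepting lasso: counterexample
    return False

# Hypothetical product automaton: 0 -> 1 -> 2 -> 1.
print(nonempty({0: [1], 1: [2], 2: [1]}, 0, {2}))   # True
```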
Many questions in the theory of automata on infinite objects are still open, and more fruitful applications of automata theory in design verification are possible. This project aims at solving several automata-theoretic problems with clear application to design verification, and at developing automata-based verification methodologies. The broad impact of this project is its contribution to the effort of developing and improving formal verification methods, constituting an additional step toward formal verification of industrial, real-life designs.
2006 — 2011
Kavraki, Lydia; Vardi, Moshe
CSR/EHS: A Robotics-Inspired Approach for the Verification of Hybrid Systems @ William Marsh Rice University
Hybrid systems play an increasingly important role in transportation networks, robotics, and fields such as medicine and biology. Today hybrid systems are found not only in sophisticated embedded controllers used in the airplane industry but also in medical devices that monitor serious health conditions. As hybrid systems are often part of devices operating in safety-critical situations, the verification of their safety properties becomes increasingly important. A hybrid system is considered safe if it is not possible for the system, during its evolution, to enter an unsafe state starting from an initial safe state. This project develops modeling algorithms and analysis techniques for the verification of safety properties of hybrid systems. Its main goal is the design and implementation of a probabilistically complete framework for computing trajectories that take the system from a safe to an unsafe state, or witness trajectories. Existence of a witness trajectory indicates that the hybrid system is not safe. When a witness trajectory is not found, the repeated application of the proposed methodology can increase the user's confidence in the safety of the system. Since hybrid systems have discrete and continuous aspects of control, a witness trajectory consists of one or more continuous parts interleaved with discrete transitions. This project blends in a novel way search methods for continuous spaces inspired by research in robot motion planning with search techniques for discrete spaces and formal tools. The problem addressed is of central importance for the creation of high-confidence hybrid and embedded systems with reliability guarantees.
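A minimal sketch of the motion-planning-inspired search for a witness trajectory, on a purely continuous toy system (the hybrid setting adds discrete transitions between modes); the dynamics, bounds, and unsafe set below are all hypothetical stand-ins, in the spirit of RRT-style planners.

```python
import random

def step(x, u, dt=0.1):
    """Toy continuous dynamics: a velocity-controlled point."""
    return (x[0] + dt * u[0], x[1] + dt * u[1])

def find_witness(start, unsafe, iters=5000, seed=0):
    """Grow a tree of simulated states from `start`; a branch that
    enters the unsafe set is a witness trajectory."""
    rng = random.Random(seed)
    parent, nodes = {start: None}, [start]
    for _ in range(iters):
        goal = (rng.random(), rng.random())            # sample the state space
        near = min(nodes, key=lambda n: (n[0] - goal[0]) ** 2
                                        + (n[1] - goal[1]) ** 2)
        dx, dy = goal[0] - near[0], goal[1] - near[1]
        norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        new = step(near, (dx / norm, dy / norm))       # steer toward sample
        parent[new] = near
        nodes.append(new)
        if unsafe(new):                                # witness found:
            path = [new]                               # walk back to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                                        # inconclusive, not a proof

unsafe = lambda x: x[0] > 0.5 and x[1] > 0.5           # toy unsafe region
witness = find_witness((0.0, 0.0), unsafe)
```

Note the asymmetry the abstract relies on: a returned path is a concrete demonstration of unsafety, while `None` only fails to find one.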
2006 — 2009
Vardi, Moshe
SoD-HCER: A Theory of Automated Design @ William Marsh Rice University
ABSTRACT
0613889
Moshe Y. Vardi
William Marsh Rice University
SoD-HCER: A Theory of Automated Design
As computerized systems are becoming larger, more complex, and increasingly distributed, an increasing portion of the design effort goes into validation and verification. There is a growing need for formal methods that guarantee system reliability, correctness, and efficiency by design. This project will address this challenge by contributing to the establishment of a theory of automated design of computing systems. The focus of this project is on moving algorithmic techniques for assertion-based automated verification into assertion-based automated design, building on emerging standard temporal assertion languages such as OVL, PSL, and SVA. The new techniques will enable the development of systems of higher quality within shorter design cycles and with lower costs. The vision for Science of Design underlying this proposal is that of design automation, in which the process of converting formal specification to implementation is, to a major extent, automated. The implication is that a major portion of the manual design effort should go into the development of high-level specifications, since much of the implementation effort can then be automated. The ultimate goal of this project is a demonstrable improvement in design productivity and quality.
2007 — 2010
Vardi, Moshe
An Automata-Theoretic Approach to Design Synthesis @ William Marsh Rice University
TF07: An Automata-Theoretic Approach to Design Synthesis
As computerized systems are becoming larger, more complex, and increasingly distributed, a larger and larger portion of the design effort goes into validation and verification. There is a growing need for formal methods that guarantee system reliability, correctness, and efficiency by design. The investigators will address this challenge by contributing to the establishment of a theory of automated design synthesis of computing systems. The new techniques will enable the development of systems of higher quality within shorter design cycles and with lower costs. The goal of the project is the development of a fundamental theory for automated system design synthesis. The ultimate goal is a demonstrable improvement in design productivity.
The focus of this project will be the development of algorithmic tools for assertion-based design synthesis. The investigators will develop an automata- and game-theoretic approach to assertion-based intentional system design. While the main focus of games and automata has traditionally been on finite objects (or very simple infinite ones), it is necessary to extend this approach to the more complex situations that arise in most practical applications, and for which the method is not yet adequately developed or exploited. The intellectual merit of this project is the interplay between games, automata, and logic. The aim is to make fundamental progress in this area, directed at the development of automated design techniques. These are crucial for the development of reliable, robust, and scalable computing systems.
2010 — 2014
Kavraki, Lydia; Vardi, Moshe
SHF: Small: A Synergistic Multi-Layered Approach for Falsification of Specifications for Hybrid Systems @ William Marsh Rice University
Hybrid systems, which combine discrete and continuous dynamics, provide sophisticated mathematical models for automated highway systems, air traffic control, biological systems, and other applications. A key feature of such systems is that they are often deployed in safety-critical scenarios, and hence designing such systems with provable guarantees is very important. This is usually done through analysis of such systems with regard to a given set of safety properties that assert that nothing 'bad' happens during the operation of the system. As more complex hybrid systems are considered, limiting safety properties to a set of unsafe states, as in current methods, considerably restricts the ability of designers to adequately express the desired safe behavior of the system. To allow for more sophisticated properties, researchers have advocated the use of linear temporal logic (LTL), which makes it possible to express temporal safety properties. This project develops algorithmic tools for safety analysis of embedded and hybrid systems operating under the effect of exogenous inputs and for LTL specifications. The problem addressed is the following: given a hybrid system and a safety specification described using LTL, can a feasible trajectory be constructed for the system that violates the specification, when such a trajectory exists? This is called the falsification problem. The broader impact of the project is realized through course development, involvement of undergraduate, graduate and postdoctoral students in research activities, efforts to mentor underrepresented groups, and dissemination of concepts through educational software developed at Rice.
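The falsification problem stated above can be sketched in miniature: simulate a toy system under randomly sampled exogenous inputs and monitor each trace against a temporal safety property of the form "always p". Everything here (the plant, the input bounds, the property) is a hypothetical stand-in for the project's setting, and a failure to falsify proves nothing about safety.

```python
import random

def simulate(inputs, x0=0.0):
    """Toy discrete-time plant: a leaky integrator driven by an
    exogenous input sequence."""
    trace, x = [x0], x0
    for u in inputs:
        x = 0.9 * x + u
        trace.append(x)
    return trace

def always(pred, trace):
    """The LTL safety pattern 'G pred', checked on a finite trace."""
    return all(pred(x) for x in trace)

def falsify(prop, trials=1000, horizon=50, seed=1):
    """Search over random input signals for a trace violating `prop`.
    Success yields a concrete counterexample; failure is inconclusive."""
    rng = random.Random(seed)
    for _ in range(trials):
        inputs = [rng.uniform(-0.2, 0.2) for _ in range(horizon)]
        trace = simulate(inputs)
        if not prop(trace):
            return inputs, trace
    return None

spec = lambda trace: always(lambda x: abs(x) < 0.5, trace)
counterexample = falsify(spec)
```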
2010 — 2011
Vardi, Moshe
Support For the 2010 Federated Logic Conference @ William Marsh Rice University
Support for Federated Logic Conference
The Federated Logic Conference (FLoC) is a back-to-back event of all the major international conferences in the area of Formal Methods. These include: the Conference on Computer-Aided Verification (CAV); the International Conference on Logic Programming (ICLP); the International Joint Conference on Automated Reasoning (IJCAR); the IEEE Symposium on Logic in Computer Science (LICS); the Conference on Rewriting Techniques and Applications (RTA); and the International Conference on Theory and Applications of Satisfiability Testing (SAT).
While the participating conferences are normally held individually by their own conference organizations on an annual basis, they all agree to this joint, federated event held every three to four years. The number of participants is near one thousand, and a third to one half of the participants are expected to be young researchers. To ensure that US-based students are able to attend, this grant provides travel support. The Federated Logic Conference brings together these major events in Formal Methods, ranging from theory to applications in computer science and software engineering. The support is provided through cooperation of the CISE directorate's computer science theory and software science/engineering programs, as well as the MPS directorate's mathematical sciences division. Support for these foundational topics is important to enhance research in many areas, including the security, privacy, usability, and reliability of computing systems. Formal Methods have emerged as one of the primary approaches toward that goal. A federated conference provides an opportunity for accelerating research via synergy between the different strands of research in Formal Methods. It is also an outstanding training opportunity for young researchers, as it offers them a grand view of the whole area.
2010 — 2014
Vardi, Moshe; Johnson, Don (co-PI); Burrus, C. Sidney (co-PI); Baraniuk, Richard; Embree, Mark (co-PI)
Collaborative Research: CI-TEAM Implementation Project: The Signal Processing Education Network @ William Marsh Rice University
This project addresses a crisis throughout engineering and cyberinfrastructure education: decreasing enrollments, underpreparation and disengagement of students, and pressure on faculty to cover increasing volumes of material. Curricula have become stove-piped and disconnected, in spite of research indicating that science and engineering education best resonates with women and underrepresented minority students when clear connections are drawn to transformative applications and other fields of study. Industry routinely lobbies for better engineering graduates who are at ease when collaborating on teams and eager to attack hands-on design challenges.
The academic/industrial/professional society partnership between Rice University, Georgia Institute of Technology, Rose-Hulman Institute of Technology, the University of Texas at El Paso, National Instruments, Texas Instruments, Hewlett-Packard, and the Institute for Electrical and Electronics Engineers Signal Processing Society directly attacks these issues by aiming to revolutionize the way they teach and learn about cyberinfrastructure. They are guided by a common vision: to prepare the cyberinfrastructure leaders of tomorrow, to break away from the traditional textbook, lecture, homework-based approach to education, and to build a new framework where a vibrant network of educators, students, and field practitioners continually interact, collaborate, connect, and explore interactive content.
The innovative aspects and scientific merits of this collaborative project lie in a new approach to building and sustaining virtual educational communities around interactive content, and in applying the results to the full spectrum of engineering education venues: university undergraduate and graduate courses, industrial training and continuing education, just-in-time on-the-job learning, and high-school laboratories. The research focuses on one strategic discipline in engineering, signal processing, and involves and balances education, community development, technology development, marketing and business planning, and impact assessment. The partnership is:
1. Implementing a light-weight Technology Framework that enables faculty and student users to exploit and expand upon the existing signal processing education content;
2. Building a signal processing Education Network of champions from faculty, students, and industry leaders nationwide that continually expands, improves, and diversifies the materials, and that promotes the use of the framework both at network member institutions and at institutions in the wider engineering education community;
3. Assessing the effectiveness of the framework and network for accelerating adoption and use, as well as the value of the mentoring and support provided by the network of champions;
4. Widely disseminating the results and lessons learned.
Broader impacts of this research include the development of people-resources and technologies that will substantially increase the performance and capabilities of engineering educators, effectively opening up engineering education for motivated self-learners in all parts of the nation as well as the world. In particular, education in digital signal processing and related technologies is critical in sustaining many high-tech industries. Finally, digital signal processing educators, practitioners and students will be brought together to form dynamic knowledge sharing communities that greatly impact education not only on their home campuses but around the world.
2010 — 2013
Vardi, Moshe
EAGER: Automated Synthesis for System Design @ William Marsh Rice University
Synthesis is the process of computing an implementation from a specification of the desired behavior, performance, and security and privacy properties. This ideal form of system design has been a long-standing dream in computer science. The goal of this project is to realize the dream of synthesis, in the same way that current tools for program analysis and model checking realize the dream of verifying program correctness. This is feasible today due to the enormous computing power of today's platforms and due to the recent significant technical advances in the underlying technologies for program analysis and verification. Synthesis, supported by powerful computational tools, and integrated into system design, can have a transformative effect by enabling the construction of more complex and more robust systems than are currently possible or cost effective.
The focus of this project is the development of algorithmic tools for assertion-based design synthesis. We will develop an automata- and game-theoretic approach to assertion-based intentional system design. The intellectual merit of this project is the interplay between games, automata, and logic. Based on deep theoretical foundations, prototype synthesis tools will be developed. These are crucial for the development of reliable, secure, and scalable computing systems.
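The game-theoretic core of such synthesis can be sketched as solving a safety game: compute, by a greatest fixed point, the states from which the system player can keep the play inside a safe set no matter what the environment does. The arena below is a hypothetical toy example, not anything from the project.

```python
def safety_winning_region(sys_states, moves, safe):
    """Greatest fixed point of a safety game: repeatedly discard states
    from which safety cannot be enforced.  At a system state the player
    needs SOME move staying in the winning set; at an environment state
    EVERY move must stay in it.  (A system state with no moves loses;
    an environment state with no moves halts safely; a simplifying
    convention.)"""
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            succ = moves.get(s, [])
            ok = (any(t in win for t in succ) if s in sys_states
                  else all(t in win for t in succ))
            if not ok:
                win.discard(s)
                changed = True
    return win

# Toy arena: system chooses at states 0 and 2, environment at 1 and 3;
# state 3 is unsafe.  The system wins from 0 by avoiding the move to 3.
print(sorted(safety_winning_region({0, 2},
                                   {0: [1, 3], 1: [0, 2], 2: [0], 3: [3]},
                                   {0, 1, 2})))   # [0, 1, 2]
```

A winning strategy read off the fixed point (here: at state 0, move to 1) is exactly the synthesized implementation.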
2011 — 2015
Padley, Paul; Vardi, Moshe; Johnson, Don (co-PI); Burrus, C. Sidney (co-PI); Baraniuk, Richard
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Dip: Collaborative Research: a Personalized Cyberlearning System Based On Cognitive Science @ William Marsh Rice University
Investigators from Rice University and Duke University will build a Personalized Cyberlearning System, designed around three principles from cognitive science (retrieval practice, spacing, and enhanced feedback), that leverages advances in machine learning and makes use of an existing database of instructional content and problem sets aimed at undergraduate engineering students. The system will use artificial intelligence methods to optimize practice and feedback for students. Research will seek to advance knowledge, in a real-world setting, about a range of issues concerning how feedback facilitates learning and how individual differences come into play, as well as issues aimed more specifically at the development of the learning technology system itself.
The project is important as part of the effort to harness the vast quantities of information on the web to personalize instruction for a wide range of learners. Moreover, the development of such cyberlearning technologies holds promise for opening up STEM education for motivated self-learners while also allowing access to a large volume of material for a range of students who might not otherwise have it.
|
0.915 |
2012 — 2017 |
Kavraki, Lydia Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Expeditions in Computer Augmented Program Engineering (Excape): Harnessing Synthesis For Software Design @ William Marsh Rice University
ExCAPE: Expeditions in Computer Augmented Program Engineering
Lead PI/Institution: Rajeev Alur, University of Pennsylvania
Computers have revolutionized our daily lives, and yet the way we program computers has changed little in the last several decades. Software development still remains a tedious and error-prone activity. ExCAPE aims to change programming from a purely manual task to one in which a programmer and an automated program synthesis tool collaborate to generate software that meets its specification.
A distinguishing feature of the ExCAPE approach is that the program description can involve a wide range of artifacts that are best-suited to the particular development task: incomplete programs; declarative specifications of high-level requirements; positive and negative examples of desired behaviors; and optimization criteria for selecting among alternative implementations. This diversity is aimed at allowing a programmer flexibility to express insights through a variety of formats, leading to a more intuitive and less error-prone way of programming.
The synthesis tool uses a range of computational approaches and developer interaction to compose these different views about the structure and functionality of the system into a unified, concrete implementation. The computational techniques include decision procedures for constraint-satisfaction problems; iterative schemes for abstraction and refinement; and data-driven learning. The methodology for programmer interaction moves verification from the back-end of the design cycle to the front-end, with the promise of a more reliable software product.
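One of the artifact types named above, synthesis from input/output examples, can be illustrated with a minimal enumerative sketch. This is a hypothetical toy (the grammar, examples, and function names are invented here, and real ExCAPE tools use far more sophisticated constraint-solving engines), but it shows the basic loop: enumerate candidate programs from a grammar and return the first one consistent with every example.

```python
# Illustrative sketch, NOT an ExCAPE tool: enumerative synthesis from
# input/output examples. Grammar, examples, and helper names are hypothetical.
import itertools

# Candidate expression grammar over one integer input x: (description, function) pairs.
LEAVES = [("x", lambda x: x), ("1", lambda x: 1), ("2", lambda x: 2)]
OPS = [("+", lambda a, b: a + b), ("*", lambda a, b: a * b)]

def enumerate_programs(depth):
    """Yield (description, function) pairs up to the given expression depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from enumerate_programs(depth - 1)          # smaller programs first
    subs = list(enumerate_programs(depth - 1))
    for (ln, lf), (rn, rf) in itertools.product(subs, repeat=2):
        for on, of in OPS:
            # Bind lf/rf/of as defaults so each lambda captures its own values.
            yield (f"({ln} {on} {rn})",
                   lambda x, lf=lf, rf=rf, of=of: of(lf(x), rf(x)))

def synthesize(examples, max_depth=2):
    """Return the first enumerated program consistent with all (input, output) pairs."""
    for name, f in enumerate_programs(max_depth):
        if all(f(i) == o for i, o in examples):
            return name
    return None

# Three positive examples pin down the behavior 2*x + 1.
print(synthesize([(0, 1), (1, 3), (2, 5)]))
```

Because smaller programs are enumerated first, the returned program is among the simplest consistent ones, a common bias in example-driven synthesis.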
To develop the theory and practice of the proposed paradigm, the ExCAPE team brings together expertise in theoretical foundations (computer-aided verification, control theory, program analysis), design methodology (human-computer interaction, model-based design, programming environments), and applications (concurrent programming, network protocols, robotics, system architecture). Research will focus on developing new computational engines for transformation and integration of synthesis artifacts, and effective methods for programmer interaction and feedback.
While the benefits of the ExCAPE approach will apply broadly to software development, the ExCAPE team will focus its efforts by initially targeting four challenge problems: developing efficient concurrent data structures; developing protocols for on-chip interconnection networks; developing distributed routing network protocols; and end-user programming for autonomous robots. The ExCAPE approach will be a radical departure from the way these problems are solved today. For example, for the challenge problem on concurrent programming, the planned design tool will provide smart assistance for expert programmers to produce efficient and correct code, while the proposed tool for the robotics challenge problem will let end users program robots by demonstrating example behaviors. As ExCAPE aims to affect industrial practice, design tools for all four challenge problems will be developed and evaluated in close collaboration with industrial partners.
The technology developed by ExCAPE also has the potential to revolutionize the way computing concepts are taught. Building on the core technology used in program synthesis, the ExCAPE team plans to develop smart tutoring software that can analyze students' answers for conceptual errors and generate additional problems tailored to each student. This tutoring software will be developed for representative high-school and undergraduate courses and will be made widely available. This outreach effort is aimed at attracting more students to computing disciplines by promoting a new and more appealing vision of what it means to program. ExCAPE will also nurture an inter-disciplinary community of researchers in computer-augmented programming, via an annual workshop, a biannual summer school, and a competition for synthesis tools, with associated challenge problems and benchmarks.
For more information visit http://excape.cis.upenn.edu
|
0.915 |
2013 — 2016 |
Allen, Genevera (co-PI) [⬀] Kavraki, Lydia Vardi, Moshe Bradshaw, Stephen Veeraraghavan, Ashok (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Mri: Acquisition of Big-Data Private-Cloud Research Cyberinfrastructure (Bdpc) @ William Marsh Rice University
Proposal #: 13-38099
PI(s): Vardi, Moshe; Allen, Genevera; Bradshaw, Stephen; Kavraki, Lydia; Veeraraghavan, Ashok
Institution: Rice University
Title: MRI/Acq.: Big-Data Private Cloud Research Cyberinfrastructure (BDPC)
Project Proposed: This project, acquiring a novel cyber-infrastructure instrument for big-data cloud computing designed as a loosely coupled computation system with large memory requirements, enables a significant range of application domains as well as research into infrastructures for cloud computing. The domain sciences addressed range from the development of big-data-enabling software technologies spanning fundamental computer science work, to the analysis of electronic medical records, Twitter streams, and hurricane evacuation strategies. Additional benefits are expected in understanding disease and therapeutic treatments, and in the development and application of mathematical models in the areas of machine learning, optimization, compressed sensing, image processing, and statistical analysis and data mining. The instrument will also help bridge the gap between numerical models and observations in astrophysics.
Broader Impacts: The broader impacts on society, especially in education and training (including for members of underrepresented groups), are compelling. The instrument will directly impact the educational experience for all students taking classes in computing and computational problem solving. The targeted research communities are diverse and broad, including underrepresented groups, with strong empirical and experimental components. The proposed instrument is highly suitable for training and education.
|
0.915 |
2013 — 2017 |
Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Shf: Small: Pushing the Frontier of Linear-Time Model-Checking Technology @ William Marsh Rice University
Model checking is a technique for verifying the correctness of computer systems. Different implementations are in wide industrial usage today. There are still, however, many large gaps in our understanding of the algorithmic issues involved in model checking, and the technology is still greatly challenged by industrial designs. For example, in hardware-design verification, it is rarely possible to apply existing tools to complete units with clear functionality. This project is pushing the frontier of this technology with the goal of scaling its applicability to functional system units by developing novel scalable algorithms for model checking. The result will be increased reliability of computer systems.
This project will explore the mathematical approach to design verification that uses automata theory as a unifying paradigm for design specification and verification. The automata-theoretic approach separates the logical and the combinatorial aspects of reasoning about systems. The translation of specifications to automata handles the logic and shifts all the combinatorial difficulties to questions about automata, yielding clean and asymptotically optimal algorithms. While the fundamental theory is well understood, there are still many challenging gaps and improved algorithms can enhance the scalability of this approach significantly. This project investigates ways of improving automata-theoretic algorithms so they are more suitable for model checking at scale.
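The automata-theoretic recipe for a safety property can be sketched in miniature (an illustrative toy, not a project tool; the system, labels, and state names below are hypothetical): translate the negated specification into an automaton, form its product with the system, and search the product for a reachable bad state. The property holds exactly when the product's language is empty.

```python
# Illustrative sketch of automata-theoretic model checking for safety
# properties. All automata and names below are hypothetical toy examples.

def product_nonempty(sys_trans, sys_init, spec_trans, spec_init, spec_bad):
    """DFS over the product automaton; return a violating trace or None.

    sys_trans / spec_trans: dict mapping state -> list of (label, next_state).
    Reaching a state in spec_bad witnesses a specification violation.
    """
    stack = [((sys_init, spec_init), [])]
    seen = set()
    while stack:
        (s, q), trace = stack.pop()
        if q in spec_bad:
            return trace                      # counterexample found
        if (s, q) in seen:
            continue
        seen.add((s, q))
        for label, s2 in sys_trans.get(s, []):
            for label2, q2 in spec_trans.get(q, []):
                if label2 == label:           # synchronize on the same letter
                    stack.append(((s2, q2), trace + [label]))
    return None                               # product empty: property holds

# Toy system: may emit "req" then "ack", or "req" then "req" again.
sys_trans = {0: [("req", 1)], 1: [("ack", 0), ("req", 1)]}
# Automaton for the NEGATED spec: "two consecutive req's" is bad.
spec_trans = {"a": [("req", "b"), ("ack", "a")],
              "b": [("req", "bad"), ("ack", "a")]}
cex = product_nonempty(sys_trans, 0, spec_trans, "a", {"bad"})
print(cex)  # a violating trace such as ["req", "req"]
```

The separation the paragraph describes is visible here: the logic lives in building `spec_trans`, while the search is a purely combinatorial reachability question on the product.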
|
0.915 |
2014 — 2017 |
Vardi, Moshe Duenas Osorio, Leonardo |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Collaborative Research: Unraveling the Limits of Computation in Structural and Infrastructure Engineering @ William Marsh Rice University
This project identifies structural and infrastructure engineering (SIE) metrics and explores analytical and simulation methods that quantify them. The SIE metrics studied in this research include reliability, robustness, redundancy, resilience, risk, sustainability, and life-cycle cost. Because these metrics have different semantic perspectives, their quantification has not previously been explored through computational and algorithmic thinking; this research does so. The research also studies the computational limits of each metric and takes advantage of the opportunities for exact and approximate formulations. Computable metrics inform the practice of designing, building, maintaining, and restoring SIE systems. In addition, the research involves a diverse pool of students, for whom computational algorithms will be a mechanism for the design of SIE systems.
A taxonomy of metrics for SIE applications is explored, with mathematical proofs demonstrating the membership of these metrics in a computational complexity hierarchy. Proofs are achieved mainly via polynomial-time reduction chains and special subroutines such as oracles, while combinatorial arguments reveal which metrics to pursue using diverse strategies, either exactly or approximately. For instance, the research explores recursive formulations for metrics at the bottom of the hierarchy, and additional proofs are pursued to determine why practical constraints, such as correlations in reliability, force the problem into exponential-time algorithms. The project explores symmetries, periodicities, and other features that support probabilistic algorithms with tolerances and confidence levels that are acceptable for large SIE problem instances.
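As a minimal illustration of the approximate end of this hierarchy (the graph and failure probabilities below are hypothetical, not project data): two-terminal network reliability is #P-hard to compute exactly, but Monte Carlo sampling gives an estimate whose tolerance shrinks with the number of trials.

```python
# Illustrative sketch: Monte Carlo estimation of two-terminal network
# reliability, a #P-hard metric. Graph and probabilities are hypothetical.
import random

def connected(edges, n, s, t):
    """Union-find check that s and t are joined by the surviving edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return find(s) == find(t)

def reliability(edges, n, p_fail, s, t, trials=20000, seed=0):
    """Estimate P(s and t stay connected) when each edge fails independently."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        surviving = [e for e in edges if rng.random() >= p_fail]
        hits += connected(surviving, n, s, t)
    return hits / trials

# 4-node ring: two edge-disjoint s-t paths, so reliability exceeds one path's.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(round(reliability(ring, 4, 0.1, 0, 2), 3))
```

For this ring the exact value is 1 - (1 - 0.9**2)**2 ≈ 0.964, so the estimate's error is visible directly; the standard confidence-interval machinery for Bernoulli trials supplies the "tolerances and confidence levels" the abstract mentions.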
|
0.915 |
2014 |
Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Student Support For the 2014 Federated Logic Conference @ William Marsh Rice University
The award supports fifty US-based students in attending the sixth Federated Logic Conference (FLoC 14), which will be held in Vienna, Austria, in July 2014, as part of the Vienna Summer of Logic (VSL 14). The Federated Logic Conference is an event that brings together ten major international conferences in the area of Formal Methods, along with around 75 satellite workshops. While the participating conferences are normally held individually by their local conference organizations on an annual basis, they all agree to this joint, federated event held every three to four years.
Enhancing the security, privacy, usability, and reliability of computing systems is widely accepted as one of the grand challenges facing the computing-research community. Formal Methods have emerged as one of the primary approaches towards that goal. A federated conference provides an opportunity for accelerating research via synergy between the different strands of research in Formal Methods. It is also an outstanding training opportunity for young researchers, as it offers them a grand view of the whole area.
|
0.915 |
2015 — 2018 |
Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Iii: Small: Sampling Techniques in Computational Logic @ William Marsh Rice University
Constrained sampling and counting are two fundamental problems in data analysis. In constrained sampling, the task is to sample randomly from among the possible solutions to a Boolean formula. A related problem is constrained counting: determining the number of possible solutions to a Boolean formula. These problems have applications in machine learning, probabilistic reasoning, and planning, among other areas. In particular, the project looks at the electronic-design-automation industry to determine what practical solutions to these problems require. Both problems can be viewed as aspects of one of the most fundamental problems in artificial intelligence: understanding the structure of the solution space of a given set of constraints.
This project focuses on the development of new algorithmic techniques for constrained sampling and counting based on universal hashing, a classical algorithmic technique in theoretical computer science. Many of the ideas underlying the proposed approach go back to the 1980s, but they have never been reduced to practice. This project builds on recent progress in Boolean reasoning to develop methods that reduce these algorithmic ideas to practice. Methods for approximation with formal guarantees provide opportunities to scale what is fundamentally a computationally intractable problem. Pruning techniques can also reduce "waste" in hashed solutions, but they introduce challenges in ensuring that samples are independently distributed. This work has the potential for breakthrough results in constrained sampling and counting, providing a new algorithmic toolbox for machine learning, probabilistic reasoning, and the like.
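The hashing idea can be sketched in miniature (the formula below is hypothetical, and a real implementation would query a SAT solver rather than enumerate assignments): random XOR constraints drawn from a pairwise-independent family split the solution space into roughly equal cells, so a cell's size scaled by the number of cells estimates the total count.

```python
# Illustrative sketch of hashing-based approximate counting, NOT the
# project's tool. The formula and helper names are hypothetical; real
# implementations replace brute-force enumeration with SAT-solver calls.
import itertools
import random

def solutions(formula, n):
    """Enumerate all satisfying assignments of formula over n Boolean variables."""
    return [a for a in itertools.product([0, 1], repeat=n) if formula(a)]

def xor_cell(sol_list, n, m, rng):
    """Keep only solutions satisfying m random XOR (parity) constraints."""
    cell = sol_list
    for _ in range(m):
        coeffs = [rng.randrange(2) for _ in range(n)]
        b = rng.randrange(2)
        cell = [a for a in cell
                if sum(c * x for c, x in zip(coeffs, a)) % 2 == b]
    return cell

# Hypothetical formula: "at least two of four variables are true" (11 solutions).
f = lambda a: sum(a) >= 2
n = 4
sols = solutions(f, n)
rng = random.Random(1)
# Each XOR halves the space in expectation, so |cell| * 2**m estimates the count.
estimates = [len(xor_cell(sols, n, 2, rng)) * 2 ** 2 for _ in range(200)]
print(len(sols), round(sum(estimates) / len(estimates), 1))
```

Pairwise independence of the hash family is what makes the estimator's variance controllable, and hence what yields the formal approximation guarantees the abstract refers to.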
|
0.915 |
2016 — 2017 |
Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
A Conference On Humans, Machines and the Future of Work @ William Marsh Rice University
This project supports a conference on "Humans, Machines, and the Future of Work" to be held at Rice University, in Houston, TX. The conference will focus on issues created by the impact of information technology on labor markets over the next 25 years, addressing questions such as: What advances in artificial intelligence, robotics and automation are expected over the next 25 years? What will be the impact of these advances on job creation, job destruction, and wages in the labor market? What skills are required for the job market of the future? How can education prepare workers for that job market? What educational changes are needed? What economic and social policies are required to integrate people whose skills do not match the needs of future labor markets? How can social mobility in such an environment be preserved and increased? The conference will feature 16 renowned speakers and panelists from academia, industry and leading think tanks with expertise in technology, economics, social sciences, and the humanities. The goal of the conference is to start a conversation between many academic disciplines on the future of work in order to make this topic a subject of ongoing academic inquiry, as well as a subject of public policy discussion.
The current understanding of the Information Technology Revolution is somewhat similar to the 1970s' understanding of global warming. Facts are known with some level of certainty. Computers are eliminating some jobs involving structured tasks in manufacturing, clerical work, and some other mid-skill occupations. At the same time, computers are creating new jobs in many other occupations, particularly for technically skilled people. Beyond these facts lies a broad landscape of speculation. Not much is known about the Information Technology Revolution's net effect on employment and wages. Equally important, not much is known about the speed at which the revolution is proceeding. To the extent this uncertainty can be reduced, it will require a joint effort by computer scientists, economists, sociologists, psychologists, and others. Addressing these issues is an important national challenge.
|
0.915 |
2017 — 2020 |
Chaudhuri, Swarat [⬀] Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Shf: Medium: Collaborative Research: Formal Analysis and Synthesis of Multiagent Systems With Incentives @ William Marsh Rice University
The project develops automated methods for the tool-aided design and analysis of multi-agent systems with incentives. These systems are natural models for real-world situations in which collections of actors interact with one another in an autonomous and self-interested manner. For example, such a system can model a set of software agents that participate in an internet-based protocol, such as an advertisement auction or a cryptocurrency; a collection of robots that share physical or digital infrastructure; or a set of cells that participate in an evolutionary process. The project combines game-theoretic and logical methods to develop techniques for formal modeling, analysis, verification, and synthesis of such systems. The end objectives of the project include the reliable engineering of protocols that govern interactions among autonomous agents, and computer-aided understanding of naturally occurring game-theoretic interactions.
The technical approach of the project has three dimensions. The first is the development of new formal models, correctness requirements, and abstraction and reasoning principles for multi-agent systems with incentives. Research directions include richer notions of equilibria in multi-agent systems, and effects of interaction and randomization on equilibria. Second, the project studies algorithmic tools for analysis and verification of multi-agent systems with incentives with respect to desired requirements regarding system behaviors and equilibria. The third direction is to automatically synthesize mechanisms so as to guarantee desired properties. Applications from a range of areas, including financial protocols, robotics, and biology, are used to guide and evaluate the research.
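The most basic building block of this kind of equilibrium analysis can be sketched as follows (the payoffs below are a hypothetical textbook example, not project data): enumerate the pure-strategy profiles of a finite two-player game and keep those from which no agent has a profitable unilateral deviation.

```python
# Illustrative sketch: pure Nash equilibria of a finite two-player game.
# Payoff matrix is the hypothetical prisoner's dilemma, not project data.
import itertools

def pure_nash(payoffs):
    """payoffs[(i, j)] = (payoff to row player, payoff to column player)."""
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    eqs = []
    for i, j in itertools.product(rows, cols):
        u1, u2 = payoffs[(i, j)]
        # No row deviation i2 and no column deviation j2 may pay strictly more.
        row_ok = all(payoffs[(i2, j)][0] <= u1 for i2 in rows)
        col_ok = all(payoffs[(i, j2)][1] <= u2 for j2 in cols)
        if row_ok and col_ok:
            eqs.append((i, j))
    return eqs

# Prisoner's dilemma (0 = cooperate, 1 = defect): mutual defection is the
# unique pure equilibrium even though mutual cooperation pays both more.
pd = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
print(pure_nash(pd))  # → [(1, 1)]
```

The richer equilibrium notions the project studies (for interaction over time, or under randomization) generalize exactly this deviation check to strategies in repeated or stochastic settings.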
|
0.915 |
2017 — 2019 |
Cooper, Keith Padley, Paul Odegard, Jan Vardi, Moshe Jelinkova, Klara |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Cc* Networking Infrastructure: Improving Network Infrastructure to Enable Large Scale Scientific Data Flows and Collaboration @ William Marsh Rice University
Campus networks are required to protect information and inspect data flows to safeguard security and privacy. However, researchers need open and unfettered access to large data flows and instruments across the globe to reduce time to discovery. The Rice University network is a shared resource that not only needs to support administrative and teaching functions but must also enable scientists to use the network in new and innovative ways for research. Five key data-intensive application teams act as drivers of the new extension of network functionality and are providing feedback to the technical design staff. These application areas include earth and atmospheric sciences; urban data science; computational biosciences and neuroengineering; particle physics; and distributed cluster computing. These applications build on long-term science investments aimed, among other goals, at understanding seismic events, weather patterns in the Gulf region and beyond, and urban trends in large, diverse cities such as Houston, TX.
The basic model adopted by the project is the "Science DMZ": "a portion of the network, built at or near the campus local network perimeter, designed so that equipment, configuration, and security policies are optimized for high-performance scientific applications rather than for general-purpose business systems." This approach allows Rice to aggressively upgrade its network capacity for greatly enhanced science data access. The project supports 100 Gb/s flows between the data transfer facilities at the off-campus data center and national and international R&E data repositories, and takes advantage of SDN (OpenFlow) mechanisms.
|
0.915 |
2018 — 2021 |
Kavraki, Lydia Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Nri: Fnd: Robotic Collaboration Through Scalable Reactive Synthesis @ William Marsh Rice University
As human-robot collaboration is scaled up to more and more complex tasks, there is an increased need for formally modeling the system formed by human and robotic agents. Such modeling enables reasoning about reliability, safety, correctness, and scalability of the system. The modeling, however, presents a daunting task. This research aspires to formally model scenarios where the robot and the human can have varying roles. The intent is to develop scalable methodologies that will endow the robot with the ability to adapt to human actions and preferences without changes to its underlying software or hardware. An assembly scenario will be used to mimic manufacturing settings where a robot and a human may work together and where the actions of the robot can improve the quality and safety of the work of the human. The project is a critical step towards making robots collaborative with and responsive to humans while allowing the human to be in control.
This research will develop a framework for human-robot collaboration that integrates reactive synthesis from formal methods with robotic planning methods. By tightly combining the development of synthesis methods with robotics, it will pursue the development of a framework that is intuitive and scalable. The focus is on task-level collaboration as opposed to physical interaction with a human. The framework takes as input a task specification defined in a novel formal language interpreted over finite traces: a language suitable for robotics problems. It produces a policy for a robotic agent to assist a human agent regardless of which subtask, or which execution order for that subtask, the human agent chooses. The policy includes both high-level actions for the robotic agent as well as corresponding low-level motions that can be directly executed by the actual robot. One key novel component of the approach is the automated construction of abstractions for robotic manipulation that can be used by synthesis methods. The scalability of the proposed work will be investigated along different dimensions: the extent to which symbolic reasoning can be applied, the development of new synthesis algorithms, and the proper use of abstractions including their automatic refinement and the construction of factored abstractions. The trade-offs in using a combination of partial policies and replanning will be investigated, as well as how to account for incomplete information due to incomplete observations. The theoretical contributions will be implemented on real robot hardware and demonstrated in experiments that are analogous to real-world assembly tasks.
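The core synthesis step for safety requirements can be sketched as solving a turn-based safety game (a toy illustration under the assumption that every state has at least one successor; the game below is hypothetical, not the project's framework): iteratively discard states from which the environment can force the play out of the safe set, and read the robot's policy off the surviving states.

```python
# Illustrative sketch: winning region of a turn-based safety game, the
# core computation behind reactive synthesis. The game is hypothetical,
# and we assume every state has at least one successor (no dead ends).

def safety_winning_region(system_states, trans, safe):
    """Largest set of safe states the system player can keep the play inside.

    trans maps each state to a nonempty set of successors; the system
    chooses the move in system_states, the environment everywhere else.
    """
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            succ = trans[s]
            ok = (any(t in win for t in succ) if s in system_states
                  else all(t in win for t in succ))   # env may pick any move
            if not ok:
                win.discard(s)
                changed = True
    return win

# Toy game: states 0-3, system controls {0, 2}; state 3 is unsafe.
# From 1 the environment can jump to 3, so 1 is lost; 0 survives because
# the system can always choose the move to 2 instead.
trans = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {3}}
print(safety_winning_region({0, 2}, trans, {0, 1, 2}))
```

A policy for the system is then any choice, at each of its winning states, of a successor that stays inside the winning region; richer specifications over finite traces reduce to games of this shape on a product with a specification automaton.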
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2019 — 2020 |
Heller, H. Craig Genesereth, Michael Vardi, Moshe Baraniuk, Richard [⬀] Grimaldi, Phillip |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Convergence Accelerator Phase I (Raise): Scalable Knowledge Network to Enable Intelligent Textbooks @ William Marsh Rice University
The NSF Convergence Accelerator supports team-based, multidisciplinary efforts that address challenges of national importance and show potential for deliverables in the near future.
Properly educating the STEM leaders of tomorrow requires moving beyond knowledge being transmitted from teacher to learner via paper textbooks and lectures. The broader impact and potential social benefit of this Convergence Accelerator Phase 1 project is to launch the Textbook Open Knowledge Network (TOKN). The TOKN project will be executed by a multidisciplinary team of cognitive scientists, machine learning and artificial intelligence engineers, and educators from Rice University's OpenStax and Stanford University who are committed to creating a library of intelligent textbooks, engaging more partners as the project progresses. To begin, this project will build knowledge graphs that capture the complex relationships between educational concepts. A knowledge graph enables artificial intelligence algorithms to provide enhanced and personalized functionality to intelligent textbooks. Consequently, the intelligent textbook can provide more robust courseware functionality (text, videos, simulations, etc.), learning analytics, and personalized tutoring, such as automatically generating summaries of textbook content, generating useful practice exercises for students, providing interactive dialogues with students to help them better understand and master the underlying source material, and more. Integrating this intelligent technology into the full OpenStax free and open library has the potential to impact academic outcomes for millions of students in both secondary and higher education, while significantly advancing the state of education worldwide.
Intelligent textbooks provide an opportunity to facilitate better learning for students. However, they require major investments of time, money, and expertise. An appropriate knowledge graph is at the heart of an intelligent textbook and is often the biggest challenge to intelligent textbook creation due to the need for human subject experts to develop the semantic connectivity of terms and ideas. TOKN aims to develop new, scalable processes and supporting technologies for generating high-quality and extensible knowledge graphs for intelligent textbooks. The proposed research aims to lower both the cost and time required to produce high-quality knowledge graphs. In contrast to using subject matter experts, this project proposes to use a combination of machine learning algorithms and crowdsourcing of knowledge from students. Crowdsourcing will not only provide data for knowledge graphs, but it will also provide an opportunity to evaluate the pedagogical effectiveness of concept mapping on student learning. Phase 1 of this project will include a proof of concept to construct and validate a knowledge graph for one chapter of OpenStax Biology, a free and open-source text used by more than 30% of students in college biology programs. The overarching goal is to eventually apply this approach at scale during Phase 2 to generate knowledge graphs for the entire OpenStax library of 38 general educational textbooks, transforming them into intelligent open textbooks for society.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2020 — 2023 |
Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ccri: Medium: Collaborative Research: Open-Source, State-of-the-Art Symbolic Model-Checking Framework @ William Marsh Rice University
Safety-critical and security-critical systems are entering our lives at an increasingly rapid pace. These are the systems that help fly our planes, drive our cars, deliver our packages, ensure our electricity, or even automate our homes. Especially when humans cannot perform a task in person, e.g., due to a dangerous working environment, we depend on such systems. Before any safety-critical system launches into the human environment, we need to be sure it is really safe. Model checking is a popular and appealing way to rigorously check for safety: given a system, or an accurate model of the system, and a safety requirement, model checking is a "push button" technique to produce either a proof that the system always operates safely, or a counterexample detailing a system execution that violates the safety requirement. Many aspects of model checking are active research areas, including more efficient ways of reasoning about the system's behavior space, and faster search algorithms for the proofs and counterexamples.
As model checking becomes more integrated into the standard design and verification process for safety-critical systems, the platforms for model-checking research have become more limited. Previous options have become closed-source or industry tools, and current research platforms lack support for the expressive specification languages needed for verifying real systems. This project will fill the current gap in model-checking research platforms by building a freely available, open-source, scalable model-checking infrastructure that accepts expressive models and efficiently interfaces with currently maintained state-of-the-art back-end algorithms to provide an extensible research and verification tool. This project will create a community resource with a well-documented intermediate representation to enable extensibility, and a web portal, facilitating new modeling languages and back-end algorithmic advances. To add a new modeling language or algorithm, researchers need only develop a translator to/from the intermediate language, and will then be able to integrate each advance with the full state of the art in model checking. This community infrastructure will be ideal for catapulting formal verification efforts in many cutting-edge application areas, including security, networking, and operating-system verification. The project will particularly target outreach to the embedded-systems (CPS) community, as the proposed new framework will make hardware verification problems from this community more accessible.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2022 — 2023 |
Vardi, Moshe |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Conference: Cise: Ccf: Shf: Support For the 2022 Federated Logic Conference @ William Marsh Rice University
The award supports 50 US-based students in attending the eighth Federated Logic Conference (FLoC 22), which will be held in Haifa, Israel, in August 2022. The Federated Logic Conference is a large event that brings together nine major international conferences in the area of Formal Methods, along with around 80 satellite workshops. While the participating conferences are normally held individually by their local conference organizations on an annual basis, they all agree to this joint, federated event held every four years.
Enhancing the security, privacy, usability, and reliability of computing systems is widely accepted as one of the grand challenges facing the computing-research community. Formal Methods have emerged as one of the primary approaches towards that goal. A federated conference provides an opportunity for accelerating research via synergy between the different strands of research in Formal Methods. Attending a large international conference with such diverse topical themes within this domain is a formative experience for graduate students, enabling them to build international collaborative relationships early in their research careers. Attending FLoC helps ensure that future generations of US scientists and engineers gain professional international research experience early in their careers.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |