1999 |
Rexford, Jennifer |
Travel Support for IEEE INFOCOM'99 Conference @ Institute of Electrical & Electronics Engineers, Inc.
The IEEE INFOCOM Conference on Computer Communications will be held in New York, New York, during March 21-25, 1999. This preeminent technical conference is the primary venue for presenting new research results in the area of computer communications, and is widely attended by researchers and practitioners in the field. Attending conferences such as INFOCOM is of paramount importance for the development of graduate students, post-doctoral researchers, and faculty members. Participants have the opportunity to present their work, attend panel and keynote sessions, and interact with hundreds of others performing leading-edge research in the field. This proposal requests funding to aid approximately twelve graduate students, post-docs, and junior faculty in the United States in attending this premier conference.
|
0.915 |
2005 — 2007 |
Peterson, Larry; Rexford, Jennifer
Collaborative Proposal: Infrastructure For Experimental Network Architecture Research
The Internet is one of the great technology success stories of the twentieth century, enabling greater access to information and providing new modes of communication among people and organizations. Unfortunately, the Internet's very success is now creating obstacles to innovation in the networking technology that lies at its core. In order to free the global communications infrastructure from stagnation, the nation must find ways to enable its continuing renewal.
This planning project is aimed at creating a blueprint for a global experimental infrastructure needed to support a research program in network architectures and distributed systems. The goal of the research program, combined with the experimental infrastructure, is to greatly increase the functional capabilities, robustness, flexibility, and heterogeneity of the global communications network in the face of modern application requirements and a rich, competitive commercial environment. The key is to re-architect or re-invent the Internet to be more evolvable---to enable the research community to address the key challenges facing the Internet, and in the process, to build an Internet that is worthy of our society's trust.
Re-architecting the Internet would require substantial experimental infrastructure. The PIs propose to write a comprehensive plan to build this infrastructure. The proposal identifies the major architectural initiatives that address the challenges facing the Internet, outlines the empirical research process the community will use to pursue these initiatives, describes the experimental infrastructure needed to support this research, and highlights the process of putting in place a management structure for the large infrastructure.
|
1 |
2005 — 2009 |
Rexford, Jennifer |
Collaborative Research: NeTS-NBD: A Revolutionary 4D Approach to Network-Wide Control and Management
IP networking is a spectacular success, catalyzing the diffusion of data networking across academic institutions, governments, businesses, and homes, world-wide. Yet, despite the fundamental importance of this infrastructure, today's networks are surprisingly fragile and increasingly difficult to configure, control, and maintain. Using a clean-slate approach, the research team will explore a number of fundamental questions related to network control and management. The focus of the research agenda is on IP (layer-3) networks, though the principal investigators (PIs) intend to create networking primitives and services that apply equally well to other technologies, such as layer-2 networks (e.g., Ethernet networks). The starting point for the work is a small set of principles guiding the control of the network: network-wide views, network-level objectives, and direct control. These principles lead the PIs to a refactoring of network functionality into four components---the data, discovery, dissemination, and decision planes. Via this architecture, which the PIs term the 4D approach to network control and management, the team intends to create, prototype, and demonstrate breakthrough mechanisms that will dramatically simplify and strengthen data networking.
INTELLECTUAL IMPACT: The proposed research will address fundamental questions that are key to improving IP control and management: How to go from networks that blend decision logic with specific protocols and mechanisms to an architecture that abstracts and isolates the decision logic and admits a range of efficient implementations? How to go from networks that consist of numerous uncoordinated, error-prone mechanisms, to ones where the low-level mechanisms are driven in a consistent manner by network-level objectives? How to go from networks where people set parameters (twist knobs), hoping to coax the system to reach a desired state, to one where network designers can directly express controls that automatically steer the system toward the desired state? How to go from networks where human administrators leverage network-wide views and box-level capabilities at slow timescales in decision-support systems, to one where the network itself leverages this information in real time?
BROADER IMPACT: The proposed research seeks to produce fundamental knowledge that will advance the state-of-the-art in large-scale network architecture, control and management. It intends to lay the groundwork for data networks (in academic campuses, data centers, enterprises, metro areas, backbones) that are more robust, more evolvable, and less prone to security breaches. The involvement of industrial partners in this project will accelerate the transfer of the research results into operation.
The general educational impacts are in the training of students, postdocs, and researchers, allowing them to cross boundaries between theoretical and system-oriented research. It offers students the unique opportunity to pursue systems work, guided by a deep understanding of fundamental principles, and to use their knowledge creatively to conceive and design innovative software tools and systems. The research team will use existing institutional programs to recruit and involve students from underrepresented groups in the research program from its earliest stages. The results of the research project will be integrated into the undergraduate and graduate computer science programs. The software tools to be developed will provide the basis for class projects in graduate-level courses at the participating institutions. The PIs will create graduate-level courses based on the project's research priorities and goals. The findings will also be used as case studies in undergraduate courses to enhance students' understanding of the main architectural design challenges and control issues of large-scale networks.
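The 4D refactoring above can be sketched concretely. The fragment below is a minimal illustration (the toy topology and all names are mine, not from the proposal): a centralized decision plane takes a network-wide view supplied by the discovery plane and directly computes per-router forwarding tables against a network-level objective, rather than each router running its own distributed protocol logic.

```python
# Hypothetical sketch of the 4D "direct control" idea: the decision plane
# computes forwarding state from a network-wide view, then the dissemination
# plane would push the resulting tables down to the data plane.
import heapq

def shortest_paths(view, source):
    """Dijkstra over the network-wide view {node: {neighbor: weight}};
    returns, for each destination, the first hop to use from `source`."""
    dist = {source: 0}
    first_hop = {}
    pq = [(0, source, None)]
    while pq:
        d, node, hop = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue
        if hop is not None:
            first_hop[node] = hop
        for nbr, w in view.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr, hop if hop else nbr))
    return first_hop

def decision_plane(view):
    """Network-level objective: shortest-path reachability for every pair.
    Output: one forwarding table per router."""
    return {r: shortest_paths(view, r) for r in view}

view = {  # topology as the discovery plane might report it
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1},
    "C": {"A": 5, "B": 1},
}
tables = decision_plane(view)
print(tables["A"])  # A reaches C via B (cost 2 beats the direct cost 5)
```

The point of the sketch is the refactoring, not the algorithm: the decision logic lives in one place and sees the whole network, so changing the objective does not require changing any per-router protocol.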
|
1 |
2005 — 2010 |
Rexford, Jennifer; Chiang, Mung (co-PI)
Collaborative Research: NeTS-NBD: Network X-ities - Foundations and Applications
(Awards 0519880 and 0519998)
From the early days of the ARPAnet to today's global Internet, most research on network protocols has focused on traditional performance metrics such as delay, loss, and throughput. However, it is becoming increasingly important that a network not only provides good performance, but also does so in the face of a complex, uncertain, error-prone, and ever-changing environment. In today's networks, operating conditions may change as a result of user behavior (e.g., a shift in traffic to a newly popular Web site) or the underlying infrastructure (e.g., an equipment failure). In all such cases, the network and its operators must respond in a robust fashion, continuing to provide good performance despite changing conditions.
The need for "robust" network operation leads to a set of design considerations that the principal investigators (PIs) refer to as the "X-ities" (since they all end in "ity"): non-fragility, manageability, diagnosability, optimizability, scalability, and evolvability. Intuitively, we know that these X-ities are crucially important if we are to design and analyze robust networks and protocols. Yet, compared with standard performance metrics, these X-ities often lack theoretical foundations, quantitative frameworks, or even well-defined metrics and meanings. The goal of this project is to build a rigorous, quantitative foundation for explicitly considering the X-ities in the design and analysis of network protocols. The PIs consider a number of specific problems, broadly in the area of routing protocols, that concretely address several of the X-ities---with particular emphasis on non-fragility and manageability---and begin to draw larger lessons from commonalities among the problems studied.
The proposed research focuses on the X-ities in the context of the routing protocols that ensure that each computer has paths through the network to send data to other computers. There are several reasons for this choice. First, routing protocols are a crucial part of the network architecture---they are the very glue that holds the disparate parts of the Internet together. Second, the X-ities of IP routing have not received significant formal attention. Third, routing protocols expose key issues of incomplete information (e.g., across networks run by different institutions) and interacting levels of control (e.g., between applications and the underlying network)---concerns that should arise in any thorough treatment of network X-ities. Finally, routing provides a compelling context in which the X-ities can be quantitatively studied. For example, we can quantify the performance trade-off between a fragile routing solution that has been optimized for narrow, well-defined operating conditions, versus a solution that will perform well over a variety of operating conditions. The contributions of the proposed research are three-fold:
A first quantitative study of X-ities: The intellectual challenges in rigorously understanding the X-ities are many. The PIs define specific metrics and develop mathematical models to quantitatively study each X-ity.
Solutions to specific problems: To make the study of the X-ities concrete, the PIs consider a set of research problems broadly in the area of routing that are of interest in their own right.
The beginnings of a foundation for studying X-ities: The PIs believe that the study of network X-ities is a crucially important area for long-term research in networking.
The X-ity research will lead to a deeper quantitative understanding of how to develop robust network architectures and protocols---technology that is playing an increasingly crucial role in our daily lives. The broader impacts of the research will include enhanced teaching, training, and learning for our students, development and dissemination of new educational materials, and dissemination of X-ity research results throughout the technical community.
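The fragile-versus-robust trade-off described above can be made concrete with a toy calculation. The example below is my own construction (the capacities, demand, and degradation scenario are invented): one traffic split is optimized for the nominal operating condition only, the other for the worst case over a set of scenarios, and the resulting link utilizations quantify the trade-off.

```python
# Toy illustration (not from the proposal): a routing split tuned for narrow,
# well-defined conditions versus one chosen to stay reasonable across
# conditions. Two parallel paths carry a fixed demand; one scenario degrades
# the capacity of path 0 (e.g., a partial failure).
DEMAND = 9.0
scenarios = [(10.0, 5.0),  # nominal capacities of the two paths
             (4.0, 5.0)]   # path 0 degraded

def max_util(split, caps):
    """Max link utilization when fraction `split` of the demand uses path 0."""
    return max(split * DEMAND / caps[0], (1 - split) * DEMAND / caps[1])

splits = [i / 100 for i in range(101)]
# "Fragile" configuration: optimized for the nominal scenario alone.
fragile = min(splits, key=lambda s: max_util(s, scenarios[0]))
# "Robust" configuration: optimized for the worst case over all scenarios.
robust = min(splits, key=lambda s: max(max_util(s, c) for c in scenarios))

for name, s in [("fragile", fragile), ("robust", robust)]:
    nominal = max_util(s, scenarios[0])
    worst = max(max_util(s, c) for c in scenarios)
    print(f"{name}: split={s:.2f} nominal={nominal:.2f} worst-case={worst:.2f}")
```

The fragile split wins under nominal conditions but degrades sharply in the failure scenario, while the robust split gives up some nominal performance for a much better worst case---exactly the kind of quantitative statement the X-ities agenda calls for.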
|
1 |
2006 — 2008 |
Peterson, Larry; Rexford, Jennifer
Collaborative Research: Facility For Experimental Network Architecture Research
The Directorate for Computer and Information Science and Engineering (CISE) and the CISE research community are planning an initiative called the Global Environment for Networking Innovations (GENI) to explore new networking capabilities that will advance science and stimulate innovation and economic growth. The GENI Initiative responds to an urgent and important challenge of the 21st Century: to advance significantly the capabilities provided by networking and distributed system architectures. To have significant impact, innovative research and design ideas must be implemented, deployed, and tested in realistic environments involving significant numbers of users and hosts. The initiative includes the deployment of a state-of-the-art, global experimental GENI Facility that will permit exploration and evaluation under realistic conditions. The GENI Facility will permit a range of researchers, including network engineers, policy analysts, protocol designers, system architects, and economic modelers, to contribute to and study innovative new capabilities for the global network of the future. Assuming the concept proves to be as promising as currently anticipated, GENI construction will be considered for funding from NSF's MREFC account.
In support of making the case for GENI as an MREFC project, the PIs propose to undertake a set of tasks to advance the GENI project definition from the Conceptual Design, through the MREFC Readiness Stage, to Preliminary Design. This will involve addressing a set of design issues; taking the definition of various components of the facility to the next level of specificity; creating a detailed work breakdown structure (WBS), bottom-up budget, schedule, contingency, and critical path analysis for each component and the facility as a whole; and taking the project management definition for construction and operation to the next level of specificity with due consideration to the special requirements of GENI.
|
1 |
2006 — 2008 |
Rexford, Jennifer; Chiang, Mung
Collaborative Research: Towards An Analytic Foundation For Network Architectures
In large and complex communication networks, architectural decisions regarding functionality allocation are extremely important. The time is ripe for building a scientific foundation for network architectures, both to capitalize on unique clean-slate design opportunities (such as GENI and MANET) and to guide the evolution from existing network architectures to new ones. Such a foundation can lead to highly efficient, robust, and scalable protocols that could have a significant impact on the communications industry.
The recent successes of understanding protocols as optimizers and layering as mathematical decompositions offer a promising starting point for such an analytic foundation---one that is conceptually unifying, mathematically rigorous, and practically relevant. However, there is still much work to be done in developing an analytic foundation for network architectures. This research focuses on three main thrusts:
Alternative architectural choices: Past mathematical results have focused on one architecture derived from a particular decomposition. There is in fact a wide range of alternative decompositions that result in different scalability, convergence, and complexity tradeoffs. This research systematically explores architectural choices using appropriate decompositions.
Stochastic network dynamics: This research develops new architectural designs taking into account stochastic (rather than deterministic) network dynamics, which are critical in modeling real systems and in developing high-performance network architectures.
Non-convexity and robustness: Non-convexity persists in real networks, which could lead to instability, poor performance, and impractical computational complexity. Nonetheless, most past results have been derived only for the convex case. This research explores architectural choices that are robust to non-convexity.
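The "protocols as optimizers, layering as decomposition" view behind all three thrusts admits a compact illustration. The sketch below (the topology, capacities, and step size are illustrative assumptions, not from the proposal) applies dual decomposition to a small network utility maximization problem: each source solves a local subproblem given its path price, and each link updates its price from local load alone, yet the distributed iteration converges to the global optimum.

```python
# Hedged sketch: dual decomposition of network utility maximization,
# maximizing sum_s log(x_s) subject to link capacities. Link "prices"
# act as congestion signals; no node sees the whole problem.
routes = {            # source -> links on its (fixed) path
    "s1": ["l1", "l2"],
    "s2": ["l2"],
}
cap = {"l1": 1.0, "l2": 1.5}
price = {l: 1.0 for l in cap}   # dual variables (one per link)
step = 0.05                     # dual (sub)gradient step size

for _ in range(2000):
    # Source subproblem: x_s = argmax_x log(x) - x * (path price) = 1/price.
    x = {s: 1.0 / sum(price[l] for l in links) for s, links in routes.items()}
    # Link subproblem: raise the price if overloaded, lower it if underused.
    for l in cap:
        load = sum(x[s] for s, links in routes.items() if l in links)
        price[l] = max(1e-6, price[l] + step * (load - cap[l]))

print({s: round(v, 2) for s, v in x.items()})
```

For this instance the capacity of l1 turns out to be slack, its price decays toward zero, and both sources settle at roughly 0.75, the proportionally-fair allocation of l2's capacity. Different decompositions of the same optimization (primal, partial, hierarchical) would yield the alternative architectures the first thrust explores.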
|
1 |
2006 — 2010 |
Rexford, Jennifer |
NeTS-FIND: Collaborative Research: Cabo: Concurrent Architectures Are Better Than One
In today's Internet, a single service provider rarely has purview over an entire end-to-end path; this situation hinders the deployment and adoption of new network services. The PIs propose to design and prototype a new network architecture, Cabo (Concurrent Architectures are Better than One), which resolves this problem by separating infrastructure providers (who own and manage the physical network infrastructure) from service providers (who deploy end-to-end services to users). Using Cabo, service providers will reserve network resources (i.e., virtual nodes and links) on equipment that may span one or more infrastructure providers. The separation of service providers from infrastructure providers allows a single service provider to construct end-to-end services; it also allows a service provider to operate multiple virtual networks, each tailored to a specific application. For example, one virtual network may provide strict security guarantees but not complete reachability to all destinations, while another may guarantee global reachability for applications that do not require strong security guarantees. The outcomes of this project will be: (1) the design and implementation of a substrate that service providers can use to deploy new network architectures and services, and (2) investigation of example scenarios where Cabo can provide better security, robustness, and end-to-end performance guarantees. Cabo's initial design leverages virtualization, tunneling, and new experimental platforms for network protocols and architectures, including the PIs' own Virtual Network Infrastructure (VINI), which the PIs plan to use to prototype and evaluate their ideas. Successful completion of this project will have significant impact on Future Internet architecture development and deployment.
|
1 |
2006 — 2010 |
Peterson, Larry; Rexford, Jennifer
MRI: The Development of VINI: Virtualized Network Infrastructure
This project aims to develop a Virtualized Network Infrastructure (VINI) that responds to the many challenges faced by today's Internet: evolving the Internet to address new threats, accommodating emerging applications and technologies, and fostering the spread of the network throughout the physical world. In 25 years, the Internet has moved from an obscure research facility to a critical piece of the national communication infrastructure. To appreciate the significance of this transformation, recall that a bug in the Internet's core routing algorithm inconvenienced several thousand users in 1989, while the SQL Slammer attack in 2003 grounded commercial airline flights, brought down thousands of ATM machines, and caused approximately $1 billion in damages. As our dependence on the Internet grows, so do both the risks and the opportunities. Hence, it is imperative that we address new threats, accommodate emerging applications and technologies, and foster the spread of the network throughout the physical world---precisely the goals addressed by this work. The instrument must meet the following requirements. VINI must:
- Provide realism through an experimental environment that closely reflects real-world conditions. Experiments must have access to realistic network topologies, high-speed forwarding engines, real users, high volumes of real traffic, and dedicated link bandwidth and node resources (CPU, memory, disk) allocated on relatively small time scales.
- Give experimenters control over their experiments by making it possible to replicate specific conditions for study. Researchers need tools to easily specify and start experiments, and to inject network events (e.g., failures, packet loss) in a predictable fashion.
- Be shared among multiple simultaneous experiments running on the same hardware. PlanetLab, the starting point for VINI, supports multiple simultaneous experiments by running each in a virtual machine; its virtualization needs to be extended to support the goals and requirements of VINI. Moreover, new resources, such as IP address blocks, must be globally managed.
Hence, to address the challenges that face the Internet today, this experimental infrastructure must reconcile these non-orthogonal requirements to maximize the value of the enabled networking research. VINI is envisioned as a microcosm for the next-generation Internet.
|
1 |
2007 — 2008 |
Rexford, Jennifer |
NeXtworking 2007 Workshop on Future Internet Architecture
This award funds a joint COST-IST (EU) and NSF (USA) NeXtworking 2007 Workshop on Future Internet Architecture, held on April 19-20, 2007, in Berlin, Germany. There are three main objectives for the workshop:
- Identifying networking research challenges: The workshop focuses on identifying the main research challenges that must be addressed to arrive at a future Internet architecture that addresses the many problems of today's networks and the promising capabilities of tomorrow's technologies.
- Identifying requirements for experimental facilities: A closely related goal is to investigate the role of experimental facilities in supporting the research and the capabilities these facilities should have, as well as documenting the strengths and limitations of existing research testbeds.
- Facilitating collaboration between EU and US researchers: By bringing together researchers from the EU and US, the workshop will build stronger ties for joint research, including organizing joint efforts on the design, and particularly the federation, of future experimental facilities.
The workshop targets key research areas in networking represented by premier researchers from the EU and USA, with an emphasis on the needs of future network architectures rather than summaries of mature research results. The workshop organizers are Christophe Diot (Thomson), Serge Fdida (U. Paris), Anja Feldmann (TU-Berlin), Jennifer Rexford (Princeton University), and Ioannis Stavrakakis (University of Athens). Jennifer Rexford will handle the NSF funding, and Scott Kirkpatrick will handle the 30,000 Euros that COST is providing for the workshop. After the conclusion of the workshop, the organizing committee will furnish a report to COST-IST (EU) and NSF (USA) and will make it publicly available to participants and others via the workshop Web site and any relevant funding-agency Web sites.
Intellectual Merit: The workshop focuses on the intellectual challenges of designing a future Internet architecture that is worthy of society's trust (in terms of important metrics such as scalability, reliability, security, manageability, usability, and other so-called X-ities) and expands the capabilities available to end users (including support for mobility and large numbers of wireless and sensor devices). At the NSF, these research challenges lie at the heart of the FIND (Future Internet Design) initiative. In addition, the workshop will include discussion of tools and techniques for evaluating new architectural ideas, including modeling, simulation, and experimental tools and techniques for connecting multiple experimental facilities to enable larger-scale evaluation.
Broader Impact: The importance of the Internet for the economic, political, and social well-being of the nation, and the world, cannot be overstated. The workshop will play an important role in outlining a larger research agenda for the design of a future Internet. In addition, the workshop will enable a strong collaboration between EU and US researchers in networking research and the design and operation of global experimental facilities for evaluating new network architectures.
|
1 |
2007 — 2010 |
Rexford, Jennifer; Chiang, Mung
FIND: Collaborative Research: Towards An Analytic Foundation For Network Architectures
In large and complex communication networks, architectural decisions regarding functionality allocation are often more important than the details of resource allocation algorithms themselves. This NSF-funded project aims to develop a scientific foundation for designing network architectures by building upon recent successes in understanding protocols as optimizers and layering as mathematical decompositions. In particular, the PIs at five institutions collaborate to conduct a wide range of closely-connected research activities that substantially improve upon the state-of-the-art. Starting from a convex optimization formulation of the architecture design problem, the project investigates a wide range of alternative decompositions that provide different scalability, convergence, and complexity tradeoffs. The PIs then determine whether the properties of these alternative architectures continue to hold under stochastic network dynamics and non-convex objectives and constraints, and develop new architectural designs from a careful study of such dynamics. Mathematically, this project leads to a long-overdue union between network optimization and stochastic networks theory, and enables a systematic approach to leverage advances in general non-convex optimization.
Broader Impact: This project has clear synergy with the NSF's GENI initiative. The research provides a strong, analytic foundation for the design of future network architectures, including clean-slate solutions that deviate from today's Internet. The exploration of new ways to decompose functionality, with the influence of network dynamics and non-convexity in mind, will result in new protocols and mechanisms that can be evaluated in the GENI infrastructure.
|
1 |
2008 — 2013 |
Rexford, Jennifer |
NeTS-NECO: Collaborative Research: Fixing the Reliability Problem in Network Software From Its Root
Most of the Internet's complexity resides in software running on Internet routers. Bugs in this software are a highly critical problem, leading to a number of recent high-profile attacks and outages, and are increasingly becoming a bottleneck in building highly reliable networks. The PIs are designing and evaluating techniques to make the Internet resilient to software bugs. Their approach consists of two key components. First, they are building a highly reliable single instance of a network router. This involves performing a characteristic study of bugs in router software, using static and dynamic code analysis and taxonomizing publicly disclosed vulnerabilities. They also apply and extend techniques such as rollback, reordering of inputs, microreboots, and automated debugging to construct a software router resilient to implementation bugs. Second, the PIs are developing and building an architecture for highly available, bug-resistant networks. Their design leverages the principle of "control and data diversity", which simultaneously runs multiple functionally-equivalent instances of a piece of software or data. Each instance differs from the others in a way that makes it unlikely that multiple copies will simultaneously experience the same bug---for example, by randomizing the execution environment, having each instance be responsible for a subset of routes, or having different programmers implement each instance. In addition to producing designs and algorithms that enable these networks, the PIs will also make available tools and implementations to enable their use. Successful completion of this project will significantly improve the Internet's ability to avoid and recover from failures.
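The "control and data diversity" principle can be illustrated with a small sketch (the route-selection functions and the seeded bug are hypothetical, not from the proposal): several functionally-equivalent implementations run on the same input, and a majority vote masks the instance that hits a bug.

```python
# Illustrative sketch: mask an implementation bug by running diverse,
# functionally-equivalent route-selection instances and voting.
from collections import Counter

def select_v1(routes):
    # One implementation: prefer shortest AS path, break ties on next hop.
    return min(routes, key=lambda r: (len(r["as_path"]), r["next_hop"]))

def select_v2(routes):
    # Independently written equivalent: sort by the same key, take the first.
    return sorted(routes, key=lambda r: (len(r["as_path"]), r["next_hop"]))[0]

def select_buggy(routes):
    # Seeded bug for demonstration: ignores AS-path length entirely.
    return min(routes, key=lambda r: r["next_hop"])

def diverse_select(routes, instances):
    """Run every instance on the same input and return the majority answer."""
    votes = Counter(inst(routes)["next_hop"] for inst in instances)
    return votes.most_common(1)[0][0]

routes = [
    {"as_path": [7018, 701], "next_hop": "10.0.0.2"},
    {"as_path": [3356], "next_hop": "10.0.0.9"},
]
print(diverse_select(routes, [select_v1, select_v2, select_buggy]))
```

The two correct instances outvote the buggy one, so the network-visible behavior is correct even though one copy misbehaves---the same effect the proposal seeks by randomizing execution environments or partitioning routes across instances.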
|
1 |
2009 — 2013 |
Rexford, Jennifer; Freedman, Michael
NeTS: Medium: A SCAFFOLD for Service-Centric Networking
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This research proposes a new network architecture, SCAFFOLD, that directly supports the needs of wide-area services. SCAFFOLD treats service-level objects (rather than hosts) as first-class citizens and explores a tighter coupling between object-based naming and routing. A clean-slate, scalable version of the federated SCAFFOLD architecture is being designed and prototyped. System components include programmable routers/switches, resolution services for object-based lookup and forwarding, and integrated end-hosts.
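As a rough illustration of coupling object-based naming with forwarding (the resolver structure and object names below are invented for the example, not part of SCAFFOLD's design): clients address a service-level object, and a resolution step picks a concrete replica at send time, so replicas can move or multiply without clients changing.

```python
# Hypothetical sketch: service-level objects as first-class addressing
# targets, with late binding to a concrete replica at forwarding time.
import random

resolver = {  # service-object id -> currently registered replica addresses
    "obj:photos/album42": ["10.1.0.4:8080", "10.2.0.7:8080"],
}

def send(object_id, payload):
    """Resolve the object to a live replica and 'forward' the payload."""
    replicas = resolver.get(object_id)
    if not replicas:
        raise LookupError(f"no replica registered for {object_id}")
    dest = random.choice(replicas)  # a real design could pick by load/locality
    return dest, payload

dest, _ = send("obj:photos/album42", b"GET")
print("forwarded to", dest)
```

Because the client names the object rather than a host, replica failure or migration is handled by updating the resolver, not the client---the property that makes service-level objects attractive as first-class citizens.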
At the center of people's "digital lives" today are online services---not the networks or computers on which they run. The research ultimately explores the abstractions and mechanisms that will make the future network a powerful, flexible hosting platform for wide-area services (the so-called "cloud"). In doing so, SCAFFOLD would lower the barrier to deploying networked services that are scalable, reliable, secure, energy-efficient, and easy to manage.
The project includes a summer-camp outreach activity with schools serving under-represented groups to build services on top of SCAFFOLD, new special course development, and technology transfer with industry.
|
1 |
2010 — 2011 |
Rexford, Jennifer |
Workshop for GENI Experiments
The goal of this workshop is to bring together researchers interested in running novel network experiments at scale and under realistic conditions with the designers and developers of the current suite of infrastructure available through GENI, which is rapidly taking shape in prototype form across the US. GENI has the goal of becoming the first laboratory for exploring networks of the future through innovations in network science, security, technology, services, and applications.
The workshop attendees will consist of 20 to 30 two-person teams---typically, a researcher and his or her graduate student---selected through a white-paper competition. They will be given technical tutorials on the various GENI components and resources and on how to launch experiments that span more than one part of the infrastructure (e.g., bridging PlanetLab, ProtoGENI, and OpenFlow). The researchers will also have opportunities to propose experiments and describe the associated operational, workflow, instrumentation, measurement, and security requirements for each experiment.
This workshop will develop a common understanding of the GENI network infrastructure and build a community of researchers using GENI for their experiments. It is expected that some of these teams will propose their experiments to NSF, which will result in the demonstration of new research ideas and experiments not possible on any other network testbed.
|
1 |
2010 — 2014 |
Peterson, Larry; Pai, Vivek; Freedman, Michael (co-PI); Rexford, Jennifer
MRI: Development of a Virtual Cloud Computing Infrastructure
Proposal #: 10-40123
PI(s): Peterson, Larry L.; Freedman, Michael J.; Pai, Vivek; Rexford, Jennifer
Institution: Princeton University
Title: MRI/Dev: Development of a Virtual Cloud Computing Infrastructure
Project Proposed: This project builds VICCI, a programmable cloud-computing research testbed that enables a broad research agenda in the design of network systems requiring both multiple points of presence and significant processing/storage capabilities at each site. VICCI, a distributed instrument with a point of presence at Princeton, Georgia Tech, Stanford, and U. Washington, along with international clusters in Europe and Japan, encompasses both a distributed set of virtualized compute clusters and networking hardware and the software that enables multiple researchers to innovate both at and above the infrastructure layer. It is designed to support research into both the design and the deployment of large-scale distributed services in such an environment. VICCI enables research in:
- Building-block services (addressing issues of replication, consistency, fault-tolerance, scalable performance, object location, and migration) designed to be used by other cloud applications,
- Developing new cloud programming models designed for targeted application domains, and
- Studying cross-cutting issues at the foundation of the cloud's design and how to build a trusted cloud platform that ensures confidentiality and integrity of computations that are outsourced to the cloud.
Plans include bootstrapping VICCI with working software from PlanetLab, with an ultimate goal of folding the results into VICCI itself, thus creating an even more effective platform for research into scalable network systems.
Broader Impacts: This project is strongly influenced by experience with PlanetLab, which demonstrated the importance of deploying experimental network services on realistic platforms (i.e., platforms realistic enough to attract a real user community); it provides a realistic environment in which to evaluate and deploy scalable new network services. VICCI supports deployment studies of prototype systems, thereby accelerating research and teaching by supporting the seamless migration of scalable services and applications from early prototypes. Moreover, it offers a path to re-energize the innovation process that has led to new network services, widespread consumer adoption, and the generation of new economic and social value. It also provides graduate students with extensive experience in building large-scale distributed systems and enables the design of more courses that take advantage of the instrument.
|
1 |
2010 — 2014 |
Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Fia: Collaborative Research: Architecting For Innovation
A Platform for Internet Innovation
The architectural stability of the Internet was crucial in fostering the development of new applications and networking technologies by giving the former a stable base upon which to build and giving the latter a fixed set of requirements to support. However, in recent years this architectural stability has become a liability: there are areas of increasing importance (most notably inadequate support for security and availability, and the lack of adequate mechanisms for privacy, mobility, middleboxes, and data-oriented functionality) where the original Internet architecture falls short. The persistence of the Internet's architectural deficiencies is not because they are intellectually intractable, but because they are beyond the reach of incrementally deployable changes. Based on this observation, the research team takes a different approach from recent clean-slate designs, focusing not on a new fixed architecture but instead on providing a platform that enables architectural innovation through incrementally deployable changes, without massive disruption to the infrastructure.
In this research project, the research team focuses on the "hardware-defined functionality" challenge and proposes a "platform for innovation" that allows the network infrastructure to support new architectures without changes to the underlying hardware. In particular, this approach will enable forwarding hardware to support a wide range of alternative designs. In addition, so that changes can be introduced alongside the current design, hardware will also be able to support multiple designs simultaneously.
The proposed platform will use a newly developed paradigm called Software-Defined Networks (SDN), currently embodied in the OpenFlow and NOX projects. OpenFlow is an open hardware forwarding interface. NOX is an open-source software platform that provides global abstractions to network management software and in turn communicates the decisions made by this software to the individual forwarding boxes. This effort will provide a solid foundation for more general SDN designs that are open, comprehensive and can meet long-term needs.
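The match-action abstraction at the core of OpenFlow can be suggested with a small sketch (plain Python for illustration only; these are not the actual OpenFlow or NOX APIs, and the field names and actions are hypothetical): a controller computes a policy and installs rules on each switch, and the switch forwards by matching packet headers against its rule table, punting unmatched packets to the controller.

```python
# Toy illustration of the OpenFlow-style match-action model.
# Real OpenFlow rules carry many more match fields, priorities,
# timeouts, and counters; names here are made up.

class FlowRule:
    def __init__(self, match, action):
        self.match = match    # dict: header field -> required value
        self.action = action  # e.g., "fwd:2" (output port) or "drop"

    def matches(self, packet):
        return all(packet.get(f) == v for f, v in self.match.items())

class Switch:
    def __init__(self):
        self.rules = []       # ordered: first matching rule wins

    def install(self, rule):
        self.rules.append(rule)

    def forward(self, packet):
        for rule in self.rules:
            if rule.matches(packet):
                return rule.action
        return "send_to_controller"   # table miss

# A "controller" translating a global decision into switch rules.
sw = Switch()
sw.install(FlowRule({"dst_ip": "10.0.0.2"}, "fwd:2"))
sw.install(FlowRule({"dst_ip": "10.0.0.3"}, "drop"))

print(sw.forward({"dst_ip": "10.0.0.2"}))  # fwd:2
print(sw.forward({"dst_ip": "10.0.0.9"}))  # send_to_controller
```

The key design point the abstract describes is exactly this split: the forwarding boxes only evaluate simple match-action tables, while all policy decisions live in software above them.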
The research team will also explore and demonstrate applicability of the SDN approach including abstractions and programming model for different domains of network use. These include enterprise, WAN, home, and wireless. To demonstrate the ability of the proposed platform to support innovation in radically new network mechanisms, the research team will deploy prototype novel architectures on SDN.
If successful, the proposed approach would allow the use of known approaches and design proposals currently available in the literature to address many of the Internet's current problems, as these solutions would become incrementally deployable without major disruption to the underlying infrastructure. Furthermore, current commercial efforts to address the Internet's deficiencies are disjointed, proprietary, and tailored for short-term needs. The next generation of SDN technology provides a solid basis for coordinated, long-term efforts to address critical needs in the areas of security, mobility, and support for content-centric applications and services. More importantly, the proposed approach would allow the Internet to meet future requirements as they arise through incrementally deployable modifications, relieving network designers of the burden of predicting what these future requirements might be.
|
1 |
2011 — 2016 |
Walker, David [⬀] Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Tc: Large: Collaborative Research: High-Level Language Support For Trustworthy Networks
Computer networks are now, arguably, the United States' most critical infrastructure. They carry all communication among our citizenry, our businesses, our government, and our military. Worryingly, however, today's networks are remarkably unreliable and insecure. A significant source of vulnerability is the fact that the underlying network equipment (e.g., routers and switches) runs complicated programs written in obtuse, low-level programming languages, which makes managing networks a difficult and error-prone task. Simple mistakes can have disastrous consequences, including making the network vulnerable to denial-of-service attacks, hijackings, and wide-scale outages.
The goal of this research is to transform the way that networks are managed by introducing a new class of network programming languages with the following essential features: (i) network-wide, correct-by-construction abstractions; (ii) support for fault-tolerance and scalability; (iii) coordination with end-hosts and independently-administered networks, as well as mechanisms for establishing trust between them; (iv) formal verification tools based on rigorous semantic foundations; and (v) compilers capable of generating efficient and portable code that runs on heterogeneous equipment. To demonstrate how to build a language with these features, the researchers are designing a language for OpenFlow networks called Frenetic, and evaluating it on several novel security applications. This project will have broad impact by (i) discovering key techniques for increasing the reliability of our networks, (ii) opening up the interfaces used to program networks, thereby enabling grass-roots innovation where it was previously not possible, and (iii) educating a new community of researchers with advanced skills in both networking and programming languages.
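The compositional style of network programming language described above can be suggested with a toy sketch (this is not Frenetic's actual syntax; it is a Python stand-in for the idea): policies are functions from a packet to a list of output packets, and larger policies are built from smaller ones with sequential and parallel composition operators.

```python
# Toy sketch of compositional network policies in the spirit of
# high-level languages like Frenetic. All names are illustrative.

def match(field, value):
    """Pass the packet through only if the field has the given value."""
    return lambda pkt: [pkt] if pkt.get(field) == value else []

def modify(field, value):
    """Rewrite one header field."""
    return lambda pkt: [{**pkt, field: value}]

def seq(p, q):
    """Sequential composition: feed every output of p into q."""
    return lambda pkt: [out for mid in p(pkt) for out in q(mid)]

def par(p, q):
    """Parallel composition: union of both policies' outputs."""
    return lambda pkt: p(pkt) + q(pkt)

# "HTTP traffic goes out port 2; DNS traffic goes out port 3."
policy = par(seq(match("dst_port", 80), modify("out_port", 2)),
             seq(match("dst_port", 53), modify("out_port", 3)))

print(policy({"dst_port": 80}))  # forwarded out port 2
print(policy({"dst_port": 22}))  # no rule applies: dropped
```

Because each policy is a self-contained value, policies written by different modules can be composed without interfering with one another, which is one ingredient of the "correct-by-construction abstractions" goal.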
|
1 |
2012 — 2017 |
Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Nets: Medium: Collaborative Research: Optimizing Network Support For Cloud Services: From Short-Term Measurements to Long-Term Planning
Online Service Providers (OSPs) host a wide range of application services, including email, Web search, video streaming, and multiplayer games, on servers in data centers all over the world. Each service has specific performance requirements. For example, Web search and multiplayer games need low latency, whereas video streaming and bulk file transfers need high throughput. Clients access these services from a wide variety of geographic locations over access networks with wildly different performance. Offering good performance to these diverse clients at a reasonable cost is the lifeblood of any OSP.
OSPs affect client performance by controlling content routing (selecting which data center should serve a client request) and network routing (selecting interdomain paths to clients, or paths within the OSP's own backbone), and by longer-term planning of future data centers and relationships with upstream ISPs. Unfortunately, OSPs have relatively poor visibility into end-to-end performance and do not adapt both content and network routing to maximize performance; in addition, OSP operators lack good models for deciding where to place the next server or data center, or which ISPs to select as neighbors.
To address the wide-area networking needs of online services, this project is designing, implementing, deploying, and evaluating practical techniques that allow OSPs to perform content and network routing (and make longer-term placement decisions) based on timely and accurate information about end-to-end performance and transit costs. The project is developing techniques to help OSP operators measure, control, and plan the wide-area connectivity between distributed services and their clients, and between the servers themselves. The project tasks include: (1) designing performance-measurement techniques and conducting measurement-driven studies of OSP traffic management; (2) designing, modeling, and prototyping protocols for joint optimization of content and network routing, and traffic management within an OSP backbone; and (3) driving long-term planning of server placement and ISP peer selection based on models of transit costs.
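As a rough illustration of the content-routing decisions discussed above, the sketch below picks the data center that minimizes a weighted combination of measured client latency and transit cost. All numbers, names, and the scoring formula are invented for illustration; the project's actual optimization is far richer.

```python
# Hypothetical content-routing decision: choose the data center
# minimizing latency + cost_weight * transit cost for a client.

def pick_datacenter(measurements, cost_weight=0.5):
    # measurements: {dc_name: (latency_ms, transit_cost_per_gb)}
    def score(dc):
        latency, cost = measurements[dc]
        return latency + cost_weight * cost
    return min(measurements, key=score)

probes = {
    "us-east": (20.0, 8.0),   # close to the client, pricey transit
    "eu-west": (90.0, 2.0),   # far away, cheap transit
}
print(pick_datacenter(probes))                    # latency dominates
print(pick_datacenter(probes, cost_weight=20.0))  # cost dominates
```

Even this toy version shows why the abstract emphasizes timely, accurate performance data: the decision flips depending on measured latencies and the operator's cost model.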
To evaluate our algorithms together, and "in the wild", the project will use experimental platforms for network monitoring (BISmark, M-Lab, and, where available, measurement servers in ISP backbone networks), content and network routing (DONAR and Transit Portal), cloud computing (VICCI), and programmable networking (OpenFlow).
Broader Impact: The PIs are working with industry to evaluate and deploy the solutions on operational networks. They will also continue their close collaboration on graduate networking curriculum development to include the research topics and experimental platforms in this project. As part of the project outreach, the PIs are organizing "summer camps" (drawing on their earlier experiences with summer camps for the VINI and BISmark projects) to bring under-represented students to their institutions for summer internships. The PIs will also work with under-represented regions and institutions to deploy their infrastructure and engage faculty and students in research projects using the platforms.
|
1 |
2013 — 2016 |
Rexford, Jennifer Raychaudhuri, Dipankar [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ears: Savant - High Performance Dynamic Spectrum Access Via Inter Network Collaboration @ Rutgers University New Brunswick
This project is aimed at achieving significant spectrum-efficiency gains through inter-network collaboration in radio resource management. The proposed SAVANT (spectrum access via inter-network collaboration) architecture is based on a new protocol interface for disseminating spectrum usage information, policies, and algorithms between neighboring networks, enabling spectrum-coexistence algorithms that reduce interference and improve spectrum packing efficiency. A new inter-domain spectrum coordination protocol (ISCP) is being developed to enable independent networks to negotiate radio resource management policies and, optionally, merge radio resource controllers for joint optimization.
The scope of research to be conducted includes ISCP protocol design/validation, evaluation of alternative algorithms involving network collaboration, prototype implementation and performance evaluation. The methodology for the project involves a mix of analysis, simulation and experimental prototyping. Generalized analytical models for radio localization, propagation and interference are developed and incorporated into simulation studies of inter-network cooperation using the ISCP protocol framework. These simulation models are expected to provide insight into the type of collaborative radio resource optimization algorithm to be used along with quantitative evaluation of ISCP overhead, complexity and performance. The project also includes an experimental prototyping track in which emerging software-defined network (SDN) technology is used to develop a proof-of-concept system with multiple collaborating networks.
The proposed ISCP inter-network protocol has the potential for large gains in wireless spectrum utilization, and could thus influence future industry standards. The project will also produce educational materials for training of graduate students in software-defined networking and wireless systems.
|
0.951 |
2014 — 2017 |
Turk-Browne, Nicholas (co-PI) [⬀] Tully, Christopher (co-PI) [⬀] Hillegas, Curtis (co-PI) [⬀] Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Cc*Iie Engineer: a Software-Defined Campus Network For Big-Data Sciences
Scientific researchers on university campuses create, analyze, visualize, and share large and diverse datasets from experimental devices like brain scanners, particle colliders, and genome sequencers. However, these "big data" applications place strain on traditional campus networks, due to rapidly increasing volumes of data, the need for either predictably low latency (to adapt experiments in real time) or high throughput (to transfer large data sets between locations), and sophisticated access-control policies (to protect the privacy of human subjects). To enable the next wave of scientific advances, university campuses must find effective ways to meet these challenging demands, at reasonable cost. The emerging technology of Software-Defined Networking (SDN) lowers the barrier to innovation in network management, and can substantially reduce cost through (i) inexpensive commodity network switches, (ii) greater automation of network configuration, and (iii) novel network-management applications that optimize bandwidth usage. Yet, existing innovation in SDN focuses primarily on the needs of commercial cloud providers, rather than the unique requirements of university campuses and scientific researchers. Princeton University is creating a software-defined campus network that can enable the next generation of data-driven scientific research. The initiative brings together big-data science researchers, computer scientists who are experts in SDN, and the campus Office of Information Technology. Princeton is deploying an open-source SDN platform for monitoring and configuring the network, conducting trials of new ways to support big-data applications, and bridging with the larger community, on and off campus, to support the sharing of scientific data, SDN software, and operational experiences.
|
1 |
2014 — 2017 |
Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Nets: Medium: Collaborative Research: a Software Defined Internet Exchange
The Border Gateway Protocol (BGP) is the protocol used to administer and control the flow of traffic between the separately administered networks that connect together to form the Internet. Because many of the current failings of the Internet are due to BGP's poor performance and limited functionality, this project aims to explore incrementally deployable ways to leverage the power of Software-Defined Networking (SDN) to improve interdomain routing. These improvements will facilitate a higher return on investment via load balancing and traffic engineering, increased capabilities to respond to denial-of-service attacks, and new services such as application-specific peering, where two networks exchange traffic only for certain applications (e.g., video). Additionally, the project will improve the ability of network operators to track and engineer peering relationships based on traffic volume.
This project exploits the re-emergence of Internet eXchange Points (IXPs) to create Software Defined eXchanges (SDXs) that fundamentally change network control. The project has two major themes: (1) near-term solutions that coexist with BGP; and (2) long-term solutions that replace BGP entirely, using IXPs as the dominant mode of interconnection. In terms of near-term solutions, the central intellectual question explores the improvements that are possible when a single IXP deploys SDN-based technology. Longer term, assuming that SDXs will one day become more prominent, the project is developing solutions that replace BGP entirely with an SDX-mediated Internet, where all peering takes place at these interconnection points. Such a design would make policy relevant only to the endpoints (the sending and receiving domains) and would eliminate policy complications from intermediate providers. The project is also investigating how these endpoint policies might emerge, how the inter-SDX routing is done, how the longer-term design might be incrementally deployed, and what its impact might be on the provider ecosystem. The SDX design may point the way to a more stable, secure, and economically sound Internet.
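A minimal sketch of application-specific peering, one of the SDX capabilities described above: traffic for a particular application is sent directly to a dedicated peer, while all other traffic follows the ordinary BGP-selected next hop. Peer names, AS labels, and the crude application test are hypothetical; a real SDX matches on many more header fields and policies.

```python
# Toy sketch of application-specific peering at a software-defined
# exchange point. All names and numbers are illustrative.

BGP_NEXT_HOP = "transit_AS100"   # default route learned via BGP

def sdx_next_hop(packet):
    """Send video-over-HTTPS traffic directly to a video peer;
    everything else follows the ordinary BGP decision."""
    if packet.get("dst_port") == 443 and packet.get("app") == "video":
        return "peer_videoAS"
    return BGP_NEXT_HOP

print(sdx_next_hop({"dst_port": 443, "app": "video"}))  # peer_videoAS
print(sdx_next_hop({"dst_port": 25}))                   # transit_AS100
```

The point of the sketch is the expressiveness gap: BGP alone selects next hops per destination prefix, whereas an SDX can condition forwarding on application-level attributes.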
|
1 |
2015 — 2019 |
Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Aitf: Full: Collaborative Research: Compact Data Structures For Traffic Measurement in Software-Defined Networks
Software-Defined Networking (SDN) is changing the way networks are designed and managed, by separating the "control plane" (which decides how to handle the traffic) from the "data plane" (which actually forwards each packet). Many large companies---like Google, Microsoft, and Facebook---have already deployed SDN technology, and many equipment vendors support open interfaces for programming their switches. While most work on SDN focuses on how to control the network, measuring the traffic in the network is equally important. Traffic measurement is useful to identify congested links, denial-of-service attacks, performance problems, and configuration mistakes, and also drives decisions of how the network should forward traffic in the future. However, the support for traffic measurement in today's commodity switches is quite primitive. In this proposal, the PIs bring algorithmic research on so-called "compact data structures" to bear on the problem of programmable traffic measurement in SDNs. Compact data structures can give approximate answers to measurement questions with limited overhead in terms of switch memory and processing resources.
The project is interdisciplinary, bringing together researchers in computer networking and theoretical computer science to match practical problems with novel solutions. The proposed research starts with designing new query abstractions for collecting traffic statistics on existing SDN switches, and then progresses to identifying new compact data structures so that future switches can support much richer traffic measurement at reasonable overhead. The researchers have close ties with network administrators and switch vendors, allowing them to ground the project in a strong understanding of both operational requirements and hardware constraints, and also influence future SDN technology.
This project aims to identify a switch data-plane architecture for collecting diverse traffic statistics, as well as a small set of programmable sketches and samples that let a variety of analyses trade off accuracy and resources. The architecture will include a measurement control API between the controller and the switch, which requires a communication-efficient interface, along with a high-level language for specifying traffic queries and a run-time system on the controller that compiles these queries into commands to the switches with suitable compact data structures. These challenges will be addressed using the OpenFlow API, which is widely used in SDNs, as well as in new redesigns. This work is a conversation between the networking and algorithmic communities, mutually informing each other on what is possible, what is required, and ultimately what is effective and useful.
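One classic example of the kind of compact data structure the project builds on is the Count-Min sketch, which approximates per-flow counts in memory far smaller than one counter per flow; hash collisions can only inflate an estimate, never deflate it. A minimal implementation (parameters chosen for illustration):

```python
import hashlib

# Minimal Count-Min sketch: depth hash rows, each of fixed width.
# Every key increments one counter per row; the estimate is the
# minimum over its counters, so it never undercounts.

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # A distinct hash per row, derived from a cryptographic hash.
        h = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))

cms = CountMinSketch()
for _ in range(1000):
    cms.add("10.0.0.1->10.0.0.2")   # 1000 packets of one flow
print(cms.estimate("10.0.0.1->10.0.0.2"))  # at least 1000
```

The memory/accuracy trade-off the abstract mentions is visible in the constructor: shrinking `width` and `depth` saves switch memory at the price of more collision-induced overcounting.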
|
1 |
2017 — 2018 |
Feamster, Nicholas Rexford, Jennifer |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Workshop On Self-Driving Networks
This workshop brings together leading researchers from a range of disciplines across computer science to define a new research agenda in network measurement and data analytics with the goal of exploring how to design networks that manage themselves. These experts will explore taking advantage of advances in disciplines including machine learning, distributed systems, and formal methods to address growing requirements and constraints of modern networking applications.
Because of the proliferation of applications and services that now run over the Internet---ranging from video streaming to Internet-connected smart home devices to augmented reality---the expectations for the performance, reliability, and security of our communications networks are greater than ever, and they continue to rise as the volume of traffic on the network grows. To meet these expectations, network operators work tirelessly to continuously collect troves of heterogeneous data from the network, analyze this data to infer characteristics about the network, and decide whether to change the network's configuration in response to network conditions (e.g., a shift in traffic demand or a cyber attack). Today, these three steps are decoupled: operators perform them separately, on different timescales, often in a slow or manual fashion that relies on intuition rather than data, analysis, and inference. The vision for this workshop is that networks might one day be able to largely manage themselves through a combination of query-driven network measurement, automated inference techniques, and programmatic control.
Intellectual Merit: The research agenda lends itself to research problems that will foster advances in computer science, including the following areas:
1. Distributed systems: optimizing the use of limited resources for complex tasks, including support for multiple simultaneous queries; new architectures to support programmable measurement in hardware; algorithms that partition a network analytics query across a centralized stream processor and the distributed switches and network middleboxes.
2. Measurement: new techniques (beyond "ping" and "traceroute") that leverage the capabilities of P4-capable data planes (e.g., in-band telemetry); software/hardware co-design for better network measurements; clean-slate, problem-driven designs for new network measurement tools that might tackle problems that have proved evasive (e.g., application quality of experience); measurement of unified compute, storage, and networking infrastructure, including monitoring of container-based systems.
3. Machine learning: new algorithms for automated troubleshooting and "what-if" scenario evaluation; development of parsimonious models that could be implemented (at least partially) at line rate on switch hardware; prediction and inference over non-stationary datasets that adapt to changing traffic patterns.
4. Security and privacy: scalable algorithms and systems for detecting a broad range of attacks, from denial of service to data exfiltration; better ways to monitor application performance without having to perform man-in-the-middle attacks on traffic.
Broader Impacts: Results from this workshop will be broadly distributed so that researchers in all of the areas noted above will benefit from the discussions, conclusions and recommendations resulting from the workshop. Research inspired by the workshop could have broad societal impacts by helping network operators envision how to integrate measurement, data analysis, and configuration decisions and move toward automated network control.
|
1 |
2017 — 2021 |
Rexford, Jennifer Feamster, Nicholas |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Nets: Medium: From Packets to Insights: Programmable Streaming Analytics For Networks
The ability to monitor Internet traffic on our communications networks is of critical importance to our nation's economic prosperity and national security. For communications networks to run well, however, network operators must be able to manage them: they must be able to detect, diagnose, and fix problems that degrade the performance of the applications we use, and they must be able to detect and mitigate attacks against the infrastructure. To ensure that computer networks are secure and perform well, network operators need to gather measurements to detect attack traffic, diagnose performance problems, identify flaky equipment, drive traffic-engineering decisions, and more. Although network devices provide reasonable mechanisms for monitoring the control plane---the part of the network responsible for routing packets through the network---the tools and mechanisms for monitoring the flow of network traffic remain primitive (e.g., ping and traceroute for active measurement, NetFlow and sFlow for passive measurement). These measurements provide coarse statistics about network traffic or conditions, but they provide at once both too little information (because they obscure important details about the flows, such as packet timings, queue sizes, and loss rates) and too much information (because, for any particular question about performance or security, the operator needs detailed information about a few flows, as opposed to coarse information about all of them). This project aims to develop measurements that are "just right" for each of the above tasks and to design a data-analytics platform for querying the data that can be used to diagnose and mitigate network problems. Two technological trends enable fundamentally new paradigms for network measurement.
The first trend is the rise of programmable network hardware -- including reconfigurable application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and network processors -- that is fast and inexpensive enough for use in commodity switches, and also programmable in target-independent languages like P4. The second trend is the emergence of scalable streaming analytics platforms, such as Spark Streaming and Apache Storm. These platforms make it possible to express queries based on streams of tuples and efficiently filter and aggregate the data. Using the programmable functionality of these switches, one can define the types of tuples that a switch exports, and even perform simple computations over the tuples directly in the data plane. Given input tuple streams from one or more switches, the stream processor can compute the answer to a high-level query. This project is developing a streaming analytics framework that addresses these challenges. The researchers will develop a query language with familiar programming paradigms from existing streaming analytics platforms, which they will extend to support domain-specific primitives. They are also developing a runtime system that partitions this query across the stream processor and the switches in the data plane. Queries will entail network-wide aggregation, iterative "drill down" capabilities, and joins with external data sources (e.g., routing, application identification). The researchers are evaluating the feasibility and usability of this platform in the context of a wide range of security and performance diagnosis queries that arise in operational networks.
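In the query model described above, switches export tuples and a stream processor filters and aggregates them. A small stand-in query might look like the following (field names, thresholds, and the specific query are illustrative assumptions, not part of the project's design):

```python
from collections import Counter

# Sketch of a streaming query over switch-exported tuples:
# filter to TCP SYN packets, group by destination, and flag
# destinations receiving a suspicious number of SYNs.

def heavy_hitter_query(tuples, threshold):
    syns = (t for t in tuples
            if t["proto"] == "tcp" and t["flags"] == "S")
    counts = Counter(t["dst_ip"] for t in syns)
    return {ip: n for ip, n in counts.items() if n > threshold}

# A synthetic tuple stream: one destination under a SYN flood.
stream = ([{"proto": "tcp", "flags": "S", "dst_ip": "10.0.0.5"}] * 120 +
          [{"proto": "tcp", "flags": "S", "dst_ip": "10.0.0.6"}] * 3 +
          [{"proto": "udp", "flags": "-", "dst_ip": "10.0.0.5"}] * 50)

print(heavy_hitter_query(stream, threshold=100))  # {'10.0.0.5': 120}
```

The partitioning question the project studies is where each stage of such a query should run: the filter and per-key counting could execute on the switch itself, leaving only the final thresholding and cross-switch aggregation to the stream processor.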
|
1 |
2018 — 2022 |
Rexford, Jennifer Walker, David (co-PI) [⬀] Gupta, Aarti [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Fmitf: Openrdc: a Framework For Implementing Open, Reliable, Distributed, Network Control
Computer networks, whether connecting servers across a data center or users across the globe, are an important part of society's critical infrastructure. However, existing network protocols and services are simply not worthy of the trust society now places in them. Today's networks suffer from poor performance, cyberattacks, configuration errors, software bugs, and more, leading to serious consequences for consumers, businesses, and governments alike. The goal of this project is to enable the design and operation of better networks, which requires both innovation (to create better protocols and services) and verification (to ensure these services work correctly). A major part of the functionality of the network depends on the software running in the control plane, which computes routes, collects and analyzes network measurement data, balances load over multiple paths or servers, and even hosts in-network applications. This project involves the theory, design, and implementation of OpenRDC, a new platform for programming reliable, distributed network control planes.
The technical core of OpenRDC centers around computations of Stable Information Trees (SITs) that communicate information (e.g., traffic conditions, failure information, available external routes, end-host job statistics, etc.) across a network, and then perform local actions to change network functionality or record information gathered. These structured computations suffice to express core control plane algorithms and yet can also be converted into logical representations that can be used to verify a variety of important properties of operational networks ranging from reachability to access control to multi-path consistency. The OpenRDC platform will simultaneously: (1) allow researchers to develop new control-plane algorithms, (2) enable automatic verification of network properties, and (3) make use of emerging programmable switch capabilities. The project involves acceleration of the development of new control-plane algorithms, via new abstractions for network programming. The project will also define new compiler technology for translating these abstractions to programmable network hardware. In addition, its open source infrastructure will lay a foundation for academic and industrial engagement and for the training of students. The project will also have impact on formal methods, with new algorithms for the verification of graph-oriented programming languages based on abstraction and modular decomposition.
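The idea of a control-plane computation that converges to a stable result by repeatedly merging information from neighbors can be illustrated with a toy shortest-path computation (the topology and costs are invented, and real SITs generalize far beyond shortest paths; this is only a sketch of the fixed-point flavor of such computations):

```python
# Toy stable distributed computation: each node repeatedly merges
# its neighbors' advertised distances to a root until no entry
# changes, i.e., until the computation reaches a fixed point.

def converge(links, root):
    # links: {node: {neighbor: cost}}; returns distance-to-root map
    dist = {n: float("inf") for n in links}
    dist[root] = 0
    changed = True
    while changed:                  # iterate to a stable state
        changed = False
        for n, nbrs in links.items():
            best = min([dist[m] + c for m, c in nbrs.items()]
                       + [dist[n]])
            if best < dist[n]:
                dist[n], changed = best, True
    return dist

topo = {"a": {"b": 1, "c": 5},
        "b": {"a": 1, "c": 2},
        "c": {"a": 5, "b": 2}}
print(converge(topo, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

Because the final state is a fixed point of simple local merge rules, it is exactly the kind of structured computation that can also be handed to a verifier to check network-wide properties such as reachability.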
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2020 — 2022 |
Rexford, Jennifer Brassil, John |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Cc* Integration-Small: Science Traffic as a Service (Staas)
Future advances in scientific research will require computing on massive datasets and high-bandwidth streaming of scientific instrument data. New experimental research infrastructures will be required to advance the understanding of the networks capable of supporting these increasingly demanding science data flows. Testing advances in networking technologies and protocols with actual high-speed science data traffic is vital to networking experimenters, scientific instrument users, and data scientists. To address this need, this project will develop a prototype of a decentralized computing and networking system to create, collect, and distribute a diverse collection of real and synthetic science traffic flows to the experimental research infrastructure user community. The proposed work will first develop and deploy the Science Traffic as a Service (STAAS) prototype on the Network Programming Initiative testbed connecting two US universities, and then prepare STAAS for later nationwide deployment on the FABRIC midscale networking research infrastructure now under development. Students exposed to research on networking testbeds with demanding science-traffic workloads will learn skills that help strengthen a workforce prepared to advance the global-scale cloud application service platforms that are increasingly central to the US economy. All documents, software, presentations, and other artifacts created under this project will be made publicly available at http://www.cs.princeton.edu/~jbrassil/public/projects/staas/
The key project insight is that many science flows are already in transit at any moment on or between campuses. Using new campus cyberinfrastructure, including passive optical Test Access Points, Network Packet Brokers, and data-plane-programmable Ethernet switches, STAAS will safely tap and forward copies of these flows onto the experimental testbed, while preserving both the timing integrity of the flows and the data privacy of their payloads. Large-scale, high-bandwidth experiments will be achieved by enlisting the participation of many or all STAAS edge nodes on multiple campuses. By introducing a service-based model, STAAS can help advance the networking research community's transport of emerging science data, and help the operators of scientific instruments increase the amount and quality of data their instruments collect.
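The tap-and-forward idea above, copying flows while preserving timing and scrubbing payloads, can be sketched minimally as follows. This is not STAAS code; the record format (timestamp, header, payload) and the function name are assumptions for illustration.

```python
# Hypothetical sketch of tap-and-forward (not the STAAS implementation):
# copy each tapped packet, zeroing the payload for privacy while keeping
# the original timestamp and payload length for timing/size fidelity.

def mirror_flow(packets):
    """packets: list of (timestamp, header_dict, payload_bytes).
    Returns privacy-scrubbed copies suitable for replay on a testbed."""
    mirrored = []
    for ts, hdr, payload in packets:
        scrubbed = b"\x00" * len(payload)   # drop content, keep size
        mirrored.append((ts, dict(hdr), scrubbed))
    return mirrored

tap = [(0.0, {"src": "10.0.0.1", "dst": "10.0.0.2"}, b"secret data")]
copy = mirror_flow(tap)
print(copy[0][0], len(copy[0][2]))  # original timestamp, same length
```

Keeping the original timestamps and packet sizes is what lets experimenters observe realistic traffic dynamics without ever seeing user data.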
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |
2022 — 2024 |
Rexford, Jennifer Walker, David [⬀] Kim, Hyojoon |
N/A |
IMR: MT: Tools for Programming Distributed Data-Plane Measurements
Understanding the flow of traffic across key networks, what it is composed of and how it changes, is critical for improving modern information services. Traditionally, however, it has been difficult for researchers to develop new tools for dissecting this traffic and analyzing its characteristics while taking care to maintain user privacy. Recently, the advent of relatively inexpensive programmable switches has made it possible to build diagnostic tools and place them directly inside the network, on the path through which traffic flows. In such a position, new tools have the potential to observe all traffic as it flows by, for instance from a university campus to the broader Internet, or along a corporate wide-area network or data center. Unfortunately, while it is possible to develop such tools, doing so is currently a difficult and error-prone process. To ameliorate this situation, the research team will develop Lucid, a new programming language and system that will facilitate the process of developing, debugging, and deploying network measurement tools in live programmable networks. The team will deliver a compiler that translates high-level Lucid programs into lower-level code that executes in multiple places: directly on programmable switches, or in a supporting role on servers connected to the network in question. In addition, the team will deliver a collection of reusable components that network measurement researchers can plug together to get started on a new idea quickly. To help teach researchers how to use the new language, the team is developing tutorials for major conferences in networking.
To summarize, this project will impact the performance, reliability, and security of critical networks by facilitating the development of new measurement tools that can discover network optimization opportunities, detect failures, and rapidly recognize attacks that disrupt online services.

Traditional measurement tools and datasets, while incredibly useful, have significant limitations in scale and coverage. Measurement researchers should capitalize on the exciting advances in programmable data planes to analyze Internet traffic and performance as packets traverse the network. Analyzing traffic directly in the data plane (e.g., in network switches and routers) enables sophisticated analysis without sacrificing efficiency or divulging sensitive user information, and enterprise networks, such as university campuses, provide an excellent opportunity to use these programmable data planes in practice. However, programming the data plane is not easy. Existing languages, such as P4, are very low-level, have an extremely steep learning curve, and are notoriously difficult to work with (seemingly legitimate programs often fail to compile). This project addresses these pain points by delivering new programming support in the form of Lucid, a high-level language designed to support cooperative measurement across multiple locations and device types. More specifically, the research team is developing compilers that target both Intel Tofino programmable switches (via P4) and software servers (via eBPF). Using both kinds of devices, researchers will be able to develop and deploy a range of distributed measurement tools. The research team will also develop an interpreter for the language so that new research ideas may be developed and debugged prior to deployment.
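The kind of per-packet measurement such a language targets can be illustrated with a count-min sketch, a standard data structure for estimating flow sizes with bounded memory. The Python sketch below is illustrative only (a Lucid program would express similar logic for switch hardware), and its hash construction and table sizes are arbitrary choices, not anything prescribed by the project.

```python
# Illustrative count-min sketch: the style of fixed-memory, per-packet
# measurement state that data-plane measurement tools maintain.
# Hash construction and dimensions here are arbitrary illustrative choices.
import hashlib

class CountMin:
    def __init__(self, rows=3, cols=64):
        self.rows, self.cols = rows, cols
        self.table = [[0] * cols for _ in range(rows)]

    def _idx(self, key, row):
        # Derive an independent-ish hash per row from sha256.
        h = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(h[:4], "big") % self.cols

    def add(self, key):
        # Executed once per packet: bump one counter in each row.
        for r in range(self.rows):
            self.table[r][self._idx(key, r)] += 1

    def estimate(self, key):
        # Minimum over rows; may over-count, never under-counts.
        return min(self.table[r][self._idx(key, r)]
                   for r in range(self.rows))

cm = CountMin()
for _ in range(5):
    cm.add("10.0.0.1->10.0.0.2")
print(cm.estimate("10.0.0.1->10.0.0.2"))  # 5 (single key, no collisions)
```

Structures like this fit in switch memory precisely because their size is fixed in advance regardless of traffic volume, which is why they recur throughout data-plane measurement work.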
The infrastructure developed by the research team will also include a suite of libraries that implement key data structures and utilities useful in network measurement and in support of data privacy. To teach the community how to use the language, libraries, tools, and infrastructure, the team will develop documentation and tutorials.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |