2006 — 2010
Engler, Dawson (co-PI); Levis, Philip
Collaborative Research: CSR-EHS: Improving Sensor Network Software Reliability Through Language, Tool, and OS Co-Design
Wireless sensor networks enable fine-grained, real-time information collection from the real world. Sensor network software must be reliable because it is long-lived, large-scale, and deeply embedded. This research project addresses the challenge of improving the reliability of component-based wireless sensor network software through the parallel co-design of an operating system, its language, and supporting program analysis tools. The project focuses on TinyOS and seeks to solve difficult component composition problems that even expert developers encounter. The long-term vision is to make it feasible for non-expert developers to create robust applications largely from existing components. The research is based on three complementary approaches. First, the PIs are developing tool support that gives developers advice about how to meet time constraints. Timing problems are difficult to deal with in TinyOS because they cut across component boundaries in non-intuitive ways. Second, the PIs are adding support for component interface contracts to TinyOS. Contracts verify that the "rules" for using a component are respected, pinpointing errors when developers misunderstand or misuse an interface and avoiding difficult debugging sessions. Finally, based on their experience with the tradeoffs between static and dynamic timing and contract checking, the PIs are revisiting the basic abstractions and structure of TinyOS, redesigning them to be more easily checkable, and therefore more reliable. The intent is to improve reliability by rendering many classes of bugs impossible by design, rather than relying on heroic analysis and testing techniques.
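As an illustration of the interface-contract idea, the sketch below enforces the call-order rules of a hypothetical split-phase radio interface. The class and rule names are invented for this example; they are not part of TinyOS or nesC.

```python
# Illustrative sketch (not TinyOS/nesC): a contract that enforces the
# "rules" of a split-phase interface -- send() may only be called after
# start(), and not while another send is already pending.

class ContractViolation(Exception):
    """Raised at the offending call, pinpointing the interface misuse."""
    pass

class RadioContract:
    """Tracks the abstract state of a radio component's interface."""
    def __init__(self):
        self.started = False
        self.send_pending = False

    def check_start(self):
        if self.started:
            raise ContractViolation("start() called twice")
        self.started = True

    def check_send(self):
        if not self.started:
            raise ContractViolation("send() before start()")
        if self.send_pending:
            raise ContractViolation("send() while a send is pending")
        self.send_pending = True

    def check_send_done(self):
        if not self.send_pending:
            raise ContractViolation("sendDone() without a pending send()")
        self.send_pending = False
```

A misuse such as calling `check_send()` before `check_start()` raises immediately at the offending call, mirroring how a contract turns a later debugging session into an error at the point of misuse.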
2008 — 2012
Hanrahan, Patrick (co-PI); Koltun, Vladlen; Levis, Philip
Collaborative Research: NeTS-ANET: A Network Architecture for Federated Virtual/Physical Worlds
This research asks how one might design a network architecture to support three-dimensional virtual worlds as a dominant application platform. The architecture is based on three key design principles. First, rather than being centralized or peer-to-peer, the architecture is based on federation: cooperative but not necessarily collaborative interaction between multiple parties. This enables providers to enforce local administrative and security policies, but requires new support for discovery, messaging, and migration between and within domains. Second, application communication is grounded in three-dimensional coordinate spaces: objects can only communicate after being introduced through proximity. This geometric addressing decouples applications from their physical locations on hosts, and introduces interesting security protections against unwanted communication. Third, by using this communication model, the architecture can directly interface with and connect to the physical world, leading to new possibilities for virtual interactions.
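The proximity-introduction idea can be sketched concretely. The toy Python model below (all names invented, not Meru's API) only delivers a message between two objects once the world has introduced them through spatial proximity:

```python
# Hypothetical sketch of geometry-based addressing: objects may only
# exchange messages after the world has "introduced" them through
# spatial proximity. Names (World, place, send) are illustrative.
import math

class World:
    def __init__(self, radius):
        self.radius = radius          # introduction range
        self.positions = {}           # object id -> (x, y, z)
        self.introduced = set()       # frozenset({a, b}) pairs

    def place(self, obj_id, pos):
        self.positions[obj_id] = pos
        # Introduce this object to every neighbor within range.
        for other, p in self.positions.items():
            if other != obj_id and math.dist(pos, p) <= self.radius:
                self.introduced.add(frozenset((obj_id, other)))

    def send(self, src, dst, msg):
        # Communication requires a prior proximity introduction.
        if frozenset((src, dst)) not in self.introduced:
            raise PermissionError(f"{src} was never introduced to {dst}")
        return (dst, msg)
```

In this sketch an object far outside the introduction radius simply cannot be addressed, which is the source of the security protection against unwanted communication described above.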
Much as the Internet was designed with a layered communication model, this research designs a new layered approach for virtual worlds: from a high-level object layer providing a rich programming environment for immersive virtual worlds, to the narrow waist of geometry-based communication, down to the underlying service layer that implements computation, storage, and communication mechanisms. With backgrounds across networking, systems, and graphics, the investigators have previously been developing a highly extensible and personalizable virtual world system, Meru. This new project will develop the network architecture necessary to enable seamless interaction and interoperation among many different Meru-based virtual worlds.
Integrating virtual worlds is already a pressing issue and concern among providers. Research towards a unifying networked system architecture would improve these efforts and could lay the groundwork for a next-generation programming platform for the Internet. It would bridge the current divide between the logical, host-centric networks of today and the emerging sensor networks of tomorrow. By incorporating existing efforts towards building an open, scalable virtual world system, the research will have impact in all of the areas where virtual worlds are already bringing change. Fundamentally, virtual worlds, even more so than the Internet, are a platform for inter-personal communication, affecting education, public services and planning, commerce, and social networks.
2009 — 2015
Levis, Philip
CAREER: Visibility as a Wireless Sensor Network Design Principle
Tiny, embedded computers will collect real-time data on our homes, our cities, and our planet. Collecting and processing rich streams of sensor data will transform public health, medicine, natural resource management, science, engineering, and disaster response. But today, despite years of research and engineering, these sensor networks often fail to meet their performance goals. Data yields are low; networks last weeks, rather than months; long downtimes are common.
Professor Levis proposes a long-term agenda of research and education to improve the robustness, manageability, and scalability of low-power wireless sensing systems. The dominant design principle behind this agenda is network visibility. Described colloquially, visibility measures a user's ability to identify the cause of a network event, such as a packet drop. We propose to research how to make networks more visible.
The research agenda is grounded in the exploration, development, and evaluation of the Mote Network (MNet) architecture, an open-source protocol suite and toolkit for sensor network application development and deployment. The protocol suite will include existing dominant protocols redesigned for improved visibility as well as novel protocols whose designs maximize visibility. Our principal goal is to make long-lived sensornets significantly simpler to deploy and maintain. Our second goal is to apply our lessons learned to wireless meshes more generally.
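To make the visibility principle concrete, here is a toy sketch (the class and cause names are invented, not part of MNet) of a send queue that attributes every packet drop to an explicit cause, so an operator can ask why packets were lost rather than only how many:

```python
# Illustrative sketch of the visibility idea: every packet drop is
# recorded with an explicit cause. The cause names are invented.
from collections import Counter

class VisibleQueue:
    def __init__(self, capacity, max_retries):
        self.capacity = capacity
        self.max_retries = max_retries
        self.queue = []
        self.drop_causes = Counter()

    def enqueue(self, pkt):
        if len(self.queue) >= self.capacity:
            self.drop_causes["queue_overflow"] += 1
            return False
        self.queue.append(pkt)
        return True

    def transmit_failed(self, pkt, attempts):
        if attempts >= self.max_retries:
            self.drop_causes["retransmit_limit"] += 1
            return False   # give up on this packet
        return True        # caller should retry

    def report(self):
        # The visibility payoff: causes, not just a loss count.
        return dict(self.drop_causes)
```

A conventional stack would report only a total loss rate; here `report()` distinguishes overflow drops from retransmit-limit drops, which is the kind of cause attribution the visibility principle asks protocols to support by design.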
2011 — 2014
Levis, Philip; Katti, Sachin
NeTS: Medium: Full Duplex Wireless
Wireless networking assumes that radios are half-duplex. On a given frequency, a half-duplex radio can either transmit or receive, but not both at the same time. The project disproves this long-held assumption; it shows how a radio that can transmit and receive simultaneously on the same frequency can be built using commodity off-the-shelf components. The design is based on two key ideas. First is the design of analog circuits that perform adaptive signal inversion, i.e., take an input RF signal and produce its exact inverse, and programmatically adapt the attenuation and delay of the inverted signal to match the self-interference experienced by the received signal. This enables the design of wideband full duplex radios that can handle transmit powers up to 20 dBm. Second, the project exploits the full duplex primitive to design a real-time bidirectional channel for control and data, as well as more complex patterns such as chains. By interspersing control and data information in a message, nodes can dynamically react to channel changes in real time.
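The adaptive-inversion step can be sketched numerically. The toy function below (an illustration in digital samples, not the project's analog circuit design) searches over attenuation and integer-sample delay for the inverted copy of the known transmit signal that best cancels self-interference in the received samples:

```python
# Toy sketch of adaptive signal inversion: subtract the best-fit
# attenuated, delayed copy of the known transmit signal from the
# received signal. Real designs do this in analog RF circuits.

def cancel_self_interference(tx, rx, max_delay=8):
    """Return rx minus the best-fit attenuated, delayed copy of tx."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    best = (0.0, 0, float("inf"))      # (attenuation, delay, residual power)
    for d in range(max_delay + 1):
        delayed = ([0.0] * d + list(tx))[:len(rx)]
        denom = dot(delayed, delayed)
        if denom == 0:
            continue
        a = dot(rx, delayed) / denom   # least-squares attenuation estimate
        residual = [r - a * s for r, s in zip(rx, delayed)]
        power = dot(residual, residual)
        if power < best[2]:
            best = (a, d, power)

    a, d, _ = best
    delayed = ([0.0] * d + list(tx))[:len(rx)]
    return [r - a * s for r, s in zip(rx, delayed)]
```

When the self-interference really is an attenuated, delayed copy of the transmit signal, the residual that remains after cancellation is dominated by the (much weaker) desired signal, which is the property full duplex operation depends on.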
The above two primitives - full-duplex operation and a real-time bidirectional control channel - can help solve many long-standing fundamental problems in wireless networks, including hidden terminals, bitrate adaptation, network congestion, resource allocation, and unfairness. The project will produce a prototype full duplex radio for WiFi-style networks and show experimentally how it can improve their performance. Further, all designs will be made public through research publications and open-source hardware designs.
2014 — 2018
Fedkiw, Ronald (co-PI); Levis, Philip
CSR: Medium: A Computing Cloud for Graphical Simulation
Today, many graphical simulations run on a single powerful server or a small cluster of high-performance, high-cost nodes. This research aims to answer the question -- is it possible to run graphical simulations in the computational cloud? -- by designing and implementing Nimbus, a software framework for graphical simulation in the computing cloud. The goal is to be able to run large, complex simulations using on-demand cloud computing systems. Nimbus supports PhysBAM, an open-source graphical simulation package developed and maintained by Principal Investigator Fedkiw. The project will collaborate with existing PhysBAM users to support the Nimbus software for broader use and adoption.
Nimbus focuses on three important principles to support graphical simulations running on hundreds to thousands of cloud servers. First is decoupling data access and layout. Nimbus represents data in three layers: program, logical, and physical. These layers separate the units that a program operates on (program) from the units that the Nimbus software manages and transfers (logical) from how data is laid out in actual computer memory (physical). Second is non-uniform, geometry-aware data placement. Nimbus uses the fact that simulations have a basic underlying geometry to intelligently place data and computation. This geometry is explicit in the Nimbus software, which knows that nearby regions of the simulation should be placed on nearby computers. Third is dynamic assignment and load balancing: graphical simulations today divide the simulation volume equally across computers, despite the fact that some regions require much more computation than others. Nimbus divides a simulation into a larger number of smaller partitions, which it dynamically assigns and moves as load changes to reduce running time while accounting for inter-partition communication. These three principles give Nimbus tremendous flexibility. The system breaks a simulation into small pieces that a controller computer sends to worker computers to compute. The worker computers decide when to schedule these simulation pieces and how to assign processors to different pieces. The runtime automatically moves data in the most efficient manner possible as needed, compressing data and replicating it when having multiple copies for different pieces increases performance. Discovering how these applications can be run on modern data center computing systems will help bring arithmetically intensive scientific computing to the cloud.
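The geometry-aware placement principle can be illustrated with a small sketch (function names invented, not Nimbus's API): partitions of a one-dimensional simulation domain are assigned to workers in contiguous spatial runs, so neighboring regions land on the same worker and cross-worker communication occurs only at run boundaries:

```python
# Illustrative sketch of geometry-aware placement for a 1-D domain:
# spatially adjacent partitions are kept on the same worker whenever
# possible, minimizing the partition-pair edges that cross workers.

def place_partitions(num_partitions, num_workers):
    """Map partition index -> worker id, keeping spatial neighbors together."""
    assignment = {}
    per_worker = num_partitions / num_workers
    for p in range(num_partitions):
        assignment[p] = min(int(p / per_worker), num_workers - 1)
    return assignment

def cross_worker_edges(assignment):
    """Count neighboring partition pairs that live on different workers."""
    return sum(
        1
        for p in range(len(assignment) - 1)
        if assignment[p] != assignment[p + 1]
    )
```

With 12 partitions on 3 workers, this contiguous placement yields only 2 cross-worker boundaries, whereas a round-robin placement would put every neighboring pair on different workers; that difference is the communication cost that geometry-aware placement avoids.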
As Exascale and other supercomputing efforts gain momentum, they will need to deal at scale with the same issues cloud systems have been tackling for the past decade: stragglers, failures, and heterogeneity. By focusing on one particularly compelling application, this work will establish an intellectual framework for future, broader efforts.
2015 — 2018
Boneh, Dan (co-PI); Engler, Dawson (co-PI); Winstein, Keith (co-PI); Horowitz, Mark (co-PI); Levis, Philip
Synergy: Collaborative: CPS-Security: End-to-End Security for the Internet of Things
Computation is everywhere. Greeting cards have processors that play songs. Fireworks have processors for precisely timing their detonation. Computers are in engines, monitoring combustion and performance. They are in our homes, hospitals, offices, ovens, planes, trains, and automobiles. These computers, when networked, will form the Internet of Things (IoT). The resulting applications and services have the potential to be even more transformative than the World Wide Web. The security implications are enormous. Internet threats today steal credit cards. Internet threats tomorrow will disable home security systems, flood fields, and disrupt hospitals. The root problem is that these applications consist of software on tiny low-power devices and cloud servers, involve difficult networking, and collect sensitive data that deserves strong cryptography, yet they are usually written by developers who have expertise in none of these areas. The goal of the research is to make it possible for two developers to build a complete, secure Internet of Things application in three months.
The research focuses on four important principles. The first is "distributed model view controller." A developer writes an application as a distributed pipeline of model-view-controller systems. A model specifies what data the application generates and stores, while a new abstraction called a transform specifies how data moves from one model to another. The second is "embedded-gateway-cloud." A common architecture dominates Internet of Things applications. Embedded devices communicate with a gateway over low-power wireless. The gateway processes data and communicates with cloud systems in the broader Internet. Focusing distributed model view controller on this dominant architecture constrains the problem sufficiently to make problems such as system security tractable. The third is "end-to-end security." Data emerges encrypted from embedded devices and can only be decrypted by end user applications. Servers can compute on encrypted data, and many parties can collaboratively compute results without learning the input. Analysis of the data processing pipeline allows the system and runtime to assert and verify security properties of the whole application. The final principle is "software-defined hardware." Because designing new embedded device hardware is time-consuming, developers rely on general, overkill solutions and ignore the resulting security implications. The data processing pipeline can be compiled into a prototype hardware design and supporting software as well as test cases, diagnostics, and a debugging methodology for a developer to bring up the new device. These principles are grounded in Ravel, a software framework that the team collaborates on, jointly contributes to, and integrates into their courses and curricula on cyberphysical systems.
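The distributed model-view-controller pipeline can be sketched minimally (class names invented, not Ravel's API): models hold records, and a transform specifies how data moves from one model to the next along the embedded-gateway-cloud path:

```python
# Illustrative sketch of "distributed model view controller": models
# store records; a transform moves data between models, applying a
# function along the way. Names here are invented for this example.

class Model:
    def __init__(self, name):
        self.name = name
        self.records = []

    def insert(self, record):
        self.records.append(record)
        return record

class Transform:
    def __init__(self, source, dest, fn):
        self.source, self.dest, self.fn = source, dest, fn

    def run(self):
        # Move every record from source to dest, applying the transform.
        for rec in self.source.records:
            self.dest.insert(self.fn(rec))
        self.source.records = []

# Embedded devices produce raw sample batches; a transform averages
# them on their way to the cloud model.
embedded = Model("embedded")
cloud = Model("cloud")
average = Transform(embedded, cloud,
                    fn=lambda samples: sum(samples) / len(samples))
```

Because the entire pipeline is declared as models and transforms, a framework like the one described above can analyze it end to end, for example to check where encryption boundaries fall, rather than inspecting opaque application code.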