1994 — 1999
Varian, Hal; Mackie-Mason, Jeffrey (co-PI); Shenker, Scott
Economics of the Internet @ University of Michigan Ann Arbor
9320481 The Internet is a worldwide network of networks. As of May 1993 it connects over 10,000 different networks, comprising over 1.5 million separate computers. Traffic on the NSFNET backbone of the Internet (the largest but not the only backbone) has been growing at about 11% per month over the last five years; this means it has been doubling every seven months. The Internet is considered to be the model for the National Research and Education Network (NREN) that is the focus of the $5 billion HPCC program adopted by Congress in 1992. Important policy decisions are already being made about the commercialization and privatization of the Internet backbones, regulation of transmission providers and information service providers, the protection of intellectual property in electronic form, and public access to information infrastructure. These decisions are being made with little input from economists. This project will clarify the economic context for wide-area data networks: costs, pricing, investment, and growth. It examines competition and industry structure, and addresses the important regulatory questions that follow as data, telephone, and video technologies and markets begin to merge. Furthermore, it develops mechanisms for efficient resource allocation in such networks.
1998 — 2003
Shenker, Scott; Friedman, Eric
Learning and the Design of the Internet @ Rutgers University New Brunswick
ABSTRACT NCR-9730162; Eric Friedman, Rutgers University, with a subcontract to Scott Shenker, Xerox PARC. Learning and the Design of the Internet. This project will study some foundational issues in the application of game-theoretic ideas to the design of the Internet and other decentralized networks. Many researchers apply game-theoretic ideas to the analysis of the modern Internet and, in particular, assume that Nash equilibria will arise in this setting. Based on recent theoretical analyses and simulations, the PIs on this project have found that convergence to Nash equilibrium is not guaranteed in Internet-like environments, due to limited information, noise, and asynchrony. This calls into question the application of "Nash implementation" on the Internet and makes the design of mechanisms that are robust to learning by adaptive agents much more problematic. The goal of this project is to understand precisely what kinds of mechanisms are learnable on the Internet and to use this understanding to design protocols and price mechanisms that implement socially desirable outcomes under this constraint. For example, preliminary results show that the FIFO queuing protocol is not learnable, while fair queuing can be learned quite easily by adaptive agents. In particular, the PIs' previous work has begun to identify the set of outcomes that are attained by adaptive learners; this set contains the Stackelberg undominated actions and is contained within the serially unoverwhelmed set. Several important mechanisms work in these settings (e.g., fair queuing, the uniform mechanism, and fixed-path methods), as do many problems with enough players and capacity. In addition, the PIs have just begun to analyze the implications of these results for general design problems; for example, only strictly coalitionally strategy-proof social choice functions are implementable on the Internet. However, many open questions remain, which this project will attempt to resolve. This project also has implications for economics and game theory. Besides the Internet, there are many decentralized systems in economics for which we believe these results are applicable. In particular, oligopolists, joint producers, and polluters operate asynchronously, with limited observability, and often with little knowledge of the underlying payoff matrix. This project should increase our understanding of these settings. This project will utilize three complementary methodologies: (i) a continuing theoretical and analytical study of these issues; (ii) computer simulations; and (iii) experiments with human subjects studied under settings designed to mimic the scenarios faced by actual users on the Internet. The interplay of these three approaches should help create a robust and realistic theory for designing learnable mechanisms for the Internet.
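As a concrete illustration of why fair queuing is friendly to adaptive agents, the sketch below computes the max-min fair ("water-filling") allocation that fair queuing approximates: a flow gains nothing by overstating its demand, so naive trial-and-error learning suffices. This is an illustrative sketch, not code from the project; the demands and capacity are invented example values.

```python
def max_min_fair(demands, capacity):
    """Repeatedly offer the remaining capacity equally to unsatisfied flows."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)
        capped = [i for i in active if demands[i] - alloc[i] <= share]
        if not capped:                     # nobody is satisfied by an equal share
            for i in active:
                alloc[i] += share
            remaining = 0.0
        for i in capped:                   # satisfy small flows, recycle leftovers
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
            active.remove(i)
    return alloc

# A flow that inflates its demand gains nothing beyond its fair share,
# which is why simple adaptive learning suffices under fair queuing.
print(max_min_fair([2.0, 8.0, 4.0], capacity=9.0))   # -> [2.0, 3.5, 3.5]
print(max_min_fair([2.0, 80.0, 4.0], capacity=9.0))  # flow 1 still gets 3.5
```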
2000 — 2004
Karp, Richard; Papadimitriou, Christos (co-PI); Shenker, Scott
Itr: Analysis of Internet Algorithms: Optimization, Game Theory and Competitive Analysis @ University of California-Berkeley
As the complexity of the Internet, the nature of its applications, and its socioeconomic framework evolve, new algorithmic and architectural ideas will be proposed, tested, and adopted. While the original Internet design principles will likely remain valid, the researchers believe that it is important to have in place a mathematical framework within which these design principles can be expressed and applied to the next generation of Internet algorithms and architectures. Building such a framework is the ultimate goal. The mathematical tools will come from optimization, game theory and competitive analysis. The researchers shall work on the following topics.
Multicast. The researchers shall seek to determine the relative efficiency, in terms of link usage, of multicast versus unicast, devise and analyze efficient methods of multicast error recovery, and determine how efficiently multicast can be simulated in the application layer by a coordinated set of unicasts.
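To make the link-usage comparison concrete, here is a hedged sketch (the topology and receiver set are invented, not from the proposal): unicast transmits one copy per receiver along its shortest path, while multicast transmits one copy per edge of the union of those paths.

```python
from collections import deque

def bfs_parents(adj, src):
    """Shortest-path (hop-count) tree rooted at src."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parents:
                parents[v] = u
                queue.append(v)
    return parents

def link_usage(adj, src, receivers):
    parents = bfs_parents(adj, src)
    unicast_links, tree_edges = 0, set()
    for r in receivers:
        node = r
        while parents[node] is not None:   # walk the shortest path back to src
            unicast_links += 1
            tree_edges.add((parents[node], node))
            node = parents[node]
    return unicast_links, len(tree_edges)

adj = {"s": ["a", "b"], "a": ["s", "r1", "r2"], "b": ["s", "r3"],
       "r1": ["a"], "r2": ["a"], "r3": ["b"]}
print(link_usage(adj, "s", ["r1", "r2", "r3"]))  # (6, 5): multicast saves a link
```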
Congestion Probing. The TCP congestion control protocol controls its window size with an additive-increase, multiplicative-decrease (AIMD) algorithm. One can think of this as a probing algorithm in which the flow attempts to discover the maximum rate of traffic that can be sent under current conditions; if a packet drop is recorded, it is assumed that the sending rate was too high, and so the window size is reduced. The researchers shall develop efficient probing algorithms and theoretical limits on the efficiency of probing under different models of Internet congestion.
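A minimal sketch of the AIMD probing dynamic described above, assuming a fixed bottleneck capacity unknown to the sender and treating "window above capacity" as a packet drop; the constants and the loss model are simplifications for illustration, not TCP's actual machinery.

```python
def aimd(capacity, rounds, increase=1.0, decrease=0.5):
    window, trace = 1.0, []
    for _ in range(rounds):
        trace.append(window)
        if window > capacity:
            window *= decrease    # multiplicative decrease on a drop
        else:
            window += increase    # additive increase while all is well
    return trace

# The window saw-tooths around the capacity it is probing for.
print([round(w, 1) for w in aimd(capacity=20, rounds=40)[-8:]])
```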
Cost Sharing. How are the recipients of a multicast transmission to share the network costs? The researchers assume that the information to be multicast is of a certain value to each possible recipient, but this value is private to that individual. The researchers shall investigate strategyproof cost sharing methods where each user is assured that their outcome is maximized if they truthfully reveal their value to the network. The researchers' goal is to characterize the set of protocols that are acceptable on both game-theoretic and complexity grounds.
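One well-known strategyproof scheme of the kind described above is the VCG ("marginal cost") mechanism; the toy sketch below applies it to a multicast tree. The tree, link costs, and reported valuations are fabricated for illustration; under this mechanism no receiver can gain by misreporting its value.

```python
def welfare(tree, costs, values, root):
    """Max net welfare and the receiver set achieving it."""
    def go(node):
        w, members = values.get(node, 0.0), {node}
        for child in tree.get(node, []):
            cw, cm = go(child)
            if cw > costs[child]:      # attach a subtree only if it pays for its link
                w += cw - costs[child]
                members |= cm
        return w, members
    return go(root)

def mc_payments(tree, costs, values, root):
    """VCG payments: truthful reporting is a dominant strategy."""
    total, served = welfare(tree, costs, values, root)
    payments = {}
    for i, v in values.items():
        without_i = dict(values, **{i: 0.0})
        w_minus_i, _ = welfare(tree, costs, without_i, root)
        received = v if i in served else 0.0
        payments[i] = received - (total - w_minus_i)
    return payments

tree = {"src": ["a", "b"], "a": ["u1", "u2"]}        # src -> a -> {u1, u2}; src -> b
costs = {"a": 3.0, "b": 2.0, "u1": 1.0, "u2": 1.0}   # cost of the link into each node
values = {"u1": 4.0, "u2": 2.0, "b": 1.0}            # reported private valuations
print(mc_payments(tree, costs, values, "src"))       # u1 and u2 served; b excluded
```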
Information Dissemination. While traditional databases require transactional consistency, many repositories of information require only the much weaker notion of eventual consistency. That is, in such cases we care only whether, and how quickly, the information is disseminated, but do not require global consistency during the dissemination. The researchers shall identify message-efficient strategies for selectively propagating information so that the network will eventually converge to a fully updated state.
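The sketch below simulates one common message-efficient dissemination strategy, random push gossip, under which the network eventually converges to a fully updated state in roughly logarithmically many rounds; the uniform peer-selection model is an assumption made for illustration.

```python
import random

def gossip_rounds(n, seed=0):
    random.seed(seed)
    informed = {0}                        # node 0 holds the new item
    rounds = 0
    while len(informed) < n:
        rounds += 1
        for node in list(informed):       # each informed node pushes once per round
            informed.add(random.randrange(n))
    return rounds

print(gossip_rounds(1000))                # converges in a handful of rounds
```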
2002 — 2009
Kaashoek, M. Frans; Shenker, Scott
Itr: Robust Large-Scale Distributed Systems @ Massachusetts Institute of Technology
This project will build a novel decentralized infrastructure, based on distributed hash tables (DHTs), that will enable a new generation of large-scale distributed applications. DHTs, the key technology on which we build, are robust in the face of failures, attacks, and unexpectedly high loads. They are scalable, achieving large system sizes without incurring undue overhead. They are self-configuring, automatically incorporating new nodes without manual intervention or oversight. They simplify distributed programming by providing a clean and flexible interface. And, finally, they provide a shared infrastructure simultaneously usable by many applications.
The approach advocated here is a radical departure from both the centralized client-server and the application-specific overlay models of distributed applications. This new approach will not only change the way large-scale distributed systems are built, but could potentially have far-reaching societal implications as well. The main challenge in building any distributed system lies in dealing with security, robustness, management, and scaling issues; today each new system must solve these problems for itself, requiring significant hardware investment and sophisticated software design. The shared distributed infrastructure will relieve individual applications of these burdens, thereby greatly reducing the barriers to entry for large-scale distributed services.
Our belief that DHTs are the right enabling infrastructure is based on two conjectures: (1) a DHT with application-independent, unconstrained keys and values provides a general purpose interface upon which a wide variety of distributed applications can be built, and (2) distributed applications that make use of the DHT-based infrastructure inherit basic levels of security, robustness, ease of operation, and scaling. Much of the thrust of the proposed research is an exploration of these two conjectures.
We will investigate the first conjecture, that the DHT abstraction can support a wide range of applications, by building a variety of DHT-based systems. Our recent work has used DHTs to support such varied applications as distributed file systems, multicast overlay networks, event notification systems, and distributed query processing. DHTs simplify the structure of these systems by providing general-purpose key/value naming rather than imposing structured keys (e.g., hierarchical names in DNS). These systems are early prototypes, but they suggest that DHTs may be as useful to distributed applications as ordinary hash tables are to programs.
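As a minimal illustration of the general-purpose key/value interface described above, here is a toy consistent-hashing DHT; real systems such as Chord add multi-hop routing, replication, and failure handling. The node names and keys are made up for the example.

```python
import hashlib
from bisect import bisect_right

def h(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ToyDHT:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)  # nodes placed on a hash ring
        self.store = {n: {} for n in nodes}

    def _owner(self, key):
        i = bisect_right([hv for hv, _ in self.ring], h(key))
        return self.ring[i % len(self.ring)][1]       # first node clockwise of the key

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)

dht = ToyDHT(["node-1", "node-2", "node-3"])
dht.put("report.pdf", "replica-location")
print(dht._owner("report.pdf"), dht.get("report.pdf"))
```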
The second conjecture relies on techniques for creating robust, secure, and self-organizing infrastructures out of many mutually distrustful nodes. Our initial work on robust DHT designs gives us confidence that such techniques are within reach. The bulk of our proposed research will be devoted to the in-depth study of these techniques, with the express aim of producing a sound and coherent design for the infrastructure. To investigate the real-world behavior of our design, we will create a large-scale open testbed for which we will distribute our infrastructure software, some enabling libraries, and a few key compelling applications.
In addition to its impact on the creation of distributed applications, our research program will have benefits in education and outreach. Given their current importance, security, robustness, and the design of distributed systems should become central topics in undergraduate computer science education. To this end, we are planning a new interdisciplinary course that will address these issues, and bring them into sharper focus early in the undergraduate course sequence.
Our testbed and research agenda also provide a good vehicle for encouraging the participation of organizations not traditionally involved in networking and systems research. Participation in the testbed requires little cost (a PC and an Internet connection) and minimal levels of systems expertise and oversight. Moreover, because the material is closely related to the P2P systems with which many students are familiar, the project might appeal to students who would not normally be attracted to research in this area. Based on this premise, we plan an active outreach program to underrepresented populations at non-research undergraduate institutions.
2002 — 2005
Feigenbaum, Joan; Shenker, Scott; Krishnamurthy, Arvind (co-PI); Yang, Yang (co-PI)
Incentive-Compatible Designs For Distributed Systems
This project pursues theoretical and practical results on mechanisms that are incentive-compatible, scalable, and distributed. Specifically, distributed algorithmic mechanism design, with insights from game theory, is proposed for three related problems in networking: interdomain routing, web caching, and peer-to-peer file sharing. The research program on interdomain routing will develop a fundamentally new approach in which many of the routing-related incentive issues are handled by incentive-compatible protocols rather than bilateral contracts; such protocols can more effectively address the system-wide issues of efficient routing and conflicting policy requirements. The project will also apply recently developed techniques for digital-goods auctions to the peer-to-peer file-sharing problem and to the design of incentive-compatible caching mechanisms. This work will help us better understand the behavior of large-scale, distributed information systems formed by autonomous components, such as the Internet, and develop incentive-compatible algorithms for these systems accordingly.
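For flavor, the sketch below implements a random-sampling digital-goods auction in the spirit of the techniques the abstract cites: each bidder faces a price computed only from the other group's bids, which is what makes truthful bidding a dominant strategy. The bid values are invented.

```python
import random

def optimal_price(bids):
    """Fixed price maximizing revenue over this bid set."""
    return max(bids, key=lambda p: p * sum(b >= p for b in bids), default=0)

def rsop_auction(bids, seed=1):
    random.seed(seed)
    group_a, group_b = [], []
    for bid in bids:                       # random split into two groups
        (group_a if random.random() < 0.5 else group_b).append(bid)
    price_for_b = optimal_price(group_a)   # computed from A, charged to B
    price_for_a = optimal_price(group_b)   # computed from B, charged to A
    winners = ([b for b in group_a if b >= price_for_a]
               + [b for b in group_b if b >= price_for_b])
    return winners, (price_for_a, price_for_b)

print(rsop_auction([10, 9, 7, 7, 5, 3, 2, 1]))
```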
2003 — 2007
Shenker, Scott; Govindan, Ramesh
Sensors: Robust and Efficient Data Dissemination For Data-Centric Storage @ University of Southern California
Sensornets will provide detailed measurements at fine spatial granularities over large geographic areas. Providing access to the data is a formidable challenge because the measured data are distributed across the entire sensornet and communication between sensornet nodes requires substantial expenditures of scarce energy. Data-centric abstractions are now seen as a fundamental aspect of sensornet systems that provide efficient access to sensor measurements. In prior work, we have suggested that sensornet applications would benefit from data-centric storage. Such systems enable efficient querying and search in large-scale sensornets. However, data-centric storage makes exacting demands on its routing infrastructure. In particular, data-centric storage is predicated on a robust and efficient routing primitive that allows storing data by name at a node in the sensornet.
This proposal seeks to investigate the design and development of geographic hash tables (GHTs), a routing primitive for data-centric storage. GHTs bring together two technologies, distributed hash tables (DHTs) and geographic ad-hoc routing, that were developed in two completely unrelated contexts, peer-to-peer systems and ad hoc wireless networks. Each technology has been extensively explored in its respective domain, but the combination of these technologies in the new, and more challenging, context of sensor networks raises many new design challenges.
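A toy rendering of the GHT primitive: a data name is hashed to a geographic coordinate, and the data is stored at the node nearest that point. Real GHTs reach that node via geographic routing (e.g., GPSR); here the nearest-node lookup is computed centrally purely for illustration, and the sensor positions are made up.

```python
import hashlib

def geo_hash(name, width=100.0, height=100.0):
    """Hash a data name to a point in the deployment area."""
    digest = hashlib.sha1(name.encode()).digest()
    x = int.from_bytes(digest[:4], "big") / 2**32 * width
    y = int.from_bytes(digest[4:8], "big") / 2**32 * height
    return x, y

def nearest(nodes, point):
    return min(nodes, key=lambda n: (nodes[n][0] - point[0]) ** 2
                                    + (nodes[n][1] - point[1]) ** 2)

nodes = {"mote-1": (10, 20), "mote-2": (80, 15), "mote-3": (50, 90)}
store = {n: {} for n in nodes}

def put(name, value):
    store[nearest(nodes, geo_hash(name))][name] = value

def get(name):
    return store[nearest(nodes, geo_hash(name))].get(name)

put("temperature/zone4", 27.5)            # any node can later retrieve by name
print(get("temperature/zone4"))
```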
2004 — 2008
Feigenbaum, Joan; Shenker, Scott; Bergemann, Dirk (co-PI)
An Economic Approach to Security
Proposal Number: 0428422
Title: An Economic Approach to Security
PI: Joan Feigenbaum
Abstract
Internet security is universally seen as an extremely important problem. Moreover, technical solutions developed over the last three decades represent deep and elegant intellectual contributions. Yet few of these solutions are in widespread use. Clearly something is amiss. It has recently been argued, by Anderson and others, that the missing link is economics: only through understanding the incentives inherent in various security proposals can one decide which, if any, would actually lead to greater security. This research project is a three-year, multi-institutional, multi-disciplinary investigation of the economics of security in networked environments. Specific research topics include security of interdomain routing, adoptability of trusted platforms, and markets for private information. The intellectual merit and broader impact of the project are intertwined, both based on the potential not only to solve technical problems but also to develop general analytical techniques for evaluating candidate solutions to real security problems in a manner that gives adoption incentives their just due. If successful, it will lead to greater actual security, rather than simply to more available security technology. Educational activity that integrates security, networking, and economics is also a major goal, one on which the investigators have experience at both the graduate and undergraduate levels.
2004 — 2009
Shenker, Scott; Culler, David; Stoica, Ion (co-PI)
Nets-Nr: Creating a Wireless Sensor Net Architecture @ University of California-Berkeley
Abstract:
Wireless Sensor Networks (WSNs) will benefit society by accelerating scientific research, increasing productivity, and enhancing security. Candidate solutions to particular scientific challenges, ranging from device design to distributed algorithms, exist with various assumed requirements, constraints, and relationships between the components. However, there is no consensus on an overall sensor network architecture, and deployments take a narrow vertical slice to achieve an operational network. The objective of this project is to formulate and evaluate a comprehensive WSN systems architecture, encompassing general design principles, broad functional decompositions, and detailed interfaces by which pieces fit together. The approach recognizes that the "narrow waist" of this architecture (playing the role of IP in the Internet architecture) is a best-effort single-hop broadcast. The primary method for articulating the essential abstractions is iterative cycles of candidate collection, design, integration, and evaluation, with the outcome presented to the community for analysis to establish the right compromises between generality and performance. This process will articulate consistent programming interfaces that encompass collection, aggregation, dissemination, neighborhoods, data-centric storage, and attribute-based routing over multiple link layers. The architecture will be tested on cross-cutting aspects of efficiency, coordination, power, management, and security. It should embrace heterogeneity and allow for application-specific optimization within a consistent framework. Creating a successful architecture for WSNs will reduce development effort, allow greater synergy and interoperability, enable more rapid innovation, and greatly broaden the sphere of applications, from national security to scientific research and ecosystem management.
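The sketch below renders the proposed narrow waist as a programming interface: a best-effort single-hop broadcast with no ACKs or retries, on top of which collection, dissemination, and routing would be layered. The class and method names are hypothetical, not from the project.

```python
class Radio:
    """Best-effort single-hop broadcast: a toy stand-in for the narrow waist."""
    def __init__(self, node_id):
        self.node_id, self.neighbors, self.handlers = node_id, [], []

    def broadcast(self, payload):
        # Best effort: every in-range neighbor may hear it; no delivery
        # guarantees. Higher-layer services are built on this one primitive.
        for peer in self.neighbors:
            for handle in peer.handlers:
                handle(self.node_id, payload)

    def on_receive(self, handler):
        self.handlers.append(handler)

a, b = Radio("a"), Radio("b")
a.neighbors, b.neighbors = [b], [a]
b.on_receive(lambda src, msg: print(f"{src} -> b: {msg}"))
a.broadcast("hello")                      # prints: a -> b: hello
```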
2004 — 2005
Shenker, Scott
Collaborative Research: Virtual Networking - Enabling Innovation in Networks and Services @ University of California-Berkeley
Proposal Number: 0439886 PI: Jonathan Turner Institution: Washington University
Proposal Number: 0440940 PI: Scott Shenker Institution: University of California, Berkeley
Proposal Number: 0439642 PI: Thomas E. Anderson Institution: University of Washington, Seattle
Proposal Number: 0439842 PI: Larry Peterson Institution: Princeton University
Title: Collaborative Research: Virtual Networking - Enabling Innovation in Networks and Services
Abstract:
The Internet is one of the great technology success stories of the twentieth century, enabling greater access to information and providing new modes of communication among people and organizations. Unfortunately, the Internet's very success is now creating obstacles to innovation in the networking technology that lies at its core. In order to free the global communications infrastructure from stagnation, the nation must find ways to enable its continuing renewal. This planning grant is developing a case for network virtualization as a means to enable innovation in networks and services. Virtualization allows multiple logically independent virtual networks to share a common physical infrastructure or substrate. This program is developing a plan for a major new research initiative in network virtualization that includes basic research, the development of key technology components, and the creation of an experimental testbed, to establish feasibility and provide a context in which networking researchers can develop innovative new network architectures and services. The program is articulating the case for network virtualization, soliciting input from the network research community, and working with the community to develop recommendations to NSF for a major initiative in this area.
2005 — 2010
Shenker, Scott; Stoica, Ion (co-PI)
Nets-Nbd: Internet Revolution Through Flat Resolution @ University of California-Berkeley
In the thirty-odd years since the advent of the Internet architecture, new uses and abuses, along with the realities that come with being a fully commercial enterprise, are pushing the Internet into realms that its original design neither anticipated nor gracefully accommodates. These pressures have revealed several limitations of the Internet architecture, and the Internet's increasing ubiquity and importance have made these flaws all the more evident and urgent. However, it is far easier to complain about the Internet architecture than it is to produce a better design. This proposal accepts that challenge by proposing a clean-sheet redesign of the Internet. The proposed design is based on a new naming system that names services (or data) and endpoints (hosts) separately from network locations, and incorporates notions of delegation and indirection. Moreover, these service and endpoint identifiers are flat, with no hierarchical structure.
Such an architecture naturally handles host mobility, multihoming, and data replication and migration, and incorporates middleboxes in an architecturally clean manner. In addition, it simplifies interdomain routing by adopting a new global addressing structure that makes explicit the administrative domain to which a host belongs. The interdomain routing protocol exchanges routes based on these flat domain identifiers (instead of today's IP prefixes, which cause many problems), together with flexible mappings of flat end-host identifiers, to enable more flexible routing policies for ISPs and hosts.
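A toy sketch of the flat-resolution idea: structureless identifiers resolve, possibly through a chain of delegations, to a current (domain, address) locator, so mobility or migration means updating a single mapping. All identifiers and the table-based resolution service here are fabricated for illustration.

```python
import hashlib

def flat_id(name):
    return hashlib.sha1(name.encode()).hexdigest()[:8]   # flat: no hierarchy

table = {}                                # stand-in for a global resolution service

def register(ident, target):
    table[ident] = target

def resolve(ident):
    while ident in table and not table[ident].startswith("loc:"):
        ident = table[ident]              # follow a delegation hop
    return table.get(ident)

host = flat_id("my-laptop")
service = flat_id("my-service")
register(host, "loc:AS7007/10.1.2.3")     # endpoint id -> (domain, address)
register(service, host)                   # service delegates to its current host
print(resolve(service))

register(host, "loc:AS2152/172.16.9.8")   # the host moves: one mapping changes
print(resolve(service))                   # the service resolves to the new place
```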
2012 — 2018
Shenker, Scott; Bayen, Alexandre (co-PI); Stoica, Ion (co-PI); Franklin, Michael; Jordan, Michael (co-PI)
Making Sense At Scale With Algorithms, Machines, and People @ University of California-Berkeley
The world is increasingly awash in data. As more and more human activities move online, and as a growing array of connected devices becomes an integral part of daily life, the amount and diversity of data being generated continues to explode. According to one estimate, more than a zettabyte (one billion terabytes) of new information was created in 2010 alone, with the rate of new information increasing by roughly 60% annually. This data takes many forms: free-form tweets, text messages, blogs, and documents; structured streams produced by computers, sensors, and scientific instruments; and media such as images and video. Buried in this flood of data are the keys to solving huge societal problems, improving productivity and efficiency, creating new economic opportunities, and unlocking new discoveries in medicine, science, and the humanities. However, raw data alone is not sufficient; we can only make sense of our world by turning this data into knowledge and insight. This challenge, known as the Big Data problem, cannot be solved by the straightforward application of current data analytics technology due to the sheer volume and diversity of information. Rather, solving it requires throwing away old preconceptions about data management and breaking down many of the traditional boundaries in and around Computer Science and related disciplines.
The Algorithms, Machines, and People (AMP) expedition at the University of California, Berkeley is addressing this challenge head-on. AMP is a collaboration of researchers with a wide range of data-related expertise, committed to working together to create a new data analytics paradigm. AMP will produce fundamental innovations in, and a deep integration of, three very different types of computational resources:
1. Algorithms: new machine-learning and analysis methods that can operate at large scale and offer flexible tradeoffs between timeliness, accuracy, and cost.
2. Machines: systems infrastructure that allows programmers to easily harness the power of scalable cloud and cluster computing for making sense of data.
3. People: crowdsourcing human activity and intelligence to create hybrid human/computer solutions to problems not solvable by today's automated data analysis technologies alone.
AMP research will be guided and evaluated through close collaboration with domain experts in key societal applications including: cancer genomics and personalized medicine, large-scale sensing for traffic prediction and environmental monitoring, urban planning, and network security. Advances pioneered by the project will be made widely available through the development of the Berkeley Data Analysis System (BDAS), an open source software platform that seamlessly blends Algorithm, Machine and People resources to solve big data problems.
For more information visit http://amplab.cs.berkeley.edu
2013 — 2015
Shenker, Scott; Ratnasamy, Sylvia
Cc-Nie Networking Infrastructure: Extensible Cyberinfrastructure For Enhancing Extreme-Data Science (Exceeds) @ University of California-Berkeley
The data requirements of many current research projects, ranging from cancer genomics to radio astronomy to brain imaging, have far outstripped what most campuses can handle. Progress in these and other important fields is significantly hampered by the inability of researchers to rapidly transfer their extremely large datasets. The problem lies not only in the bandwidth of the underlying links, but also in the overall network architecture, which limits the ability of network administrators to deploy new functionality.
The Extensible Cyberinfrastructure for Enhancing Extreme-Data Science (EXCEEDS) project is accelerating scientific discovery at the University of California, Berkeley by improving the university's ability to support extreme-data science. The infrastructure improvements being implemented include: increasing border bandwidth to 100Gb/s, arranging for an end-to-end 100Gb/s data path from Berkeley to the University of California at San Diego, establishing a modern Science Demilitarized Zone (DMZ) architecture, and deploying a 100Gb/s capable Bro Intrusion Detection System (IDS) -- currently a critical piece in Berkeley's security architecture -- on this new high-speed network.
In the short run, the improved infrastructure will speed progress at Berkeley in several scientific areas, most notably cancer genomics but also in radio astronomy and other fields. Longer term, the research on more extensible campus architectures will lead to more flexible network designs that can enhance scientific progress on campuses throughout the nation.