2004 — 2010
Robinson, Gene (co-PI); Contractor, Noshir (co-PI); Hollingshead, Andrea (co-PI); Pena-Mora, Feniosky; Gupta, Indranil
Activity Code: N/A (no activity code was retrieved)
ITR: IT-Based Collaboration Framework for Preparing Against, Responding to, and Recovering from Disasters Involving Critical Physical Infrastructures @ University of Illinois at Urbana-Champaign
Abstract for 0427089
One of the most urgent and vital challenges confronting society today is the vulnerability of urban areas to extreme and unpredictable events such as terrorism, earthquakes and the like. For example, in 2002, a total of 608 million people across the globe were affected by disasters, resulting in 24,500 deaths and damage to property and the environment estimated at $27 billion. These significant human and economic costs underscore the urgent need to improve the efficiency and effectiveness of first responses to extreme events. The objective of this grant is to develop and test a conceptual framework designed to improve collaboration among the key actors involved in disaster relief operations. These key actors include firefighters, police officers, medical personnel, experts, the original civil engineers and constructors involved in the construction of the affected infrastructure, and the physical and technological infrastructure itself, including sensors and systems of sensors embedded in it. Theoretically derived information technology (IT)-based solutions to prepare against, respond to, and recover from disasters will be developed and tested based on the proposed framework. The research team is composed of civil engineers, computer scientists, entomologists, psychologists, communication scholars, and first-responder professionals, each of whom studies the technological and social processes of collaboration from a different viewpoint. All of these viewpoints will be represented in the conceptual framework, which will explore three phases of first response: preparation, response, and recovery.
First responders face many challenges in the chaotic and inhospitable environment of disaster relief operations, including information unreliability and overload, coordination and communication breakdowns, threats to personal safety, and the vulnerability of physical infrastructure. This grant seeks to reduce uncertainty and improve collaboration among first responders. It will advance theory, research, and practice regarding efficient and effective first response in several important ways. First, previous research initiatives on collaboration have focused on supporting interactions among people, instruments, and systems running on fixed computers and devices, in complex, large-scale, but fairly stable operating conditions. This research investigates collaboration in chaotic, volatile, and complex disaster relief environments, which requires interaction among both stationary and mobile users and between users and technological devices such as sensors and communication media. Second, it explores the role of the civil engineer as a vital member of the first-responder team, providing key knowledge of and experience with the affected physical infrastructure. Finally, it will equip first responders with an IT-based component platform to address issues pertaining to critical physical infrastructure, such as security and vulnerability, along with the expertise to prepare for, respond to, and recover from a disaster.
2005 — 2010
Gupta, Indranil |
CAREER: Systematic Design of Distributed Protocols - From Methodologies and Toolkits to Systems @ University of Illinois at Urbana-Champaign
Proposal Number: CNS-0448246
Principal Investigator: Gupta, Indranil
Institution: University of Illinois at Urbana-Champaign
Proposal Title: CAREER: Systematic Design of Distributed Protocols - From Methodologies and Toolkits to Systems
Distributed protocols such as resource discovery, cooperative caching, replication, etc., are crucial to the scalability and reliability of large-scale distributed systems such as the Web and the Grid. Today, however, the critical activity of designing a distributed protocol relies on an ad-hoc approach - literature, experience, and basic Computer Science knowledge provide the only assistance. This often leads to complex protocols, increased design times and costs, and eventually to inefficient use of research potential.
This project is exploring systematic techniques for designing distributed protocols, assisted by the use of Design Methodologies for distributed protocols. Design methodologies augment the creative process of protocol innovation, without stifling it. The project is creating new methodologies that systematically convert naturally observed phenomena into protocols with predictable properties. One innovative methodology translates differential equation systems into equivalent distributed protocols. Composable methodologies are also being developed, allowing protocol designers flexibility to enrich protocols with desired properties.
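To convey the flavor of the differential-equation methodology mentioned above (a hypothetical sketch, not output of the project's actual toolkits), the logistic equation di/dt = i(1 - i) that describes epidemic spread maps naturally onto a push-gossip dissemination protocol, whose fraction of informed nodes follows the same curve:

```python
import random

def push_gossip(n=1000, rounds=20, seed=42):
    """Push-based epidemic dissemination: each round, every informed
    node forwards the message to one peer chosen uniformly at random.
    The informed fraction i(t) tracks the logistic ODE di/dt = i(1 - i)."""
    rng = random.Random(seed)
    informed = {0}                        # node 0 starts with the message
    history = [len(informed) / n]
    for _ in range(rounds):
        targets = [rng.randrange(n) for _ in informed]
        informed.update(targets)
        history.append(len(informed) / n)
    return history

fractions = push_gossip()
```

As the ODE predicts, growth is roughly exponential while few nodes are informed and tapers off as the informed fraction approaches one; the emergent protocol inherits the equation's predictable convergence behavior.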
The methodologies invented and discovered in this project will be used to produce new system designs for cooperative web caching, adaptive Grid computing, persistent distributed file systems, and disaster response and recovery. The methodologies will be made available to the community as Integrated design Toolkits (IdTs) that automate methodologies and enable a designer to automatically generate compilable code for a protocol. While design methodologies have been used with varying degrees of success in other fields of science, this project is one of the first to study their benefits and drawbacks in certain focus areas of distributed systems.
2008 — 2010
Campbell, Roy; Heath, Michael; Gupta, Indranil
SGER: Acquisition & Operation of an Experimental Testbed for System-Level Research to Support Data-Intensive Computing Applications @ University of Illinois at Urbana-Champaign
The world is populated with enormous amounts of data from a wide variety of sources. There is a compelling human need to represent, analyze, query, manage, understand, and respond to such data for knowledge extraction and decision making. In collaboration with Yahoo! and Hewlett-Packard, we are creating an experimental testbed, the Cloud Computing Testbed (CCT), at the University of Illinois at Urbana-Champaign (UIUC) for data-intensive applications that use distributed "cloud" computational resources. The CCT will enable researchers to address this need by processing data at various levels of the system stack, from the network, operating system, virtual machines, and distributed applications up to the Web. The exploratory nature of the CCT results from its focus on systems and networking research issues within a data-intensive cloud computing environment. Other existing or proposed data-processing clusters focus on user-level applications, for which a stable and thus fairly rigid environment must be maintained, whereas the proposed research with the CCT will go deep into the system software stack to explore new and better ways to provide system-level support for data-intensive computing. The UIUC research efforts cover a breadth of research areas, including networking, operating systems, virtual machines, distributed systems, data mining, Web search, network measurements, and multimedia. Access to the CCT is also being made available to external CISE researchers through an application process administered by UIUC.
The CCT will provide the academic community with the opportunity to do research in data-intensive computing spanning multiple research areas (OS, virtual machines, distributed systems, datamining, the Web, and online social networks), and in particular to explore powerful systems and networking research topics in a data-intensive environment. It will give the academic community access to resources that would otherwise be unavailable due to cost. The CCT is providing opportunities for multi-disciplinary research on large-scale, distributed computing projects. It is accelerating research for Internet-scale computing and will drive innovation for future systems.
2010 — 2015
Campbell, Roy; Gupta, Indranil
DC: Medium: Tackling and Understanding Intermediate Data in Cloud Applications as a First-Class Citizen @ University of Illinois at Urbana-Champaign
Cloud computing infrastructures involve thousands of servers, petabytes of storage, and hundreds of users running various applications that involve gigabytes to terabytes of data. This project focuses on intermediate data that is generated during the execution of parallelized dataflow programs in clouds. Such cloud intermediate data has several unique characteristics: it is massive in scale, distributed, subject to computational barriers, and it prolongs job run-times when servers fail. Further, the size of intermediate data in a cloud application is often comparable to or larger than the input or output data size, and it can thus range into terabytes. Thus, in spite of extensive existing work on traditional storage problems, there is a critical need for new algorithms and systems that target cloud intermediate data. This project is the first to treat cloud intermediate data as a first-class citizen. The project will involve new algorithm design and analysis, original systems building and implementation, deployment in real-world testbeds, and measurement studies. Concretely, this project will build a new system that explicitly manages intermediate data in cloud dataflow programs in order to improve their fault-tolerance, and will design and realize barrier relaxation strategies to improve the performance of cloud programs. We will implement our systems using open software, deploy them, and experimentally evaluate them atop the NSF infrastructure called the Cloud Computing Testbed (CCT) hosted at the University of Illinois. Finally, we will perform measurement studies of the workload characteristics of cloud intermediate data. A fuller understanding of intermediate data in clouds can spawn research in managing cloud infrastructures, improve the run-time performance of cloud applications, and lead to new cloud programming paradigms.
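The core fault-tolerance idea can be sketched minimally: if each intermediate partition is replicated on k servers, a single server failure does not force re-execution of the upstream task that produced it. The function names below are hypothetical, chosen only for illustration; the project's actual system design may differ.

```python
def replicate_intermediate(partitions, nodes, k=2):
    """Place each intermediate partition on k distinct nodes so that one
    server failure leaves at least one surviving copy of every partition.
    `partitions` maps partition-id -> data; returns node -> {pid: data}."""
    placement = {n: {} for n in nodes}
    for i, (pid, data) in enumerate(sorted(partitions.items())):
        for r in range(k):                       # k copies, spread round-robin
            node = nodes[(i + r) % len(nodes)]
            placement[node][pid] = data
    return placement

def recover(placement, failed):
    """Collect the surviving copies after node `failed` is lost."""
    survived = {}
    for node, parts in placement.items():
        if node != failed:
            survived.update(parts)
    return survived

parts = {0: "a", 1: "b", 2: "c"}
placement = replicate_intermediate(parts, ["n1", "n2", "n3"], k=2)
recovered = recover(placement, "n1")   # all partitions survive one failure
```

The cost of this approach is the extra storage and network traffic of the replicas, which is exactly the overhead that barrier relaxation and asynchronous replication strategies aim to hide.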
Our contributions will directly improve the performance and fault-tolerance of applications that are run on the community infrastructure CCT, and positively impact design and deployment of existing and emerging industry clouds. Our results will be published and released in open software and datasets.
2010 — 2015
Campbell, Roy; Heath, Michael (co-PI); Abdelzaher, Tarek; Gupta, Indranil
II-New: Towards Green Data Centers: A Testbed for Thermo-Computational Dynamics @ University of Illinois at Urbana-Champaign
This project develops a testbed for experimentation with energy saving in data centers via holistic management of both the computing and cooling subsystems. It instruments a large computing cluster at the University of Illinois that reproduces representative dynamics of energy consumption in data centers. Understanding heat and energy dynamics in large computing systems requires detailed sensing and control on both the computing and cooling side. The goal is to produce models and algorithms that significantly contribute to energy optimization research leading to reductions in carbon footprint and operating costs of contemporary computing clusters. The testbed developed in this project focuses primarily on understanding and improving software-controlled mechanisms for energy optimization in systems that exhibit non-trivial coupled thermal and computational dynamics, called thermo-computational systems, towards better energy management of data centers. When complete, it is likely to become the first and largest open testbed geared for enabling high quality research on large thermo-computational systems. The project is motivated by the increasing energy cost of data centers, which is estimated at more than $4.5 billion annually and is expected to grow at a rate of 12% in the absence of intervention. According to the EPA, most of this cost is avoidable. If successful, the project will therefore contribute significantly to both the economy and the environment by resulting in savings in both energy cost and carbon footprint.
2013 — 2017
Gupta, Indranil |
CSR: Small: Online Global Reconfigurations in Key-Value and NoSQL Cloud Storage Systems @ University of Illinois at Urbana-Champaign
Key-value and NoSQL storage systems are growing and are predicted to become a multi-billion-dollar industry sector within a few years. This project targets global reconfiguration operations in this new generation of storage systems. Today such operations largely involve exporting and then re-importing entire databases, making data unavailable for long periods of time. The project will first design efficient online algorithms for a variety of global reconfiguration operations, with the twin goals of making such operations efficient and of continuing to support fast read and write operations on the data at all times. We will then implement our solutions and build them into real production code, with a focus on existing and widely used open-source systems software. This work requires a carefully orchestrated mix of algorithmics with systems design and implementation. Our systems will be evaluated using industry benchmarks and traces, and in production clusters.
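The flavor of an online reconfiguration can be illustrated with a toy store (purely a sketch; this is not the project's algorithm): keys migrate incrementally from an old layout to a new one, while reads consult both layouts and writes land in the new one, so the data stays available for the entire migration.

```python
class ReconfigurableStore:
    """Toy key-value store that stays available during reconfiguration
    by migrating keys lazily from an old layout to a new one."""

    def __init__(self, data):
        self.old = dict(data)   # pre-reconfiguration layout
        self.new = {}           # post-reconfiguration layout

    def migrate_one(self):
        """Move one key to the new layout (one small step of the
        reconfiguration, interleaved with normal traffic)."""
        if self.old:
            k, v = self.old.popitem()
            self.new[k] = v

    def get(self, key):
        # Reads stay available mid-migration: check the new layout
        # first, then fall back to the not-yet-migrated old layout.
        if key in self.new:
            return self.new[key]
        return self.old.get(key)

    def put(self, key, value):
        # Writes go to the new layout so they cannot be lost
        # when the old copy of the key is later migrated over.
        self.old.pop(key, None)
        self.new[key] = value
```

Contrast this with the export/re-import approach the abstract describes, where the entire dataset is offline until the re-import completes.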
The project will augment existing widely-used key-value and NoSQL storage systems software with the much-needed ability to support online reconfiguration operations. Our work will produce open software and meaningful datasets. Thus our innovations and systems will be directly available to, and impact positively, both providers of key-value and NoSQL stores as well as a variety of customers ranging from small to large companies. On the educational front, the project will address the dearth today of learning materials for key-value and NoSQL stores by developing and disseminating course materials for this area. Additionally, we will incentivize entrepreneurial activities centered around key-value and NoSQL technologies.
2014 — 2017
Meseguer, Jose (co-PI); Vaidya, Nitin; Gupta, Indranil
CSR: Medium: Availability-Consistency Tradeoffs in Key-Value and NoSQL Storage Systems @ University of Illinois at Urbana-Champaign
Key-value/NoSQL storage systems are a key component of the cloud computing revolution. Today's key-value/NoSQL storage systems lie at different points on the tradeoff spectrum of availability (i.e., fast reads and writes) vs. data consistency (across multiple clients) vs. partition-tolerance. This project will better characterize what is achievable along this spectrum, make these systems dynamically adapt along the spectrum to meet application requirements, and benchmark the actual availability and consistency achieved by real systems under real conditions.
The project will follow two synergistic thrusts. The first thrust will use probabilistic models for availability, consistency, and partitions to analyze the tradeoffs among these. Then it will design adaptive techniques to meet an SLA (Service Level Agreement) or SLO (Service Level Objective) which specifies either an availability constraint or a consistency constraint, while optimizing the other metric. Finally, these techniques will be implemented in some of the leading key-value/NoSQL storage systems in use today in industry. Our second thrust will apply the large body of formal verification research to key-value/NoSQL systems. The work herein includes use of a formal modeling language to specify models for key-value/NoSQL stores, and use of standard as well as statistical model-checking to analyze and characterize the behavior of these systems.
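A small Monte Carlo model conveys the probabilistic-analysis flavor of the first thrust (an illustrative sketch with assumed parameters, not the project's actual model): with n replicas, a write acknowledged after w replicas have it, and a read contacting r replicas, reading more replicas lowers the chance of a stale read but costs availability (latency).

```python
import random

def stale_read_prob(n=3, r=1, w=1, lag=0.5, trials=20000, seed=1):
    """Estimate P(a read misses the latest write). The w acked replicas
    definitely have the write; each remaining replica has it only with
    probability 1 - lag (it may still be replicating asynchronously)."""
    rng = random.Random(seed)
    stale = 0
    for _ in range(trials):
        has_write = set(range(w))            # replicas that acked the write
        for i in range(w, n):
            if rng.random() >= lag:          # async replication already done
                has_write.add(i)
        read_set = rng.sample(range(n), r)   # read contacts r random replicas
        if not any(i in has_write for i in read_set):
            stale += 1
    return stale / trials
```

For n=3, w=1, lag=0.5 the analytic stale-read probability for r=1 is 1/3, while r=3 (a full-quorum read) drives it to zero at the price of waiting on every replica; that is the availability/consistency dial an SLA-driven system would turn.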
This work will imbue existing key-value/NoSQL storage systems with the ability to adapt to the tradeoffs between consistency, availability, and partition-tolerance, as a function of provider and customer requirements, at run-time. It will lead to better SLAs and SLOs that combine both consistency models and availability models in a practical and achievable way. Thus, the project will directly impact the large developer and user communities of key-value/NoSQL storage systems. The project will produce open software and meaningful datasets.
2019 — 2022
Gupta, Indranil |
CNS Core: Small: GoT -- Groups of Things Abstractions for Distributed IoT @ University of Illinois at Urbana-Champaign
The Internet of Things (IoT) market for smart homes, buildings, and cities is expected to reach over a half trillion dollars in size by 2021. Today's IoT technology landscape is the culmination of many decades of research, but large-scale IoT deployments are still hard to set up, configure, and manage, and this process is often laborious and time-consuming. This project will develop a new distributed computing substrate called "Groups of Things" (GoT). GoT consists of several software building blocks on top of which IoT applications can be built easily and flexibly for environments like smart homes, buildings, campuses, and cities, bringing us closer to the vision of making these deployments truly robust, scalable, self-sufficient, and self-managing. GoT adapts, for IoT settings, three important distributed computing primitives that have been successful in datacenters and cloud computing, namely membership, coordination, and storage. The applications built by this project atop GoT provide an opportunity to explore the approach of designing systems with a human-first philosophy in the IoT setting. The project's activities include designing algorithms, analyzing them formally, implementing them on real devices including Raspberry Pis, and deploying them in real buildings on the University of Illinois campus. The project will produce open software and engage with industry in the IoT and wireless sector to maximize the industry impact of ideas from the project. Educational contributions include developing modules for online MOOCs and working with students from under-represented groups in K-12, undergraduate school, and graduate school. Technically, this project will build a new abstraction for IoT networks, the "Groups of Things" (GoT) substrate.
The work includes designing and analyzing new distributed algorithms, techniques, systems, and implementations for three specific foundational building blocks: 1) distributed failure detection and membership in IoT networks; 2) distributed coordination in IoT networks; and 3) distributed IoT small object storage (iSoS). The solutions to these three components are tightly integrated, with the latter ones building atop the former ones. Because of the fluid and failure-prone nature of IoT environments, the project's philosophy is to design using probabilistic techniques; such techniques have been shown to work well in internet-based distributed systems such as clouds and datacenters, are analyzable, and have provable behavior in a variety of scenarios. The researchers will also implement applications atop the GoT substrate, including the first distributed command and control interface that realizes ACID abstractions (Atomicity, Consistency, Isolation, Durability) inspired by transactional databases. The project's implementations will target state-of-the-art IoT platforms such as Raspberry Pis. Experiments will be performed both in simulation using real traces and in real deployments inside buildings on the University of Illinois campus.
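The first building block, failure detection and membership, can be sketched in its simplest heartbeat-timeout form (real gossip-style protocols of the kind the project builds on also disseminate membership lists among peers; this shows only the local timeout logic, with hypothetical member names):

```python
class Membership:
    """Minimal heartbeat-based failure detector: a member is suspected
    once its last heartbeat is older than `timeout` time units."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}       # member -> time of last heartbeat

    def heartbeat(self, member, t):
        self.last_seen[member] = t

    def alive(self, t):
        return {m for m, last in self.last_seen.items()
                if t - last <= self.timeout}

    def suspected(self, t):
        return {m for m, last in self.last_seen.items()
                if t - last > self.timeout}
```

The timeout trades detection speed against false positives, a tradeoff that matters especially on failure-prone IoT devices, which is one motivation for the probabilistic techniques the abstract describes.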
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2022
Gupta, Indranil; Koyejo, Oluwasanmi
RI: Small: Secure, Private, and Resource-Constrained Approaches to Federated Machine Learning @ University of Illinois at Urbana-Champaign
In a world increasingly shaped by data-driven machine learning (ML), one of the emerging challenges is that data are often collected and stored in a distributed manner, across multiple datacenters or devices. At the same time, due to security and privacy concerns, there are often low levels of trust between the data owners. To this end, federated ML enables ML with distributed data while avoiding the transfer of private data from distributed devices to a central datacenter. Towards the goal of democratizing ML, this project will design and implement new techniques to make federated ML secure and private. Of particular interest are new system designs that enable federated ML on devices with limited computational power or communication bandwidth (e.g., smartphones, smart health monitors, and smartwatches). The ideas, software, and results of this project will directly impact industry and real-world applications. This project will include curriculum development for federated ML and participation by graduate students from underrepresented groups.
This project creates a transformative new direction for federated machine learning (ML) research, by enabling ML on devices that are untrusted or weak, and across organizations and for users who would like to maintain the privacy of their data. This project will include new work on theoretical foundations, systems design, implementation, and integration with popular ML software. Concretely, this project tackles three challenges in federated ML. The first challenge is fault-tolerant ML algorithms, i.e., new techniques to perform ML when workers act in arbitrarily malicious manners (called Byzantine failures) -- in particular, this project will show that by leveraging natural noise-tolerance in ML, it is possible to tolerate significantly more Byzantine workers than indicated by the traditional distributed computing literature. The second challenge is to develop privacy-preserving ML algorithms which introduce noise from workers to preserve the privacy of data owned by participants while leading to correct and fast ML at the global level. The third challenge is to investigate resource-constrained ML scheduling by including new techniques to allow large neural network models to run across multiple devices which have memory constraints. In addition to developing the algorithmic and theoretical frameworks for these directions, this project will also build and release open software.
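One standard Byzantine-tolerant aggregation rule in this space is the coordinate-wise median of worker updates (shown here as an illustrative sketch; the project's own algorithms may differ): because each coordinate's median ignores a minority of arbitrarily corrupted values, a few malicious workers cannot drag the aggregate far from the honest updates.

```python
import statistics

def robust_aggregate(updates):
    """Coordinate-wise median of worker update vectors: tolerates a
    minority of Byzantine (arbitrarily corrupted) updates, unlike the
    plain mean, which a single malicious worker can skew without bound."""
    dim = len(updates[0])
    return [statistics.median(u[i] for u in updates) for i in range(dim)]

honest = [[1.0, 1.0], [1.2, 0.8], [0.8, 1.2]]
byzantine = [[100.0, -100.0]]                  # one malicious worker
aggregate = robust_aggregate(honest + byzantine)
```

Here the aggregate stays near the honest updates despite the outlier, whereas a plain average of the four vectors would be pulled to roughly [25.75, -24.25] in the first and second coordinates.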
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.