2007 — 2010 |
Gupta, Rajesh |
N/A |
Modeling and Optimization of Thermal and Energy Efficient Processing in Multi-Core System-Chips @ University of California-San Diego
Proposal ID: 0702792
Title: Modeling and Optimization of Thermal and Energy Efficient Processing in Multi-Core System-Chips
PI name: Rajesh Gupta
PI Inst: UC San Diego
Multi-Core System-Chips, or Chip Multiprocessors (CMPs), represent both an opportunity and a challenge for the new generation of computer system architects and implementers. Exploiting the additional processing resources in an energy-efficient manner is difficult, however, because the heat dissipation capabilities of integrated chips do not scale. The best known methods for low-power design at the circuit, logic, or architectural level are not enough to realize the potential of multi-core performance improvement. Instead, systems must be architected to push against the thermal limits, both statically and dynamically, to take maximum advantage of the available heat dissipation capacity. This proposal seeks to build a framework for energy-efficient CMP systems that enables the system architect to make power-related decisions -- from the choice of shutdown and slowdown states to architectural support for speed scaling and load balancing -- in a systematic manner. The approach formulates the speed scaling problem, and its different instances under various constraints, as an optimization problem that accounts not only for package effects but also for the spatial distribution of heat dissipation across the CMP die. Particular attention is paid to leakage power, given its growing importance due to advancing process technology, low-voltage (e.g., sub-threshold) circuit operation, and the 'architectural leakage' caused by blocks that are not subject to shutdown or slowdown measures. The PI's treatment of leakage is based on two key concepts: efficient runtime determination of a critical speed that provides optimally energy-efficient processing; and scheduling methods that take a global view of system-level power consumption and exploit the flexibility to make power-state changes for effective load balancing and latency hiding.
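The critical-speed concept can be illustrated with a small numeric sketch (the constants and the quadratic model below are hypothetical illustrations, not values from the proposal): dynamic energy per cycle falls as the clock slows, while leakage energy per cycle grows because the circuit leaks for longer, and the critical speed is the frequency minimizing their sum.

```python
# Hypothetical model: energy per cycle at normalized frequency f.
# Dynamic energy scales roughly as f^2 (voltage tracks frequency);
# leakage energy scales as 1/f (slower cycles leak for longer).
K_DYN = 1.0    # hypothetical dynamic-energy coefficient
P_LEAK = 0.3   # hypothetical normalized leakage power

def energy_per_cycle(f):
    return K_DYN * f ** 2 + P_LEAK / f

# Setting the derivative 2*K_DYN*f - P_LEAK/f**2 to zero gives the
# critical speed in closed form.
f_crit = (P_LEAK / (2 * K_DYN)) ** (1 / 3)

# Cross-check with a grid search over normalized frequencies.
grid = [i / 100 for i in range(20, 101)]   # f in [0.20, 1.00]
f_best = min(grid, key=energy_per_cycle)
```

Running below the critical speed increases total energy even though instantaneous power drops, which is why a runtime determination of this point matters for energy-efficient slowdown decisions.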
The PI also seeks to capture application intent by enabling the application developer to explicitly program important application-specific 'metadata' through a power-aware API (which, for instance, may change algorithms or precision based on the current energy availability profile). The intellectual merit of the project lies in optimization techniques that seek a judicious balance of multiple slowdown and shutdown states for the most efficient use of energy while pushing the thermal limits imposed by physical placement and packaging constraints. The broader impact of the proposed research is its contribution to an infrastructure of courses and projects on thermally aware, energy-efficient embedded processing. The project seeks to give students the chance to learn about and experiment with OS internals, memory interfaces, and time-bound computations.
|
0.915 |
2008 — 2011 |
Gupta, Rajesh |
N/A |
Collaborative Research: Design and Run-Time Techniques For Physically Coupled Software @ University of California-San Diego
Proposal Number: 0820061/0820034/0820230
TITLE: Design and Run-time Techniques for Physically Coupled Software
PIs: Ramesh Govindan (USC), Rajesh Gupta (UCSD), Mani Srivastava (UCLA), and Paulo Tabuada (UCLA)
ABSTRACT:
Many real-world systems are deeply embedded in the physical world and their operational behavior is determined in large part by a tight coupling between the system components and the physical environment. This project seeks to establish the scientific principles governing software for such physically-coupled systems by focusing on four challenges in the context of distributed sensing and control applications: 1) Support for physical context in the form of programming structures that enable application software to explicitly capture the state of the physical world as an observable in an embedded computation; 2) Formal methods for composing software modules that indirectly interact with each other through the physical world, and a run-time safety supervisor that provably enforces correctness of composition; 3) Programming structures to enable design and verification of applications with resource provisioning that is driven by and adapts to physical-world dynamics; 4) System software support for sharing physically-coupled sensor and actuator resources in distributed settings. In addition, educational techniques targeting the teaching of topics in physically-coupled computational systems are being explored by creating shared educational content in the form of self-contained reusable modules.
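A toy illustration of the run-time safety supervisor idea in challenge 2 (the invariant, names, and thermostat setting are invented for this sketch, not taken from the project): every actuator command issued by composed software modules is routed through a check against a physical-world invariant before it reaches the plant.

```python
# Hypothetical invariant: room temperature must stay within a safe band.
SAFE_TEMP_C = (15.0, 30.0)

def supervise(command, predicted_temp_c):
    """Admit an actuator command only if the physical state it is
    predicted to lead to remains inside the safety envelope;
    otherwise override with a known-safe action."""
    lo, hi = SAFE_TEMP_C
    if lo <= predicted_temp_c <= hi:
        return command      # composition is safe: pass through unchanged
    return "off"            # unsafe interaction through the physical world
```

The point of placing the check at run time is that two modules that never call each other can still interact through the physical world (both heating the same room); the supervisor enforces the invariant regardless of which module issued the command.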
|
0.915 |
2009 — 2010 |
Gupta, Rajesh |
N/A |
Cyber-Physical Systems Week (Cpsweek 2009) @ University of California-San Diego
This award is for the support of student participation in CPSWeek 2009 and the CPS Forum, San Francisco, California, April 13 through April 17, 2009. The objective is to attract a diverse student population to research careers in Science, Technology, Engineering and Math (STEM) areas contributing to the emerging field of Cyber-Physical Systems. This NSF grant will be used to ensure the broadest possible student participation in CPSWeek events.
|
0.915 |
2009 — 2012 |
Gupta, Rajesh |
N/A |
Cps:Small:Collaborative Research:Localization and System Services For Spatiotemporal Actions in Cyber-Physical Systems @ University of California-San Diego
CPS: Small: Collaborative Research: Localization and System Services for SpatioTemporal Actions in Cyber-Physical Systems
The objective of this research is to develop models, methods and tools for the capture and processing of events and actions in cyber-physical systems (CPS) in a manner that does not violate the underlying physics or computational logic. The project approach uses a novel notion of cyber-physical objects (CPO) to capture the mobility and localization of computation in cyber-physical systems using recent advances in geolocation and the Internet infrastructure, and supports novel methods for spatiotemporal resource discovery.
Project innovations include a model for computing spatiotemporal relationships among events of interest in the physical and logical parts of a CPS, and its use in a novel cyberspatial reference model. Using this model, the project builds a framework for locating cyber-physical application services and an operating environment for these services. The project plan includes an experimental platform to demonstrate capabilities for building new OS services for CPS applications, including collaborative control applications drawn from the intermodal transportation system.
The project will enable the design and analysis of societal-scale applications, such as the transportation and electrical power grids, that also include a governance structure. It will directly contribute to educating an engineering talent pool by offering curricular training that ranges from degree programs in embedded systems to seminars and technology transfer opportunities coordinated through the Calit2 institute at UCSD and the Institute for Sensing Systems (ISS) at OSU. The team will collaborate with the non-profit Milwaukee Institute to explore policies and mechanisms for enterprise governance systems.
|
0.915 |
2010 — 2017 |
Zhou, Yuanyuan (co-PI) [⬀] Rosing, Tajana (co-PI) [⬀] Jhala, Ranjit (co-PI) [⬀] Gupta, Rajesh |
N/A |
Collaborative Research: Variability-Aware Software For Efficient Computing With Nanoscale Devices @ University of California-San Diego
Abstract: The Variability Expedition Project: Variability-Aware Software for Efficient Computing with Nanoscale Devices
As semiconductor manufacturers build ever smaller components, circuits and chips at the nanoscale become less reliable and more expensive to produce, no longer behaving like precisely chiseled machines with tight tolerances. Modern computing systems are effectively ignorant of the variability in behavior of underlying components from device to device, their wear-out over time, or the environment in which the system is placed. This makes them expensive, fragile and vulnerable to even the smallest changes in the environment or component failures. We envision a computing world where system components -- led by proactive software -- routinely monitor, predict and adapt to the variability of manufactured systems. Changing the way software interacts with hardware offers the best hope for perpetuating the fundamental gains of the past 40 years in computing performance at lower cost. The Variability Expedition fundamentally rethinks the rigid, deterministic hardware-software interface to propose a new class of computing machines that are not only adaptive but also highly energy efficient. These machines will be able to discover the nature and extent of variation in hardware, develop abstractions to capture these variations, and drive adaptations in the software stack from compilers and runtimes to applications. The resulting computer systems will work, and continue working, while using components that vary in performance or grow less reliable over time and across technology generations. A fluid software-hardware interface will thus mitigate the variability of manufactured systems and make machines robust, reliable and responsive to changing operating conditions.
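One minimal way to picture the "software adapts to hardware variation" theme (a sketch under invented assumptions, not the Expedition's actual design) is a runtime that measures each core's effective speed and divides work proportionally, instead of assuming identical cores:

```python
def partition(total_items, measured_speeds):
    """Split work across cores in proportion to their measured speeds,
    so a slow (high-variation) core is not assigned an equal share."""
    total_speed = sum(measured_speeds)
    shares = [int(total_items * s / total_speed) for s in measured_speeds]
    shares[0] += total_items - sum(shares)   # hand rounding leftovers to core 0
    return shares
```

Because the speeds are measured rather than assumed, the same binary adapts as parts age or as operating conditions change, which is the spirit of the monitor-and-adapt loop described above.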
The Variability Expedition marshals the resources of researchers at the California Institute for Telecommunications and Information Technology (Calit2) at UC San Diego and UC Irvine, as well as UCLA, University of Michigan, Stanford and University of Illinois at Urbana-Champaign. With expertise in process technology, architecture, and design tools on the hardware side, and in operating systems, compilers and languages on the software side, the team also has the system implementation and applications expertise needed to drive and evaluate the research as well as transition the research accomplishments into practice via application drivers in wireless sensing, software radio and mobile platforms.
A successful Expedition will dramatically change the computing landscape. By re-architecting software to work in a world where monitoring and adaptation are the norm, it will achieve more robust, efficient and affordable systems that are able to predict and withstand not only hardware failures but also software bugs and even attacks. The new paradigm will apply across the entire spectrum of embedded, mobile, desktop and server-class computing machines, yielding particular gains in sensor information processing, multimedia rendering, software radios, search, medical imaging and other important applications. Transforming the relationship between hardware and software presents valuable opportunities to integrate research and education, and this Expedition will build on established collaborations with educator-partners in formal and informal arenas to promote interdisciplinary teaching, training, learning and research. The team has built strong industrial and community outreach ties to ensure success, and reaches out to high-school students through a combination of tutoring and summer school programs. The Variability Expedition will engage undergraduate and graduate students in software, hardware and systems research, while promoting participation by underrepresented groups at all levels and broadly disseminating results within academia and industry.
|
0.915 |
2014 — 2019 |
Gupta, Rajesh |
N/A |
Cps: Frontiers: Collaborative Research: Roseline: Enabling Robust, Secure and Efficient Knowledge of Time Across the System Stack @ University of California-San Diego
Accurate and reliable knowledge of time is fundamental to cyber-physical systems for sensing, control, performance, and the energy-efficient integration of computing and communications; this observation underlies the proposal. Emerging CPS applications depend on precise knowledge of time to infer location and control communication. There is a diversity of semantics used to describe time, and the quality of time varies as we move up and down the system stack. System designs tend to overcompensate for these uncertainties, and the result is systems that may be over-designed, inefficient, and fragile.
The intellectual merit derives from the new and fundamental concept of time and the holistic measure of quality of time (QoT) that captures metrics including resolution, accuracy, and stability. The proposal builds a system stack ("ROSELINE") that enables new ways for clock hardware, the operating system, network services, and applications to learn, maintain and exchange information about time, influence component behavior, and robustly adapt to dynamic QoT requirements, as well as to benign and adversarial changes in operating conditions. Application areas that will benefit from quality of time include the smart grid, networked and coordinated control of aerospace systems, underwater sensing, and industrial automation.
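The QoT metrics named above (resolution, accuracy, stability) suggest a simple interface sketch; the field names and units below are hypothetical illustrations, not Roseline's actual API:

```python
from dataclasses import dataclass

@dataclass
class QoT:
    resolution_ns: float    # smallest distinguishable time step
    accuracy_ns: float      # worst-case offset from the reference timescale
    stability_ppm: float    # bound on clock drift rate

def satisfies(clock: QoT, requirement: QoT) -> bool:
    """A clock meets an application's QoT requirement if it is at least
    as fine-grained, as accurate, and as stable as demanded."""
    return (clock.resolution_ns <= requirement.resolution_ns
            and clock.accuracy_ns <= requirement.accuracy_ns
            and clock.stability_ppm <= requirement.stability_ppm)
```

An adaptive stack could use such a predicate to select the cheapest clock source that still satisfies each application's declared requirement, rather than over-provisioning timing everywhere.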
The broader impact of the proposal stems from the foundational nature of the work, which builds a robust and tunable quality of time that can be applied across the broad spectrum of applications that pervade modern life. The proposal will also provide valuable opportunities to integrate research and education in graduate, undergraduate, and K-12 classrooms. There will be extensive outreach through publications, open-source software, and participation in activities such as the Los Angeles Computing Circle for pre-college students.
|
0.915 |
2015 — 2017 |
Gupta, Rajesh |
N/A |
Csr:Small:Collaborative Research:Eds: Systems and Algorithmic Support For Managing Complexity in Sensorized Distributed Systems @ University of California-San Diego
Commercial buildings, the energy grid, and transportation systems are examples of emerging distributed systems that are beginning to be instrumented with large numbers of sensors and actuators for sensing ambient environmental conditions, user occupancy, the state of energy use, and so on. The goal of such instrumentation is to improve safety and utility and to reduce costs. This is a hard problem due to the interaction of humans, devices and networks in an operating environment with uncertainties regarding the veracity, timeliness, meaning and value of sensor data. A large number of sensors must be provisioned, monitored and maintained by system operators, currently a manual and error-prone task; deploying, managing and adapting a sensorized system at scale becomes nearly impossible. In the micro-grid testbed of networked buildings used by this project, over a hundred thousand alarms are raised per day by the first fifty buildings under observation, yet despite the thousands of reported sensors there are only a few hundred distinct sensor types. The key is to reduce the complexity of sensorized distributed systems using automated or semi-automated methods that characterize sensors, determine their type from their data streams, and make inferences about the quality of sensor data with minimal operator effort. This project will apply advances in unsupervised machine learning to compose, aggregate and interpret sensory data spatially and over time, enabling the robust derivation of semantically useful sensory information for applications and users and resulting in better-utilized, more robust systems.
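The automated sensor-type inference described above can be sketched with a tiny unsupervised pipeline (pure standard library; the features, streams, and two-type setup are invented for illustration and are far simpler than what the project proposes): summarize each stream into a feature signature, then cluster the signatures so streams of the same physical type fall together.

```python
import statistics

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def signature(stream):
    # Crude per-stream type signature: mean level and variability.
    return (statistics.mean(stream), statistics.pstdev(stream))

def kmeans(points, k=2, iters=20):
    # Deterministic farthest-point initialization, then Lloyd iterations.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        centers = [tuple(statistics.mean(dim) for dim in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: dist2(p, centers[i])) for p in points]

# Invented streams: two temperature-like, two power-meter-like.
streams = [
    [22.0, 22.5, 21.8, 22.1],      # temperature sensor A
    [21.5, 22.2, 22.4, 21.9],      # temperature sensor B
    [480.0, 510.0, 495.0, 520.0],  # power meter A
    [500.0, 470.0, 515.0, 505.0],  # power meter B
]
labels = kmeans([signature(s) for s in streams], k=2)
```

Grouping thousands of physical sensor points into a few hundred inferred types in this way is what would let operators triage alarms per type instead of per point.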
The intellectual merit of the project lies in building an information flow model, with systematic capture and use of sensor metadata, that enables algorithmic approaches to data composition and inference. Using the proposed learning-based automation approach along with programming and runtime support, the project will devise a data-to-decision flow for distributed systems operating under timing and reliability constraints. The project outlines smart buildings as an application driver for the envisioned sensorized distributed system, with a working real-life testbed. This research will directly contribute to methods for discovering tele-connections, such as dependence and causal relationships, between sensory data streams, which are crucial for devising effective control of devices connected to these distributed systems.
The broader impacts of the project include advances in design, deployment, management and programming methodologies for a new class of distributed computing systems that can deal with changing characteristics and topologies of the underlying sensor network. The testbed will demonstrate how such methods can create energy-efficient, sustainable, and comfortable buildings for occupants. A number of educational and outreach activities are planned to train the next generation of talent for the emerging area of a data-driven internet of things. For the broader research community, the project will make available SensorDepot, an open-source extensible architecture for implementing applications for sensorized distributed systems.
|
0.915 |
2016 — 2018 |
Altintas, Ilkay Mcauley, Julian Gupta, Rajesh |
N/A |
Bd Spokes: Spoke: West: Collaborative: Metroinsight: Knowledge Discovery and Real-Time Interventions From Sensory Data Flows in Urban Spaces @ University of California-San Diego
The MetroInsight project is building an end-to-end system for knowledge discovery from real-time data streams collected through a variety of sensors and data collection and aggregation methods. These data streams are highly dimensional, with multiple sensors observing the same or similar phenomena over multiple sensory spectrums and scales. They are also sometimes real-time and/or have strong timing relationships that are necessary to support metropolitan infrastructure through effective analytics and policy support. The project brings together a diverse set of partners -- utilities, universities, companies and cities -- with the ability to contribute novel tools and urban sensor data and to translate knowledge into actions. MetroInsight's unique combination of tools, data and partnerships, in part with the MetroLab Network, makes it well poised to set an example for MetroLab programs across the nation as well as for other municipal governments. The project will explore connections between multimodal datasets and urban infrastructure management to build a practical system consisting of integrated tools, while training a new generation of the metropolitan workforce. As part of an ambitious plan for community building and workforce development, the project includes the creation of new learning modules, certification programs on energy and sustainability, an online course on sensor data analytics, and new capstone projects in a new Data Science master's degree program.
To achieve project goals, MetroInsight is building infrastructure for managing data, networks and processing that will support the design of new algorithms and tools in the project. Specifically, the project is developing algorithms to transform multimodal urban data into lower-dimensional data that reflect underlying physical and social phenomena. These low-dimensional data may consist of population-level data suitable for dynamic processing to support real-time monitoring and visualization by city-scale operators of various lifelines, from transportation and communications to emergency response. To address the technical challenges posed by the complex and subtle spatiotemporal dynamics of interdependent urban networks, MetroInsight will develop metadata methods and tools that support discovery of operational interdependencies, quantification of uncertainties for decision support, and assurances related to the integrity and security of data and compliance with ethical and legal privacy expectations.
|
0.915 |
2019 — 2021 |
Gupta, Rajesh |
N/A |
Collaborative Research: Predictive Risk Investigation System (Prism) For Multi-Layer Dynamic Interconnection Analysis @ University of California-San Diego
The natural-human world is characterized by highly interconnected systems, in which no single discipline is equipped to identify broader signs of systemic risk and mitigation targets. For example, what risks in agriculture, ecology, energy, finance and hydrology are heightened by climate variability and change? How might risks in, for example, space weather be connected with energy, water and finance? Recent advances in computing and data science, and the data revolution in each of these domains, now provide a means to address these questions. The investigators will jointly establish the PRISM Cooperative Institute to pioneer the integration of large-scale, multi-resolution, dynamic data across different domains to improve the prediction of risks (potentials for extreme outcomes and system failures). The investigators' vision is to develop a trans-domain framework that harnesses big data in the context of domain expertise to discover new critical risk indicators, holistically identify their interconnections, predict future risks and spillover potential, and measure systemic risk broadly. The investigators will work with stakeholders to ultimately create early warnings and targets for critical risk mitigation and to improve preparedness for devastating events worldwide; form wide and unique partnerships to educate the next generation of data scientists through postdoctoral researcher and student exchanges, research retreats, and workshops; and broaden participation through recruiting and training of those under-represented in STEM, including women and underrepresented minority students, and through impact on stakeholder communities via methods, tools and datasets enabled by PRISM Data Library web services.
The PRISM Cooperative Institute's data-intensive, cross-disciplinary research directions include: (i) Critical Risk Indicators (CRIs): the investigators define CRIs as quantifiable information specifically associated with cumulative or acute risk exposure to devastating, ruinous losses resulting from a disastrous (cumulative) activity or a catastrophic event; PRISM aims to identify critical risks and existing indicators in many domains, and to develop new CRIs by harnessing the data revolution. (ii) Dynamic Risk Interconnections: the investigators will dynamically model and forecast CRIs, and PRISM aims to robustly identify a sparse, interpretable lead-lag risk dependence structure of critical societal risks, using state-of-the-art methods to accommodate CRI complexities such as nonstationary, spatiotemporal, and multi-resolution attributes. (iii) Systemic Risk Indicators (SRIs): PRISM will model trans-domain systemic risk by forecasting critical risk spillovers and by creating SRIs to facilitate stakeholder intervention analysis. (iv) Validation & Stakeholder Engagement: the investigators will deploy the PRISM analytical framework on integrative case studies with distinct risk exposure (acute versus cumulative) and catastrophe characteristics (immediate versus sustained), and will solicit regular input from key stakeholders regarding critical risks and their decision variables, to better inform their operational understanding of policy versus practice.
This project is part of the National Science Foundation's Harnessing the Data Revolution (HDR) Big Idea activity, and is jointly supported by HDR and the Division of Mathematical Sciences within the NSF Directorate of Mathematical and Physical Sciences.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2020 — 2021 |
Gupta, Rajesh Ohno-Machado, Lucila Kumar, Arun Quer, Giorgio Shang, Jingbo |
N/A |
Nsf Convergence Accelerator Track D: Towards Intelligent Sharing and Search For Ai Models and Datasets @ University of California-San Diego
The NSF Convergence Accelerator supports use-inspired, team-based, multidisciplinary efforts that address challenges of national importance and will produce deliverables of value to society in the near future. A major goal of AI-driven applications is to discover the underlying patterns in domain-specific datasets, which typically requires tremendous field experience and interdisciplinary knowledge to design or even select suitable AI models. This project will develop a hub and portal for AI data sets and models. It will offer data and model matching recommendations, the use of domain knowledge to improve search strategies for data sets and models, and support for privacy. The hub and portal will engage a broad range of users (in STEM and non-STEM fields) creating AI-driven innovations in various domains that we can only imagine today. Successful execution will provide new tangible artifacts consisting of model and data schemas, software, systems, and services that would make the AI models and datasets easily discoverable, accessible, interoperable, and reproducible.
Four novel techniques will be used to realize the envisioned system: (1) A fine-grained privacy control technique with adaptive descriptive statistics, achieving a balance between the privacy needs of data owners and application-driven usability; all other components will have access only to the privacy-controlled data. (2) An automated metadata generation method that exploits various kinds of information about AI models and datasets (e.g., data values, model parameters, auxiliary descriptions) to incorporate domain logic into semantics; this metadata, together with the models and datasets, will be organized as a text-rich network. (3) A representation learning method that transforms information in the text-rich network into a latent space, where datasets and models with similar semantics lie close to each other; this learning over multimodal data will enable a comprehensive understanding of models and datasets. (4) A learning-to-match model with constraints that bridges datasets and models; the constraints are mainly induced from schema alignment between models and datasets, which can also filter out obviously incompatible model and dataset choices, significantly expediting the search and matching process.
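A drastically simplified stand-in for techniques (3) and (4) can still show the shape of constrained dataset-model matching (the records, field names, and bag-of-words cosine scoring below are invented for illustration; the project's learned representations would be far richer): apply the schema constraint first, then rank the eligible models by description similarity.

```python
import math
from collections import Counter

def bow(text):
    # Bag-of-words vector as a stand-in for a learned representation.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def match(dataset, models):
    # Constraint first: a model is eligible only if the dataset provides
    # every field its input schema requires; then rank survivors by
    # description similarity.
    eligible = [m for m in models if set(m["schema"]) <= set(dataset["columns"])]
    return max(eligible,
               key=lambda m: cosine(bow(dataset["desc"]), bow(m["desc"])),
               default=None)

dataset = {"columns": ["ecg", "label"],
           "desc": "ecg recordings labeled for arrhythmia detection"}
models = [
    {"name": "ecg-cnn", "schema": ["ecg"],
     "desc": "convolutional model for ecg arrhythmia classification"},
    {"name": "image-resnet", "schema": ["image"],
     "desc": "residual network for image classification"},
]
best = match(dataset, models)
```

Filtering on the schema constraint before scoring is what makes the search cheap: obviously incompatible pairs never reach the similarity computation, mirroring the expediting role of constraints described above.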
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |