1999 — 2006
Smith, Stephen (co-PI); Veloso, Manuela
NSF-CNPq Collaborative Research: Multiagent Collaborative and Adversarial Planning, Execution, Perception, and Learning @ Carnegie Mellon University
Abstract
Award IIS-9900298; Veloso, Manuela; Carnegie Mellon University; $150,000, 36 mos. (joint funding with CISE CNPq and International Program)
NSF-CNPq Collaborative Research: Multi-Agent Collaborative and Adversarial Perception, Planning, Execution, and Learning
This is a three-year standard award. This project aims to investigate a concrete spectrum of issues relevant to the development of teams of complete robot agents in dynamic, real-time, and adversarial environments. The proposed research will investigate and develop agents that are: (1) autonomous, with on-board sensing, planning, and acting; (2) efficient, capable of achieving specific goals under time and resource constraints through the integration of deliberative and reactive planning; (3) cooperative, capable of collaborating with each other to accomplish tasks that are beyond an individual's capabilities; and (4) adaptive, capable of learning from experience by refining their individual and collaborative action selection preferences. This multi-agent robotic research will be carried out within the context of robotic soccer. The robotic soccer domain introduces many specific research topics, including: (i) complete integration of perception, action, and cognition in a team of multiple robotic agents; (ii) definition of a set of robust reactive planning behaviors for individual agents (each agent should be equipped with skills that enable it to effectively perform individual and collaborative actions); (iii) reliable, real-time, and active visual perception, including tracking of multiple moving objects and prediction of object movement; and (iv) multi-agent strategic reasoning. The proposed work will build upon the research experience of the U.S. research group on planning and learning and of the Brazilian research group on real-time perception.
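The fourth capability, refining action-selection preferences from experience, admits a compact illustration. Below is a minimal sketch assuming a standard gradient-bandit update over a softmax policy; the action names and reward signal are hypothetical stand-ins, and the abstract does not specify the project's actual learning algorithms.

```python
import math
import random

# A minimal sketch of refining action-selection preferences from experience,
# assuming a gradient-bandit update over a softmax policy. The action names
# and reward signal are hypothetical stand-ins for a robot soccer agent.

ACTIONS = ["pass", "shoot", "dribble", "intercept"]

class PreferenceLearner:
    def __init__(self, actions, step_size=0.1):
        self.prefs = {a: 0.0 for a in actions}  # numeric preference H(a) per action
        self.step_size = step_size
        self.baseline = 0.0                     # running average reward
        self.steps = 0

    def policy(self):
        # Softmax over preferences yields action probabilities.
        exps = {a: math.exp(h) for a, h in self.prefs.items()}
        total = sum(exps.values())
        return {a: e / total for a, e in exps.items()}

    def select(self):
        pi = self.policy()
        return random.choices(list(pi), weights=list(pi.values()))[0]

    def update(self, action, reward):
        # Raise preferences of actions that beat the average reward; lower the rest.
        self.steps += 1
        self.baseline += (reward - self.baseline) / self.steps
        pi = self.policy()
        for a in self.prefs:
            taken = 1.0 if a == action else 0.0
            self.prefs[a] += self.step_size * (reward - self.baseline) * (taken - pi[a])

learner = PreferenceLearner(ACTIONS)
for _ in range(1000):
    action = learner.select()
    reward = 1.0 if action == "pass" else 0.0  # stand-in for match feedback
    learner.update(action, reward)
print(learner.policy())  # probability mass shifts toward the rewarded action
```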
2010 — 2017
Steinfeld, Aaron (co-PI); Nourbakhsh, Illah Reza (co-PI); Veloso, Manuela; Simmons, Reid (co-PI)
HCC: Large: SSCI-MISR: Symbiotic, Spatial, Coordinated Human-Robot Interaction for Multiple Indoor Service Robots @ Carnegie Mellon University
Despite significant advances in robotics research and development over the years, there are still no pervasive intelligent mobile robots coexisting with humans in daily environments. Among the many possible reasons why this is the case, this project addresses the challenge of effective, concrete interaction between mobile robots and humans, focusing on tasks that enable joint human-robot performance and require spatial interaction. The PI's vision is that project outcomes will make it possible to have multiple robots in, say, an office building available for different navigational and informational tasks, including accompanying daylong visitors through their schedule of meetings, giving tours to occasional visitors, fetching objects for and taking them to people in offices, and delivering the daily mail. To achieve this goal, she plans to transform the state of the art in robot technology for social service robotics by introducing a novel symbiotic human-robot and robot-robot interaction paradigm that allows robots to help and be helped by humans and each other. A robot will ask humans for assistance based on self-awareness of its own limitations and a utility analysis of the estimated costs and benefits of the assistance. The PI and her team will develop and evaluate a robot-platform-independent and building-independent problem environment representation, along with algorithms for incremental map learning, localization and navigation, and asynchronous (multi-robot) task partitioning and planning under uncertainty, with a utility analysis that accounts for human availability to help robots. They will explore effective spatial interaction for mobile robots in spaces shared with humans, utilizing social conventions so that people are not merely obstacles from the robot's perspective. The robot science and development research will be seamlessly integrated with educational and outreach activities, as well as with principled evaluation, which will include fielding a team of robots in campus buildings.
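The cost-benefit test behind the help-seeking behavior can be sketched as a simple expected-utility comparison: ask for help only when the availability-weighted gain exceeds the cost imposed on the helper. The field names and numbers below are hypothetical illustrations, not the project's actual formulation.

```python
from dataclasses import dataclass

# A minimal sketch of the cost-benefit test behind symbiotic help-seeking,
# assuming a simple expected-utility model. Field names and numbers are
# hypothetical illustrations, not the project's actual formulation.

@dataclass
class HelpOption:
    description: str
    p_human_available: float    # estimated chance a human is nearby and willing
    p_success_with_help: float  # task success probability if helped
    helper_time_cost: float     # cost of the human's time, in utility units

def expected_gain(option, p_success_alone, task_value):
    """Availability-weighted success gain minus the cost imposed on the helper."""
    gain_if_helped = task_value * (option.p_success_with_help - p_success_alone)
    return option.p_human_available * (gain_if_helped - option.helper_time_cost)

def should_ask_for_help(option, p_success_alone, task_value):
    return expected_gain(option, p_success_alone, task_value) > 0.0

# An armless robot cannot press elevator buttons on its own (p_success_alone = 0),
# so asking a passerby has high expected gain.
press_button = HelpOption("press the elevator button", 0.8, 0.95, 1.0)
print(should_ask_for_help(press_button, p_success_alone=0.0, task_value=10.0))  # True
```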
Broader Impacts: Aside from dramatically advancing the state of the art in robot technology, enabling multiple mobile robots to be part of the workspace of an office building environment will have significant educational impact relating both to robot technology and to interaction with robots. Continuous, openly available robot presence in the computer science and robotics research spaces will change the nature of the relationship between researchers and their classroom research projects, by triggering synergistic collaborations and new, higher-risk experiments with lower setup cost. Campus outreach tours will be transformed from a narrow view of the future of technology in laboratory settings to a sweeping exposure to the reality and implications of humans and robots coexisting throughout the built environment, significantly broadening inquiry and discussion about the role of interactive technology in our lives. Disseminated curricula incorporating low-cost mobile robots in the secondary school classroom will lift the robot-classroom relationship from one of build kits for very low-capability robots to one of high-level interaction design, industrial design, and discussions of human-robot relationships.
2012 — 2015
Veloso, Manuela
RI: Small: Natural Language-Based Human Instruction for Task Embedded Robots @ Carnegie Mellon University
This project will advance the scientific state of the art in social service robots by introducing a novel approach for performing, composing, and correcting tasks using spatial language, and for handling the challenges of long-term interaction with people. The team of investigators will leverage prior work on CoBot service robots as a scientific platform. CoBots can transport objects, deliver messages, escort people and go to places, continuously executing these tasks over multiple weeks in a multi-floor building. The team will collaborate to research, develop, and evaluate algorithms for learning, composing, and correcting the execution of tasks via natural language. The proposed research will enable any person to train the robot; we will use the CoBot robots to perform evaluation and testing of our proposed algorithms.
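One minimal way to picture composing tasks from natural language is a small clause grammar that splits an utterance on sequencing words and maps each clause to a task primitive. The primitive names below echo the CoBot task types described above (transporting objects, escorting people, going to places), but the parsing scheme itself is only an assumption for illustration, not the project's actual language-grounding method.

```python
import re

# A minimal sketch of mapping spatial-language commands to task primitives
# via a small keyword grammar. The primitives (Deliver, Escort, GoTo) echo
# the CoBot task types above; everything else is a hypothetical illustration.

PATTERNS = [
    (re.compile(r"deliver (?P<obj>[\w\s]+?) to (?P<place>[\w\s]+)"),
     lambda m: ("Deliver", m["obj"].strip(), m["place"].strip())),
    (re.compile(r"escort (?P<who>[\w\s]+?) to (?P<place>[\w\s]+)"),
     lambda m: ("Escort", m["who"].strip(), m["place"].strip())),
    (re.compile(r"go to (?:the )?(?P<place>[\w\s]+)"),
     lambda m: ("GoTo", m["place"].strip())),
]

def parse_instruction(utterance):
    """Split an utterance on 'then' and match each clause against the grammar."""
    tasks = []
    for clause in re.split(r"\bthen\b", utterance.lower()):
        clause = clause.strip().strip(".,")
        if not clause:
            continue
        for pattern, build in PATTERNS:
            match = pattern.search(clause)
            if match:
                tasks.append(build(match))
                break
        else:
            # Unparsed clauses are flagged so a dialog could ask for correction.
            tasks.append(("Clarify", clause))
    return tasks

print(parse_instruction("Deliver the mail to office 7002, then go to the kitchen."))
# [('Deliver', 'the mail', 'office 7002'), ('GoTo', 'kitchen')]
```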
The vision of a continuously operating robot in a real-world environment that can update its behavior in response to human instruction will have a broad impact on the way students, faculty, and visitors interact with and view the usefulness of robots. Some examples include: (1) Customizable intelligent robots will give people the creative power to simply and intuitively update robot behavior, making the system broadly accessible to non-experts. (2) Outreach to the community will transform the view that robots are static, unchangeable systems by creating awareness of robots co-inhabiting our environment. We will invite children of different age groups and people from different cultures to interact with our co-robot through language-based instruction. (3) Synergistic activities across multiple research groups have been, and will continue to be, explored and encouraged (e.g., continuous environmental measurement and monitoring).
2016 — 2019
Kitani, Kris; Veloso, Manuela
NRI: A Cognitive Navigation Assistant for the Blind @ Carnegie Mellon University
The focus of this project is the implementation of a navigation assistant that uses a collection of sensing modalities and algorithms to guide a blind person through the knowledge landscape (e.g., social context, visual landmarks, scene functionality) of an unfamiliar environment. The approach is based on a portfolio of complex processes that, with the help of novel techniques for melding information at various levels of abstraction, provide a coherent account of the state of the world within a single framework. In the near term, project outcomes will directly improve the quality of life of people with visual impairments through the public release of a smartphone app. In the longer term, the societal impact of this research will extend beyond improving sensory capabilities for the blind, in that it describes an approach to human augmentation through the use of machine intelligence. The work will directly shed light on the variety of environmental knowledge that can be automatically acquired using machine perception, and on how that information can be conveyed through a physical co-robot interface. From an educational perspective, this work will develop important models for integrating knowledge obtained by intelligent machines into one source, and will also develop new theories regarding the translation of that rich knowledge in a manner that can be easily understood by the user.
Leveraging prior work, sensing modalities such as Bluetooth low-energy beacons, depth sensors, color cameras, and wearable inertial motion units will be used to enable continuous localization within a novel environment. An additional layer of higher-order algorithms will further build upon physical measurements of location to develop computational contextual awareness, enabling the navigation assistant to understand the knowledge landscape by identifying meaningful visual landmarks, modes of interaction (functionality) within the environment, and social context. This knowledge structure will then be conveyed to the blind user to enable contextual hyper-awareness, that is to say, a contextual understanding of the environment that goes beyond normative sensing capabilities, in order to augment the user's ability to navigate the knowledge landscape of the environment. The navigation assistant will be instantiated as two concrete manifestations: a compact wearable interface, and a physical robotic interface. The wearable interface will be a smartphone-based system that gives audio-based navigation feedback to facilitate the creation of a cognitive map. The robotic interface will be a wheeled hardware platform that guides the user through haptic feedback to further reduce the cognitive load of interpreting and following audio feedback. Both platforms will be refined and evaluated in real-world scenarios based on principles derived from rigorous user studies. Project outcomes will include a navigation assistant that can help a blind person walk a path through novel indoor or outdoor suburban environments to a desired destination. The two physical interfaces will also be used to develop working theories and models for co-robot scenarios that must take into account situational context and the preferential dynamics of the user.
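As one concrete illustration of the beacon-based portion of the localization layer, the sketch below assumes a log-distance path-loss model to turn RSSI readings into rough distances and a weighted centroid to estimate position. The constants and beacon coordinates are hypothetical, and the actual system fuses several additional sensing modalities beyond what is shown.

```python
import math

# A minimal sketch of BLE-beacon localization: a log-distance path-loss model
# converts RSSI to approximate range, and a weighted centroid combines the
# beacon positions. All constants and coordinates are hypothetical; the real
# system fuses additional sensors (depth, vision, inertial).

TX_POWER_DBM = -59.0  # assumed RSSI at 1 m from a beacon
PATH_LOSS_N = 2.0     # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert RSSI = TX_POWER - 10 * n * log10(d) to recover distance d in meters."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_N))

def estimate_position(readings):
    """Weighted centroid of beacon positions; nearer beacons weigh more.

    readings: list of ((x, y), rssi_dbm) pairs for beacons currently heard.
    """
    total_w = wx = wy = 0.0
    for (x, y), rssi in readings:
        w = 1.0 / rssi_to_distance(rssi) ** 2
        total_w += w
        wx += w * x
        wy += w * y
    return wx / total_w, wy / total_w

# Example: three beacons at known positions heard at different strengths.
readings = [((0.0, 0.0), -65.0), ((10.0, 0.0), -75.0), ((0.0, 10.0), -80.0)]
print(estimate_position(readings))  # estimate pulled toward the strongest beacon
```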