2007 — 2011
Howard, Ayanna; Kemp, Charles; Blake, M. Brian; Jacko, Julie; Brown, Edward
HRI: Robot Learning From Teleoperative-Based Instruction and Multimodal Interaction @ Georgia Tech Research Corporation
Teleoperated assistive robots in home environments have the potential to dramatically improve quality of life for older adults and people who experience disabling circumstances due to chronic or acute health conditions. Such robots could similarly aid clinicians and healthcare professionals providing treatment. The success of these applications, however, will critically depend on the ease with which a robot can be commanded to perform common manipulation tasks within a home environment. The proposed research addresses this key challenge in two significant ways. First, by learning from teleoperated manipulation (i.e., teleoperative-based instruction), robots can acquire the ability to perform elements of common tasks with greater autonomy and reliability. Second, by automatically mapping new modalities (e.g., voice and gesture commands) to the robot's user interface, a wider variety of people will be able to use the robot more easily. The resulting multimodal interfaces may be especially important for people who have difficulty using a single modality, such as vision. These two fundamental research components form the basis of our approach to enabling manipulation of everyday objects in an unstructured human environment.
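The learning-from-demonstration component described above lends itself to a brief illustration. The Python sketch below shows one generic way a robot could imitate logged teleoperation data (behavior cloning with a nearest-neighbor policy); the names, data shapes, and choice of learner are hypothetical placeholders, not details of this project.

```python
# Hypothetical sketch of learning from teleoperative-based instruction:
# fit a policy to (state, operator command) pairs logged during
# teleoperation, then execute the learned mapping autonomously.
# The data here are random placeholders standing in for real logs.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))    # e.g., gripper pose + distance to object
commands = rng.normal(size=(500, 7))  # e.g., the operator's 7-DOF arm commands

# A simple nonparametric policy: reproduce the operator's command
# issued in the most similar previously observed states.
policy = KNeighborsRegressor(n_neighbors=5).fit(states, commands)

current_state = rng.normal(size=(1, 4))
autonomous_command = policy.predict(current_state)
print(autonomous_command.shape)       # (1, 7)
```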
2008 — 2013
Mitchell, Tom; Just, Marcel (co-PI); Kemp, Charles
CDI-Type II: From Language to Neural Representations of Meaning @ Carnegie-Mellon University
This project seeks to develop a new understanding of how the brain represents and manipulates meaning, by bringing together the perspectives of brain imaging, machine learning and computational modeling, using converging approaches from behavioral psychology, linguistics, computer science and neuroscience. In particular, the brain activity that encodes the meanings of words, phrases and sentences is studied, along with how the brain encodes the meaning of individual words in terms of their component semantic features, how it modifies its encoding of an individual word when it occurs within a phrase or clause, and how it constructs the encoding of a phrase or clause from the encodings of its component words. This work builds on recent research showing (1) that repeatable patterns of fMRI activation are associated with viewing nouns describing concrete objects such as "hammer" or "toe," (2) that the neural patterns that encode the meanings of these words are similar across different people, and (3) that these encodings are similar whether the person views a word or a picture of the object. Whereas previous work has focused on the neural representation of single words in isolation, this project studies multi-word phrases and sentences, which comprise larger units of knowledge: for example, how the neural encoding of a noun is influenced by its adjective (e.g., "fast rabbit" vs. "cuddly rabbit"), and how the neural encoding of a proposition is related to the encodings of its component words (how "cut" and "surgeons" combine in the proposition "surgeons cut"). To address these questions, computational models are developed using a diverse set of training data, including fMRI data, data from a trillion-word corpus of text that represents typical language use, and behavioral data from language comprehension and judgment tasks, as well as online linguistic knowledge bases such as VerbNet, and theoretical proposals from the cognitive neuroscience literature regarding how and where the brain encodes meaning. These perspectives are integrated into a theory in the form of a computational model trained from diverse data and prior knowledge, and capable of making experimentally testable predictions about the neural encodings and behavioral responses associated with tens of thousands of words, and hundreds of thousands of phrases and sentences.
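The modeling approach the abstract builds on (predicting a word's fMRI image from corpus-derived semantic features, as in the PIs' earlier noun studies) can be sketched compactly. Everything below uses random placeholder data; the feature count and the ridge learner are illustrative assumptions, not the project's actual model.

```python
# Hedged sketch of the kind of model described above: predict the fMRI
# image evoked by a word from corpus-derived semantic features. Real
# inputs would be co-occurrence statistics from a large text corpus and
# voxel activations from fMRI; here both are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 2000
semantic_features = rng.normal(size=(n_words, n_features))  # e.g., verb co-occurrences
voxel_images = rng.normal(size=(n_words, n_voxels))         # mean fMRI image per word

# One regularized linear map learned jointly for all voxels; a held-out
# word's image can then be predicted from its corpus features alone.
model = Ridge(alpha=1.0).fit(semantic_features[:-1], voxel_images[:-1])
predicted = model.predict(semantic_features[-1:])

# Evaluate by correlating predicted and observed images for the held-out word.
r = np.corrcoef(predicted.ravel(), voxel_images[-1])[0, 1]
print(f"held-out correlation: {r:.2f}")
```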
This project potentially constitutes a significant scientific advance in understanding the relation between brain and mind, impacting a variety of scientific disciplines involved in the study of semantics, including linguistics, psychology, philosophy and cognitive science. A second impact comes from use of the methods and results to understand brain pathologies that involve language disturbances, such as aphasia, dyslexia, and autism. A third impact comes from the development of new statistical machine learning algorithms for analyzing and modeling cross-domain data sets to aid in scientific discovery. Finally, the emerging results and methods will have an educational impact through courses on Brain Imaging, Machine Learning, and Psychology taught by the Principal Investigators, and through a new course to be developed specifically on the topic of "Neural representations of meaning," with materials to be made available on the web.
2009 — 2012
Kemp, Charles |
Collaborative Research: Assistive Object Manipulation Via RFID-Guided Robots @ Georgia Tech Research Corporation
PI: Kemp, Charles C., in collaboration with Reynolds, Matthew S. (Proposal Number 0932592, in collaboration with 0931924).
For millions of people, motor impairments diminish quality of life, reduce independence, and increase healthcare costs. Assistive mobile robots that manipulate objects within everyday settings have the potential to improve quality of life by augmenting people's abilities with a cooperative robot. In spite of this promising opportunity, autonomous mobile robots are not yet robust enough for daily operation in real homes. The proposed research directly addresses the challenges of achieving this competence through novel, innovative research that combines RFID-based sensing with autonomous mobile manipulation. Key insights of the proposed work are that the current range of ultra-high frequency passive RFID 'smart labels' makes it feasible for robots to read them from across a room, and that their low cost (sub-$0.25) means they can be ubiquitously attached to important objects throughout the home. The unambiguous digital signals available from tagged objects could be profoundly enabling for an assistive robot in three ways, which this research will investigate and exploit: (1) scanning a room to provide reliable, unique identification of important tagged objects, so the robot can provide the user with a menu of available objects and be informed about the objects (e.g., their appearance and appropriate manipulative actions); (2) long-range localization of a tagged object based on its signal characteristics, so the robot can approach the object; and (3) short-range localization and identification through novel finger-mounted antennas, to help the robot manipulate the object, including grasping it and confirming that it has been grasped. We will research these transformative capabilities for assistive robots in the context of object retrieval. Our studies with physically impaired patients with amyotrophic lateral sclerosis (ALS) from the Emory ALS Center clearly demonstrate that object retrieval via an autonomous mobile robot would be a valued assistive technology, and these capabilities could serve as a foundation upon which other assistive services could be built. We will evaluate the RFID-guided robot that results from this research with ALS patients. The robot will include a novel menu-driven user interface that will enable a physically impaired user to easily select an object and an appropriate delivery method. After this selection, the robot will find, approach, grasp, and deliver the requested object.
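A toy sketch of capability (2), long-range localization from signal characteristics: sweep a directional antenna, record received signal strength (RSSI) for the target tag at each bearing, and drive toward the strongest return. The read model and all names below are invented for illustration; the project's actual signal-processing strategies are the research contribution and are not shown here.

```python
# Illustrative RFID-guided approach behavior: scan 360 degrees, pick the
# bearing with the maximum RSSI for the requested tag, then drive toward
# it and rescan as range closes. The read_rssi() model is a made-up stand-in.
import random

def read_rssi(tag_id: str, bearing_deg: float) -> float:
    """Hypothetical tag read: strongest when the antenna points at the tag.
    (tag_id is unused in this stub; a real reader would query that tag.)"""
    tag_bearing = 40.0                                      # unknown to the robot
    error = abs(bearing_deg - tag_bearing)
    return -30.0 - 0.5 * error + random.gauss(0.0, 1.0)     # noisy dBm

def estimate_bearing(tag_id: str, step_deg: float = 5.0) -> float:
    """Sweep a full circle and return the bearing with the maximum RSSI."""
    bearings = [i * step_deg for i in range(int(360 / step_deg))]
    return max(bearings, key=lambda b: read_rssi(tag_id, b))

random.seed(0)
heading = estimate_bearing("medication_bottle")  # tag IDs map to menu entries
print(f"drive toward bearing {heading:.0f} deg and rescan while approaching")
```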
Intellectual Merit: The proposed research will yield novel RFID signal processing strategies and antenna designs, and novel robot localization and manipulation behaviors that significantly advance the state of the art. This will result in a transformative understanding of RFID as a new modality for robot perception. Through our collaboration with patients at the Emory ALS Center, we will continue to obtain feedback to guide the research and maximize its real-world value, which will in turn lead to new insights into how robots can best assist this target population.
Broader Impacts: This research will strengthen the collaboration between researchers at Georgia Tech, researchers at Duke, and patients from the Emory ALS Center. This unique collaboration will accelerate progress in this highly promising area of assistive technology. The proposed research will also promote a new collaboration with the Quality of Life Technology Center at CMU, with whom we will work closely to transfer all developed technologies (hardware and software) via open-source models. We will continue our commitment to the education of under-represented minorities through our participation in Georgia Tech's SURE and Duke's REU programs. By involving all students in the patient studies, this research will enhance their education and increase the pool of talent trained to innovate in this area.
2010 — 2013
Kemp, Charles |
II-New: A Robot For In Situ Research On Assistive Mobile Manipulation @ Georgia Tech Research Corporation
Assistive robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Although researchers have demonstrated relevant capabilities within laboratory environments, current methods are untested and potentially unsuitable for the variability of real-world healthcare environments. In order to address this critical issue, the new infrastructure funded by this award consists of a state-of-the-art robot (a mobile manipulator) dedicated to research conducted outside of the lab via collaborations with healthcare researchers and providers. The robot spends extended periods of time (residencies) in environments where assistive robots are expected to make a positive impact, including the homes of persons with disabilities, assisted living environments for the elderly, and clinical facilities. These residencies enable robotics researchers to maximize the impact of their research by identifying and addressing the roadblocks to deployment of autonomous mobile manipulators for healthcare. The research supported by this infrastructure will result in new methods for assistive manipulation and contributions to compliant arm control, multi-modal perception, and human-robot interaction. It will also begin to quantitatively characterize the variability of real-world healthcare environments. Results of this research will be communicated via academic publications, a website, open source code, and publicly released data captured with the robot. The robot will also have residencies at Spelman College (an HBCU for women) to promote science, technology, engineering, and mathematics (STEM) education.
2011 — 2017
Ting, Lena; Kemp, Charles; Liu, C. Karen; Hackney, Madeleine
EFRI-M3C: Partnered Rehabilitative Movement: Cooperative Human-Robot Interactions For Motor Assistance, Learning, and Communication
Our vision is to develop caregiver robots that interact fluidly and flexibly with humans during functional motor activities, while providing motor assistance, enhancement, and communication to facilitate motor learning. However, we currently lack theories to understand how rehabilitation and movement therapists provide timely and appropriate physical feedback and assistance to improve mobility in individuals with motor impairments. To develop devices that could accompany an individual as both assistant and movement therapist, our goal is to study human motor coordination during cooperative physical interactions with a humanoid assistive robot. We will use rehabilitative partner dance as a paradigm to examine a sensory-motor theory of cooperative physical interactions relevant to walking and other functional motor activities. We will use a "partnered box step", a constrained and defined pattern of weight shifts and directional changes, as a paradigm for a cooperative physical interaction with well-defined motor goals. Objectives: to (1) experimentally verify a hierarchical theory of human sensory-motor control and learning, and (2) develop predictive models of whole-body human movement for cooperative physical interactions with machines. Over four years, we will test our models by demonstrating the robot's successful participation in a box step as leader or follower, adapting its movements to the motor skill level of a human partner. Intellectual Merit: Our work will provide transformative experimental, theoretical, and practical interdisciplinary frameworks that will forge new paths toward autonomous cooperative robots with the physical intelligence to enhance, assist, and improve motor skills in humans with varying motor capabilities. These advances will aid prosthetic and robotic design and may advance our understanding of the brain. Broader Impacts: The expected project outcomes would have a long-term impact on the quality of life of millions of Americans by improving fitness, motor skills, and social engagement. Applications include healthcare devices or sports robots that entertain and improve fitness. We will provide seminars on mobility-related issues and rehabilitative dance instruction to older adult living communities and populations with motor impairments. The broad appeal and social nature of this work will likely garner media publicity that will increase public interest in science and technology.
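As a toy illustration of the follower role in the partnered box step, the sketch below blends the nominal step cycle with the weight-shift direction suggested by the force sensed at the hand coupling. The thresholds, force model, and step logic are invented for illustration and are not the project's theory or controller.

```python
# Toy follower logic for a partnered box step: trust the canonical step
# cycle when the leader's force cue is weak, and follow the dominant force
# axis when the cue is strong. All numbers are invented placeholders.
BOX_STEP = ["forward", "right", "back", "left"]   # canonical step cycle

def follower_step(sensed_force_xy, expected_phase: int) -> str:
    """Choose the next step from the expected cycle phase and the sensed force."""
    fx, fy = sensed_force_xy
    if abs(fx) < 2.0 and abs(fy) < 2.0:           # newtons; weak cue
        return BOX_STEP[expected_phase % 4]
    # Strong cue: follow the dominant force axis instead of the nominal cycle.
    if abs(fy) >= abs(fx):
        return "forward" if fy > 0 else "back"
    return "right" if fx > 0 else "left"

print(follower_step((0.5, 0.8), expected_phase=1))  # weak cue -> "right"
print(follower_step((1.0, 6.0), expected_phase=1))  # strong cue -> "forward"
```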
2012 — 2018
Kemp, Charles |
CAREER: Haptic Interaction For Robotic Caregivers @ Georgia Tech Research Corporation
Making contact with a person's body is critical to many of the most important caregiving tasks for people with physical disabilities. During these tasks, the forces applied by the robot to the body of the human client (care recipient) are of central importance. Yet robots are currently ignorant of what forces are appropriate for common tasks, and what forces are appropriate when making contact with different locations on the client's body. In this project, the PI's goal is to endow assistive robots with the ability to use appropriate forces when haptically interacting with people. To this end, he will capture and statistically model the forces applied when a person performs assistive tasks for himself or herself, or provides care to another person. He will enable robots to intelligently regulate the forces they apply when performing assistive tasks, so that the applied forces are comparable to those used during human-human interactions. And he will enable clients to effectively control the forces applied by a robot during assistive tasks. Throughout the research, the PI will conduct experiments to test relevant hypotheses: that the type of task and the pose of the tool relative to the client's body are highly predictive of the force applied by a human caregiver; that when performing tasks on a mannequin, the robot will successfully emulate the forces observed during human-human interaction; and that when the robot applies force to the client's body, the client will prefer that the robot use knowledge of the task and the pose of the tool to interpret user commands, rather than a constant mapping. Because a person's ability to perform activities of daily living (ADLs) is highly predictive of his or her ability to live independently, the work will focus on four representative ADL tasks that require contact with the client's head: feeding a person yogurt, wiping a person's face, brushing a person's hair, and shaving a person with an electric razor. Project outcomes will include a system that enables a PR2 robot from Willow Garage to assist people with severe physical disabilities with these four tasks; the PR2 will be modified to have force-torque sensors at its wrists, specialized tools, and a Kinect 3D sensor on its head.
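The first hypothesis (task type and tool pose predict applied force) suggests a simple data-driven force model, sketched below with synthetic data. The features, units, and choice of regressor are illustrative assumptions; the actual capture and modeling pipeline is part of the proposed research, not shown here.

```python
# Minimal sketch of a statistical force model: predict the force a
# caregiver applies from the task type and the tool's pose relative to
# the client's body. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
TASKS = ["feeding", "wiping", "brushing", "shaving"]   # the four ADL tasks

n = 400
task_ids = rng.integers(0, 4, size=n)
tool_pose = rng.normal(size=(n, 6))                    # position + orientation w.r.t. the head
applied_force = 2.0 + task_ids + 0.3 * tool_pose[:, 0] + rng.normal(0, 0.2, n)  # newtons (synthetic)

# One-hot task label concatenated with the tool pose features.
X = np.column_stack([np.eye(4)[task_ids], tool_pose])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, applied_force)

# At run time the robot would regulate its controller toward the predicted force.
query = np.column_stack([np.eye(4)[[2]], rng.normal(size=(1, 6))])  # "brushing"
print(f"target force for {TASKS[2]}: {model.predict(query)[0]:.2f} N")
```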
Broader Impacts: This research will begin to endow robots with a crucial form of "common sense" while quantitatively analyzing and synthesizing haptic interaction in the context of humans' most basic needs. It will also lead to a better understanding of human-robot interaction when the robot initiates contact with the user, and will contribute to data-driven methods for intelligent control. The PI will publish extensively and will release open source code, so that the work can catalyze progress towards robots that could empower millions of people to live more independently with a higher quality of life. The PI will directly involve people with disabilities in the research, and will actively engage the broader community by participating in events such as the RESNA conferences and the Atlanta Abilities Expo. In addition, he will incorporate research results in his biomechanics class and graduate course on haptics, and will then adapt the material to teach people around the world about these topics and robotics using the methods and tools of Khan Academy (http://www.khanacademy.org/).
2015 — 2020
Ting, Lena (co-PI); Howard, Ayanna; Kemp, Charles; Trumbower, Randy
NRT: Accessibility, Rehabilitation, and Movement Science (ARMS): An Interdisciplinary Traineeship Program in Human-Centered Robotics @ Georgia Tech Research Corporation
The world's demographics are changing. People continue to live longer, and the U.S. population is becoming older and more racially and ethnically diverse. There is also an increase in younger individuals living with a life-long disability, such as veterans who sustain catastrophic injuries, persons suffering from neurodegenerative diseases, and children growing up with developmental disorders or delays. With this changing population profile comes an increasing demand for advanced healthcare technologies and a need to train a new generation of engineers able to develop these new technologies. This National Science Foundation Research Traineeship award to the Georgia Institute of Technology will address this demand by training master's and doctoral students in the interdisciplinary field of healthcare robotics. The traineeship anticipates providing a unique and comprehensive training opportunity for one hundred and fifty-five (155) students, including thirty (30) funded trainees, by combining disciplines in robotics, studies in health sciences, interactions with clinical partners, hands-on rehabilitation research, and a culture of innovation and translational research.
Trainees will have unique exposure to a variety of approaches developed in Robotics, Physiology, Neuroscience, Rehabilitation, and Psychology. The traineeship will bridge the gap between healthcare and robotics by addressing two major barriers: a) the lack of a formalized framework to enable interdisciplinary collaborations between robotics engineers and health professionals; b) the tendency for students in robotics to be unprepared to address problems in healthcare, including a lack of appreciation for the challenges encountered by clinicians, caregivers, and people with disabilities. Through close interactions with various partners, the traineeship will expand student horizons beyond a technology-first mentality to consider challenges in developing robotic solutions that address the needs of clinicians, caregivers, and people with disabilities. The goal is to develop an interdisciplinary curriculum based upon the concept of participatory design, problem-based learning, and an immersive research experience that blends techniques from multiple disciplines to solve problems posed in healthcare. A second major goal of the traineeship is to increase the participation of women, underrepresented minorities, and students with disabilities in robotics and related engineering fields. The project will develop a new M.S. degree program in healthcare robotics and a new PhD concentration area in healthcare robotics as well as curricular materials and best-practices to allow other institutions to develop similar programs.
The NSF Research Traineeship (NRT) Program is designed to encourage the development and implementation of bold, new, potentially transformative, and scalable models for STEM graduate education training. The Traineeship Track is dedicated to effective training of STEM graduate students in high priority interdisciplinary research areas, through the comprehensive traineeship model that is innovative, evidence-based, and aligned with changing workforce and research needs.
This award is supported, in part, by the EHR Core Research (ECR) program, specifically the ECR Research in Disabilities Education (RDE) area of special interest. ECR emphasizes fundamental STEM education research that generates foundational knowledge in the field. Investments are made in critical areas that are essential, broad and enduring: STEM learning and STEM learning environments, broadening participation in STEM, and STEM workforce development.
2015 — 2019
Turk, Greg (co-PI); Kemp, Charles; Liu, C. Karen
RI: Medium: Robotic Assistance With Dressing Using Simulation-Based Optimization @ Georgia Tech Research Corporation
The aging population, rising healthcare costs, and shortage of healthcare workers in the United States create a pressing need for affordable and effective personalized care. Physical disabilities due to illness, injury, or aging can result in people having difficulty dressing themselves, and the healthcare community has found that dressing is an important task for independent living. The goal of this research is to develop techniques that enable robots to assist people with putting on clothing, which is a challenging task for robots due to the complexities of cloth, the human body, and robots. A key aspect of this research is that robots will discover how they can help people by quickly trying out many options in a computer simulation. Success in this research would make progress towards robots capable of giving millions of people greater independence and a higher quality of life. In addition to healthcare applications, this research will result in better computer tools for fruitful collaborations between robots and humans in other scenarios.
This research uses efficient physics simulation and optimization tools to substantially automate the design of assistive robots for dressing. The approach considers the robot to be an assistive device that a human learns to use. The system optimizes the assistive robot based on what a particular human with impairments is capable of doing comfortably, rather than what he/she typically does. This approach automatically optimizes personalized assistive controllers for a particular user and article of clothing via simulation. Due to frequent line-of-sight occlusion and the importance of controlling forces applied to the user's body, controllers that use data-driven haptic perception are trained using simulation-generated data. These capabilities critically depend on advancements in the efficient physical simulation of cloth, robots, and humans, as well as the discovery of appropriate human motions for a given assistive robot. This work advances the state of the art in assistive robotics, haptic perception, human modeling, optimization and efficient physical simulation. Evaluation of the system is in simulation and in the real world with test rigs that model aspects of dressing, a PR2 robot dressing a humanoid robot, and a PR2 dressing able-bodied participants with restricted motion.
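The simulation-in-the-loop optimization described above can be summarized schematically: propose controller parameters, roll them out in a physics simulation of cloth, robot, and human, and keep the parameters that best trade off task progress against forces on the simulated user. The stub simulator, parameters, and scoring below are invented placeholders, not the project's system.

```python
# Schematic simulation-based optimization of an assistive dressing
# controller. simulate_dressing() is a stub with a toy landscape standing
# in for a real cloth/human/robot physics simulation.
import random

def simulate_dressing(params):
    """Stub: return (task_progress, peak_force_on_user) for candidate parameters."""
    speed, compliance = params
    progress = 1.0 - abs(speed - 0.4) - 0.5 * abs(compliance - 0.7)  # toy landscape
    peak_force = 5.0 + 10.0 * speed * (1.0 - compliance)             # toy force model
    return progress, peak_force

def score(params, force_limit=8.0):
    """Reward task progress, penalize forces above a comfort/safety limit."""
    progress, peak_force = simulate_dressing(params)
    return progress - max(0.0, peak_force - force_limit)

random.seed(0)
# Simple random search; a real system might use CMA-ES or gradient-based methods.
best = max(
    ([random.uniform(0, 1), random.uniform(0, 1)] for _ in range(1000)),
    key=score,
)
print(f"optimized (speed, compliance): {best[0]:.2f}, {best[1]:.2f}")
```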
2020 — 2023
Kemp, Charles |
Collaborative Research: NRI: INT: Scalable, Customizable, Robot Learning With Humans @ Georgia Tech Research Corporation
Activities of daily living (ADLs) are both essential and routine aspects of self-care, including the ability to independently eat, dress, transfer from one position to another, bathe, and toilet. Robotic assistance with activities of daily living could increase the independence of people with disabilities, improve quality of life, and help address pressing societal needs, such as aging populations, high healthcare costs, and shortages of healthcare workers. While progress has been made towards such robotic assistance, a key challenge is that many activities of daily living require robots to manipulate fabric in coordination with people. Notably, many forms of bedside assistance include dexterous manipulation of bedding, hygiene often involves dexterous manipulation of washcloths and towels, and dressing involves a diverse array of clothes. This project seeks to make foundational progress on this major challenge through advancements in machine learning, simulation, and customizable human-robot interaction. This project will result in new capabilities in robot-assisted bedding adjustment, bathing, and dressing for people with disabilities. In addition, this project and its participating research groups will broaden participation by engaging under-represented groups, K-12 students, and undergraduates in research and education.
This research aims to develop a co-robotic framework towards the Integrative Task: Customizable Fabric Manipulation for Home Assistance, with specific emphasis on bedding, bathing, and dressing, which have significant physical interaction between the human and the robot. This project will make foundational progress along three main thrusts, brought together under this overarching integrative application: (1) Learning to Assist from Raw Sensory Observations, (2) Physics Simulation for Learning Assistance, and (3) Customizable Robotic Assistance with Human-in-the-Loop Feedback. Customizability here refers to the ability of our system to adapt to the specific preferences and abilities of each individual. Through open-sourced code development and physics simulations, this project also seeks to lower the barrier to entry for others to work towards the general problem of customizable home assistance. Overall, this research will develop new techniques in robot-assisted dressing, bedding adjustment, and body bathing (hygiene) for people with disabilities, which has the potential to help millions of people achieve greater independence and a higher quality of life.
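Customizability, as defined above, implies adapting assistance parameters to each individual's preferences and abilities. The toy loop below illustrates the human-in-the-loop idea with an invented accept/adjust update rule; it is not the project's algorithm, and the parameter, feedback channel, and simulated user are all hypothetical.

```python
# Toy human-in-the-loop customization: adapt a single assistance parameter
# (e.g., how far the robot pulls bedding toward the person) from the user's
# accept/adjust feedback. The update rule is an invented illustration.
def customize(initial=0.5, step=0.1, rounds=5):
    """Nudge the setting until the user accepts it or rounds run out."""
    setting = initial
    for _ in range(rounds):
        feedback = get_user_feedback(setting)   # "closer", "farther", or "ok"
        if feedback == "ok":
            break
        setting += step if feedback == "closer" else -step
        setting = min(max(setting, 0.0), 1.0)   # keep within valid range
    return setting

def get_user_feedback(setting, preferred=0.8):
    """Stand-in for real user input (speech, switch, or GUI)."""
    if abs(setting - preferred) < 0.05:
        return "ok"
    return "closer" if setting < preferred else "farther"

print(f"personalized setting: {customize():.1f}")
```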
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.