Charles Kemp - US grants
Affiliations: Psychology, Carnegie Mellon University, Pittsburgh, PA
Area: Computational modeling, high-level cognition
Website: http://www.psy.cmu.edu/~ckemp/

We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is therefore partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.

You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
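
For context on the "Matching score" column in the table below: the site does not document its matching algorithm, so the following is only a minimal, hypothetical sketch of how a grant-to-scientist match might be scored from name and affiliation overlap. All function and field names here are illustrative assumptions, not the site's actual implementation.

```python
# Hypothetical illustration only: the site does not document its matching
# algorithm. This toy score combines exact-name agreement with token overlap
# between a researcher's known affiliations and a grant's institution field.

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the lowercase word sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def match_score(researcher: dict, grant: dict) -> float:
    """Blend a name match (weight 0.7) with an affiliation match (weight 0.3)."""
    name = 1.0 if researcher["name"].lower() == grant["pi_name"].lower() else 0.0
    affil = max(
        (token_overlap(a, grant["institution"]) for a in researcher["affiliations"]),
        default=0.0,
    )
    return 0.7 * name + 0.3 * affil

if __name__ == "__main__":
    researcher = {"name": "Kemp, Charles",
                  "affiliations": ["Carnegie Mellon University, Pittsburgh, PA"]}
    grant = {"pi_name": "Kemp, Charles",
             "institution": "Georgia Tech Research Corporation"}
    print(round(match_score(researcher, grant), 3))  # name matches, affiliation does not
```

A high name match with low affiliation overlap, as in this toy example, also illustrates why the site asks users to confirm or reject matches: common names can collide across institutions.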
High-probability grants
According to our matching algorithm, Charles Kemp is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2007 — 2011 | Howard, Ayanna; Kemp, Charles; Blake, M. Brian; Jacko, Julie; Brown, Edward | N/A | Hri: Robot Learning From Teleoperative-Based Instruction and Multimodal Interaction @ Georgia Tech Research Corporation. Teleoperated assistive robots in home environments have the potential to dramatically improve quality of life for older adults and/or people who experience disabling circumstances due to chronic or acute health conditions. They could similarly aid clinicians and healthcare professionals providing treatment. The success of these applications, though, will critically depend on the ease with which a robot can be commanded to perform common manipulation tasks within a home environment. Thus, the focus of the proposed research is to address this key challenge in two significant ways. First, by learning from teleoperated manipulation (i.e., teleoperative-based instruction), robots can acquire the ability to perform elements of common tasks with greater autonomy and reliability. Second, by automatically mapping new modalities (e.g., voice and gesture commands) to the robot's user interface, a wider variety of people will be able to use the robot more easily. The resulting multimodal interfaces may be especially important for people who have difficulty using a single modality, such as vision. These two fundamental research components form the basis of our approach to enable manipulation of everyday objects in an unstructured human environment. | 0.906 |
2008 — 2013 | Mitchell, Tom; Just, Marcel (co-PI); Kemp, Charles | N/A | Cdi-Type Ii: From Language to Neural Representations of Meaning @ Carnegie-Mellon University. This project seeks to develop a new understanding of how the brain represents and manipulates meaning, by bringing together the perspectives of brain imaging, machine learning and computational modeling, using converging approaches from behavioral psychology, linguistics, computer science and neuroscience. In particular, the brain activity that encodes the meanings of words, phrases and sentences is studied, along with how the brain encodes the meaning of individual words in terms of their component semantic features, how it modifies its encoding of an individual word when it occurs within a phrase or clause, and how it constructs the encoding of a phrase or clause from the encodings of its component words. This work builds on recent research showing (1) that repeatable patterns of fMRI activation are associated with viewing nouns describing concrete objects such as "hammer" or "toe," (2) that the neural patterns that encode the meanings of these words are similar across different people, and (3) that these encodings are similar whether the person views a word or a picture of the object. Whereas previous work has focused on the neural representation of single words in isolation, this project studies multiple-word phrases and sentences, which comprise larger units of knowledge; for example, how the neural encoding of a noun is influenced by its adjective (e.g., "fast rabbit" vs. "cuddly rabbit") and how the neural encoding of a proposition is related to the encodings of its component words (how "cut" and "surgeons" combine in the proposition "surgeons cut"). To address these questions, computational models are developed using a diverse set of training data including fMRI data, data from a trillion-word corpus of text that represents typical language use, and behavioral data from language comprehension and judgment tasks, as well as online linguistic knowledge bases such as VerbNet, and theoretical proposals from the cognitive neuroscience literature regarding how and where the brain encodes meaning. These perspectives are integrated into a theory in the form of a computational model trained from diverse data and prior knowledge, and capable of making experimentally testable predictions about the neural encodings and behavioral responses associated with tens of thousands of words, and hundreds of thousands of phrases and sentences. (An illustrative sketch of this kind of feature-to-activation model appears after the table.) | 1 |
2009 — 2012 | Kemp, Charles | N/A | Collaborative Research: Assistive Object Manipulation Via Rfid Guided Robots @ Georgia Tech Research Corporation. PI: Kemp, Charles C., in collaboration with Reynolds, Matthew S. | 0.906 |
2010 — 2013 | Kemp, Charles | N/A | Ii-New: a Robot For in Situ Research On Assistive Mobile Manipulation @ Georgia Tech Research Corporation. Assistive robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Although researchers have demonstrated relevant capabilities within laboratory environments, current methods are untested and potentially unsuitable for the variability of real-world healthcare environments. In order to address this critical issue, the new infrastructure funded by this award consists of a state-of-the-art robot (a mobile manipulator) dedicated to research conducted outside of the lab via collaborations with healthcare researchers and providers. The robot spends extended periods of time (residencies) in environments where assistive robots are expected to make a positive impact, including the homes of persons with disabilities, assisted living environments for the elderly, and clinical facilities. These residencies enable robotics researchers to maximize the impact of their research by identifying and addressing the roadblocks to deployment of autonomous mobile manipulators for healthcare. The research supported by this infrastructure will result in new methods for assistive manipulation and contributions to compliant arm control, multi-modal perception, and human-robot interaction. It will also begin to quantitatively characterize the variability of real-world healthcare environments. Results of this research will be communicated via academic publications, a website, open source code, and publicly released data captured with the robot. The robot will also have residencies at Spelman College (an HBCU for women) to promote science, technology, engineering, and mathematics (STEM) education. | 0.906 |
2011 — 2017 | Ting, Lena; Kemp, Charles; Liu, C. Karen; Hackney, Madeleine | N/A | @ Emory University. Our vision is to develop caregiver robots that interact fluidly and flexibly with humans during functional | 0.966 |
2012 — 2018 | Kemp, Charles | N/A | Career: Haptic Interaction For Robotic Caregivers @ Georgia Tech Research Corporation. Making contact with a person's body is critical to many of the most important caregiving tasks for people with physical disabilities. During these tasks, the forces applied by the robot to the body of the human client (care recipient) are of central importance. Yet robots are currently ignorant of what forces are appropriate for common tasks, and what forces are appropriate when making contact with different locations on the client's body. In this project, the PI's goal is to endow assistive robots with the ability to use appropriate forces when haptically interacting with people. To this end, he will capture and statistically model the forces applied when a person performs assistive tasks for him or herself, or provides care to another person. He will enable robots to intelligently regulate the forces they apply when performing assistive tasks, so that the applied forces are comparable to those used during human-human interactions. And he will enable clients to effectively control the forces applied by a robot during assistive tasks. Throughout the research, the PI will conduct experiments to test relevant hypotheses: that the type of task and the pose of the tool relative to the client's body are highly predictive of the force applied by a human caregiver; that when performing tasks on a mannequin, the robot will successfully emulate the forces observed during human-human interaction; and that when the robot applies force to the client's body, the client will prefer that the robot use knowledge of the task and the pose of the tool to interpret user commands rather than a constant mapping. Because a person's ability to perform activities of daily living (ADLs) is highly predictive of his or her ability to live independently, the work will focus on four representative ADL tasks that require contact with the client's head: feeding a person yogurt, wiping a person's face, brushing a person's hair, and shaving a person with an electric razor. Project outcomes will include a system that enables a PR2 robot from Willow Garage to assist people with severe physical disabilities with these four tasks; the PR2 will be modified to have force-torque sensors at its wrists, specialized tools, and a Kinect 3D sensor on its head. (An illustrative sketch of task-conditioned force regulation appears after the table.) | 0.906 |
2015 — 2019 | Turk, Greg (co-PI); Kemp, Charles; Liu, C. Karen | N/A | Ri: Medium: Robotic Assistance With Dressing Using Simulation-Based Optimization @ Georgia Tech Research Corporation. The aging population, rising healthcare costs, and shortage of healthcare workers in the United States create a pressing need for affordable and effective personalized care. Physical disabilities due to illness, injury, or aging can result in people having difficulty dressing themselves, and the healthcare community has found that dressing is an important task for independent living. The goal of this research is to develop techniques that enable robots to assist people with putting on clothing, which is a challenging task for robots due to the complexities of cloth, the human body, and robots. A key aspect of this research is that robots will discover how they can help people by quickly trying out many options in a computer simulation. Success in this research would make progress towards robots capable of giving millions of people greater independence and a higher quality of life. In addition to healthcare applications, this research will result in better computer tools for fruitful collaborations between robots and humans in other scenarios. (An illustrative sketch of simulation-based search appears after the table.) | 0.906 |
2015 — 2020 | Ting, Lena (co-PI); Howard, Ayanna; Kemp, Charles; Trumbower, Randy | N/A | NRT: Accessibility, Rehabilitation and Movement Science (ARMS): An Interdisciplinary Traineeship Program in Health-Centered Robotics @ Georgia Tech Research Corporation | 0.906 |
2020 — 2023 | Kemp, Charles | N/A | Collaborative Research Nri: Int: Scalable, Customizable, Robot Learning With Humans @ Georgia Tech Research Corporation. Activities of daily living (ADLs) are both essential and routine aspects of self-care, including the ability to independently eat, dress, transfer from one position to another, bathe, and toilet. Robotic assistance with activities of daily living could increase the independence of people with disabilities, improve quality of life, and help address pressing societal needs, such as aging populations, high healthcare costs, and shortages of healthcare workers. While progress has been made towards such robotic assistance, a key challenge is that many activities of daily living require robots to manipulate fabric in coordination with people. Notably, many forms of bedside assistance include dexterous manipulation of bedding, hygiene often involves dexterous manipulation of washcloths and towels, and dressing involves a diverse array of clothes. This project seeks to make foundational progress on this major challenge through advancements in machine learning, simulation, and customizable human-robot interaction. This project will result in new capabilities in robot-assisted bedding adjustment, bathing, and dressing for people with disabilities. In addition, this project and its participating research groups will broaden participation by engaging under-represented groups, K-12 students, and undergraduates in research and education. | 0.906 |
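
The Cdi-Type Ii abstract above describes computational models that predict neural encodings of words from corpus-derived semantic features. As a minimal sketch of that idea (not the project's actual model), the snippet below fits a ridge regression from synthetic word features to synthetic voxel activations and predicts the activation pattern of a held-out word; all sizes and data here are made up for illustration.

```python
# Minimal sketch of the kind of model the CDI-Type II abstract describes:
# predicting a word's fMRI activation pattern as a linear combination of its
# corpus-derived semantic features. Data here are synthetic; the real project
# uses fMRI recordings and features from a trillion-word text corpus.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500

X = rng.normal(size=(n_words, n_features))        # semantic features per word
B_true = rng.normal(size=(n_features, n_voxels))  # hidden feature-to-voxel weights
Y = X @ B_true + 0.1 * rng.normal(size=(n_words, n_voxels))  # observed activations

# Ridge regression: B_hat = (X^T X + lam * I)^(-1) X^T Y
lam = 1.0
B_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Predict the activation pattern for a held-out word from its features alone.
x_new = rng.normal(size=(1, n_features))
y_pred = x_new @ B_hat
print(y_pred.shape)  # (1, 500): one predicted activation value per voxel
```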
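The Career: Haptic Interaction abstract proposes statistically modeling the forces human caregivers apply for a given task and body location, then having the robot regulate its own forces to match. The sketch below illustrates that idea under simple assumptions only: hypothetical demonstration forces, a mean/spread force target, and a clamped proportional update toward the target. None of the numbers, task names, or gains come from the project.

```python
# Sketch of the force-regulation idea in the CAREER abstract: model the forces
# humans apply for a given task and body location, then have the robot track
# a force target drawn from that model. Task names, numbers, and the controller
# gain are illustrative assumptions, not values from the project.
from statistics import mean, stdev

# Hypothetical recorded forces (N) from human caregiving demonstrations.
demo_forces = {
    ("wipe_face", "cheek"): [1.8, 2.1, 2.4, 2.0, 1.9],
    ("brush_hair", "scalp"): [3.0, 3.4, 2.8, 3.1],
}

def force_target(task: str, location: str) -> tuple[float, float]:
    """Mean and spread of demonstrated force for a task/location pair."""
    samples = demo_forces[(task, location)]
    return mean(samples), stdev(samples)

def force_step(measured: float, target: float, gain: float = 0.5,
               max_force: float = 5.0) -> float:
    """One proportional update toward the target force, clamped for safety."""
    command = measured + gain * (target - measured)
    return max(0.0, min(command, max_force))

target, spread = force_target("wipe_face", "cheek")
f = 0.0
for _ in range(10):                 # simulate a few control steps
    f = force_step(f, target)
print(round(f, 2), "N, target", round(target, 2), "+/-", round(spread, 2))
```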
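The dressing-assistance abstract describes robots "trying out many options in a computer simulation." A generic way to do this is simulation-based optimization: sample candidate trajectory parameters, score each with a simulated cost, and keep the best. The sketch below uses plain random search with a stand-in cost function; the project's actual simulator, parameterization, and optimizer are not specified here.

```python
# Sketch of "trying out many options in a computer simulation," as in the
# dressing-assistance abstract: sample candidate arm trajectories, score each
# with a simulated cost, and keep the best. The cost function is a stand-in;
# the project uses a physics simulation of cloth and the human body.
import random

def simulate_cost(params: list[float]) -> float:
    """Placeholder for a cloth/body simulation; lower cost is better."""
    ideal = [0.3, -0.1, 0.5]        # hypothetical 'good' trajectory parameters
    return sum((p - i) ** 2 for p, i in zip(params, ideal))

def random_search(n_samples: int = 200, seed: int = 0) -> tuple[list[float], float]:
    """Evaluate random candidates in simulation and return the best one."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_samples):
        candidate = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        cost = simulate_cost(candidate)
        if cost < best_cost:
            best_params, best_cost = candidate, cost
    return best_params, best_cost

params, cost = random_search()
print([round(p, 2) for p in params], round(cost, 4))
```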