1999 — 2000
Nass, Clifford
Digital Government: Information Technology Accommodation Research: Creating a Doorway for Universal Access
EIA-9876745 Nass, Clifford Stanford University
This proposal would continue the work of the Archimedes Project at Stanford's Center for the Study of Language and Information (CSLI). Using the needs of disabled individuals as drivers, CSLI will pursue research on a number of different access modalities. One example is the Total Access Processor (TAP), a hardware/software abstraction layer between a disabled individual and the PC he or she uses. Because a user's custom access environment is tightly interwoven with the PC's hardware, operating system, applications, and so on, TAP addresses the problem that occurs when the disabled individual upgrades hardware or software. This is of interest to the Census Bureau, which aims to hire large numbers of disabled individuals for the decennial census. The Social Security Administration and the General Services Administration are interested for similar reasons, and the National Imagery and Mapping Agency is interested in the eye-tracking work of Scott related to the productivity of individuals performing feature identification.
2007 — 2008
Nass, Clifford
SGER: Disagreeing Robots
The traditional vision of human-robot interaction is that the machines will be fully cooperative partners. Correspondingly, issues of robot disagreement have never been explored. Using a variable-based approach, the effects of robot autonomy, robot form, and robot politeness strategies on human behaviors and attitudes will be empirically tested. Behavior measures will include performance, physiological responses, and memory. Attitudinal measures will include affective responses as well as various assessments of the robot. Results will enable the discovery of which aspects of human-human interaction apply directly to human-robot interaction, and which aspects are different with respect to performance, memory, and attitudes.
This exploratory research project will seek to empirically identify features of robots that influence humans' responses to robots' expressions of disagreement. Results are expected to identify strategies to facilitate the resolution of conflicts between humans and robots. While there are models of human-human disagreement, it is unknown which of these models will be applicable to human-robot interaction. This is an important exploratory area to pursue given that in many contexts, such as space exploration and colonization, rehabilitation, and complex manufacturing, the robot must express disagreement with the human, a highly charged situation. This research will provide initial answers to the following critical questions: 1) which strategies of disagreement will be most effective and most palatable to human interaction partners? and 2) which characteristics of robots will most effectively enable the situation to be one of joint understanding rather than pure conflict? It is expected that findings will enable researchers to create and study robots that are better able to coordinate with humans and assist humans in reaching their goals, as well as reveal the ways in which robots can induce social responses to technology.
2009 — 2012
Nass, Clifford
HCC: Medium: Collaborative Research: The Social Medium Is the Message
"This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5)."
In this project the PIs will investigate human-robot interaction in situations where a human is highly dependent upon a robot that serves as the medium for the outside world, such as emergency response, hostage negotiation, and healthcare. In these domains, the human "dependent" is typically connected to multiple other human "controllers" (medical specialists, structural engineers, rescue operations officials, etc.) via the robot proxy for long periods of time. The literature suggests that under these circumstances the dependent will respond to the robot socially, and will become distrustful as well as cognitively confused by a robot that presents a different affect for different controllers rather than a consistent communication strategy. The PIs believe that such a robot would occupy a novel "social medium" position within the Computers as Social Actors (CASA) model, and would be perceived as a loyal and helpful go-between who is an advocate for the dependent rather than as a device for accomplishing the goals of the controllers. To explore this hypothesis, the PIs will make use of the Survivor Buddy, a multimedia attachment for a robot that allows trapped victims to engage in two-way video conferencing, watch news, listen to music, etc. Formative experiments will be conducted at Stanford's CHIMe lab, followed by comprehensive, high-fidelity experiments at Texas A&M's Disaster City using point-of-injury care scripts developed under prior work with medical doctors and rescue professionals. The paralinguistic aspects of the associated communication strategy will couple the ongoing work by Nass in voice characteristics and mannerisms with research in affective physical mannerisms in non-anthropomorphic robots under development by Murphy; the intellectual merit of the project thus stems from its multidisciplinary merging of communications and computer science.
The research will introduce the CASA spectrum of relationships as a complement to robot-centric taxonomies, and will define a new relationship where a human is highly dependent upon a medium for long durations along with a new identity of social medium, which the PIs expect will have a greater impact on integrating robots into society than autonomous social actors and tele-operation. Project outcomes will include creation of a formal and comprehensive communication strategy for HRI, which combines verbal and nonverbal affect; this will unify the theory and practice of social robots, thereby breaking the pattern of ad hoc application of affect currently seen in the robotics literature and establishing the fundamental models and paradigms for continuing basic research in HRI.
Broader Impacts: This project will ultimately help save the lives of victims of accidents, disasters, and terrorism, and will also generally improve the quality of life for "shut-ins." The concept of social medium is well-matched to the current capabilities of tele-operation and semi-autonomy in civilian and military robotics; thus, the communication strategies developed will be immediately applicable to domains such as law enforcement and emergency response, which currently use robots, as well as to healthcare, where robots have not yet found a strong economic niche but have huge economic potential. The educational outreach plan includes multi-disciplinary curriculum development, as well as outreach to K-12 teachers and museums.
2009 — 2010
Nass, Clifford; Pea, Roy (co-PI)
Symposium: Impacts of Media Multitasking on Children's Learning & Development
Media multitasking is defined in the literature as engaging in more than one media activity, such as watching TV, reading, playing video games, or instant messaging, over a specified period of time. Media multitasking is a growing phenomenon observed among 8-18 year olds, yet little is known regarding how this behavior influences key cognitive, developmental and behavioral processes which affect the way young people learn, reason, socialize, think creatively, and understand the world. Previous research in the adult population generally shows that media multitasking negatively impacts productivity. However, little research to date has examined the processes underlying children's media multitasking or how such processes may affect cognitive development and learning.
Clifford Nass of the Stanford CHIMe Lab and Roy Pea, Co-PI of the NSF LIFE (Learning in Informal and Formal Environments) Center, in partnership with the Joan Ganz Cooney Center at Sesame Workshop, will convene a symposium for an interdisciplinary group of leading scholars from the neuroscience, child development, cognitive science, communication, and education fields, and key policy leaders to advance our understanding of the potential implications of media multitasking for learning and cognitive development in children and youth. The primary goal of the symposium is to define a coherent agenda to stimulate needed theory-based research on the implications of media multitasking for learning and cognitive development. The symposium will consist of three panels of multi-disciplinary experts to examine: a) media multitasking and cognitive development; b) media multitasking, learning, and productivity; and c) design principles for leveraging media multitasking in educational contexts. A meeting report will be distributed on the symposium website and in print form. The symposium website will also facilitate communication and knowledge sharing among a newly defined research community and serve as a clearinghouse for information on the implications of media multitasking for cognitive development and learning. Overall, the activities associated with this grant will help activate a new special-interest academic community, and encourage and facilitate knowledge sharing and collaboration across institutional, intellectual, and methodological disciplines. Through understanding how media multitasking may affect psychological processes underlying learning and cognition, practitioners will be better prepared to inform policy and to maximize educational impact in a broad range of learning contexts.
2012 — 2018
Nass, Clifford; Schwartz, Daniel; Hinds, Pamela
Collaborative Research: Socially Assistive Robots
Lead PI/Institution: Brian Scassellati, Yale University
This Expedition will develop the fundamental computational techniques that will enable the design, implementation, and evaluation of robots that encourage social, emotional, and cognitive growth in children, including those with social or cognitive deficits. The need for this technology is driven by critical societal problems that require sustained, personalized support that supplements the efforts of educators, parents, and clinicians. For example, clinicians and families struggle to provide individualized educational services to children with social and cognitive deficits, whose numbers have quadrupled in the US in the last decade alone. In many schools, educators struggle to provide language instruction for children raised in homes where a language other than English is spoken (over 20%), the fastest-growing segment of the school-age population. This Expedition aims to support the individual needs of these children with socially assistive robots that help to guide the children toward long-term behavioral goals, that are customized to the particular needs of each child, and that develop and change as the child does. To achieve this vision, this Expedition will advance the state-of-the-art in socially assistive human-robot interaction from short-term interactions in structured environments to long-term interactions that are adaptive, engaging, and effective. This progress will require transformative computing research in three broad and naturally interrelated research areas. First, the Expedition will develop computational models of the dynamics of social interaction, so that robots can automatically detect, analyze, and influence agency, intention, and other social interaction primitives in dynamic environments.
Second, the Expedition will develop machine learning algorithms that adapt and personalize interactions to individual physical, social, and cognitive differences, enabling robots to teach and shape behavior in ways that are tailored to the needs, preferences, and capabilities of each individual. Third, the Expedition will develop systems that guide children toward specific learning goals over periods of weeks and months, allowing for truly long-term guidance and support. Research in these three areas will be integrated into socially assistive robots that are deployed in schools and homes for durations of up to one year. This Expedition has the potential to substantially impact the effectiveness of education and healthcare for children, and the technological tools developed will serve as the basis for enhancing the lives of children and other groups that require specialized support and intervention. The proposed computing research is tied to a comprehensive student training program, bringing a compelling, engaging, and grounded STEM experience to K-12 students through in-school and after-school activities. It also establishes an annual training summit to provide undergraduates with the multi-disciplinary background to engage in this promising research area in graduate school. Finally, by establishing a brand name for socially assistive robotics, this effort will create a central authority for the distribution of high-quality, peer-reviewed information, providing a coherent focal point for enhancing outreach and education. For more information visit www.yale.edu/SAR
2012 — 2017
Nass, Clifford; Mataric, Maja
NRI-Small: Spatial Primitives for Enabling Situated Human-Robot Interaction @ University of Southern California
To enable natural and productive human-robot interaction (HRI), a co-robot must both understand and control "proxemics" -- the social use of space -- in order to communicate in ways commonly used and understood by humans. This project focuses on answering the question: How do social (speech and gesture), environmental (loud noises and low lighting), and personal (hearing and visual impairments) factors influence positioning and communication between humans and co-robots, and how should a co-robot adjust its social behaviors to maximize human perception of its social signals?
The project will develop principled computational models for the recognition and control of proxemic co-robot behavior in HRI using both telepresence and autonomous co-robots. The research will establish a foundational component of HRI for co-robotics, with specific impact on special needs users in socially assistive contexts -- particularly the elderly, both aging in place and in institutions -- with the goal of mitigating isolation and depression, and encouraging exercise and socialization.
Broader impacts: The work will inform robot design and control, and provide software and a corpus of public HRI data for use by researchers worldwide. Beyond robotics, the project promises to inform, validate, and extend longstanding research in the social sciences. This project also includes a strong public and K-12 outreach component consisting of weaving the HRI themes being developed into annual regional and international outreach events. The events feature large-scale open houses and educational workshops with interactive demonstrations and hands-on activities that highlight human factors in computational systems as an effective means of increasing interest in STEM-related activities.