2002 — 2004
Krishnamurthy, Arvind; Nilsson, Henrik; Scassellati, Brian
Composing Data-Rich Embedded Systems the Easy Way
Arvind Krishnamurthy and Henrik Nilsson, Department of Computer Science, Yale University
Embedded computing increasingly takes place in sensor-rich environments where acquiring raw information is much easier than interpreting it. In addition to building components to process this data, embedded systems programmers must arrange communication among potentially hundreds of components, distribute computation across the processing elements so as to minimize communication costs and maximize responsiveness, and schedule the processing elements to adapt to changing priorities and communication patterns. These challenges must be addressed at the system level rather than the processor level.
This project develops a framework for composing distributed, data-rich embedded systems that automates many of the low-level process allocation and scheduling tasks. It takes place against a backdrop of a "next generation" humanoid robot currently being developed at Yale. The robot contains a significant number of processors connected in a heterogeneous fashion. The project addresses two fundamental research issues. The first is the use of modern programming language techniques to address critical embedded system concerns such as composability and dynamic configuration change. The second is improving overall system performance by exploiting high level system knowledge in the run-time system. The end result will be a design methodology that will enable rapid and reliable construction of complex data-rich interactive systems.
2002 — 2006
Hudak, Paul; Trifonov, Valery; Scassellati, Brian; Taha, Walid (co-PI)
ITR: A Framework for Rapid Development of Reliable Robotics Software
Taha, Walid, CCR-0205542
Robots are entering daily life. Commercially available systems are delivering medication to patients in hospitals, mowing lawns, vacuuming floors, and finding wide applications in the entertainment industry. In the future, they will play a more substantial role in areas such as space exploration, health care, and search and rescue. But as these applications grow, so does the complexity of these robots, making the reliability of their software and the productivity of their programmers a priority. It is not clear that current techniques for programming robots are sufficient for building systems that are orders of magnitude more complex than the ones available today. The vast majority of programming methods in current use focus on high-level planning and task and behavioral aspects. By contrast, there are no widely-accepted specialized software processes or programming languages for the integrated development of robotics applications.
This project explores the impact of state-of-the-art programming language techniques in a small-scale robotics setting, applying domain-specific language methods and automatic program generation techniques. The framework exploits core technologies such as multi-stage programming, using simple, high-level annotations to avoid unnecessary runtime overheads while providing a natural, algorithmic approach to program generation: generation occurs in a first stage, and the execution of the synthesized program occurs in a second stage. Because the first stage need not be resource-bounded (even when the final goal is embedded software), conventional programming techniques can be used there.
The challenge, then, becomes ensuring that the generated programs are suitable for execution on an embedded platform. Multi-stage languages already provide significant safety guarantees: a program generator written in such a language is not only type-safe in the traditional sense, but any program it generates is also guaranteed to be type-safe. This provides a noteworthy degree of assurance about the quality of the generated code. But like most traditional high-level programming techniques, multi-stage programming was designed to satisfy functional requirements rather than operational ones, and existing multi-stage languages provide no guarantees about the behavior of programs in the presence of bounded resources. This project addresses that problem by strengthening "traditional" multi-stage type systems with a number of state-of-the-art techniques from type theory and functional reactive programming (FRP) to create resource-aware multi-stage programming. Linear and alias types (in conjunction with dependent typing) will be used to ensure space-boundedness, new typing techniques will ensure time-boundedness, and signals and behaviors from FRP will allow a natural style of reactive programming.
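The two-stage split described above can be sketched in ordinary Python, with closures standing in for the quoted code fragments of a true multi-stage language such as MetaOCaml. The example (a specialized power function) is illustrative only and is not taken from the project:

```python
def gen_power(n):
    """First stage: generate a power function specialized to a fixed
    exponent n. All recursion over n happens here, at generation time."""
    if n == 0:
        return lambda x: 1
    inner = gen_power(n - 1)
    # The returned closure contains no test on n at all: the control
    # structure has been "compiled away" during generation.
    return lambda x: x * inner(x)

# Generation occurs once, in a first stage where resources need not be
# bounded...
cube = gen_power(3)
# ...and the synthesized function executes later, in a second stage.
print(cube(2))  # prints 8
```

The resource-awareness the project targets would then amount to type-level guarantees that what `gen_power` returns fits the space and time bounds of the embedded platform.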
2003 — 2008
Scassellati, Brian
CAREER: Social Robots and Human Social Development
This project focuses on the development of anthropomorphic robots that interact with people using natural social cues. Socially-competent robots would have great practical impact; users could interact with these robots in a more natural and effortless way, could command them through social instruction, and could integrate them into daily life. This project seeks to address both the technical challenges involved in constructing these robots and the ways in which they can be used as tools to study human social development.
The technical challenges of building social robots are substantial. The design and construction of a robot that can produce gestures and utterances which can be easily interpreted by a human observer is a challenging mechanical design problem. A more difficult technical challenge will be to build machines that can recognize human social cues such as pointing gestures, direction of gaze, and tone of voice. Existing research succeeds in recognizing a few of these cues in structured situations, requiring visual scenes that have a constant background or audio signals that contain only the voice of a single speaker. This project proposes to build on existing work by integrating techniques from multiple sensory modalities and using models of human social development as a roadmap for constructing more complex social behaviors. A final engineering challenge will be the implementation of a computational infrastructure to support these algorithms.
2003 — 2006
Hudak, Paul; Peterson, John; Nilsson, Henrik; Scassellati, Brian
ITR: Dance, a Programming Language for the Control of Humanoid Robots
Robots are becoming increasingly common in, and important to, many commercial, industrial, and military applications. This project focuses on humanoid robots, which are becoming increasingly useful as they advance in sophistication, because they can perform in environments engineered specifically for humans, and because they make it easier for humans to interact with automation. This project focuses specifically on how to program humanoid robots; i.e. how to program their movements and interactions as easily and as effectively as possible. The focus is not on developing new algorithms for robot movement or sensing. Rather, once an algorithm is in hand, how does one program a robot to walk, wave its arms, clap its hands, or pick up an object? How does one do so in a high-level way that is devoid of unnecessary detail, yet is expressive enough to capture all desirable movements and interactions?
The core of this effort is the design of a domain-specific language called "Dance" that is highly abstract, easy to use, yet has enough expressive power to describe a wide range of useful robot movements. Dance incorporates ideas from the PI's previous work on domain-specific languages for computer music, computer animation, and software-enabled control. For example, Dance uses declarative event-based reactivity to give a robot the ability to respond to its environment (through tactile, aural, and visual sensors), to its own body (such as interactions between limbs), and to internal programmatic events (timers, remote messages, user commands, and so on). Innovative language research makes behaviors the objects of computation in Dance, which enables programs to abstract over (aggregate) action sequences and evaluate interactions of such sequences. The language is also amenable to formal reasoning based on a formal algebraic semantics. It is possible to prove crucial run-time properties of Dance programs based on the axioms of this algebra. The proposed work also includes a programming environment called "Dance Studio" that has the ability to simulate and thus visualize a running Dance program, enabling a programmer to dynamically debug her programs prior to full robot deployment.
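The behavior-as-value idea can be sketched in plain Python rather than Dance (whose concrete syntax the abstract does not give). Here a behavior is modeled as a function of time, and `constant`, `lift2`, `wave`, and `switch` are hypothetical combinators in the spirit of FRP's behaviors and event-based switching, not actual Dance primitives:

```python
import math

# A behavior is a time-varying value: modeled here as a function from
# time (seconds) to a value. Combinators build new behaviors from old,
# so whole motions become ordinary values that programs can abstract over.
def constant(v):
    return lambda t: v

def lift2(f, b1, b2):
    """Apply a binary function pointwise over two behaviors."""
    return lambda t: f(b1(t), b2(t))

def wave(amplitude, freq):
    """A sinusoidal joint-angle behavior, e.g. for an arm-waving motion."""
    return lambda t: amplitude * math.sin(2 * math.pi * freq * t)

def switch(before, event_time, after):
    """Event-based reactivity: behave as `before` until the event fires,
    then as `after` -- e.g. a tactile sensor interrupting a motion."""
    return lambda t: before(t) if t < event_time else after(t)

# An arm that waves until t = 2.0 s, then holds at rest.
arm = switch(wave(30.0, 0.5), 2.0, constant(0.0))

# Behaviors compose like any other value, e.g. adding a fixed offset.
offset_arm = lift2(lambda a, b: a + b, arm, constant(5.0))
```

Because behaviors are first-class values, sequences of them can be aggregated and reasoned about algebraically, which is the property the Dance semantics exploits.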
Dance language research pioneers a control programming concept that is relevant to many applications in which complex, aggregate system behaviors or maneuvers are required, and in which such behaviors must be coordinated and assured. The research is part of, and supports, a broader agenda at Yale to create "socially adept" robots. Building a machine that can recognize social cues from a human observer allows a more natural human-machine interaction style, creates possibilities for machines to learn by directly observing untrained human instructors, and expands on the growing capabilities of robotic systems. Such social machines can be used as investigative tools to study many aspects of human social development. For example, a robot that is capable of perceptually identifying social cues can be used to provide a quantitative metric of social response. This metric may be a useful diagnostic tool for social development disorders such as autism. In fact, research on the use of humanoid robots to diagnose and treat autism is being conducted in the broader scope of Yale's humanoid robotics program.
2005 — 2009
Scassellati, Brian; Volkmar, Fred (co-PI); Klin, Ami (co-PI)
Quantitative Measures of Social Response for Autism Diagnosis
Autism is a pervasive developmental disorder that is characterized by a severe set of social and communicative deficits. Autism is diagnosed behaviorally; there is no known blood test, genetic test, or functional imaging method that can diagnose autism. Existing diagnostic methods provide primarily qualitative descriptions of dysfunctional social skills. Given the need to capitalize on early brain plasticity and thus maximize the beneficial impacts of intervention, there is a great need for novel, sensitive and quantified performance-based measurements of social vulnerabilities in young children with autism. The goal of this project is to enhance methods for diagnosing autism by providing technology that can produce quantitative, objective measurements of social response from both passive and interactive recognition techniques. Passive recognition systems characterize social responses without directly taking part in the social interaction (for example, from cameras and microphones in the walls and ceiling of a room). Interactive robots will engage in social presses with an individual in the interest of eliciting a social response that can be measured directly. Preliminary data shows that these interactive systems can provide measurements which are free of subjective bias, which can be tailored to the needs of an individual, and which generate interest and motivation in many children with autism.
2008 — 2012
Scassellati, Brian; Chawarska, Katarzyna (co-PI)
CDI-Type I: Understanding Regulation of Visual Attention in Autism Through Computational and Robotic Modeling
Eye tracking has become a widespread tool throughout the cognitive sciences and has attracted particular attention as a behavioral measurement tool for children with developmental disabilities. However, there are no standardized quantitative tools for assessing broadly defined attention skills in young children, and there is a lack of analysis techniques that would allow gaze patterns to be compared across individuals, across populations, or for a single individual across time. This study will develop methods of quantitatively measuring attentional capacities by (1) designing a Visual Attention Assessment Suite (VAAS) which examines the interaction and impact of particular features of scenes on visual attention; (2) constructing novel computational analysis techniques for comparing gaze patterns across individuals, populations, and time; and (3) validating these techniques both against standard behavioral assessment protocols and through an embodied modeling approach, to ensure that our models capture the behaviorally important aspects of gaze.
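One standard way to make gaze patterns comparable across individuals, offered here as an illustrative sketch rather than the project's actual method, is to code fixations by area of interest and score scanpaths with a normalized string-edit distance:

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (insertions,
    deletions, and substitutions each cost 1)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def scanpath_similarity(p, q):
    """Normalized similarity in [0, 1]; 1.0 means identical scanpaths."""
    longest = max(len(p), len(q))
    if longest == 0:
        return 1.0
    return 1.0 - edit_distance(p, q) / longest

# Fixation sequences coded by area of interest:
# E = eyes, M = mouth, B = background
print(scanpath_similarity("EEMEEM", "EEMEEM"))  # identical: 1.0
```

A metric like this yields a single number per pair of recordings, which is what allows comparison across individuals, populations, or a single child over time.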
The regulation of attention has been hypothesized as one of the fundamental factors affecting early development of children with autism. This project will develop quantitative measures that can be used as diagnostic and prognostic indicators and to evaluate the effectiveness of particular treatment approaches. This project represents the first integrated and interdisciplinary attempt to develop a much needed full-scale diagnostic instrument that operates purely through eye-tracking, computational techniques, and individual modeling. Although our primary focus is the interpretation of gaze data with respect to autism, eye tracking is used extensively in psychology, primatology, usability studies, marketing, and human-computer interaction experiments. The models and analysis tools constructed under this project will be equally applicable to these other domains. This project also has the potential to produce novel methods of assessing attentional abnormalities in other developmental disorders (e.g., mental retardation, attention deficit disorder, specific learning disabilities), novel educational assessment methods of pre-kindergarten readiness, as well as to develop training methods for teaching clinicians and educators behavioral assessment using a robot as a model illustrating various attentional patterns in children with disabilities.
2010 — 2011
Scholl, Brian (co-PI); Scassellati, Brian
SoCS: Modeling Agency and Intentions in Dynamic Environments as a Precursor to Efficient Human-Computer Interaction
People recognize dramatic situations and attribute roles and intentions to perceived characters, even when presented with extremely simple cues. As any cartoon viewer can attest, two animated shapes are sufficient to depict a scene involving tender lovers, brutal bullies, tense confrontations, and hair-raising escapes. These basic notions of agency and intentionality are foundational to our social perception of the world. They provide the first discriminations between agents and objects, delineate which elements of the world can move with goal-directed purpose, and provide the primitive structure for describing cause and effect. Extensive laboratory experiments have described many of the basic properties that produce these perceptions using controlled stimuli. However, there have been only limited attempts to quantify these processes, and no attempts to test whether the same properties hold for real-world activity patterns.
This project models our human ability to perceive agency, intentionality, and goal-directed behavior in dynamic real-world environments. Using off-the-shelf real-time localization systems, the movements of people and objects are recorded as they engage in unstructured activity and staged group games. Drawing on both this empirical data and theories drawn from the psychophysical data, computational models are constructed that quantify, explain, and predict real-world social and goal-directed behavior. The benefits of this work include: (1) modeling tools for use within behavioral studies, (2) a real-world grounding for psychophysical studies, and (3) a computational model of social and intentional behavior that would enhance human-computer and human-robot interfaces.
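To give a flavor of the kind of model involved: one primitive cue for chasing, studied in the psychophysical literature, is whether an agent's heading stays aligned with the direction to its target. The sketch below, with hypothetical function names and no claim to be the project's model, scores that alignment from recorded positions:

```python
import math

def heading_alignment(positions_a, positions_b):
    """Mean cosine between A's instantaneous heading and the direction
    from A to B, over a trajectory of (x, y) samples. Values near 1
    suggest A is moving toward B (a simple chasing cue); values near -1
    suggest A is moving away.
    """
    scores = []
    for t in range(len(positions_a) - 1):
        ax, ay = positions_a[t]
        nx, ny = positions_a[t + 1]
        bx, by = positions_b[t]
        hx, hy = nx - ax, ny - ay   # A's heading over this step
        dx, dy = bx - ax, by - ay   # direction from A to B
        h = math.hypot(hx, hy)
        d = math.hypot(dx, dy)
        if h == 0 or d == 0:        # skip stationary or coincident samples
            continue
        scores.append((hx * dx + hy * dy) / (h * d))
    return sum(scores) / len(scores) if scores else 0.0

# A moves straight toward a stationary B: alignment is exactly 1.0.
a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(5, 0)] * 4
```

Fed with data from a real-time localization system, features like this can be aggregated into classifiers that predict which trajectories observers will label as goal-directed.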
2011 — 2014
Scassellati, Brian
HCC: Small: Manipulating Perceptions of Robot Agency
Robots are increasingly becoming a part of daily human interactions: They vacuum floors, deliver medicine in hospitals, and provide company for elderly and disabled individuals. This project examines one aspect of people's interactions with these robots: how intentional and self-reflective the robot seems to be. Because the perceived agency of a robot affects many dimensions of people's interactions with that robot, it is important to understand how features of robot design, such as its behavior and cognitive abilities, affect perceptions of agency. This question is addressed through a series of laboratory experiments that manipulate behavior and cognitive abilities and measure the degree of agency attributed to socially interactive robots.
Intellectual merit: The project will lead to new measures of perceived robot agency and new knowledge about how people collaborate with robots. The results will inform how engineers construct robots, how artificial intelligence researchers conceptualize behavioral architectures, and how designers craft interactions to produce robots that engage people in simple ways.
Broader impacts: The project will provide a new quantitative measurement of agency that can be used in human-robot interaction and related disciplines and new information that can inform how agency is modeled in the design of human-robot interactions, especially in situations where recognition of agency is a primary factor. The outcomes will be used to improve socially assistive robotics for children with social deficits. The project will also enhance interdisciplinary research offerings for graduate and undergraduate students at the investigators' institution.
2012 — 2017
Scassellati, Brian; Volkmar, Fred (co-PI); Morrell, John; Shic, Frederick (co-PI); Dollar, Aaron; Paul, Rhea (co-PI)
Collaborative Research: Socially Assistive Robots
Socially Assistive Robots
Lead PI/Institution: Brian Scassellati, Yale University
This Expedition will develop the fundamental computational techniques that will enable the design, implementation, and evaluation of robots that encourage social, emotional, and cognitive growth in children, including those with social or cognitive deficits. The need for this technology is driven by critical societal problems that require sustained, personalized support that supplements the efforts of educators, parents, and clinicians. For example, clinicians and families struggle to provide individualized educational services to children with social and cognitive deficits, whose numbers have quadrupled in the US in the last decade alone. In many schools, educators struggle to provide language instruction for children raised in homes where a language other than English is spoken (over 20%), the fastest-growing segment of the school-age population. This Expedition aims to support the individual needs of these children with socially assistive robots that help to guide the children toward long-term behavioral goals, that are customized to the particular needs of each child, and that develop and change as the child does. To achieve this vision, this Expedition will advance the state-of-the-art in socially assistive human-robot interaction from short-term interactions in structured environments to long-term interactions that are adaptive, engaging, and effective. This progress will require transformative computing research in three broad and naturally interrelated research areas. First, the Expedition will develop computational models of the dynamics of social interaction, so that robots can automatically detect, analyze, and influence agency, intention, and other social interaction primitives in dynamic environments.
Second, the Expedition will develop machine learning algorithms that adapt and personalize interactions to individual physical, social, and cognitive differences, enabling robots to teach and shape behavior in ways that are tailored to the needs, preferences, and capabilities of each individual. Third, the Expedition will develop systems that guide children toward specific learning goals over periods of weeks and months, allowing for truly long-term guidance and support. Research in these three areas will be integrated into socially assistive robots that are deployed in schools and homes for durations of up to one year. This Expedition has the potential to substantially impact the effectiveness of education and healthcare for children, and the technological tools developed will serve as the basis for enhancing the lives of children and other groups that require specialized support and intervention. The proposed computing research is tied to a comprehensive student training program, bringing a compelling, engaging, and grounded STEM experience to K-12 students through in-school and after-school activities. It also establishes an annual training summit to provide undergraduates with the multi-disciplinary background to engage in this promising research area in graduate school. Finally, by establishing a brand name for socially assistive robotics, this effort will create a central authority for the distribution of high-quality, peer-reviewed information, providing a coherent focal point for enhancing outreach and education. For more information visit www.yale.edu/SAR
2017 — 2018
Scassellati, Brian
Workshop: The Pioneers Workshop at the 2017 ACM/IEEE International Conference on Human-Robot Interaction
This is funding to support a Pioneers Workshop (doctoral consortium) of approximately 24 graduate students (12 of whom are from the United States and therefore eligible for funding), along with distinguished research faculty. The event will take place as part of the first day of activities at the 12th International Conference on Human Robot Interaction (HRI 2017), to be held March 6-9 in Vienna, Austria, and which is jointly sponsored by ACM and IEEE. HRI is the premier conference for showcasing the very best interdisciplinary and multidisciplinary research on human-robot interaction, with roots in diverse fields including robotics, artificial intelligence, social psychology, cognitive science, human-computer interaction, human factors, engineering, and many more. It is a single-track, highly selective annual international conference that invites broad participation. Building on the "Smart City Wien" initiative, the theme of HRI 2017 is "Smart Interaction." The conference seeks contributions from a broad set of perspectives, including technical, design, methodological, behavioral, and theoretical, that advance fundamental and applied knowledge and methods in human-robot interaction, with the goal of enabling human-robot interaction through new technical advances, novel robot designs, new guidelines for design, and advanced methods for understanding and evaluating interaction. More information about the conference is available online at http://humanrobotinteraction.org/2017. The Pioneers Workshop will afford a unique opportunity for the best of the next generation of researchers in human-robot interaction to be exposed to and discuss current and relevant topics as they are being studied in several different research communities. This is important for the field, because it has been recognized that transformative advances in research in this fledgling area can only come through the melding of cross-disciplinary knowledge and multinational perspectives. 
Participants will be encouraged to create a social network both among themselves and with senior researchers at a critical stage in their professional development, to form collaborative relationships, and to generate new research questions to be addressed during the coming years. Participants will also gain leadership and service experience, as the workshop is largely student organized and student led. The PI has expressed his strong commitment to recruiting women and members from under-represented groups. To further ensure diversity the event organizers will consider an applicant's potential to offer a fresh perspective and point of view with respect to HRI, will recruit students who are just beginning their graduate degree programs in addition to students who are further along in their degrees, and will strive to limit the number of participants accepted from a particular institution to at most two. As a new feature this year, the organizers will also invite 3 undergraduate students (all eligible for funding) to help increase diversity in the pipeline of students entering this field.
The Pioneers Workshop is designed to complement the conference, by providing a forum for students and recent graduates in the field of HRI to share their current research with their peers and a panel of senior researchers in a setting that is less formal and more interactive than the main conference. During the workshop, participants will talk about the important upcoming research themes in the field, encouraging the formation of collaborative relationships across disciplines and geographic boundaries. To these ends, the workshop format will encompass a variety of activities including three keynotes, a distinguished panel session, and breakout sessions. To start the day, all workshop attendees will briefly introduce themselves and their interests. Following the opening keynote, approximately half of the participants will present 3-minute overviews of their work, leading into an interactive poster session. This will enable all participants to share their research and receive feedback from students and senior researchers in an informal setting. The workshop organizers will facilitate the post-presentation discussion and will encourage participants to ask questions of their peers during the interactive break and poster session. After lunch, the remaining workshop participants will give their 3-minute overviews, followed by presentation of their posters during a second interactive poster session. Senior researchers (in addition to those on the panel) will be invited to attend the student presentations and poster sessions in order to provide feedback to participants, and workshop participants will be invited to present their posters during the main poster session of the HRI conference as well. The conversations between the panel and participants will continue over lunch and during dinner.
2018 — 2021
Scassellati, Brian
CHS: Small: Watch One, Do One, Teach One: An Integrated Robot Architecture for Skill Transfer
In the last several years, robotics research has transitioned from being concerned exclusively with building fully autonomous and capable robots to include building partially-capable robots that collaborate with human partners, allowing the robot to do what robots do best and the human to do what humans do best. This transition has been fueled by a renaissance of safe, interactive systems designed to enhance the efforts of small- and medium-scale manufacturing, and has been accompanied by a change in the way we think robots should be trained. Learning mechanisms in which the robot operates in isolation, learning from passive observation of people performing tasks, are being replaced by mechanisms where the robot learns through collaboration with a human partner as they accomplish tasks together. This project will seek to develop a robot architecture that allows for new skills to be taught to a robot by an expert human instructor, for the robot to then become a skilled collaborator that operates side-by-side with a human partner, and finally for the robot to teach that learned skill to a novice human student. To achieve this goal, popular but opaque learning mechanisms will need to be abandoned in favor of novel representations that allow for rapid learning while remaining transparent to explanation during collaboration and teaching, in conjunction with a serious consideration of the mental state (the knowledge, goals, and intentions) of the human partner. A fundamental outcome of this work will be a unified representation linking the existing literature in learning from demonstration to collaborative scenarios and scenarios involving the robot as an instructor. 
Thus, project outcomes will have broad impact in application domains such as collaborative manufacturing, while also enhancing our substantial investment in education and training (especially research offerings for graduate and undergraduate investigators), and will furthermore enrich the efforts to broaden participation in computing.
This effort will build upon research in three subfields and extend the state-of-the-art to address deficiencies in each:
1 - Robot as Student. Building on work from Learning from Demonstration, the team will construct robots that learn task models from humans. However, to be useful to the other thrust areas, these models must not be opaque, as many current learning techniques are. Instead, a transparent model will allow the robot to provide feedback about its performance, explain what it has learned, and proactively ask questions that speed up learning.
2 - Robot as Collaborator. The relatively new field of Human-Robot Collaboration struggles with synchronizing task execution between human and robot partners. By linking to models of learned task behavior and models of user intention and understanding, the team will construct systems that become proficient in negotiating task allocation, accommodating user preferences, and restoring/updating internal representations in case of errors or change of plans.
3 - Robot as Teacher. Fields including Intelligent Tutoring Systems build models of user knowledge, typically modeled using Bayesian knowledge tracing. These models, however, simply show knowledge as known, unknown, or forgotten, and only for factual knowledge. By linking with concrete representations of task and intent, the team will create robots that can detect, extend, or repair the mental model of a student for real-world tasks.
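Bayesian knowledge tracing, mentioned in the third thrust, maintains a single probability that a skill is known and updates it after each observed response. A minimal sketch follows; the parameter values are illustrative defaults, not figures from the project:

```python
def bkt_update(p_know, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
    """One step of Bayesian knowledge tracing.

    p_know:    prior probability the student knows the skill.
    correct:   whether the observed response was correct.
    p_transit: chance of learning the skill between opportunities.
    p_slip:    chance of answering wrong despite knowing the skill.
    p_guess:   chance of answering right without knowing the skill.
    """
    if correct:
        # P(known | correct) via Bayes' rule
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        # P(known | incorrect)
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Account for learning that may occur before the next opportunity.
    return posterior + (1 - posterior) * p_transit

# A run of correct answers drives the knowledge estimate upward.
p = 0.3
for outcome in [True, True, True]:
    p = bkt_update(p, outcome)
```

The limitation the project targets is visible here: the state is a single scalar per skill, with no representation of *which* step of a real-world task the student misunderstands, which is why the thrust links it to concrete task and intent representations.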
A set of milestones across three years will culminate in a demonstration of a robot that can learn a new task, collaborate on that task, and then teach that task to others.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2020
Sarkar, Nilanjan; Rehg, James; Scassellati, Brian; Bruyere, Susanne; Warren, Zachary
Convergence Accelerator Phase I (RAISE): Empowering Neurodiverse Populations for Employment Through Inclusion AI and Innovation Science
The NSF Convergence Accelerator supports team-based, multidisciplinary efforts that address challenges of national importance and show potential for deliverables in the near future.
The broader impact/potential benefit of this Convergence Accelerator Phase I project is to dramatically increase the engagement of individuals with autism spectrum disorders (ASD) in the workforce. While approximately two-thirds of the 2.5 million adults with ASD in the US have average intelligence, more than 50% of them remain unemployed or underemployed. Many individuals with ASD have unique capabilities that are in high demand across many job sectors; optimizing workforce engagement for these individuals holds the potential to transform great societal cost into great societal value. This project utilizes convergent expertise in Artificial Intelligence (AI), virtual reality, and robotics, together with expertise in neuroscience and in behavioral and organizational psychology, to develop intelligent tools and systems, with high potential for rapid commercialization and deployment, that facilitate employment of individuals with ASD. Specifically, the proposed research will develop intelligent training systems for interviews and other job-relevant social interaction skills for individuals with ASD, and skill assessment tools for employers to enhance recruitment and retention. The entire project is based on the foundational idea that many people with ASD have the potential to participate in the workforce in ways that contribute to society while also sustaining personal success and well-being.
This Convergence Accelerator Phase I project presents a comprehensive research plan to create new AI tools, systems, and predictive models, inclusive of employer and stakeholder input, to connect people with ASD to employers via embedded, technologically based, research-informed supports for individuals and organizations alike. For people with ASD, inherent challenges related to social initiation, engagement, and communication impede their adaptive independence, including finding and keeping jobs. This issue has become a top priority of the National Institutes of Health Interagency Autism Coordinating Committee. The project involves six convergent, mutually reinforcing research components: (1) a pipeline to employment for people with ASD; (2) an affect-sensitive, closed-loop virtual reality interview training platform to assess and intervene on skill deficits while also gathering aggregate data relevant to employer training; (3) opportunities for home assessment and practice outside of traditional educational settings through the use of AI-agent-mediated collaborative virtual environments; (4) closed-loop interactive socially assistive robots; (5) novel computer vision and wearable computing tools for assessment of real-world generalization of skills learned within VR and robotic systems; and (6) customizable, innovative assessment tools using data-driven visual AI to identify strengths, talents, and job-relevant skills.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2019 — 2023 |
Chertow, Marian (co-PI) [⬀] Scassellati, Brian Dollar, Aaron Reck, Barbara |
FW-HTF-RL: Collaborative Research: Shared Autonomy for the Dull, Dirty, and Dangerous: Exploring Division of Labor for Humans and Robots to Transform the Recycling Sorting Industry
This Future of Work at the Human-Technology Frontier (FW-HTF) project investigates a novel human-robot collaboration architecture to improve efficiency and profitability in the recycling industry, while re-creating recycling jobs to be safer, cleaner, and more meaningful. The specific goal is to improve the waste sorting process, that is, the separation of mixed waste into plastics, paper, metal, glass, and non-recyclables. The US scrap recycling industry, which represents $117 billion in annual economic activity and more than 530,000 US jobs, is struggling to meet increasingly challenging standards in domestic and international markets. A major problem for the industry is poor sorting of waste, resulting in materials impurity and a significant decrease in the quality and value of the recycled product. Human perception and judgment are essential to handle the object variety, clutter level, and changing characteristics of the waste stream. Yet waste-sorting workers currently face health risks and discomfort arising from sharp and heavy objects, toxic materials, noise, vibration, dust, noisome odors, and poor heating, ventilation, and air conditioning. The innovative robotics component of this project, especially in object detection, manipulation, and human-robot interaction, will allow new sorting facility architectures, creating new, safer roles for human workers. The project complements these technological advances with economic analyses to determine the facility configurations that best remove processing bottlenecks, target materials of high value, and boost the end-to-end efficiency of the recycling process. Division of labor between humans and robots will be investigated to improve job desirability and worker motivation, incorporating consideration of the workers' well-being. In particular, the project will explore ways to utilize robots to amplify worker expertise and value.
A holistic and interconnected research approach will be taken for all these aspects, i.e., developing robotics technology, designing the human-machine interfaces, investigating workers' roles in the new sorting plant architectures, and understanding and incorporating workers' needs and well-being into the design process.
This project will develop the appropriate robotics technology for recycling industry deployment, which will require advancing the state of the art in waste classification and manipulation to handle the conditions associated with recycling facilities. Deep Neural Network-based object detection and semantic segmentation frameworks will be designed for rich, multi-modal sensor data in order to address challenges of a high level of clutter, occlusion, and object variety. Novel robotic manipulation algorithms based on dynamic and soft manipulation strategies will be utilized to separate and pick classified items from the cluttered waste stream. Robust and dexterous robot hardware will be developed, including the robotic arms and end effectors. Human-machine interfaces will be designed and implemented to achieve these tasks in an intuitive, efficient, and practical workflow that optimizes the contributions of both human workers and automated technologies. The robotics technology will also allow expanding the facilities from simply sorting the incoming materials into a whole recycling ecosystem; additional process lines for onsite materials processing units will enable conveying partially finished products to next-stage manufacturers. This expansion will require a novel systems approach, and will help achieve more efficient recycling plants and a much more comprehensive employment ladder for current and new workers. These technological and structural changes in the interactional system of work will shift both the task and relational landscape of the work. The effect of these shifts on worker satisfaction and motivation will be investigated via worker interviews with simulated systems. The new technological landscape will be formed accordingly for improved work experience.
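One way the division of labor described above can be pictured is as confidence-gated routing: items the classifier is sure about go to the robotic picker, while ambiguous items, where human perception and judgment are essential, are deferred to a human sorter. This is a hypothetical sketch, not the project's actual architecture; the threshold, class names, and function are illustrative:

```python
def route_item(class_scores, robot_threshold=0.85):
    """Decide who handles a detected waste item.

    class_scores: dict mapping material class -> classifier confidence.
    Returns a (handler, predicted_label) pair. High-confidence items
    are picked robotically; ambiguous items go to a human sorter,
    whose judgment handles the clutter and variety the model cannot.
    """
    label, score = max(class_scores.items(), key=lambda kv: kv[1])
    if score >= robot_threshold:
        return ("robot", label)
    return ("human", label)

# A clearly visible plastic bottle: confident, so the robot picks it.
clear_item = route_item({"plastic": 0.93, "paper": 0.04, "metal": 0.03})

# A crumpled, occluded item: ambiguous, so a human sorts it.
hard_item = route_item({"plastic": 0.48, "paper": 0.41, "metal": 0.11})
```

A gate like this also yields the aggregate data the economic analyses need: the fraction of items routed to humans is a direct measure of where the processing bottleneck sits.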
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2020 — 2022 |
Sarkar, Nilanjan [⬀] Rehg, James Scassellati, Brian Bruyere, Susanne Warren, Zachary |
B1: Inclusion AI for Neurodiverse Employment
The NSF Convergence Accelerator supports use-inspired, team-based, multidisciplinary efforts that address challenges of national importance and will produce deliverables of value to society in the near future.
Neurodiversity is an emerging concept through which certain neurological differences (Autism, Attention Deficit Hyperactivity Disorder, Dyslexia, and others) are considered a natural part of human neurocognitive variation, associated not only with impairments but also with unique strengths. Indeed, many neurodiverse people have capabilities that are in high demand across many sectors. Yet, while some 70,000 Americans with autism enter adulthood every year, currently 85% of them will be unemployed or underemployed relative to their skill levels, representing a cost to the United States of $175 billion annually. Thus, optimizing workforce engagement for individuals with autism holds the potential to transform great cost into great value. This National Science Foundation Convergence Accelerator (C-Accel) award to Vanderbilt University will address this grand challenge by bringing together cutting-edge Artificial Intelligence (AI) innovations with transdisciplinary expertise, spanning engineering and computer science to organizational psychology, clinical translation, and implementation science, to create a suite of commercially viable technologies that integrate AI within virtual environments, robotic systems, human-human interactions, and novel assessment tools. These technologies will be created using input from stakeholders, including employers of individuals with autism, companies that develop technological products to help employment, state vocational and rehabilitation services that provide job training, and advocacy groups that provide guidance regarding community needs. The technologies will be transitioned to practice through deployment with private- and public-sector partners, together with analysis using implementation science to ensure long-term sustainability and the broadest impact.
This C-Accel Phase II program will advance the scientific and technological methodologies of the projects initiated in Phase I that are designed to create a pipeline to employment for people with autism. Specifically, the suite of tools to be developed includes: (1) Visual and Cognitive AI Tools to Assess Autistic Talent; (2) Virtual Reality (VR)-based Simulator for Improving Job-Interview Skills; (3) Collaborative Virtual Environments with Embedded Intelligent Agent for Social Interaction Assessment and Support; (4) Social Robotic System to Assess and Train Tolerance to Interruption; and (5) Computer Vision Tools to Measure and Improve Non-verbal Communication. Across these projects, we will make fundamental scientific and technological advancements in: (i) data-driven visual AI for innovative assessment tools to identify strengths, talents, and job-relevant skills, as well as employer-identified work needs; (ii) a novel VR-based platform for job interview training that utilizes real-time closed-loop multimodal affective computing for stress and attention recognition; (iii) a collaborative virtual environment that creates new skill estimation algorithms and a peer-based learning paradigm mediated by an AI agent; (iv) home-based skill assessment and training systems using socially assistive robotics; and (v) novel computer-vision and deep learning methods and algorithms to assess real-world generalization of nonverbal social communication. The project's intellectual property plan includes advancing each of these technologies from prototype to minimum viable product (MVP) stage and into commercial use through licensing agreements within the two-year project period. Through Vanderbilt University's Frist Center for Autism & Innovation, graduate students and neurodiverse interns will participate in all aspects of the C-Accel research and development efforts.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
2021 — 2025 |
Scassellati, Brian Jara-Ettinger, Julian Vazquez, Marynel |
HCC: Medium: Proactive Physical Assistance for Collaborative Human-Robot Teams
It is essential that robots working side-by-side with people be able to offer physical assistance. However, robots do not yet take the initiative to help. Instead, they typically follow preset plans for collaboration, or respond only when users explicitly ask for help. While useful in many situations, this reactive approach is poorly suited for many collaborative applications. This project focuses on enabling robots to proactively offer physical assistance that is timely, task-appropriate, and wanted by their human partners. The goal is to construct robots that can answer three key questions: (1) Does a teammate need help? (2) Can I (or someone else) help? and (3) Should I help? By enabling these capabilities, this project will make collaborative human-robot teams function more fluently and efficiently.
To develop these capabilities, the team will advance three research thrusts that are naturally intertwined but capture distinct computational aspects of these tasks. First, perceiving other agents through a Theory of Mind for robots: the research team will develop computational models, derived from our understanding of how humans represent the knowledge, beliefs, and desires of others, to maintain context-sensitive models of the mental states of human and robotic teammates. Second, planning for supportive actions: the team will construct a system that generates possible supportive actions the robot could take to help a teammate, using a trained model of the robot's own capabilities along with a predictive planning system that determines whether these actions have value to the team. Third, decision making within the dynamics of a group: the robot will monitor and engage in social activities that support the overall dynamics of its partners, including both conventional conversational interfaces and more subtle social mechanisms that facilitate collaboration.
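The second thrust's value test, deciding "Should I help?", can be pictured as an expected-utility filter over candidate supportive actions: offer help only when the estimated benefit to the team, weighted by the robot's own chance of success, exceeds the cost of interrupting. The sketch below is an illustrative simplification under that assumption; the class, function, and all numbers are hypothetical, not the project's design:

```python
from dataclasses import dataclass

@dataclass
class SupportiveAction:
    name: str
    p_success: float       # robot's self-model: chance it can do this ("Can I help?")
    team_benefit: float    # predicted value to the teammate ("Does a teammate need help?")
    interrupt_cost: float  # disruption to the teammate and group dynamics

def choose_action(candidates, min_value=0.0):
    """Pick the supportive action with the highest expected value,
    or None if no action is worth offering ("Should I help?")."""
    def expected_value(a):
        return a.p_success * a.team_benefit - a.interrupt_cost
    best = max(candidates, key=expected_value, default=None)
    if best is None or expected_value(best) <= min_value:
        return None
    return best

actions = [
    SupportiveAction("fetch_tool", 0.9, 10.0, 1.0),  # high value, low disruption
    SupportiveAction("hold_part", 0.5, 4.0, 3.0),    # marginal, disruptive
]
decision = choose_action(actions)
```

In this framing, the Theory of Mind models from the first thrust supply `team_benefit`, the robot's capability model supplies `p_success`, and the group-dynamics work of the third thrust informs `interrupt_cost`.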
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|