2018 — 2020
Quandt, Lorna; Malzkuhn, Melissa
Signing Avatars & Immersive Learning (SAIL): Development and Testing of a Novel Embodied Learning Environment
Improved resources for learning American Sign Language (ASL) are in high demand. Traditional educational materials for ASL tend to include books and videos, but there has been limited progress in using cutting-edge technologies to harness the visual-spatial nature of ASL for improved learning outcomes. Interactive speaking avatars have become valuable learning tools for spoken language instruction, whereas the potential uses of signing avatars have not been adequately explored. The aim of this EArly-concept Grant for Exploratory Research (EAGER) is to investigate the feasibility of a system in which signing avatars (computer-animated virtual humans built from motion capture recordings) teach users ASL in an immersive virtual environment. The system is called Signing Avatars & Immersive Learning (SAIL). The project focuses on developing and testing this entirely novel ASL learning tool, fostering the inclusion of underrepresented minorities in STEM. This work has the potential to substantially advance the fields of virtual reality, ASL instruction, and embodied learning.
This project leverages the cognitive neuroscience of embodied learning to test the SAIL system. The ultimate goal is to develop a prototype of the system and test its use in a sample of hearing non-signers. Signing avatars are created using motion capture recordings of native deaf signers signing in ASL. The avatars are placed in a virtual reality landscape accessed via head-mounted goggles. Users enter the virtual reality environment, and the user's own movements are captured via a gesture-tracking system. A "teacher" avatar guides users through an interactive ASL lesson involving both the observation and production of signs. Users learn ASL signs from both the first-person perspective and the third-person perspective. The inclusion of the first-person perspective may enhance the potential for embodied learning processes. Following the development of SAIL, the project involves conducting an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences provided in SAIL. The extent of neural activity in the sensorimotor cortex during viewing of another person signing provides insight into how the observer is processing the signs within SAIL. The project team pioneers the integration of multiple technologies: avatars, motion capture systems, virtual reality, gesture tracking, and EEG, with the goal of making progress toward an improved tool for sign language learning.
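To make the EEG measure concrete, the sketch below shows one conventional way to quantify sensorimotor engagement during sign observation: suppression of mu-band (8-13 Hz) power over central electrodes relative to a pre-stimulus baseline. This is a minimal illustration using the MNE-Python library, not the project's actual analysis pipeline; the file name, event code, and electrode choices are assumptions for the example.

    import numpy as np
    import mne

    # Hypothetical recording and event code; real names would differ.
    raw = mne.io.read_raw_fif("sail_session_raw.fif", preload=True)
    raw.filter(1.0, 40.0)  # band-pass to remove slow drift and line noise

    # Epoch the recording around the onset of sign-observation trials.
    events = mne.find_events(raw)  # assumes a standard stim channel
    epochs = mne.Epochs(raw, events, event_id={"observe_sign": 1},
                        tmin=-0.5, tmax=2.0, baseline=None, preload=True)

    # Central electrodes conventionally used to index sensorimotor activity.
    picks = ["C3", "Cz", "C4"]

    # Mu-band (8-13 Hz) power during observation vs. the pre-stimulus window.
    task = epochs.compute_psd(fmin=8.0, fmax=13.0, tmin=0.0, tmax=2.0, picks=picks)
    base = epochs.compute_psd(fmin=8.0, fmax=13.0, tmin=-0.5, tmax=0.0, picks=picks)

    # Log power ratio: negative values indicate mu suppression, i.e. greater
    # sensorimotor engagement while watching the avatar sign.
    suppression = np.log(task.get_data().mean(axis=-1) / base.get_data().mean(axis=-1))
    print("Mean mu suppression per channel:", suppression.mean(axis=0))

In practice, suppression values would be compared across conditions (for example, first-person versus third-person viewing) to test whether the immersive perspective changes sensorimotor recruitment.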
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2021
Malzkuhn, Melissa; Quandt, Lorna
NSF INCLUDES Planning Grant: Cultivating Research and Equity in Sign-Related Technologies
This NSF INCLUDES planning grant is funded by NSF Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (NSF INCLUDES), a comprehensive national initiative to enhance U.S. leadership in discoveries and innovations by focusing on diversity, inclusion and broadening participation in STEM at scale. The goal of this planning grant is to initiate a collaborative network focused on increasing the inclusion of deaf and hard-of-hearing (D/HH) individuals in the development of sign-related technologies. Many research projects use cutting-edge technologies applied to sign language translation, sign language teaching, or sign recognition. These projects include signing avatars, virtual/augmented reality sign language learning, gesture recognition for sign language, machine learning for sign recognition, wearable sensors for sign translation, and more. While the potential for these technologies is great, more inclusion and resource-sharing are needed. This project addresses these goals by building and supporting a network of researchers and industry professionals who are interested in sign-related technologies and share a vision for how the network can help build fruitful collaborations. This work increases the inclusion of D/HH individuals in the field and fosters the development of young D/HH students who wish to work in this area.
In this project, a network called Cultivating Research and Equity in Sign-Related Technologies (CREST) is established, a workshop is hosted to launch the network, and resources are provided for collaboration and communication among network members. The workshop will develop a shared vision among stakeholders and initiate and strengthen partnerships. Activities of the network will have a two-way benefit: 1) D/HH researchers and students who are working on sign-related technologies are able to gain experience with new technologies and develop STEM skills they may not already possess; 2) researchers with the technological capacity to develop cutting-edge sign-related technology benefit from expanding their networks to include those who are already experts in sign languages and the unique challenges they present: sign language users and D/HH researchers. The CREST network increases the chances that sign-related technologies will be developed with D/HH input at all stages, meaning these technologies will be more likely to benefit people who use sign languages as a primary form of communication. It allows for the growth of a community around a rapidly changing field that has the potential to reap great benefits for deaf individuals and communities of people who use sign language. Two activities, contributing videos and writing blog posts, will deepen the connection between the CREST network and the existing NSF INCLUDES National Network, and the CREST network will bring new people to the National Network. This work will benefit D/HH students, staff, and researchers who are part of the network, as well as the broader communities which may be affected by the technologies in development.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2024
Malzkuhn, Melissa; Quandt, Lorna
New Dimensions of ASL Learning: Implementing and Testing Signing Avatars & Immersive Learning (SAIL 2)
The highly spatial, three-dimensional nature of American Sign Language (ASL) has created a serious barrier to technology-supported ASL instruction. What if ASL learners could access high-quality ASL instruction from native sign language instructors through a virtual reality-based, game-like environment? This project launches from prior work on the NSF-funded Signing Avatars & Immersive Learning (SAIL) project. The SAIL project yielded a working prototype of an immersive sign language learning environment in virtual reality. The current project expands past the prototype stage into a fully fledged ASL learning experience. In the new version of SAIL, called SAIL 2, the research team is developing a more complete system where users enter virtual reality and interact with signing avatars (computer-animated virtual humans built from motion capture recordings) who teach users ASL vocabulary. Access to signed language is key to healthy development for many deaf individuals, but it remains a major challenge when access to high-quality ASL instruction is limited by time and resources. SAIL 2 sets a foundation for greater access to learning ASL, which has potential for improving the lives of deaf children and adults. The project focuses on developing and testing this entirely novel ASL learning tool and fostering the inclusion of underrepresented minorities in STEM. This work has the potential to substantially advance the fields of virtual reality, ASL instruction, and embodied learning.
Immersive virtual reality is particularly well suited for highly spatial signed languages. The SAIL 2 project leverages head-mounted virtual reality and high-quality signing avatars to create a gamified ASL-learning system. SAIL 2 will be the only ASL learning system in virtual reality that does not require the user to wear specialized gloves or other peripheral devices. The project develops a functioning version of the comprehensive SAIL 2 system, and user testing during the design process guides the details of development. Key features of the system include sign recognition through hand-tracking cameras, corrective feedback, and a gamified experience. Following the design and development of SAIL 2, the research team conducts behavioral research to evaluate its learning outcomes, including both comprehension of ASL vocabulary and accuracy of sign production. Because of the embodied nature of signed language, mechanistic measures of the neural substrates of learning, including engagement of the sensorimotor cortices, are obtained through electroencephalography (EEG). The patterns of neural oscillatory activity provide insight into short-term changes in brain activity associated with using SAIL 2. The cognitive neuroscience experiment builds on previous research identifying the neural processes supporting sign language perception, and, overall, this project extends technological advances in high-fidelity motion capture recordings, avatar creation, and virtual reality.
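As one way to picture the corrective-feedback step, the sketch below aligns a learner's hand-tracking trajectory to a motion-capture reference for the same sign using dynamic time warping, then compares alignment costs. This is a hedged illustration: the project does not specify its recognition algorithm, and the array shapes, synthetic data, and feedback logic here are invented for the example.

    import numpy as np

    def dtw_cost(ref: np.ndarray, user: np.ndarray) -> float:
        """Length-normalized dynamic-time-warping cost between two (T, D)
        keypoint trajectories, e.g. 21 hand landmarks x 3 coords = 63 dims."""
        n, m = len(ref), len(user)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(ref[i - 1] - user[j - 1])  # frame distance
                cost[i, j] = d + min(cost[i - 1, j],       # repeat the user frame
                                     cost[i, j - 1],       # repeat the reference frame
                                     cost[i - 1, j - 1])   # one-to-one match
        return cost[n, m] / (n + m)

    # Synthetic stand-ins: in a system like SAIL 2, the reference would come
    # from motion capture of a native signer and attempts from live hand tracking.
    rng = np.random.default_rng(0)
    reference = np.cumsum(0.05 * rng.standard_normal((60, 63)), axis=0)  # smooth sign
    good = reference[::2] + 0.02 * rng.standard_normal((30, 63))  # faster, noisy copy
    bad = np.cumsum(0.05 * rng.standard_normal((30, 63)), axis=0)  # unrelated movement

    print(f"good attempt cost = {dtw_cost(reference, good):.3f}")
    print(f"bad attempt cost  = {dtw_cost(reference, bad):.3f}")
    # In a lesson loop, a cost under a tuned threshold would trigger positive
    # feedback; over it, the avatar would demonstrate the sign again.

Because dynamic time warping tolerates differences in signing speed, a matching approach along these lines lets feedback focus on the shape of the movement rather than its tempo, which suits learners who sign more slowly than the motion-capture model.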
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.