2002 — 2007
Turk, Matthew (co-PI); Beall, Andrew; Loomis, Jack (co-PI); Blascovich, James; Bailenson, Jeremy
ITR: Using Virtual Environment Technology to Understand and Augment Social Interaction @ University of California-Santa Barbara
This project focuses on facilitating and augmenting social interaction in virtual environments, particularly immersive virtual environments. Virtual environment technology allows individuals to move freely about digital "worlds" in real time, observing and interacting with the environment and virtual others within it. The increased sophistication of virtual environment technology and digital imaging of people promises a new age for technologically mediated social interaction among geographically separated individuals. However, in order to implement such interaction virtually in meaningful and productive ways, an understanding of the parameters of people's perceptions of each other's non-verbal signals (e.g., facial expressions, gestures, gaze) within virtual environments is necessary. Such an understanding will provide a hierarchical taxonomy of the necessary and sufficient non-verbal signals that are critical to social interaction within virtual environments and, therefore, must be tracked and rendered among interactants in virtual environments. Realizing the objectives of the proposed project will advance scientific understanding in the areas of social interaction and non-verbal behavior, human participation in collaborative virtual environments, and technological (e.g., computer vision) aspects of automated tracking and rendering of human non-verbal signals.
2002 — 2006
Beall, Andrew; Bailenson, Jeremy; Blascovich, James
ITR: Virtual Environment Technology and Eyewitness Identification @ University of California-Santa Barbara
This project examines the use of immersive virtual environment technology for eyewitness identification of criminal suspects. This technology allows individuals to enter and move about three-dimensional digital "worlds" in real-time, observing and interacting with the environment and virtual others within it. The rapid development of immersive virtual environment technology and increased sophistication in three-dimensional digital imaging of people promises a new age for determining accuracy of eyewitness identification of criminal suspects. This is important societally as the general accuracy of eyewitness identification of criminal suspects using older technologies (e.g., police lineups, mug shots) has been questioned in the research literature as well as in the judicial system.
The scientific goals of the project are threefold. First, the investigators will determine the validity of using digital representations of humans within immersive virtual environments for person recognition. Second, they will determine differences between traditional and immersive virtual police lineups in terms of eyewitness identification accuracy focusing on the increased contextual realism described above. Finally, the investigators will use immersive virtual environment technology to develop quantitative indexes of fairness of such lineups based on the similarity of suspects to foils.
Immersive virtual environment technology allows easier recreation of the environmental conditions under which an eyewitness viewed criminal activity involving suspects. Replicating such conditions is not easily accomplished using older technologies. Hence, using immersive virtual environment technology, witnesses can be asked to identify suspects and foils at the same distance, viewing angle, lighting, and weather conditions (e.g., rain, fog) as during their original observation of the criminal activity. Furthermore, this technology makes it easier to match foils to suspects in terms of organismic variables (e.g., height, weight, hair style, coloring), clothing, and movements, thereby eliminating potential sources of bias. This technology also allows quantitative assessment of how well suspects and foils are matched, as opposed to the purely subjective assessment afforded by older technologies.
2005 — 2010
Blascovich, James (co-PI); Beall, Andrew; Bailenson, Jeremy
Transformed Social Interaction in Virtual Environments
The major goal of this research is to understand and test theories of mediated communication in collaborative virtual environments. It will examine how well humans are adapting to the quickly evolving digital technologies used in modern communication and social interaction systems. This work will explore boundaries beyond what we normally consider mediated communication, delving into a phenomenon called Transformed Social Interaction: a strategy that allows users to systematically filter their physical appearance and social behaviors (as represented by avatars) in the eyes of their conversational partners, amplifying or suppressing features and nonverbal signals in real time for strategic purposes such as persuasion, learning, memory, and liking. This goal will be accomplished by using well-accepted methods to study multi-person social interaction with networked virtual reality technology, and the research will focus on three categories of transformations: Self Representation (altering an avatar's appearance, voice, and nonverbal behavior), Social-Sensory Abilities (giving interactants tools that provide unique perspectives and up-to-date summaries of the social behaviors of others), and Social Environment (changing aspects of the social context to maximize interaction goals).
Transformed Social Interaction is relevant not just to collaborative virtual environments but to any communication medium that uses digital representations of people: cell phones, videoconferences, textual chat rooms, online videogames, and many other forms of digital media. Currently, over 60 million people use Internet chat per day. In Korea, an estimated one twentieth of the general population spends a significant amount of time in online video games interacting with digital representations of other people. Cell phones are ubiquitous and now include digital photograph and video capabilities; companies already offer face tracking and rendering on cell phone avatars. In any communication medium in which there is a digital representation of another person, transformed social interaction is not only possible but inevitable. The use of transformed social interaction has the potential to drastically change the nature of distance education, communication practices, political campaigning, and advertising. Consequently, it is crucial to understand both the effectiveness of these transformations and people's ability to detect them.
2007 — 2009
Bailenson, Jeremy
Exploring the Behavioral and Facial Similarities of Humans and Their Virtual Representations
In many communication, education, and entertainment contexts, people interact with others via some type of digital representation of themselves. This research project examines two types of such representations: 1) virtual humans that behave like a specific individual but look different from that individual on a specific dimension, and 2) virtual humans that look like a specific individual but perform some novel action which that individual has never performed. In the first category, pilot work has demonstrated that people's behavior conforms to the visual features of their representations. In the second category, pilot work has demonstrated that people model the behavior of digital representations more when the representation looks like them than when it does not. The current project explores the strength, duration, and processes behind this effect in terms of interactivity. Specifically, the project will develop the technical aspects of making oneself appear to change in real time (growing older, younger, taller, or more attractive), as well as examine the psychological implications of seeing oneself change shape or social category. This work is risky because a) it is unclear whether a human will respond in a natural way to an altered version of the self, b) the computer algorithms that take a three-dimensional face modeled after a specific user have never been tested in terms of changing the age, gender, and attractiveness of a specific face, and c) no researchers have ever tested the implications of exposure to a digital model of the self weeks after leaving virtual reality.
Humans have relied upon abstract representations of themselves for centuries: painted portraits and statues have been cornerstones of historical art. However, in the digital age, representations are much more dynamic and transformable than their physical counterparts. Given that a substantial portion of the population spends literally hours per day interacting via digital representations (e.g., voices on cell phones, characters in online games, profiles on social network web sites such as Facebook), understanding the ramifications of this phenomenon is crucial. For example, how long do the effects last, and which parameters (e.g., interactivity, similarity) contribute most? The current proposal has the potential to change the way we think about the implications of interacting with online versions of one another, and consequently relates to the fields of communication, psychology, education, and computer science. In sum, in a world in which people have identities in digital space, understanding how those digital representations relate to the physical self is paramount.
2008 — 2011
Hanrahan, Patrick (co-PI); Bailenson, Jeremy; Koltun, Vladlen
CDI Type I: Virtual Worlds: Scalability and Content Creation
Virtual worlds are networked three-dimensional environments that simulate physical interaction in three-dimensional spaces and decouple such interaction from geographic constraints. Virtual worlds open new avenues for education, business, and scientific discovery, and can significantly enhance collaboration in virtual organizations. Accomplishing this requires scalable and secure system architectures, complemented by appropriate tools for creating virtual world content. This project aims to design, build, deploy, and evaluate a virtual world platform for use as a research and development platform for virtual world technology and applications. The two major research thrusts are system architecture and content creation. For system architecture, the project will develop: (1) continuous dynamic world partitioning for maximal resource utilization and fault tolerance, (2) a capability-based security engine that seamlessly enforces access controls, (3) a scalable content distribution network for bandwidth-intensive virtual world content, and (4) privacy-preserving archiving of virtual world events for scientific research and exploration. For content creation, the project will investigate domain-specific modeling tools that leverage domain knowledge and community input to drastically ease the three-dimensional modeling process.
The project significantly advances the state of the art in virtual world systems and three-dimensional content creation. The scalable and secure virtual world platform will be able to support millions of participants concurrently interacting in a shared three-dimensional simulated environment. The novel content creation methodologies will enable untrained participants to create unique high-quality three-dimensional objects for a variety of application domains. Dedicated data collection capabilities will support social science, legal, and economic research on previously unseen scales, with analysis of large-scale behavioral data that may yield deep and novel insights into human and societal behavior.
2010 — 2014
Dede, Christopher; Bailenson, Jeremy; Koltun, Vladlen; Gehlbach, Hunter
SoCS: Enhancing Immersive Social Perspective Taking and Perceived Virtual Similarity to Enable Intelligent Social Relationships
Virtual learning environments are proliferating, yet the social interactions between learners in these contexts remain challenging. Cues (such as gestures) that facilitate reading other people in face-to-face contexts are absent in many virtual environments. However, these contexts can endow participants with communicative capabilities that are not possible face-to-face. Because social interactions and relationships are pivotal to an array of outcomes, we need to facilitate interpersonal communication in these contexts.
This research will conduct four experiments that capitalize on the affordances of computer-mediated learning environments to improve the relationships participants form within them. These experiments will examine undergraduate and middle-school learners who are learning about complex causality within ecosystems. In each study, relationships between learners will be improved through transformed social interactions, an approach in which participants are endowed with capabilities for navigating their social world that humans do not normally possess. Specifically, participants will be able to take the perspective of other participants and to increase their similarity to other participants.
Intellectual Merit. This study makes important intellectual contributions through experimentally testing ways to improve learners' relationships, examining the impact of these interventions on learning and motivation (and affective outcomes), and advancing the state of the art in non-verbal signals in computer-mediated communication.
Potential Broader Impacts. Because of the proliferation of online learning communities, the capacity for these environments to involve a diverse cross-section of students, and the novel, widely-replicable, computational approach, these studies may have broad impact across the fields of education, communications, and computer science.
2013 — 2016
Dweck, Carol; Bailenson, Jeremy; Burnette, Jeni; Hoyt, Crystal; Lawson, Barry
IBSS-Ex: SBP: RUI: An Interdisciplinary Approach For Increasing Female Involvement and Achievement in STEM
This project addresses the persistent underrepresentation of women in science, technology, engineering, and mathematics (STEM) fields by developing and testing the efficacy of an interdisciplinary intervention aimed at increasing female involvement and performance in STEM. A recent report from Georgetown University forecasts that 51% of all STEM occupations will be computing-related by 2018. However, a worsening gender gap pervades computing fields, both in the number of undergraduate degrees awarded and in employment. Research suggests that one reason for these gaps is that women often find themselves threatened by the potential to confirm negative stereotypes associated with their gender, often termed "identity threat." Drawing on an implicit theory perspective, which distinguishes between a growth mindset (believing human attributes can be cultivated) and a fixed mindset (believing human attributes cannot be changed), this project tests a process model designed to overcome the potentially deleterious effects of identity threat, and thus increase the sense of belonging and performance of females in computer science. The research is grounded in the well-supported idea that individuals with growth, relative to fixed, mindsets tend to remain confident and persevere when challenges arise and ultimately perform better. The investigators explore a novel method for encouraging a growth mindset that integrates (a) psychological theory regarding student feedback, (b) pedagogical research on strategies that help students handle challenging tasks and (c) virtual technologies, which can be particularly powerful learning tools.
The primary contributions of this work are threefold. First, the findings could be applied to the development of a globally competitive work force in STEM fields by helping to increase the involvement of women. Second, the potential intellectual benefits include developing new methods to foster growth mindsets that can be adapted and integrated into any discipline concerned with increasing representation of underrepresented and/or negatively stereotyped groups. Further, the results could contribute to a variety of disciplines (e.g., communication studies, the learning sciences, computer science) invested in understanding how virtual technologies can provide unique learning opportunities. Third, this research will engage undergraduate students in cutting-edge interdisciplinary research. These students, the majority of whom are female and/or minorities, will gain valuable experience in the research process, preparing them for doctoral studies in a scientific discipline. This project is supported through the NSF Interdisciplinary Behavioral and Social Sciences Research (IBSS) competition.
2018 — 2021
Bailenson, Jeremy
CHS: Medium: Collaborative Research: Augmented Reality Agents With Pervasive Awareness, Appearance and Abilities
Voice-based assistants that respond to people's commands can be thought of as virtual companions that are always standing by to play music, tell us the weather, turn the lights off, etc. While their powers to respond and act on our behalf are increasing, unlike real companions they are largely unaware of our presence, take little initiative, look like appliances rather than interaction partners, and have limited abilities both to respond to queries and to sense and control real objects around us. This project will develop Augmented Reality Agents (ARAs) to embody these voice-based assistants: making them more aware of our appearance, emotions, and behaviors; giving them dynamic visual representations that make us aware of their state and behaviors; and leveraging the growth in "Internet of Things" (IoT) infrastructure and devices to increase the breadth and depth of their awareness. Together, these advances will lead to more effective and accepted voice-based assistants, both in the home and beyond. Such ARAs have a number of potential applications, including healthcare (by increasing the realism of clinical simulation and training, or providing support for caregiving through remote communication and virtual companionship) and education (by serving as more engaging tutors or representing historically important individuals). To develop embodied ARAs with pervasive contextual awareness, appearance, and abilities, the researchers will undertake a program of research aimed at the nexus of concepts and technologies associated with Augmented Reality (AR), Intelligent Virtual Agents (IVA), and the Internet of Things (IoT). To maximize the expected knowledge outcomes, the researchers have organized their plans into three categories. First, they will develop new understanding of and priorities for ARA awareness, appearance, and abilities in a manner that does not require, nor depend on, a specific technological realization of automated behaviors.
Second, they will use off-the-shelf and custom components to realize pervasive ARA functionality, to facilitate formative experiments related to basic ARA behaviors, and to develop domain-specific applications and experiments. Third, the researchers will use application-specific realizations to assess potential usefulness related to companionship and two areas of healthcare training: pediatric patient simulators and wide-area team-based medical training. The healthcare-focused work will leverage relevant courses at UF and UCF, as well as UCF's NSF REU center on the Internet of Things, to engage students beyond the core team in meaningful research.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2018 — 2021
Bailenson, Jeremy; Wetzstein, Gordon
FW-HTF: Collaborative Research: Enhancing Human Capabilities Through Virtual Personal Embodied Assistants in Self-Contained Eyeglasses-Based Augmented Reality (AR) Systems
The Future of Work at the Human-Technology Frontier (FW-HTF) is one of 10 new Big Ideas for Future Investment announced by NSF. The FW-HTF cross-directorate program aims to respond to the challenges and opportunities of the changing landscape of jobs and work by supporting convergent research. This award fulfills part of that aim.
This award supports basic research underpinning development of an eyeglasses-based 3D mobile telepresence system with an integrated virtual personal assistant. This technology will increase worker productivity and improve skills. The system automatically adjusts visual focus and places virtual elements in the image without eye strain. The user will be able to communicate with the system by speech. The system also uses sensors to keep track of the user's surroundings and provide relevant information to the user automatically. The project will explore two of the many possible uses of the system: amplifying a worker's capabilities (such as a physical therapist interacting with a remote patient), and accelerating post-injury return to work through telepresence (such as a burn victim reintegrating into his/her workplace). The project will advance the national interest by allowing the right person to be virtually in the right place at the right time. The project also includes an education and outreach component wherein undergraduate and graduate students shall receive training in engineering and research methods. Course curricula at Stanford University and the University of North Carolina at Chapel Hill shall be updated to include project-related content and examples.
This project comprises the fundamental research activities needed to develop an embodied Intelligent Cognitive Assistant (GLASS-X) that will amplify the capabilities of workers in a way that increases productivity and improves quality of life. GLASS-X is conceived of as an eyeglasses-based 3D mobile telepresence system with an integrated virtual personal assistant. Methods include: body and environment reconstruction (situation awareness) from a fusion of images provided by an eyeglass frame-based camera array and limb motion data provided by inertial measurement units; fundamental research on adaptive-focus displays capable of reducing eye strain when using augmented reality displays; dialog-based communication with a virtual personal assistant, including transformations from visual input to dialog and vice versa; and human subject evaluations of GLASS-X technology in two workplace domains (remote interactions between a physical therapist and his/her patient; burn survivor remote return-to-work). This research promises to push the state of the art in core areas including computer vision, augmented reality, accommodating displays, and natural language and dialogue models.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2021
Bailenson, Jeremy
Collaborative Research: Advancing Ocean Literacy Through Immersive Virtual Reality
As part of its overall strategy to enhance learning in informal environments, the Advancing Informal STEM Learning (AISL) program funds innovative resources for use in a variety of settings. The project will develop and research how an emerging technology, immersive virtual reality (IVR) using head-mounted displays (HMDs), can enhance ocean literacy and generate empathy towards environmental issues. Recent advances in design have resulted in HMDs that provide viscerally realistic and immersive experiences that situate participants in underwater or other remote environments. IVR can provide many people with virtual access to these environments, including persons with disabilities, people living away from coastal areas, or those who lack access for other reasons (e.g., low-income families, underserved/underrepresented communities, persons untrained in diving). The project will develop a high-quality 360-degree underwater film that includes live action footage, animation, and interactive elements. The IVR experience will take the participant through an immersive underwater journey of a Pacific reef, using realistic visualizations, narrative, and a compelling story to engage participants in learning the ecology and biology of coral reefs, as well as the impacts of climate change and human disturbances on ocean ecosystems. In addition to the IVR ocean journey, the project will integrate an interactive experience of being on a reef during mass coral spawning, an annual natural phenomenon through which coral reefs replenish their populations. With hand-held controllers, participants will be able to "swim" through the water, watch a degraded reef recover and grow, and change the rate of coral recovery to learn how increases in temperature impede recovery. While research has been conducted on early, desktop versions of IVR, the potential impact of IVR on learning is still unclear.
The research findings will help guide the development of IVR for use in informal STEM environments such as aquariums, zoos, science museums, and others. The IVR experience will be shared on online platforms for home viewing, at film festivals and conferences, and in informal learning environments.
The project addresses the need for research on the impacts of IVR devices as they become more affordable and more widely used at home and in other informal and formal environments. Few studies have investigated how design elements affect the user in IVR, where increased immersion influences stimulus perception and cognitive processing. The research will assess the learning affordances and impacts of the IVR experience on participants' ocean literacy (adapting items from an existing ocean literacy survey), environmental empathy/feelings of presence (naturalistic observations and post-experience interviews), and perceived self-efficacy (pre-post surveys, post-experience interviews). In addition, the project will research how segmentation (i.e., a continuous experience vs. an experience with breaks), generative learning tasks (hands-on, interactive experiences during IVR), and the gender of the narrator in an IVR experience support learning about ocean environments. Researchers will collect data from students attending high schools with predominantly minority student enrollments. Research findings will be widely shared through peer-reviewed publications, conference presentations, and publications for educators and designers.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.