2001 — 2008 |
Hollerbach, John (co-PI) [⬀]; Creem-Regehr, Sarah; Shirley, Peter (co-PI) [⬀]; Thompson, William [⬀] |
ITR/SY: Collaborative/RUI Research on the Perceptual Aspects of Locomotion Interfaces
No current system allows a person to naturally walk through a large-scale virtual environment. The availability of such a locomotion interface would have an impact on a broad range of applications, including education and training, design and prototyping, physical fitness, and rehabilitation; for some of these applications, natural walking provides a level of realism not obtainable if movement through the simulated world is controlled by devices such as a joystick, while for others realistic walking is a fundamental requirement. Prototypes have been built for a variety of computer-controlled devices on which a person can walk, but there has been little investigation of the utility of such devices as interfaces to a virtual world and almost no study at all of the interactions of visual and biomechanical perceptual cues in such devices. This project addresses key open questions, the answers to which are needed if locomotion interfaces are to offer effective interaction between users and computer simulations. An effective locomotion interface must provide users with accurate visual and biomechanical sensations of walking; thus, a key objective of this work is to determine how to synergistically combine visual information generated by computer graphics with biomechanical information generated by devices that simulate walking on real surfaces. The PI and his collaborators will investigate methods that allow more accurate walking in a locomotion interface while accurately conveying a sense of the spaces being walked through. Specific issues to be considered include how to facilitate the perception of speed and distance traveled, how to provide a compelling sense of turning when actual walking along a curved path is not possible, how to give a user the sense that he/she is walking over a sloped surface, and more generally how to give a user a clear sense of the scale and structure of the spaces being walked through. The PI's findings on these issues will be relevant across the spectrum of possible approaches to locomotion interfaces.
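As an illustration of one of the knobs involved in studying perceived speed and distance traveled, the minimal sketch below shows how a visual translation gain might couple measured walking speed to camera motion in the rendered world. All names and values are hypothetical and are not taken from the project.

```python
# Minimal sketch (assumed, illustrative names) of a visual translation gain
# for a treadmill-style locomotion interface: the virtual camera advances at
# the measured walking speed multiplied by a gain. Gains above 1.0 make the
# visual flow faster than the biomechanical walking speed; below 1.0, slower.

def advance_camera(camera_z: float, walking_speed_mps: float,
                   dt: float, visual_gain: float = 1.0) -> float:
    """Return the new camera position along the walking axis."""
    return camera_z + visual_gain * walking_speed_mps * dt

# Example: 60 seconds of walking at 1.3 m/s rendered with a 1.2x gain yields
# ~93.6 m of visual travel for 78 m of biomechanical travel.
z = 0.0
for _ in range(60 * 90):             # 90 Hz display updates
    z = advance_camera(z, walking_speed_mps=1.3, dt=1.0 / 90.0,
                       visual_gain=1.2)
print(round(z, 1))                    # ~93.6
```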
|
2006 — 2010 |
Hansen, Charles; Creem-Regehr, Sarah |
Advanced Volume Visualization Techniques
PI: Charles Hansen, School of Computing, University of Utah; Co-PI: Sarah Creem-Regehr, Dept. of Psychology, University of Utah
Direct volume rendering has proven to be an effective and flexible visualization method for interactive exploration and analysis of 3D scalar fields. The ability to analyze 3D volumetric data sets is crucial to the success of large-scale simulation. This research will increase the analysis and understanding of volumetric data through more faithful rendering methods that take into consideration the interaction of light with the volume itself. The research will investigate a new interactive volume shading method that incorporates global illumination effects and is robust for lighting of volumetric materials. Such an illumination model will more effectively bring out data characteristics for analysis and will be more effective for multi-field visualization than current shading methods.
While direct volume rendering is widely used in visualization applications, most, if not all, of these applications render (semi-transparent) surfaces lit by an approximation to the Phong local surface shading model. This shading model renders such surfaces, but it does not provide sufficient lighting characteristics for good spatial acuity. To improve an illumination model for volume rendering, it is necessary to investigate approximations to global illumination using novel volumetric shading techniques. This investigation is producing an effective model that captures physical effects of classified volumetric data such as back-scattering, inter/intra-surface illumination, and better forward scattering. The success of this project is being measured by applying the new techniques to multi-field visualization in the areas of computational combustion, bio-electric field simulation, multi-modal medical imaging, and scanned multi-modal data for non-destructive testing, in which analysis through direct volume rendering is an appropriate visualization methodology. The effectiveness of these new methods will be tested through user studies that will include both perceptual psychologists familiar with perception testing in computer graphics and domain scientists.
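For context, the sketch below illustrates the kind of local Phong-style (Blinn-Phong variant) shading that is the baseline this project seeks to improve on: the gradient of the scalar field serves as a surface normal at each volume sample. This is an illustrative example, not the project's code, and it omits the global effects (scattering, inter-surface illumination) the research targets.

```python
# Illustrative local shading of a single volume sample: gradient as normal,
# ambient + diffuse + specular terms modulating the transfer-function color.
import numpy as np

def phong_shade(gradient, light_dir, view_dir, base_color,
                ka=0.2, kd=0.6, ks=0.2, shininess=32.0):
    """Shade one volume sample with a local Blinn-Phong-style model."""
    n = gradient / (np.linalg.norm(gradient) + 1e-8)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)          # Blinn half-vector
    diffuse = max(float(n @ l), 0.0)
    specular = max(float(n @ h), 0.0) ** shininess
    return base_color * (ka + kd * diffuse) + ks * specular

# Example: a sample whose gradient faces the light is lit strongly.
color = phong_shade(np.array([0.0, 0.0, 1.0]),
                    light_dir=np.array([0.0, 0.0, 1.0]),
                    view_dir=np.array([0.0, 0.0, 1.0]),
                    base_color=np.array([1.0, 0.5, 0.2]))
print(color)
```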
|
2007 — 2010 |
Creem-Regehr, Sarah; Thompson, William [⬀] |
HCC: Improving Spatial Perception in Virtual Environments
This project takes the novel approach of applying expertise in perceptual science to the engineering problem of creating displays that are perceived veridically. It applies a sophisticated understanding of the perceptual information needed to visually determine distances to the engineering of effective virtual environment visual displays. Distance perception involves a complex interaction between different sources of sensory information and between different aspects of the available visual information, and the nature of this interaction rapidly adapts over time. Taken together, these factors significantly complicate our ability to understand and describe the processes involved. While perceptual psychologists have for many years manipulated visual stimuli in ways that change depth perception, the idea that this can be done in a way that is stable over time and that satisfies a variety of engineering constraints associated with virtual environment applications is novel and untested.
The project is intrinsically multidisciplinary, involving genuine collaboration between computer scientists and cognitive psychologists and leading to an exceptional educational environment. The investigators have a well-established record of involving undergraduates and women in research and will continue that tradition with this work. Undergraduate students in both computer science and psychology at the University of Utah have been directly involved in research projects similar to this one, leading to high-quality senior theses and journal publications.
|
2009 — 2014 |
Creem-Regehr, Sarah; Stefanucci, Jeanine (co-PI) [⬀]; Thompson, William [⬀] |
HCC: Small: A New Method for Evaluating Perceptual Fidelity in Computer Graphics
For many applications of computer graphics, it is important that viewers perceive an accurate sense of the scale and spatial layout depicted in the displayed imagery. Medical and scientific visualizations need to accurately convey information about the size, shape, and location of entities of potential interest. Architectural and educational systems should give the user an overall sense of the scale of a real or hypothesized environment, along with the arrangement of objects in that space. Simulation and training systems need to allow users to perform tasks with the same or similar facility as in the real world. Despite the importance of achieving a high level of perceptual fidelity in computer graphics, there are as yet no established methodologies for evaluating how well computer graphics imagery conveys spatial information to a viewer. The lack of such methodologies is a significant impediment to creating more effective computer graphics systems, particularly for non-entertainment applications. In this multidisciplinary project involving genuine collaboration between computer scientists and cognitive psychologists, the PI and his team will develop a method for quantifying perceptual fidelity that is both generalizable and task-relevant. This work will be the first systematic use of the concept of perceived affordances, defined as the perception of one's own action capabilities, for characterizing the accuracy of space perception in computer graphics. The methodology involves a verbal indication that a particular action can or cannot be performed in a viewed environment. By varying the spatial structure of the environment, these affordance judgments can be used to probe how accurately viewers are able to perceive action-relevant spatial information. The result is a measure relevant to action, less subject to bias than verbal reports of more primitive properties such as size or distance, and applicable to non-virtual-environment display systems in which the actual action cannot be performed.
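To make the affordance-judgment methodology concrete, the sketch below shows one common way such yes/no judgments could be summarized: fitting a logistic psychometric function to judgments collected over a range of gap widths, with the fitted 50% point serving as an estimate of the perceived action boundary. This is a hedged, illustrative analysis under assumed names and numbers, not the project's actual protocol.

```python
# Hypothetical analysis of "can I pass through this gap?" judgments.
import numpy as np

def fit_affordance_boundary(widths, responses):
    """Grid-search maximum-likelihood fit of p(yes) = 1/(1+exp(-(w-t)/s)).

    widths    : gap widths presented (meters)
    responses : 0/1 judgments (1 = "yes, I can pass through")
    Returns (threshold t, slope s); t estimates the perceived boundary.
    """
    widths, responses = np.asarray(widths), np.asarray(responses)
    best, best_ll = (None, None), -np.inf
    for t in np.linspace(widths.min(), widths.max(), 200):
        for s in np.linspace(0.01, 0.5, 100):
            p = np.clip(1.0 / (1.0 + np.exp(-(widths - t) / s)), 1e-6, 1 - 1e-6)
            ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (t, s), ll
    return best

# Example with synthetic judgments centered on a 0.55 m perceived boundary;
# the fitted threshold can then be compared with actual body dimensions.
rng = np.random.default_rng(0)
widths = np.repeat(np.linspace(0.3, 0.8, 11), 10)
p_true = 1.0 / (1.0 + np.exp(-(widths - 0.55) / 0.05))
responses = (rng.random(widths.size) < p_true).astype(int)
threshold, slope = fit_affordance_boundary(widths, responses)
print(round(threshold, 2))   # close to 0.55
```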
Broader Impacts: This research will lead to a methodology that significantly impacts displays and rendering methods not yet developed, and will result in qualitative improvements in domain-specific systems that go beyond current practice. Project outcomes will be applicable across a broad range of display technologies and rendering techniques, and will reduce the confounds associated with training and prior experience found in more specialized task performance measures. The nature of this collaboration will lead to an exceptional educational environment, from which students will come away with a depth and breadth of experience that makes them especially well qualified to tackle demanding problems in science and engineering. The investigators have a well-established record of involving undergraduates and women in research, and will continue that tradition with this work.
|
2011 — 2016 |
Creem-Regehr, Sarah; Stefanucci, Jeanine (co-PI) [⬀]; Thompson, William [⬀] |
HCC: Small: Collaborative Research: The Influence of Self-Avatars on Perception and Action in Virtual Worlds
The objective of this research is to enable more effective design and use of virtual worlds. The pervasiveness of visually-oriented online and interactive digital media allows people to represent themselves increasingly through surrogates in virtual worlds. These digital personae are called "avatars," and when they closely represent the user, "self-avatars." Self-avatars enable forms of learning, interaction, and skill development that can increase a user's effectiveness in a virtual world. This project will explore how self-avatars play a significant role through three key components of perception and action: the relationship between action and the perception of space and objects, active acquisition of spatial memory, and the planning and execution of actions themselves.
This research will consider three properties of self-avatars themselves, each likely to have an effect across a broad range of situations: (1) the virtual perspective from which the avatar is seen, (2) the nature of the coupling between user size and motion and avatar size and motion, and (3) the naturalness of the interface system by which the user controls the avatar. The work builds on a growing body of knowledge about the role of body ownership in perceptual and cognitive tasks. This framework provides a theory in which to ground the research, a body of empirical knowledge about perception and action in the real world, and established methodologies that can be used for assessing the results of the research. The ability to utilize work from cognitive and perceptual science to solve a problem in computer graphics and user interaction is a major strength of the research.
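As a small illustration of property (2), the sketch below expresses the coupling between the tracked user and the self-avatar as two gains, one on body scale and one on tracked displacement. The class and values are assumed for illustration and are not part of the project.

```python
# Hypothetical user-to-avatar coupling: gains of 1.0 reproduce the user
# faithfully; other values resize the avatar or amplify/attenuate motion.
from dataclasses import dataclass

@dataclass
class AvatarCoupling:
    scale_gain: float = 1.0     # avatar height / user height
    motion_gain: float = 1.0    # avatar displacement / tracked displacement

    def avatar_height(self, user_height_m: float) -> float:
        return self.scale_gain * user_height_m

    def avatar_step(self, tracked_delta_m: tuple) -> tuple:
        return tuple(self.motion_gain * d for d in tracked_delta_m)

# Example: a slightly enlarged avatar whose steps cover 10% more ground
# than the user's tracked steps.
coupling = AvatarCoupling(scale_gain=1.05, motion_gain=1.1)
print(coupling.avatar_height(1.70))                       # 1.785
print(tuple(round(x, 2) for x in coupling.avatar_step((0.6, 0.0, 0.0))))  # (0.66, 0.0, 0.0)
```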
Virtual environments are important in many domains, including architecture, education, medicine, simulation, training, and visualization. The core impact of this research is to enable self-avatars to enhance user experience in virtual environments, which are a major category of computer simulations. A broad impact of this project is that enhancing the user experience will lead to more capable applications of virtual environments in the aforementioned domains. This research will also have utility in entertainment systems, the dominant environments for avatars. It advances discovery and understanding while training students in cross-disciplinary research methods in an innovative intellectual environment. The interdisciplinary nature of the research and its consequent applications, together with the close integration of two research groups, will aid in bringing new students to computer science, beyond the students traditionally attracted to that field.
|
2012 — 2017 |
Meyer, Miriah; Creem-Regehr, Sarah; Whitaker, Ross [⬀]; Kirby, Robert (co-PI) [⬀]; Thompson, William (co-PI) [⬀] |
CGV: Large: Collaborative Research: Modeling, Display, and Understanding Uncertainty in Simulations for Policy Decision Making
The goal of this collaborative project (1212806, Ross T. Whitaker, University of Utah; 1212501, Donald H. House, Clemson University; 1212577, Mary Hegarty, University of California-Santa Barbara; 1212790, Michael K. Lindell, Texas A&M University Main Campus) is to establish the computational and cognitive foundations for capturing and conveying the uncertainty associated with predictive simulations, so that software tools for visualizing these forecasts can accurately and effectively present this information to a wide range of users. Three demonstration applications are closely integrated into the research plan: one in air quality management, a second in wildfire hazard management, and a third in hurricane evacuation management. This project is the first large-scale effort to consider the visualization of uncertainty in a systematic, end-to-end manner, with the goal of developing a general set of principles as well as a set of tools for accurately and effectively conveying the appropriate level of uncertainty for a range of decision-making processes of national importance.
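As one small example of the kind of statistic such uncertainty displays must convey, the sketch below summarizes an ensemble of predictive simulation runs with a per-time mean and a quantile envelope; a visualization front end could map the envelope to a shaded band or other uncertainty glyph. This is a generic illustration under assumed data, not the project's software.

```python
# Hypothetical ensemble summary for a forecast with shape (members, times).
import numpy as np

def ensemble_summary(runs: np.ndarray, lo: float = 0.05, hi: float = 0.95):
    """Return the per-time ensemble mean and (lo, hi) quantile envelope."""
    mean = runs.mean(axis=0)
    lower = np.quantile(runs, lo, axis=0)
    upper = np.quantile(runs, hi, axis=0)
    return mean, lower, upper

# Example: 50 synthetic forecast trajectories over 24 time steps.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 24)
runs = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.2, size=(50, 24))
mean, lower, upper = ensemble_summary(runs)
print(mean.shape, lower.shape, upper.shape)   # (24,) (24,) (24,)
```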
The primary impact of this work will be methods and tools for conveying the results of predictive simulations and their associated uncertainties, resulting in better informed public policy decisions in situations that rely on such forecasts. Scientific contributions are expected in the areas of simulation and uncertainty quantification, visualization, perception and cognition, and decision making in the presence of uncertainty. Results will be broadly disseminated in a variety of ways across a wide range of academic disciplines and application areas, and will be available at the project Web site (http://visunc.sci.utah.edu). The multidisciplinary nature of the research and the close integration of the participating research groups will provide a unique educational environment for graduate students and other trainees, while also broadening the participation in computer science beyond traditional boundaries.
|
2013 — 2018 |
Miller, Harvey (co-PI) [⬀]; Cashdan, Elizabeth [⬀]; Creem-Regehr, Sarah; Stefanucci, Jeanine (co-PI) [⬀] |
IBSS: Age Changes and Gender Differences in Spatial Abilities: Testing the Role of Mobility in Three Non-Industrial Societies and in the U.S.
This interdisciplinary research project will focus on determining how spatial ability is affected by navigational experience and how spatial abilities differ by gender and change with age. Previous research has observed that males generally have larger geographic ranges than females across a wide range of cultures, and males generally have performed significantly better on some spatial tests. Because large ranges pose navigational and spatial challenges, many theorists have speculated that gender-related differences in mobility may underlie gender-related differences in spatial performance. This project will test three hypotheses that may explain the root causes of gender-related differences in mobility patterns and evaluate how natural mobility, navigational style, and spatial ability are related. Because there are large cultural differences in age- and gender-related patterns of mobility, this project will include participants in communities in Tanzania, Namibia, and Ecuador that subsist on foraging and small-scale farming, and it also will include participants in the United States, where lifestyles are considerably different. The researchers will assess mobility through the use of interviews and by tracking subjects using GPS devices. They will assess navigational and spatial ability through the use of cognitive tests adapted to make them broadly applicable across cultures. The navigational data will be based on both real-world and virtual-world tasks and will include both field and laboratory-based components.
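As one illustration of a mobility measure that could be derived from such GPS tracks, the sketch below computes the maximum great-circle (haversine) distance of a participant's recorded fixes from a reference "home" location, i.e. a simple range estimate. The measure, names, and coordinates are assumed for illustration and are not taken from the project's protocol.

```python
# Hypothetical range metric from GPS fixes using the haversine formula.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def max_range_km(home, fixes):
    """Largest distance (km) of any GPS fix from the home coordinate."""
    return max(haversine_km(home[0], home[1], lat, lon) for lat, lon in fixes)

# Example: three fixes around a home point; the farthest is roughly 5.7 km away.
home = (-3.3700, 35.6300)
fixes = [(-3.3700, 35.6300), (-3.4000, 35.6500), (-3.3200, 35.6400)]
print(round(max_range_km(home, fixes), 1))
```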
This project will enhance basic understanding of the factors that shape spatial performance as well as how it differs by gender and across cultures. The project will test a number of hypotheses that have been posited in hopes of relating spatial ability to different kinds of experience. Because spatial skills are related to higher levels of performance in mathematics and science, and women and minorities are underrepresented in science, mathematics, engineering, and technology-related fields, greater knowledge about the factors that enhance spatial thinking has the potential to make scientific and technical education and related employment opportunities more broadly accessible. The project will develop and disseminate assessment tools that can be used with people of all ages and different cultures, including non-literate populations, which should help improve spatial capabilities for people in many different environments. This project is supported through the NSF Interdisciplinary Behavioral and Social Sciences Research (IBSS) competition.
|