1993 — 1995
Henderson, Thomas; Thompson, William
CISE Research Instrumentation
An integrated system consisting of a mobile robot, high-speed image processing hardware, and a real-time operating system will be purchased and used in several robotics research projects. These include:

* Visual motion for robot guidance
* Autonomous agent behavior specification and analysis
* Agent construction using discrete event dynamic systems

This system will permit closed-loop control methods and aid in the validation of techniques involving active camera control.
1998 — 2002
Shirley, Peter; Smits, Brian; Thompson, William
Realistic Computer Graphics For Natural Scenes
This project aims to improve the quality of computer graphic images of outdoor, natural scenes. To date, renderings of outdoor terrain have had a cartoon-like quality that significantly distracts from a sense of realism. Partially, this is due to computational and source data constraints that limit the geometric complexity of terrain that can be rendered. The thesis of the research described here is that illumination and material properties play an equally important role in creating a sense of realism from these scenes. Moreover, there are important interactions between geometry, illumination, and material properties in a model of outdoor terrain that should be understood when real-time constraints must also be satisfied.
Significant progress has been made in the last decade in understanding how to generate realistic renderings of indoor scenes. The general approach is to analyze the physics of light transport in such environments and then to embody approximations to the physics in computational algorithms. Correct modeling of illumination and material properties is vital. It is now known that a sense of realism depends critically on accounting for shadows, secondary illumination, and non-uniform reflectance functions. Accurately approximating the effect of these properties involves great computational expense. As a result, methods for rendering realistic imagery almost always exploit assumptions about the nature of the geometric structure, illumination, and material properties likely to be encountered. Most of these assumptions derive from a presumption of indoor environments.
Outdoor scenes present very different computational characteristics. While the physics is the same, geometry, illumination, and reflectance properties are all distinctly different. Many of the techniques developed to support realistic rendering of indoor scenes will require substantial modifications for natural, outdoor environments.
The most difficult computational problem to overcome is the need to be able to aggregate the effects of micro-structures into large enough units that they can be rendered effectively, while at the same time preserving key aspects of visual appearance. This problem exists across a wide range of scales, ranging from foliage, in which a collection of individual leaves generates a collective appearance that is quite different from that of the constituent members, to distant landmarks, where detail must be suppressed without removing those properties that make landmarks distinctive and thus useful.
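The aggregation problem described above is, at its core, a level-of-detail decision. As a minimal illustrative sketch (the function names and thresholds are hypothetical, not the project's actual method), the choice of representation can be driven by an object's projected screen size under a pinhole camera model:

```python
def projected_size(object_size, distance, focal_length=1.0):
    """Approximate screen-space extent of an object under a pinhole model."""
    return focal_length * object_size / distance

def select_lod(object_size, distance, thresholds=(0.2, 0.05)):
    """Pick a representation: individual geometry up close, an aggregate
    'collective appearance' model at mid range, and a simplified landmark
    silhouette in the distance (thresholds are illustrative)."""
    s = projected_size(object_size, distance)
    if s > thresholds[0]:
        return "individual-geometry"   # e.g., render each leaf
    elif s > thresholds[1]:
        return "aggregate-appearance"  # statistical foliage model
    return "landmark-silhouette"       # keep only the distinctive shape
```

The key design constraint from the text is that the mid- and far-range representations must preserve the aspects of appearance that make the aggregate recognizable, not merely decimate geometry.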
1999 — 2004
Cohen, Elaine (co-PI); Hollerbach, John; Thompson, William
Virtual Prototyping For Human-Centric Design
This grant provides funding for the simulation of the sense of contact and force when interacting with human-centric mechanical CAD designs. Human-centric CAD designs are those whose ultimate purpose is to be used by humans, for example, a car's interior design. The location and feel of surfaces, and the accessibility and manipulability of controls, will be prototyped through force feedback from a haptic interface, the Sarcos Dextrous Arm Master. The use of the Sarcos Master allows a user to reach and grasp naturally, and to feel both external forces of contact and internal forces of grasping. Utilizing directly the underlying complex geometries of the design (trimmed NURBS surfaces), surface-to-surface geometrical computations will be developed to model the bumping of the arm when reaching and the grasping by the hand of controls. A global minimum distance calculation will identify areas of potential contact, and then fast local surface tracing computations will model detailed geometrical interaction. Realistic surface models for friction, texture, and softness will be developed based on measurements of real surfaces. The mechanical action of controls such as switches will be similarly modeled.
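The two-phase contact pipeline described above (a global minimum-distance query followed by fast local surface tracing) typically feeds a penalty-style force model for haptic display. A minimal sketch, assuming the closest-point query returns a signed distance that is negative when surfaces interpenetrate (the function name and stiffness value are illustrative):

```python
def contact_force(min_distance, stiffness=500.0):
    """Penalty-based haptic contact: when the global minimum-distance
    query reports penetration (negative signed distance), push back
    along the surface normal proportionally to penetration depth."""
    penetration = -min_distance
    if penetration <= 0.0:
        return 0.0  # surfaces separated: no contact force
    return stiffness * penetration  # simple linear spring model
```

A real system would return a force vector along the contact normal and add friction and texture terms, as the abstract describes, but the spring-on-penetration core is the standard starting point.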
The goal of virtual prototyping is to replace physical mockups with computational mockups, thereby greatly decreasing costs and speeding up iterations in the design. When a design's purpose is to be used by a human, it is similarly desirable to prototype a human's interaction with the design without building a physical prototype. To date only stereoscopic visual displays have been available to examine a design, but a complete evaluation should allow designers to reach, touch, grasp, and manipulate virtual objects in the design, as if using their own arms against real objects. The determination of how easy it is to reach and manipulate controls in a cluttered car interior will lead to more ergonomically satisfactory designs.
1999 — 2000
Hollerbach, John; Thompson, William
Motion Display in the Treadport Locomotion Interface
IIS-9908675; Hollerbach, John; University of Utah; $75,000; 12 mos.
Locomotion interfaces represent a new field, and at this early stage there are many uncertainties about the best approaches. This exploratory research is concerned with the mechanical display of slope, inertia, and turning on a treadmill-based locomotion interface, the Sarcos Treadport. The Sarcos Treadport comprises a linear tilting treadmill, an active mechanical tether, and a CAVE-like visual display. The active mechanical tether, which is the unique aspect of the Treadport, measures user position and applies axial forces to the user. Although the treadmill is a linear device, the research aims to show that the tether coupled to the visual display provides a reasonable basis for turning control. The force-producing capability of the active tether offers the unique ability to simulate inertial forces during running, which are otherwise absent because the user is stationary. Treadmill tilt mechanisms are typically too slow to display rapid slope changes. By pulling on the user to simulate uphill walking and pushing for downhill walking, the active mechanical tether can simulate gravity. The PIs propose to conduct a series of biomechanical, modeling, and psychophysical studies to demonstrate the proposed motion displays for turning, inertial force, and slope using an active mechanical tether. The objective is to show that adding a mechanical tether to linear treadmills is a useful and reasonable approach towards locomotion interfaces, and that the proposed locomotion display approaches have some scientific basis and are effective from a user standpoint.
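The slope and inertia displays described above reduce to simple rigid-body physics: pulling with the gravity component m·g·sin(θ) emulates an uphill grade, and an additional m·a term supplies the inertial force that is otherwise absent because the user stays stationary. A minimal sketch assuming a point-mass user model (the function name and parameters are illustrative):

```python
import math

def tether_force(mass, slope_deg=0.0, acceleration=0.0, g=9.81):
    """Axial tether force (N) for a treadmill-based locomotion interface:
    m*g*sin(theta) emulates walking on a slope (positive = uphill pull),
    plus m*a for the inertial force of simulated acceleration."""
    slope_component = mass * g * math.sin(math.radians(slope_deg))
    inertial_component = mass * acceleration
    return slope_component + inertial_component
```

For example, a 70 kg user on a simulated 5-degree uphill grade would feel a steady backward pull of roughly 60 N, well within the force range a tether actuator can plausibly supply.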
1999 — 2001
Thompson, William
CISE Research Instrumentation: Realistic Computer Graphics
9818344; Thompson, William B.; Cohen, Elaine; University of Utah
This research instrumentation enables research projects in:

- Locomotion Interfaces,
- Virtual Prototyping,
- Perception of Spatial Organization in Large-Scale Synthetic Environments, and
- Realistic Computer Graphics for Natural Scenes.
The award supports the acquisition of a system for generating and projecting high quality computer graphics. The equipment supports research projects involving immersive interfaces and realistic computer graphics. Four projects are taking advantage of this resource. Improved locomotion interfaces are being developed, allowing a user to walk through a virtual environment. Work on haptic interfaces for computer aided design (CAD) is being extended, with particular attention to the integration of visual and haptic displays to better convey a sense of the complex geometries involved in most mechanical designs. Methods are being developed for more accurately conveying a sense of distance, scale, and speed in computer generated imagery. Because current techniques have great difficulty producing images of objects that appear to be far away and/or of significant size, this work may make an important contribution to the field. Finally, an analysis of how to generate realistic looking synthetic images of outdoor scenes is being initiated. To be successful, all four of these efforts require the graphical rendering power and high resolution display capabilities being made available as part of this infrastructure acquisition.
1999 — 2003
Hansen, Charles; Shirley, Peter (co-PI); Smits, Brian; Thompson, William
Interactive Ray Tracing For Visualization
Visualization systems are most effective when the user can interactively explore the data by varying the viewpoint and visualization parameters via direct manipulation. However, visualization applications frequently deal with extremely large datasets, so such interaction is often impossible. In addition to being large, these datasets often have high depth-complexity and non-polygonal primitives, which makes them poorly suited to most current graphics accelerators. This project will investigate the use of ray tracing for visualization applications. By exploiting parallelism and image-based rendering (pixel reprojection), this project can create a system that is both interactive and responsive for very large datasets. In addition, the programmable nature of such a system will allow improved rendering effects such as shadows and transparency, and support for non-polygonal primitives such as glyphs and volumes. To test the relevance and performance of this system, the project will be tested in several application areas: medical visualization, fluid flow visualization, terrain visualization, and bioelectric modeling visualization.
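The pixel-reprojection idea mentioned above can be sketched with standard pinhole-camera algebra: a sample already shaded for one frame is lifted to its world-space point using its known depth, then projected into the new camera, avoiding a fresh ray cast. This is a hedged illustration of the general technique, not the project's actual system; the matrix conventions and names below are assumptions:

```python
import numpy as np

def reproject(pixel, depth, K, cam_from_world_a, cam_from_world_b):
    """Reuse a ray-traced sample: back-project pixel (u, v) with known
    depth out of camera A, then project the recovered world point into
    camera B. K is a shared 3x3 intrinsic matrix; the extrinsics are
    4x4 world-to-camera transforms (all names here are illustrative)."""
    u, v = pixel
    # lift to a camera-A space point at the given depth
    p_cam_a = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # move to world space, then into camera B
    p_world = np.linalg.inv(cam_from_world_a) @ np.append(p_cam_a, 1.0)
    p_cam_b = (cam_from_world_b @ p_world)[:3]
    # perspective projection into camera B's image plane
    uvw = K @ p_cam_b
    return uvw[:2] / uvw[2]
```

Reprojection is only an approximation (disocclusions and view-dependent shading still need new rays), which is why it is paired with parallel ray tracing rather than replacing it.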
1999 — 2002
Johnson, Christopher; Cohen, Elaine (co-PI); Hansen, Charles; Shirley, Peter (co-PI); Thompson, William
MRI: Acquisition of An Experimental Testbed For Computer Graphics
EIA-9977218; Hansen, Charles; Cohen, Elaine; University of Utah
The proposal requests a large-scale, multi-processor compute engine with which the investigators plan to investigate alternative rendering strategies with potential suitability for desktop systems in a decade or so. Because of the many trade-offs between image quality, rendering speed, and interactivity that must be considered, this work is highly experimental in nature and requires dedicated computing power sufficient for software implementation of possible approaches able to run interactively. The value of improved interactive rendering will be demonstrated in a broad range of application areas, including architectural walkthroughs, terrain rendering, computer-aided geometric design, medical imaging, and scientific visualization.
2000 — 2004
Smits, Brian; Thompson, William
ITR: Collaborative Research: Generating An Accurate Sense of Depth and Size Using Computer Graphics
Despite impressive gains in realism over the last decade, computer graphics is currently unable to effectively generate images of objects and environments that look large. This is mostly because computer graphics is poor at conveying information about absolute depth. The goal of this project is to demonstrate that it is possible to significantly improve the sense of depth and scale in computer graphics if rendering methods are developed with specific attention to the need to convey cues for absolute depth. Accomplishing this goal will require new insights into the 3D information extractable from 2D images, modifications to graphics algorithms in order to better render salient information, and sophisticated perceptual experimentation to validate that people can actually see the intended 3D space. The PI's approach will be to draw upon the results and methods of computational vision in ways that have not previously been done in the computer graphics community. Computational vision provides insights into the intrinsic constraints on how information about 3D space can be recovered from 2D images. In particular, the computational analysis of vision points out the important distinction between relative depth judgments and absolute depth judgments. Surprisingly few of the commonly studied image cues are in fact sufficient to provide information about absolute depth. Of those that do, several cannot be exploited in computer graphics due to fundamental limitations in display technology and our inability to precisely control viewing conditions except in immersive environments. The research will impact a broad range of graphics applications in which accurate spatial information needs to be conveyed, including education and training, design and prototyping, and telepresence.
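Familiar size is one of the few commonly studied image cues that yields absolute rather than merely relative depth. As a worked example of the underlying pinhole relation (the function and parameter names are illustrative, not from the project): an object of known physical size S subtending s pixels under focal length f (in pixels) lies at distance Z = f·S/s.

```python
def depth_from_familiar_size(image_size_px, known_size_m, focal_px):
    """Absolute depth from the familiar-size cue under a pinhole model:
    Z = f * S / s, where S is the known physical size of the object,
    s its image extent in pixels, and f the focal length in pixels."""
    return focal_px * known_size_m / image_size_px
```

For instance, a person of known height 1.8 m imaged at 100 px with a 1000 px focal length is at an absolute distance of 18 m; a relative cue like occlusion could never fix that number.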
2001 — 2008
Hollerbach, John (co-PI); Creem-Regehr, Sarah (co-PI); Shirley, Peter (co-PI); Thompson, William
ITR/SY: Collaborative/RUI Research On the Perceptual Aspects of Locomotion Interfaces
No current system allows a person to naturally walk through a large-scale virtual environment. The availability of such a locomotion interface would have impacts on a broad range of applications, including education and training, design and prototyping, physical fitness, and rehabilitation; for some of these applications natural walking provides a level of realism not obtainable if movement through the simulated world is controlled by devices such as a joystick, while for others realistic walking is a fundamental requirement. Prototypes have been built for a variety of computer-controlled devices on which a person can walk, but there has been little investigation of the utility of such devices as interfaces to a virtual world and almost no study at all of the interactions of visual and biomechanical perceptual cues in such devices. This project addresses key open questions, the answers to which are needed if locomotion interfaces are to offer effective interaction between users and computer simulations. An effective locomotion interface must provide users with accurate visual and biomechanical sensations of walking; thus, a key objective of this work is to determine how to synergistically combine visual information generated by computer graphics with biomechanical information generated by devices that simulate walking on real surfaces. The PI and his collaborators will investigate methods that allow more accurate walking in a locomotion interface while accurately conveying a sense of the spaces being walked through. Specific issues to be considered include how to facilitate the perception of speed and distance traveled, how to provide a compelling sense of turning when actual walking along a curved path is not possible, how to give a user the sense that he/she is walking over a sloped surface, and more generally how to give a user a clear sense of the scale and structure of the spaces being walked through.
The PI's findings on these issues will be relevant across the spectrum of possible approaches to locomotion interfaces.
2003 — 2007
Shirley, Peter; Thompson, William
ITR: Graphical Navigation of the Earth in Space and Time
An explosion is occurring in the availability of on-line data relating to archeology and related disciplines such as paleontology, geology, and paleohydrology. Much of this is geometric information, either scanned from the real world or hand-modeled. Our research aims at utilizing a variety of tools from computer graphics to allow access to this data in a natural manner. Our specific interest is in developing novel methods for displaying how cultural artifacts change over time and space. Ultimately, we envision what amounts to a spatial/temporal graphical browser for data related to the Earth.
A session with the hypothetical system: A user's browser displays the view from the University of Utah over Salt Lake City with the Great Salt Lake visible on the horizon. The images have the pen-and-ink and watercolor style of architectural "presentation graphics", with detail and texture indicated with just a few strokes, and most colors muted to make the lines prominent. This is the same style used in most manuals and textbooks. The user first moves across the rendered city in the air, looking down at the bustling people and traffic. A particular building catches the user's eye. The user clicks the mouse and a web browser brings up information known about the building, such as its being built in 1870. The user adjusts the time indicator back to 1870. Over the course of thirty seconds (based on the user's preference settings and heuristics) the adjacent buildings come and go, and a trolley system appears and disappears in front of the building.
The user is now in 1870 and has a much clearer view of the lake to the west. The user moves to the lake, and can see moving water and small amounts of human activity. The user now more aggressively moves backward in time to 9000 B.C. and watches the shores of the lake fluctuate widely as the water rises and falls. Now the user asks the system to "flag" areas where the database has high densities of unsynthesized data. A flag appears to the west of the lake. The user zooms to this and sees two caves. A click on the caves opens a browser window that indicates the caves are the oldest known inhabited sites in Utah, and were used over several thousand years by paleoindians. The user enters Danger Cave, and observes a group of paleoindians preparing food over a fire.
The user now selects "uncertainty rendering". Here objects in the database that are stored with high confidence are rendered with clean lines and detailed textures. Objects stored with low confidence are drawn with sketchy lines and no color. For example, the petroglyphs near the mouth of the cave still exist and thus have high confidence. Petroglyphs in the back of the cave, if they existed, have been destroyed by rock fall and erosion. They have been created speculatively by the archeologist based on other sites, and are thus drawn with low confidence. Note that the system merely accesses archaeological data. More sophisticated archaeological uses would be done by other programs, just as the current Web is not used for general data manipulation. The user now exits the cave and asks for the nearest significant events in the past and future near the cave. In the future is shown the arrival of agriculture in the area around 200 A.D. In the past is shown the draining of Lake Bonneville around 10,000 B.C. Here the ancestor of the Great Salt Lake, spanning most of the state of Utah and having an average depth of hundreds of feet, lost most of its water volume through a collapsed narrow pass into the Snake River. The user selects the beginning of the past event and can see the shores of the giant lake, and a variety of wildlife including mammoths and giant ground sloths. Still in uncertainty mode, there are also a few sketchily drawn humans; it is debated whether paleoindians were present in Utah that early in time. A visual flag indicates an interesting feature to the North. The user can go witness the site of the landslide. By selecting the end of the flood, the user over thirty seconds can watch the inland sea drain, and the shorelines vastly contract. Finally, the user can zoom back to the present day to see the current cultural features and distant lake.
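The uncertainty-rendering mode sketched in this scenario amounts to a mapping from a stored confidence value to a rendering style. A minimal sketch of that mapping (the threshold and style attributes are hypothetical, chosen only to mirror the clean-versus-sketchy distinction in the text):

```python
def render_style(confidence, threshold=0.7):
    """Map an object's stored confidence to a non-photorealistic style:
    high-confidence objects get clean lines, detailed textures, and
    color; speculative objects get sketchy, uncolored strokes."""
    if confidence >= threshold:
        return {"lines": "clean", "texture": "detailed", "color": True}
    return {"lines": "sketchy", "texture": "none", "color": False}
```

In the scenario, the petroglyphs at the cave mouth would pass the threshold and render cleanly, while the speculative petroglyphs at the back would fall below it and render as uncolored sketch strokes.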
2007 — 2010
Creem-Regehr, Sarah (co-PI); Thompson, William
HCC: Improving Spatial Perception in Virtual Environments
This project uses a novel approach of applying expertise in perceptual science to solve the engineering problem of creating displays that are perceived veridically. This project applies a sophisticated understanding of the perceptual information needed to visually determine distances to the engineering of effective virtual environment visual displays. Distance perception involves a complex interaction between different sources of sensory information and between different aspects of the available visual information. The nature of this interaction rapidly adapts over time. Taken together, this significantly complicates our ability to understand and describe the processes involved. While perceptual psychologists have for many years manipulated visual stimuli in ways that change depth perception, the idea that we can do so in a way that is stable over time and which satisfies a variety of engineering constraints associated with virtual environment applications is novel and untested.
The project is intrinsically multidisciplinary, involving genuine collaboration between computer scientists and cognitive psychologists and leading to an exceptional educational environment. The investigators have a well established record of involving undergraduates and women in research and will continue that tradition with this work. Undergraduate students in both computer science and psychology at the University of Utah have been directly involved in research projects similar to this one, leading to high quality senior theses and journal publications.
2008 — 2013
Zachary, Joseph (co-PI); Sansone, Carol; Thompson, William
Increasing Student Motivation Without Compromising Student Performance in Online Classes
Project Abstract: The main purpose of the project is to address the problem of keeping students in online courses motivated and on task in their learning activities. The study seeks to assess the potential tradeoffs in online courses between designs that motivate or enhance interest (e.g., related links; multiple pathways) and designs that enhance on-task or in-depth learning. It is critical for successful learning in online courses that students self-regulate their learning activities, since they are no longer in a supervised classroom setting. The project builds on a model the PIs developed called Self-Regulation of Motivation. The issue for research is how students construct their own learning tasks in light of their need to both reach learning goals and experience interest. The project will involve a series of experimental studies to assess the various features of an online course that lead to higher interest and learning.
The findings of the study have the potential to enhance the online learning experience worldwide, regardless of academic discipline. Furthermore, online STEM instructors must develop courses that often involve multiple dimensions of knowledge (e.g., sensory experiences or experiments, a huge vocabulary to be learned/memorized, formalization of knowledge often involving abstract concepts). This embedded and complex learning of content knowledge requires that the orchestration of motivation and performance be known explicitly. The findings of the study have implications for the design and delivery of online courses.
2009 — 2014
Creem-Regehr, Sarah (co-PI); Stefanucci, Jeanine (co-PI); Thompson, William
HCC: Small: A New Method For Evaluating Perceptual Fidelity in Computer Graphics
For many applications of computer graphics, it is important that viewers perceive an accurate sense of the scale and spatial layout depicted in the displayed imagery. Medical and scientific visualizations need to accurately convey information about the size, shape, and location of entities of potential interest. Architectural and educational systems should give the user an overall sense of the scale of a real or hypothesized environment, along with the arrangement of objects in that space. Simulation and training systems need to allow users to perform tasks with the same or similar facility as in the real world. Despite the importance of achieving a high level of perceptual fidelity in computer graphics, there are as yet no established methodologies for evaluating how well computer graphics imagery conveys spatial information to a viewer. The lack of such methodologies is a significant impediment to creating more effective computer graphics systems, particularly for non-entertainment applications. In this multidisciplinary project involving genuine collaboration between computer scientists and cognitive psychologists, the PI and his team will develop a method for quantifying perceptual fidelity that is both generalizable and task-relevant. This work will be the first systematic use of the concept of perceived affordances, defined as the perception of one's own action capabilities, for characterizing the accuracy of space perception in computer graphics. The methodology involves a verbal indication that a particular action can or cannot be performed in a viewed environment. By varying the spatial structure of the environment, these affordance judgments can be used to probe how accurately viewers are able to perceive action-relevant spatial information. 
The result is a measure relevant to action, less subject to bias than verbal reports of more primitive properties such as size or distance, and applicable to non-virtual-environment display systems in which the actual action cannot be performed.
Broader Impacts: This research will lead to a methodology that significantly impacts displays and rendering methods not yet developed, and will result in qualitative improvements in domain-specific systems that go beyond current practice. Project outcomes will be applicable across a broad range of display technologies and rendering techniques, and will reduce the confounds associated with training and prior experience found in more specialized task performance measures. The nature of this collaboration will lead to an exceptional educational environment, from which students will come away with a depth and breadth of experience which makes them especially well qualified to tackle demanding problems in science and engineering. The investigators have a well established record of involving undergraduates and women in research, and will continue that tradition with this work.
2011 — 2016
Creem-Regehr, Sarah (co-PI); Stefanucci, Jeanine (co-PI); Thompson, William
HCC: Small: Collaborative Research: The Influence of Self-Avatars On Perception and Action in Virtual Worlds
The objective of this research is to enable more effective design and use of virtual worlds. The pervasiveness of visually-oriented online and interactive digital media allows people to represent themselves increasingly through surrogates in virtual worlds. These digital personae are called "avatars," and when they closely represent the user, "self-avatars." Self-avatars enable forms of learning, interaction, and skill development that can increase a user's effectiveness in a virtual world. This project will explore how self-avatars play a significant role through three key components of perception and action: the relationship between action and the perception of space and objects, active acquisition of spatial memory, and the planning and execution of actions themselves.
This research will consider three properties of self-avatars themselves, each likely to have an effect across a broad range of situations: (1) the virtual perspective from which the avatar is seen, (2) the nature of the coupling between user size and motion and avatar size and motion, and (3) the naturalness of the interface system by which the user controls the avatar. The work builds on a growing body of knowledge about the role of body ownership in perceptual and cognitive tasks. This framework provides a theory in which to ground the research, a body of empirical knowledge about perception and action in the real world, and established methodologies that can be used for assessing the results of the research. The ability to utilize work from cognitive and perceptual science to solve a problem in computer graphics and user interaction is a major strength of the research.
Virtual environments are important in many domains, including architecture, education, medicine, simulation, training, and visualization. The core impact of this research is to enable self-avatars to enhance user experience in virtual environments, which are a major category of computer simulations. A broad impact of this project is that enhancing the user experience will lead to more capable applications of virtual environments in the aforementioned domains. This research will also have utility in entertainment systems, the dominant environments for avatars. It advances discovery and understanding while training students in cross-disciplinary research methods in an innovative intellectual environment. The interdisciplinary nature of the research and its consequent applications, together with the close integration of two research groups, will aid in bringing new students to computer science, beyond the students traditionally attracted to that field.
2012 — 2017
Meyer, Miriah; Creem-Regehr, Sarah (co-PI); Whitaker, Ross; Kirby, Robert (co-PI); Thompson, William
CGV: Large: Collaborative Research: Modeling, Display, and Understanding Uncertainty in Simulations For Policy Decision Making
The goal of this collaborative project (1212806, Ross T. Whitaker, University of Utah; 1212501, Donald H. House, Clemson University; 1212577, Mary Hegarty, University of California-Santa Barbara; 1212790, Michael K. Lindell, Texas A&M University Main Campus) is to establish the computational and cognitive foundations for capturing and conveying the uncertainty associated with predictive simulations, so that software tools for visualizing these forecasts can accurately and effectively present this information to a wide range of users. Three demonstration applications are closely integrated into the research plan: one in air quality management, a second in wildfire hazard management, and a third in hurricane evacuation management. This project is the first large-scale effort to consider the visualization of uncertainty in a systematic, end-to-end manner, with the goal of developing a general set of principles as well as a set of tools for accurately and effectively conveying the appropriate level of uncertainty for a range of decision-making processes of national importance.
The primary impact of this work will be methods and tools for conveying the results of predictive simulations and their associated uncertainties, resulting in better informed public policy decisions in situations that rely on such forecasts. Scientific contributions are expected in the areas of simulation and uncertainty quantification, visualization, perception and cognition, and decision making in the presence of uncertainty. Results will be broadly disseminated in a variety of ways across a wide range of academic disciplines and application areas, and will be available at the project Web site (http://visunc.sci.utah.edu). The multidisciplinary nature of the research and the close integration of the participating research groups will provide a unique educational environment for graduate students and other trainees, while also broadening the participation in computer science beyond traditional boundaries.