1986 — 1989 |
Loomis, Jack M. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Factors Limiting the Tactile Perception of Form @ University of California Santa Barbara
The proposed work will continue a program of empirical and theoretical research that seeks to understand the sensory and non-sensory factors that limit the perception of tactile spatiotemporal patterns. The earlier research has led to a model of recognition of static raised characters sensed by the finger. The major thrust of the work will be to test the model further and to extend it to a broader empirical domain (spatiotemporal patterns presented to different body sites using a variety of tactile displays). Among the experiments being proposed are (1) further work on the measurement of cutaneous spatial sensitivity using sinewave gratings, (2) a comparison of pattern perception at different body loci, (3) an attempt to disentangle the sensory and non-sensory factors that account for the large individual differences in tactile pattern perception, and (4) tactile (and visual) recognition of characters drawn from various set sizes (e.g., 8, 15, 26 characters).
|
0.958 |
1987 — 1989 |
Loomis, Jack M. |
R01 |
Analysis of Navigation Without Sight @ University of California Santa Barbara
The research is concerned with non-visually guided navigation by blind and by blindfolded observers. All experimental tasks involve locomotion through a work area of 30 m by 30 m; some segments of travel will involve guidance by the experimenter while others will involve free locomotion. The experiments will attempt to analyze navigation performance into two major components: (1) perception of distance and heading changes and (2) cognitive representation of surrounding space and transformations of this representation during locomotion. Precision of the first component will be assessed by simple tasks such as estimation, reproduction, and bisection of distances or angles. The second component will be assessed by more complex tasks, such as having the observer (1) return to the start point after being guided over two legs of a triangle and (2) proceed directly between two locations previously learned by traveling between each of them and a common origin. The research will also evaluate the utility of a stereophonic auditory display as an interface to a digital map system. The research will add to our understanding of the apprehension of space without vision and will aid in the development of an effective display to be used in conjunction with the digital map/navigation systems that are coming into use and may some day prove useful for the visually impaired.
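The triangle-completion task described above has a simple geometric ideal against which performance can be scored: sum the two outbound legs as vectors, then compute the turn and distance needed to return to the start. A minimal sketch of that ideal computation (an illustration added here, not part of the proposal; the function name is hypothetical):

```python
import math

def homing_response(leg1, turn_deg, leg2):
    """Ideal triangle completion: walk leg1, turn by turn_deg, walk leg2,
    then compute the turn and distance needed to return to the start.

    Angles are in degrees, counterclockwise positive; headings are
    measured from the initial walking direction.
    Returns (turn_to_face_start_deg, distance_to_start)."""
    # Position after the two legs, starting at the origin heading along +x.
    h = math.radians(turn_deg)            # heading after the turn
    x = leg1 + leg2 * math.cos(h)
    y = leg2 * math.sin(h)
    # Bearing from the end point back toward the origin.
    bearing_home = math.atan2(-y, -x)
    # Turn required relative to the current heading, normalized to [-180, 180).
    turn = math.degrees(bearing_home - h)
    turn = (turn + 180.0) % 360.0 - 180.0
    return turn, math.hypot(x, y)
```

For example, walking 10 m, turning 90 degrees left, and walking another 10 m calls for a further left turn of 135 degrees and a homeward leg of about 14.1 m.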
|
0.958 |
1990 — 1994 |
Loomis, Jack |
N/A Activity Code Description: No activity code was retrieved |
Perception of 3-D Structure From Optical Motion @ University of California-Santa Barbara
Despite a long history of distinguished research, our understanding of how humans perceive the shapes, sizes, and locations of objects in the three-dimensional space surrounding them is still incomplete. In particular, we are only beginning to understand how motion of an image on the retina, resulting from movement of the object being observed or of the observer, leads to a perception of object shape and of spatial layout of a complex scene. One reason that this aspect of space perception is only now receiving due consideration is that proper investigation of the question was impossible until recently, when fairly powerful graphics computers became widely available. This research will address how adult human observers are able to perceive three-dimensional shape solely from retinal image motion as might be produced by a rapid succession of flat images on a computer video display. The primary goals are to identify which aspect of the retinal image motion is critical for the perception of 3-D shape and to develop a computational model that can predict the perceived shapes corresponding to different patterns of retinal motion. Another goal is to determine the degree to which the perception of 3-D shape, once built up, persists in the absence of sustaining retinal stimulation; there is evidence that some sort of internal representation does remain even when sensory stimulation is interrupted. A final goal is to determine how the perceived shape produced by retinal stimulation depends upon the perceived distance of the stimulus; prior work suggests that the perceived shape of a simulated object undergoing rotation varies in a different fashion with distance of the simulated object than does the perceived shape of such an object undergoing side-to-side motion (translation). If confirmed, this result will provide important clues to the nature of the brain processes involved in the perception of three-dimensional shape from motion.
Enhanced understanding of these processes may lead to two important applications. First, it will indicate ways of optimizing displays for use in scientific visualization; scientific visualization refers to the use of 3-D animations as tools in understanding abstract scientific concepts (e.g., electron cloud), structures (e.g., complex molecules), and processes (e.g., molecular interactions). Second, enhanced understanding of how humans recover shape from motion will point to one possible way for computer visual systems to exploit optical motion in the analysis of complex scenes. In addition, understanding human perception of shape from motion will contribute to understanding of the more general problem of visual space perception, mentioned above.
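The stimulus class at issue, a rapid succession of flat images simulating a rotating object, is easy to make concrete: rotate a rigid set of 3-D points frame by frame and project each frame orthographically, discarding depth. A generic sketch (not the lab's actual display code; the function name is invented for illustration):

```python
import math

def render_rotation_frames(points_3d, n_frames, deg_per_frame):
    """Orthographic projection of a rigid 3-D point set rotating about
    the vertical (y) axis. Each returned frame is a list of (x, y)
    image points; depth (z) is discarded at projection."""
    frames = []
    for f in range(n_frames):
        a = math.radians(f * deg_per_frame)
        c, s = math.cos(a), math.sin(a)
        # Rotate about y, then drop depth: (x, y, z) -> (x*cos + z*sin, y)
        frames.append([(x * c + z * s, y) for (x, y, z) in points_3d])
    return frames
```

Each frame alone is a flat point pattern; only the motion across frames carries the 3-D shape information that observers are asked to recover.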
|
1 |
1992 — 1995 |
Loomis, Jack M. |
R01 |
Navigation Aid For the Visually Impaired @ University of California Santa Barbara |
0.958 |
1998 — 2003 |
Loomis, Jack Blascovich, James (co-PI) [⬀] |
N/A |
Kdi: Virtual Environments and Behavior @ University of California-Santa Barbara
State-of-the-art virtual environment technology immerses individuals in illusory physical and social environments which are under the complete control of the virtual environment creators. People act relatively unrestrictedly and in real time within such virtual environments. This fact has broad implications for basic and applied research in the behavioral, educational, and social sciences. This project focuses on immersive virtual environments as a basic research tool in four substantive areas: Learning, visual perception, social interaction and social influence, and spatial cognition. The research will help to answer basic theoretical research questions, and will establish the validity and reliability of immersive virtual environments as a research tool.
The project will use immersive virtual environments to understand learning of scientific systems, including: presence (i.e., the feeling of immersion or being in a virtual representation of the scientific system such as a heart or a lung); guidance (i.e., free vs. guided exploration within a three-dimensional representation); realism (i.e., abstract vs. schematic virtual representation); and verbal augmentation (i.e., audible narrative). The project will also use immersive virtual environments to investigate basic perceptual mechanisms underlying distance perception, perceptual-motor transformations (i.e., perceptual correction of visual distortions caused by external factors such as prisms), and perception of lightness and shape. Immersive virtual environments will also be used to explore and identify nonverbal communication characteristics that are essential to meaningful social interaction within and outside of virtual environments, and to conduct studies on fundamental social influence processes (i.e., social facilitation/inhibition, group risk taking, and ostracism) within virtual environments. Within the area of spatial cognition, the project will investigate how individuals aggregate local cognitive maps into global ones, and will investigate basic properties of alignment effects (i.e., how an individual's body orientation affects acquisition of spatial knowledge).
|
1 |
1999 — 2002 |
Loomis, Jack M. |
R01 |
Navigating Without Vision--Basic and Applied Research @ University of California Santa Barbara
DESCRIPTION (Adapted From The Applicant's Abstract): The project consists of applied and basic research, with a decided focus on the latter. On the applied side, the team will continue refining the test-bed navigation system for the blind developed during the last four years. The system guides a blind person through an outdoor environment and provides information about prominent landmarks and environmental features. A differentially-corrected GPS receiver worn by the traveler is used to determine the person's longitude and latitude, the values of which are communicated to a computer with a spatial database containing information about environmental landmarks. A virtual acoustic display indicates the positions of environmental features and landmarks by having their labels, spoken by a speech synthesizer, appear as sounds at the appropriate locations within the auditory space of the traveler. Experimental research includes an experiment comparing spatialized sound with non-spatialized synthesized speech in fairly realistic settings. Their basic research is relevant to long-term development of an effective navigation system, but focuses on underlying non-visual spatial processes. There are four basic research topics: auditory space perception, path integration, the learning of spatial layout, and the learning of route configurations by "preview". In connection with auditory space perception, they will conduct a systematic study of the factors influencing the extracranial localization of earphone sound and another study to determine whether the perceived locations of auditory targets fully determine the perceived interval between them. In connection with path integration (a form of navigation in which self-motion is integrated to determine current position and orientation), they will address the effects on path integration of homing to spatialized sound vs. passive guidance (by way of the sighted guide technique) and of the scale of the path.
In connection with the learning of spatial layout, they will conduct experiments with repeated traversal of a path. The studies gradually increase the complexity of the subject's task, starting with perceiving and remembering the location of a single landmark while traversing a straight path and ending with learning the spatial layout of several off-route landmarks while repeatedly traversing a square path. In these tasks they will compare the relative effectiveness of spatialized sound and non-spatialized speech for conveying the locations of the landmarks (relative to the subject's current location). They will also investigate whether, if a path is repeatedly explored in the same direction, the learned representation is orientation-specific. The experiments on spatial learning by preview compare the learning of a route by walking vs. auditory or haptic exposure.
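The core display computation in the test-bed system reduces to converting the traveler's GPS fix and a landmark's stored coordinates into a distance and a bearing relative to the traveler's heading, which the virtual acoustic display then spatializes. A hypothetical sketch using a flat-Earth approximation (the abstract does not describe the system's actual computations; the function name and the approximation are assumptions):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def landmark_cue(traveler_lat, traveler_lon, heading_deg, lm_lat, lm_lon):
    """Return (distance_m, relative_bearing_deg) from a traveler's GPS fix
    to a landmark, suitable for rendering as spatialized sound.

    heading_deg is the traveler's heading, clockwise from true north.
    Uses a flat-Earth (equirectangular) approximation, adequate for
    landmark distances of a few hundred meters."""
    lat0 = math.radians(traveler_lat)
    dx = math.radians(lm_lon - traveler_lon) * math.cos(lat0) * EARTH_RADIUS_M  # east
    dy = math.radians(lm_lat - traveler_lat) * EARTH_RADIUS_M                   # north
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))                  # clockwise from north
    relative = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # [-180, 180)
    return distance, relative
```

Over the few hundred meters separating a walker from nearby landmarks, the error of this approximation is far smaller than the positional error of differential GPS itself.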
|
0.958 |
2002 — 2007 |
Turk, Matthew (co-PI) [⬀] Beall, Andrew Loomis, Jack Blascovich, James [⬀] Bailenson, Jeremy (co-PI) [⬀] |
N/A |
Itr: Using Virtual Environment Technology to Understand and Augment Social Interaction @ University of California-Santa Barbara
This project focuses on facilitating and augmenting social interaction in virtual environments, particularly immersive virtual environments. Virtual environment technology allows individuals to freely move about digital "worlds" in real time, observing and interacting with the environment and virtual others within it. Increased sophistication of virtual environment technology and digital imaging of people promises a new age for technologically mediated social interaction of geographically separated individuals. However, in order to implement such interaction virtually in meaningful and productive ways, an understanding of the parameters of people's perceptions of each other's non-verbal signals (e.g., facial expressions, gestures, gaze) within virtual environments is necessary. Such an understanding will provide a hierarchical taxonomy of the necessary and sufficient non-verbal signals that are critical to social interaction within virtual environments and, therefore, must be tracked and rendered among interactants in virtual environments. Realizing the objectives of the proposed project will advance scientific understanding in the areas of social interaction and non-verbal behavior, human participation in collaborative virtual environments, and technological (e.g., computer vision) aspects of automated tracking and rendering of human non-verbal signals.
|
1 |
2008 — 2009 |
Loomis, Jack Giudice, Nicholas Klatzky, Roberta |
N/A |
Spatial Images From Vision, Touch and Hearing in Sighted and Blind @ University of California-Santa Barbara
As people interact with their environment, they maintain a perceptual representation of its physical layout, including the location of objects. When the supporting sensory stimulation ceases, as, for example, when an object is hidden by another or when it passes out of the person's field of view as they turn around, people still 'know' where the object is and can direct actions toward it (e.g., they can point to its location without seeing it). This project investigates the representation of spatial layout that remains in the absence of direct sensory support. The hypothesis is that these representations ("spatial images") are fully three-dimensional, may be created by vision, hearing, or touch, and remain stationary with respect to the environment as the person moves. The work examines whether intentional interaction with the environment strengthens spatial images and compares spatial images from different sensory modalities (vision, hearing, and touch) to determine whether spatial images are modality-specific (retaining characteristics of the input modality) or are amodal. Importantly, the work will evaluate spatial images in both blind and sighted people.
The significance of the project is twofold. First, the project will bring some balance to the enormous literature on imagery, which has been almost exclusively concerned with visual imagery in sighted people. Second, a better characterization of the functional properties of spatial images will inform the design of non-visual computer interfaces for blind/low-vision people and for sighted people performing tasks in which visual information is lacking (e.g., while steering a car or aircraft, keeping track of goals and threats not currently in sight). The work also has relevance for the development of navigation systems for blind and visually impaired people. The results should promote better understanding of the efficacy of non-visual displays for use by the blind and visually impaired populations.
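Pointing to an unseen object after moving, the updating behavior described above, has a simple geometric ideal: the object's egocentric coordinates must be recomputed from the observer's own translation and rotation. A small illustrative sketch of that ideal update (added here for concreteness; not the project's materials):

```python
import math

def update_egocentric(obj_xy, move_xy, turn_deg):
    """Ideal spatial updating: given an object's position in the observer's
    initial egocentric frame (x right, y forward), the observer's translation
    move_xy (same frame), and rotation turn_deg (counterclockwise positive),
    return the object's position in the new egocentric frame."""
    dx = obj_xy[0] - move_xy[0]
    dy = obj_xy[1] - move_xy[1]
    a = math.radians(turn_deg)
    # Rotating the observer by +a rotates world points by -a in the new frame.
    return (dx * math.cos(a) + dy * math.sin(a),
            -dx * math.sin(a) + dy * math.cos(a))
```

After a 90-degree leftward turn with no translation, an object that was straight ahead should now lie directly to the observer's right, which is what the function returns.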
|
1 |
2009 — 2010 |
Giudice, Nicholas A (co-PI) [⬀] Klatzky, Roberta L Loomis, Jack M. |
R01 |
Multimodally Encoded Spatial Images in Sighted and Blind @ University of California Santa Barbara
DESCRIPTION (provided by applicant): The proposed research investigates a representation of spatial layout that serves to guide action in the absence of direct perceptual support. We call this representation a "spatial image." Humans can perceive surrounding space through vision, hearing, and touch. Environmental objects and locations are internally represented by modality-specific "percepts" that exist as long as they are supported by concurrent sensory stimulation from vision, hearing, and touch. When such stimulation ceases, as when the eyes close or a sound source is turned off, the percepts also cease. A spatial image, however, continues to exist in the absence of the percept. For example, when one views an object and then closes the eyes, one experiences the continued presence of the object at its perceptually designated location. Although the phenomenological properties of the spatial image are known only to the observer, functional characteristics of spatial images can be revealed through systematic investigation of the behavior of the observer on a spatial task like spatial updating. For example, the observer might try to walk blindly to the location of a previously viewed object along any of a variety of paths. A sizeable body of research indicates that people have an impressive ability to do so. An important property of spatial images is that they function equivalently in many cases, despite variations in the input sensory modality. In previous work, the PIs have shown that distinct input modalities, like vision and audition, induce equivalent performance on a variety of spatial tasks. Perhaps even more surprising, spatially descriptive language was found to produce spatial images that are functionally equivalent, or nearly so, as revealed by performance on spatial tasks. Our hypothesis is that the different spatial modalities of vision, touch, hearing, and language all feed into a common amodal representation.
Spatial images can also be created by retrieving information about spatial layout from long-term memory. Importantly, blind individuals are able to perform many spatial tasks because spatial images are not restricted to the visual modality. Although most of our understanding of spatial images comes from laboratory experiments that seem unrepresentative of everyday life, it is important to realize the pervasiveness of spatial images in the lives of sighted and blind people. For both populations, there are many circumstances where maintaining a spatial image of the immediately surrounding environment (e.g., working at the office, playing sports) allows individuals to rapidly redirect their activity to objects without having to re-initiate search for them. This leads to fluency of action with minimal effort. Our proposed research will further our knowledge about spatial images produced by visual, haptic, auditory, and language input as well as those activated by retrieval of spatial information from long-term memory. Our research consists of theoretically-based experiments involving sighted and blind subjects. All of the experiments rely on logic to make inferences about internal processes and representations from observed behavior, such as verbal report, joystick manipulation, and more complex spatial actions, like reaching, pointing, and walking. Our experiments are grouped into three topics. The first topic is concerned with establishing further properties of spatial images. Four of the five experiments under this topic are concerned with whether touch and vision produce spatial images that are functionally similar; the fifth will investigate possible interference between spatial images from perception and those from long-term memory.
The five experiments within the second topic exploit different paradigms and logic for testing whether spatial images from different sensory modalities are amodal (retaining no information about the encoding modality) or modality-specific (retaining information about the encoding modality). The third topic is concerned with whether spatial images are equally precise in all directions around the head, in contrast to visual images, which are thought to be of high precision only when located in front of the head. The primary significance of this research will be the expansion of knowledge of multimodal spatial images, which so far have received very little scientific attention in comparison with visual images, about which hundreds of scientific papers have been published. This knowledge will further our understanding of the extent to which spatial cognition is similar in sighted and blind people. This knowledge will also be useful for researchers and technologists who are developing assistive technology, including navigation systems, for blind and visually impaired people. More generally, this knowledge will lead to improved tests of spatial cognition that will be useful in better understanding the deficits in knowledge and behavior resulting from diseases, such as Alzheimer's, and brain damage.
|
0.958 |