2003 — 2011
He, Zijiang
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
R56 Activity Code Description: To provide limited interim research support based on the merit of a pending R01 application while the applicant gathers additional data to revise a new or competing renewal application. This grant underwrites highly meritorious applications that, given the opportunity to revise, could meet IC recommended standards and would be missed opportunities if not funded. Interim funding ends when the applicant succeeds in obtaining an R01 or other competing award built on the R56 grant. These awards are not renewable.
Mechanisms of Intermediate Distance Space Perception @ University of Louisville
DESCRIPTION (provided by applicant): A significant proportion of human activities involving spatial orientation, locomotion and action occurs over a critical span of space, about 2 to 25 m from the observer. Yet the mechanisms underlying our ability to perform these activities flawlessly in the intermediate distance range are not well understood, though they are appreciated whenever the ability is impaired by brain injury. How is space perception accomplished? A key to answering this question is to understand how human perception in the intermediate distance range is referenced to physical space. An early theoretical answer was provided by Gibson (1950), who proposed that space perception in the intermediate distance range is strongly influenced by the structure of the ground surface. The current proposal presents empirical evidence supporting the ground reference idea, along with new hypotheses aimed at uncovering the perceptual mechanisms underlying space perception in the intermediate distance range. Three broadly defined issues will be addressed: 1. How does the visual system define the ground surface reference frame for distance judgment? 2. How is eye level determined and calibrated? 3. How is an object above the ground surface localized? This research will be conducted in both real and virtual reality environments. The latter not only provides a controlled stimulus environment, but will also yield valuable insights for designing virtual reality systems with high immersion quality, which will benefit those interested in vocational and therapeutic uses of virtual reality. Above all, this research will advance knowledge of how the ground surface is represented by the brain and how it is employed as a reference frame for localizing objects, an important step toward understanding space perception and cognition.
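For orientation, the ground-reference idea can be made concrete with the standard angular-declination geometry: on a flat, continuous ground surface, the distance to a point on the ground follows from the observer's eye height and the target's angular declination below eye level. The short sketch below is only an illustrative calculation of that geometry; the function name and the numerical values are assumptions for illustration, not part of the proposal.

    import math

    def ground_distance(eye_height_m, declination_deg):
        # Distance along a flat ground surface to a point seen at the given
        # angular declination below the (calibrated) eye level.
        return eye_height_m / math.tan(math.radians(declination_deg))

    # A target 3.2 deg below eye level, viewed from a 1.6 m eye height, lies
    # about 28.6 m away; a 1 deg miscalibration of eye level shifts the
    # estimate to about 21.8 m, which is why eye-level calibration matters.
    print(ground_distance(1.6, 3.2))   # ~28.6
    print(ground_distance(1.6, 4.2))   # ~21.8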
2013 — 2014
He, Zijiang; Zahorik, Pavel A
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
Psychophysical Research On Auditory/Visual Space Perception and Navigation @ University of Louisville
DESCRIPTION (provided by applicant): Our normal everyday perception of 3-dimensional space, and our ability to interact and navigate within that space, requires the integration of information from multiple sensory modalities. The integration of distance/depth information in the intermediate range (2 - 20 m), where the visual and auditory modalities provide the primary inputs, is not well understood, however. The long-term goal of this project is a complete understanding of how auditory and visual information is integrated to form distance percepts that can support accurate orientation and navigation in both normal and sensory-impaired populations. The objective of this application is to test and refine an innovative conceptual framework that represents the integration processes in a normal-hearing, normal-vision population. The central hypothesis guiding this framework is that distance perception, unlike the perception of direction, requires additional contextual or background information about the environment beyond that provided by the object itself. This background representation can act as a frame of reference for coding distance. For multisensory distance input, object, contextual and background information must be integrated across modalities. But because not all of this information is necessarily available at the same time, memory must be involved in the integration process. The rationale underlying the proposed research is that once a conceptual framework for auditory/visual distance integration has been specified and validated for normal populations, new and innovative approaches can be applied to understanding and minimizing the impact of sensory impairments on spatial perception and navigation. This hypothesis will be tested by pursuing two specific aims: 1) Reveal an integrated auditory and visual reference frame for distance perception based on the environmental background. 2) Determine the role of working memory in auditory/visual distance perception. These aims will be addressed by testing human distance judgment and navigation performance under conditions in which the contributions of contextual information, background information or working memory are manipulated. Virtual and real stimulus manipulation techniques will allow novel pairings of auditory and visual information that will be used to evaluate and refine the proposed framework. Development and validation of this framework will be a significant contribution because it will provide a better understanding of how humans are able to successfully integrate auditory and visual information to perform spatial tasks in the environment. Moreover, it will provide a vehicle for future studies to advance the field of multisensory space perception. The proposed research is relevant to public health because it will lead to a better understanding of how auditory or visual impairment affects multisensory space perception. Ultimately, this knowledge may inform the development of new strategies for assisting or enhancing degraded spatial information to improve orientation and navigation abilities in visually- and/or hearing-impaired populations.
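As background, integration frameworks of this kind are often compared against reliability-weighted (maximum-likelihood) cue combination, in which each modality's distance estimate is weighted by its inverse variance. The minimal sketch below illustrates only that generic baseline; it is not the framework proposed in this application, and the values are hypothetical.

    def fuse_distance(d_vis, var_vis, d_aud, var_aud):
        # Inverse-variance (reliability) weighting of a visual and an
        # auditory distance estimate, plus the variance of the fused estimate.
        w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
        d_fused = w_vis * d_vis + (1.0 - w_vis) * d_aud
        var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_aud)
        return d_fused, var_fused

    # Example: a precise visual estimate (10 m, variance 0.5) dominates a
    # noisier auditory estimate (12 m, variance 2.0), giving ~10.4 m (var 0.4).
    print(fuse_distance(10.0, 0.5, 12.0, 2.0))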
2014 — 2018
He, Zijiang
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Mid-Level Mechanisms of Surface and Binocular Perception @ University of Louisville
DESCRIPTION (provided by applicant): Retinal images are inherently fragmentary and ambiguous because images of separate entities overlap. The early visual mechanisms are not equipped to parse the overlapping 2-D retinal images into distinct 3-D entities. The job of parsing these images falls on the mid-level mechanisms, whose main role is to represent the distinct entities as separate surfaces. The represented surface information then serves as input to the WHAT and WHERE systems that underlie our 3-D perception of objects and space, respectively. As such, the mid-level mechanisms are not simple conduits of information between early- and late-level visual mechanisms; they play a crucial role in determining the quality and reliability of the visual information conveyed. Compared with other aspects of visual processing, less is known about the mid-level mechanisms. One of the biggest challenges is to discover how the often fragmentary and ambiguous retinal information is transformed into reliable surface representations, presumably through a spreading-in operation. When an image belonging to a single entity is broken into parts by occlusion, a surface interpolation operation is required to integrate the parts into a global surface. Moreover, inputs from the two eyes that contribute to these operations can be disparate in content and location. Given the myriad complexities of the visual inputs, it is further proposed that the mid-level mechanisms must rely on internal assumptions (perceptual rules) and feedback from higher visual levels for guidance in representing surfaces. How these operations are accomplished is still unclear. To remedy this, the proposal uses a human psychophysical approach to investigate these issues through three specific aims. Aim 1 investigates how the spreading-in operation represents surfaces with texture patterns, which is more complex than representing texture-free surfaces. It is proposed that the principle of reducing coding redundancy, which governs the spreading-in operation, makes the global surface representation efficient but prone to poor resolution; the latter could be one basis of the well-known crowding phenomenon. Aim 2 investigates the texture-surface interpolation operation. Cognizant of the roles of attention and object knowledge, the research investigates how these top-down factors influence surface integration. Aim 3 investigates the long-term plasticity of the mid-level mechanisms. Perceptual learning experiments will be conducted to reveal how extensive training modifies the perceptual rules implemented at the mid-level. The long-term goal of this proposal is to advance our knowledge of how visual information is processed and represented by the mid-level mechanisms. This knowledge will help us better understand how humans perceive the visual world and will provide a clinical basis for behavioral diagnoses and treatments of visual dysfunctions related to amblyopia, strabismus and aging.
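For readers unfamiliar with the term, filling-in or "spreading-in" is often modeled computationally as edge-gated diffusion: sparsely sampled feature values spread between neighboring locations but not across contrast edges, so each bounded region settles toward a uniform surface value. The toy sketch below illustrates only that general idea; it is an assumed, simplified model, not the operation this proposal sets out to test.

    import numpy as np

    def spread_in(seed_values, seed_mask, edge_mask, iterations=500):
        # Edge-gated diffusion: seed feature values stay clamped, edge pixels
        # block spreading, and every other pixel relaxes toward the average of
        # its non-edge neighbors. Border wrap-around is ignored for brevity.
        f = np.where(seed_mask, seed_values.astype(float), 0.0)
        open_px = ~edge_mask
        for _ in range(iterations):
            total = np.zeros_like(f)
            count = np.zeros_like(f)
            for axis in (0, 1):
                for shift in (1, -1):
                    nb_open = np.roll(open_px, shift, axis=axis)
                    total += np.where(nb_open, np.roll(f, shift, axis=axis), 0.0)
                    count += nb_open
            avg = np.where(count > 0, total / np.maximum(count, 1), f)
            f = np.where(open_px & ~seed_mask, avg, f)
        return f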
2021
He, Zijiang; Ooi, Teng Leng
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Visual Mechanisms of Intermediate Distance Space Perception During Self-Motion @ University of Louisville
Project Summary: Every day we rely on our vision to judge the absolute distances of objects around us to plan and guide our actions, such as walking and driving. This wayfinding process of ascertaining one's position and planning possible routes of action cannot be accomplished without reliable perception of visual space in the intermediate distance range (~2-25 m from the observer). Thus, the broad long-term objective of this project is to uncover the mechanisms underlying intermediate distance space perception that support distance judgment. Yet less is known about the mechanisms of intermediate distance space perception than about those of near space perception (<2 m). Moreover, extant knowledge is predominantly obtained from testing static observers, making it difficult to generalize to the more common situation in which observers plan and execute self-motion. The latter situation is more complex because self-motion is accompanied by retinal image motion of static objects in the surrounding environment, potentially requiring the visual system to simultaneously track the locations of all objects in the environment. The visual system also requires more processing capacity because it has to simultaneously compute the visual space representation, explore the environment, implement motor controls, etc. Clearly, both challenges, coding complexity and capacity limitation, pose potential threats to our ability to efficiently judge absolute distances and implement actions. We hypothesize that the visual system overcomes both challenges by (a) spatially updating the moving observer's position using an allocentric, world-centered spatial coordinate system for representing visual space, and (b) using spatial working memory (the spatial image) during spatial updating. We will investigate both hypotheses in three specific aims. Aim 1: Investigate the implementation of the allocentric, world-centered spatial coordinate system. Aim 2: Investigate the factors affecting the spatial updating of visual space. Aim 3: Investigate the role of spatial-image memory in visual space perception. Our psychophysical experiments will measure human behavioral responses in the real 3D environment. This approach allows us to understand how our natural ecological niche, namely the ground surface, both constrains and supports space perception and action in the real world. We will test human observers' ability to judge target locations in impoverished visual environments under various conditions, such as while manipulating the observers' cognitive load (attention and memory) or the available visual and idiothetic (vestibular and proprioceptive) information, while they plan and/or execute self-motion (walking). The outcomes of this research will advance the space perception literature, bridge theoretical knowledge of visual space perception and memory-directed navigation (cognitive maps), and reveal the influence of vestibular and somatosensory signals. In turn, these theoretical advancements will provide insights for better understanding intermediate distance space perception as it relates to eye and visual impairments and their impact on mobility in the real 3D environment.
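To make the allocentric-updating hypothesis concrete, the sketch below shows the basic geometry: a target stored in world-centered coordinates does not change during self-motion; only the observer's position and heading are updated (e.g., from visual and idiothetic signals), and the target's egocentric distance and bearing are then recomputed from the stored location. The simple 2-D geometry and the names used are illustrative assumptions, not the project's model.

    import math

    def egocentric_from_allocentric(target_xy, observer_xy, heading_deg):
        # Distance and bearing of a world-fixed target relative to the
        # observer's current position and heading (bearing 0 = straight ahead).
        dx = target_xy[0] - observer_xy[0]
        dy = target_xy[1] - observer_xy[1]
        distance = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
        return distance, bearing

    # A target fixed at (0 m, 10 m) in world coordinates:
    print(egocentric_from_allocentric((0.0, 10.0), (0.0, 0.0), 90.0))  # 10 m, straight ahead
    print(egocentric_from_allocentric((0.0, 10.0), (0.0, 4.0), 90.0))  # 6 m ahead after walking 4 m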