2004 — 2007
Giudice, Nicholas A
F32 (Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas.)
Spatial Learning With Multiple Sensory Modalities @ University of California Santa Barbara
DESCRIPTION (provided by applicant): The aim of this research is to investigate spatial learning and navigation within and across sensory modalities, among sighted, low-vision, and blind participant groups. The general hypothesis posits that spatial information learned from individual sensory modalities leads to the formation of a common spatial representation across modalities. Experiments I and II establish whether some of the perceptual biases known for vision also manifest in tactile learning. This work represents the first effort to directly compare visual and tactile map learning and to investigate performance as a function of visual status. Experiment III uses a cross-modal learning paradigm to assess whether functionally equivalent spatial representations are formed from learning with vision, touch, and spatial language. This is the first study of its kind to directly compare environmental learning across three sensory modalities using information-matched environments. The final outcome of this research will significantly add to our understanding of spatial learning between the senses and speak to the extent of functional equivalence that exists across spatial representations. The results of these experiments will also benefit the development of navigation systems for the blind.
2008 — 2013
Giudice, Nicholas
CDI-Type II: Collaborative Research: Cyber Enhancement of Spatial Cognition For the Visually Impaired
Wayfinding is an essential capability for any person who wishes to have an independent lifestyle. It requires successful execution of several tasks, including navigation and object and place recognition, all of which necessitate accurate assessment of the surrounding environment. For a visually impaired person these tasks may be exceedingly difficult to accomplish, and there are risks associated with failure in any of them. Guide dogs and white canes are widely used for the purposes of navigation and environment sensing, respectively. The former, however, has costly and often prohibitive training requirements, while the latter can only provide cues about obstacles in one's immediate surroundings. Human performance on tasks that depend on visual information can be improved by sensing that supplies environmental cues, such as position, orientation, local geometry, and object descriptions, via appropriate sensors and sensor-fusion algorithms. Most work on wayfinding aids has focused on outdoor environments and has led to the development of speech-enabled GPS-based navigation systems that provide information describing streets, addresses, and points of interest. In contrast, the limited technology that is available for indoor navigation requires significant modification to the building infrastructure, whose high cost has prevented wide adoption.
This proposal adopts a multi-faceted approach to solving the indoor navigation problem for people with limited vision. It leverages expertise from robotics, computer vision, and blind spatial cognition, with behavioral studies on interface design guiding the discovery of information requirements and optimal delivery methods for an indoor navigation system. Designing perception and navigation algorithms, implemented on miniature, commercially available hardware, while explicitly considering the spatial cognition capabilities of the visually impaired, will lead to the development of indoor navigation systems that assist blind people in their wayfinding tasks while facilitating cognitive-map development.
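As a rough illustration of the position and orientation estimation such a system depends on, the minimal Python sketch below combines step-based dead reckoning with an occasional correction from a recognized landmark. The class, parameter names, and update rule are hypothetical simplifications, not the sensor-fusion method the proposal commits to.

```python
import math

# Minimal sketch: pedestrian dead reckoning with a landmark correction.
# All names and numbers are illustrative assumptions.

class DeadReckoner:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading  # heading in radians

    def step(self, stride_m, turn_rad):
        """Advance the pose estimate from one step of odometry (e.g., step counter + gyro)."""
        self.heading += turn_rad
        self.x += stride_m * math.cos(self.heading)
        self.y += stride_m * math.sin(self.heading)

    def correct(self, landmark_xy, measured_range, measured_bearing, gain=0.3):
        """Blend in a range/bearing observation of a known landmark (e.g., a recognized doorway)."""
        lx, ly = landmark_xy
        # Position implied by the landmark observation
        obs_x = lx - measured_range * math.cos(self.heading + measured_bearing)
        obs_y = ly - measured_range * math.sin(self.heading + measured_bearing)
        # Simple complementary update: pull the dead-reckoned pose toward the observation
        self.x += gain * (obs_x - self.x)
        self.y += gain * (obs_y - self.y)

if __name__ == "__main__":
    dr = DeadReckoner()
    for _ in range(10):                      # walk roughly 7 m down a corridor
        dr.step(stride_m=0.7, turn_rad=0.0)
    dr.correct(landmark_xy=(7.5, 0.0), measured_range=0.8, measured_bearing=0.0)
    print(f"pose estimate: ({dr.x:.2f}, {dr.y:.2f})")
```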
2009 — 2014
Worboys, Michael; Giudice, Nicholas
III: Small: Information Integration and Human Interaction For Indoor and Outdoor Spaces
The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The navigational support provided should make the transition between inside and outside feel seamless to the user. The approach integrates indoor and outdoor spaces on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the "affordances" that a space provides. For example, an outside pedestrian walkway affords the same function as an inside corridor.
Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of the framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments. These experiments also enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework, and their findings will help validate its efficacy.
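To make the idea of a unified model of place and connection concrete, the sketch below represents indoor and outdoor places as nodes of a single graph whose edges carry affordance labels, so routing treats the indoor/outdoor transition like any other connection. The class names and affordance labels are illustrative assumptions, not the project's actual formal model.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass(frozen=True)
class Place:
    name: str
    kind: str          # "room", "corridor", "walkway", ...
    indoor: bool

@dataclass
class PlaceGraph:
    edges: dict = field(default_factory=dict)   # Place -> list of (Place, affordance)

    def connect(self, a, b, affordance):
        # Affordances (e.g., "walkable", "door") abstract over the indoor/outdoor divide
        self.edges.setdefault(a, []).append((b, affordance))
        self.edges.setdefault(b, []).append((a, affordance))

    def route(self, start, goal):
        """Breadth-first route over places, ignoring whether edges are indoor or outdoor."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt, _ in self.edges.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

if __name__ == "__main__":
    office  = Place("Office 214", "room", indoor=True)
    hallway = Place("2nd-floor corridor", "corridor", indoor=True)
    lobby   = Place("Lobby", "room", indoor=True)
    walkway = Place("Campus walkway", "walkway", indoor=False)

    g = PlaceGraph()
    g.connect(office, hallway, "door")
    g.connect(hallway, lobby, "walkable")
    g.connect(lobby, walkway, "exit")   # the indoor/outdoor transition is just another edge
    print(" -> ".join(p.name for p in g.route(office, walkway)))
```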
Results will be distributed using the project Web site (www.spatial.maine.edu/IOspace) and will be incorporated into graduate-level courses on human interaction with mobile devices, and shared with public school teachers participating in the University of Maine's NSF-funded RET (Research Experiences for Teachers) program. The research teams are working with two companies and one research center on technology transfer for building indoor-outdoor navigation tools with a wide range of applications, including those for persons with disabilities.
2009 — 2010
Giudice, Nicholas A; Klatzky, Roberta L; Loomis, Jack M.
R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.)
Multimodally Encoded Spatial Images in Sighted and Blind @ University of California Santa Barbara
DESCRIPTION (provided by applicant): The proposed research investigates a representation of spatial layout that serves to guide action in the absence of direct perceptual support. We call this representation a "spatial image." Humans can perceive surrounding space through vision, hearing, and touch. Environmental objects and locations are internally represented by modality-specific "percepts" that exist as long as they are supported by concurrent sensory stimulation from vision, hearing, and touch. When such stimulation ceases, as when the eyes close or a sound source is turned off, the percepts also cease. A spatial image, however, continues to exist in the absence of the percept. For example, when one views an object and then closes the eyes, one experiences the continued presence of the object at its perceptually designated location. Although the phenomenological properties of the spatial image are known only to the observer, functional characteristics of spatial images can be revealed through systematic investigation of the observer's behavior on a spatial task such as spatial updating. For example, the observer might try to walk blindly to the location of a previously viewed object along any of a variety of paths. A sizeable body of research indicates that people have an impressive ability to do so. An important property of spatial images is that they function equivalently in many cases, despite variations in the input sensory modality. In previous work, the PIs have shown that distinct input modalities, like vision and audition, induce equivalent performance on a variety of spatial tasks. Perhaps even more surprising, spatially descriptive language was found to produce spatial images that are functionally equivalent, or nearly so, as revealed by performance on spatial tasks. Our hypothesis is that the different spatial modalities of vision, touch, hearing, and language all feed into a common amodal representation. Spatial images can also be created by retrieving information about spatial layout from long-term memory. Importantly, blind individuals are able to perform many spatial tasks because spatial images are not restricted to the visual modality. Although most of our understanding of spatial images comes from laboratory experiments that seem unrepresentative of everyday life, it is important to realize the pervasiveness of spatial images in the lives of sighted and blind people. For both populations, there are many circumstances in which maintaining a spatial image of the immediately surrounding environment (e.g., working at the office, playing sports) allows individuals to rapidly redirect their activity to objects without having to re-initiate search for them. This leads to fluency of action with minimal effort. Our proposed research will further our knowledge about spatial images produced by visual, haptic, auditory, and language input, as well as those activated by retrieval of spatial information from long-term memory. Our research consists of theoretically based experiments involving sighted and blind subjects. All of the experiments rely on logic to make inferences about internal processes and representations from observed behavior, such as verbal report, joystick manipulation, and more complex spatial actions, like reaching, pointing, and walking. Our experiments are grouped into three topics. The first topic is concerned with establishing further properties of spatial images.
Four of the five experiments under this topic are concerned with whether touch and vision produce spatial images that are functionally similar; the fifth will investigate possible interference between spatial images from perception and those from long-term memory. The five experiments within the second topic exploit different paradigms and logic for testing whether spatial images from different sensory modalities are amodal (retaining no information about the encoding modality) or modality-specific (retaining information about the encoding modality). The third topic is concerned with whether spatial images are equally precise in all directions around the head, in contrast to visual images, which are thought to be of high precision only when located in front of the head. The primary significance of this research will be the expansion of knowledge of multimodal spatial images, which so far have received very little scientific attention in comparison with visual images, about which hundreds of scientific papers have been published. This knowledge will further our understanding of the extent to which spatial cognition is similar in sighted and blind people. It will also be useful for researchers and technologists who are developing assistive technology, including navigation systems, for blind and visually impaired people. More generally, this knowledge will lead to improved tests of spatial cognition that will be useful in better understanding the deficits in knowledge and behavior resulting from brain damage and diseases such as Alzheimer's.
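For readers unfamiliar with spatial updating, the short sketch below works through the underlying geometry: given a target location encoded before movement, it computes where the target lies relative to the observer after a translation and a turn. The function name, coordinate convention, and example path are illustrative assumptions, not part of the proposed experiments.

```python
import math

def update_target(target_xy, translation_xy, turn_rad):
    """Return the target's egocentric (x, y) after the observer translates, then turns.

    Coordinates are egocentric: +x is ahead, +y is to the left; turns are counter-clockwise.
    """
    # Shift the target by the observer's translation, then rotate into the new heading
    tx = target_xy[0] - translation_xy[0]
    ty = target_xy[1] - translation_xy[1]
    cos_t, sin_t = math.cos(-turn_rad), math.sin(-turn_rad)
    return (tx * cos_t - ty * sin_t, tx * sin_t + ty * cos_t)

if __name__ == "__main__":
    # Target seen 3 m straight ahead; observer walks 2 m forward, then turns 90 degrees left.
    x, y = update_target((3.0, 0.0), (2.0, 0.0), math.pi / 2)
    distance = math.hypot(x, y)
    bearing = math.degrees(math.atan2(y, x))   # negative bearing means "to the right"
    print(f"target is now {distance:.1f} m away at {bearing:.0f} degrees")
```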
2010 — 2016
Moratz, Reinhard; Beard-Tisdale, Mary-Kate; Giudice, Nicholas
CDI-Type II: Collaborative Research: Perception of Scene Layout by Machines and Visually Impaired Users
The project investigates computational methods for object detection, spatial scene construction, and natural-language spatial descriptions derived from real-time visual images to describe prototypical indoor spaces (e.g., rooms and offices). The primary application of this research is to provide blind or visually impaired users with spatial information about their surroundings that may otherwise be difficult to obtain from non-visual sensing. Such knowledge will assist in the development of accurate cognitive models of the environment and will support better-informed execution of spatial behaviors in everyday tasks.
A second motivation for the work is to contribute to improving the spatial capacities of computers and robots. Computers and robots are similarly "blind" to images unless they have been provided some means to "see" and understand them. Currently, no robotic system can reliably perform high-level processing of spatial information on the basis of image sequences; finding an empty chair in a room, for example, means not only detecting an empty chair in an image but also localizing the chair in the room and executing an action to reach it. The guiding tenet of this research is that a better understanding of how humans acquire spatial knowledge from visual images and form concepts of spatial awareness can also be applied to reducing the ambiguity and uncertainty of information processing by autonomous systems.
A central contribution of this work is to make the spatial information content of visual images available to the visually impaired, a rapidly growing demographic of our aging society. In an example scenario, a blind person and her guide dog are walking to her doctor's office, which she has not previously visited. At the office she needs information to perform essential tasks such as finding the check-in counter, available seating, or the bathroom. No existing accessible navigation system is able to describe the spatial parameters of an environment and help detect and localize objects in that space. Our work will provide the underlying research and elements needed to realize such a system.
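The sketch below illustrates one piece of such a pipeline: turning object detections that carry camera-relative positions into a spoken-style scene description. The detection format, thresholds, and phrasing rules are assumptions made for illustration, not the project's design.

```python
import math

def describe(detections):
    """detections: list of (label, x_m, z_m), with x to the right and z ahead of the camera."""
    if not detections:
        return "No objects detected."
    phrases = []
    for label, x, z in detections:
        distance = math.hypot(x, z)
        angle = math.degrees(math.atan2(x, z))          # negative is left of camera axis
        side = ("straight ahead" if abs(angle) < 15
                else "to your left" if angle < 0 else "to your right")
        article = "an" if label[0].lower() in "aeiou" else "a"
        phrases.append(f"{article} {label} about {distance:.0f} meters {side}")
    return "There is " + ", and ".join(phrases) + "."

if __name__ == "__main__":
    # Hypothetical detections from an object detector plus depth estimates
    scene = [("empty chair", -1.0, 2.0), ("check-in counter", 2.5, 4.0)]
    print(describe(scene))
```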
2014 — 2017
Giudice, Nicholas
CHS: Small: Non-Visual Access to Graphical Information Using a Vibro-Audio Display
Vision impairment is estimated by the World Health Organization as affecting 12 million people in the United States and as many as 285 million people worldwide, and these numbers are projected to double by 2030 due to the aging of our population. Lack of access to non-textual material such as graphs and maps is a major impediment for the blind, because the ability to apprehend and accurately interpret such information is critical for success in the classroom and in the workplace, as well as for independent travel. The inability to exploit this information helps explain why only about 11% of blind or low-vision persons have a bachelor's degree, why only 25% of blind people are employed, and why almost 70% of blind individuals do not navigate independently outside of their home. A major step toward improving these numbers, as well as the overall quality of life for members of the blind and low-vision community, would be to solve the longstanding challenge of affording low-cost and effective access to key graphical material.
The PI's goal in this project is to develop and evaluate a highly intuitive tool for doing just that. To this end, he will explore a multimodal combination of vibro-tactile, audio, and kinesthetic cues that can be generated by modern touchscreen devices (especially smartphones and tablets) to convey useful visual information in real time. Benefits of this approach include portability, affordability, and flexibility of use for multiple critical and common applications such as those enumerated above. The PI argues that considering these design factors from the outset, in conjunction with principled empirical investigations that evaluate and enhance information perceptibility, frequent prototype testing with iterative refinement, and tight involvement of members of the target population in all phases of the research, will ensure that project outcomes significantly reduce the graphical information gap between blind persons and their sighted peers. The research will make important contributions to our understanding of how blind and low-vision individuals process non-visual information, which is essential for rendering perceptually salient multimodal graphics, and will also help establish best practices both for rendering these graphics using a vibro-audio interface implemented on touchscreen-enabled devices and for similar future specialized interface development efforts.
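A bare-bones sketch of how a vibro-audio display of this kind might work is shown below: as a finger moves over the touchscreen, regions of an onscreen graphic trigger vibration and speech. The renderer class and the callbacks are hypothetical stand-ins; a real implementation would use a platform's haptics and text-to-speech services.

```python
from dataclasses import dataclass

@dataclass
class Region:
    label: str
    x0: float
    y0: float
    x1: float
    y1: float   # bounding box in screen coordinates

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

class VibroAudioRenderer:
    def __init__(self, regions, vibrate, speak):
        self.regions = regions
        self.vibrate = vibrate          # callbacks into platform haptics / TTS (assumed)
        self.speak = speak
        self._last = None

    def on_touch(self, x, y):
        hit = next((r for r in self.regions if r.contains(x, y)), None)
        if hit is not None:
            self.vibrate()                      # continuous cue: the finger is "on" the graphic
            if hit is not self._last:
                self.speak(hit.label)           # announce a region only when first entered
        self._last = hit

if __name__ == "__main__":
    bar_chart = [Region("Bar A: 40 units", 10, 100, 60, 300),
                 Region("Bar B: 75 units", 80, 40, 130, 300)]
    r = VibroAudioRenderer(bar_chart,
                           vibrate=lambda: print("[buzz]"),
                           speak=lambda text: print(f"[speech] {text}"))
    for x, y in [(30, 200), (35, 210), (100, 100)]:   # a simulated finger trace
        r.on_touch(x, y)
```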
2017 — 2019
Giudice, Nicholas
Collaborative Research: ECR DCL Level 2: Perceptual and Implementation Strategies For Knowledge Acquisition of Digital Tactile Graphics For Blind and Visually Impaired Students
Students with disabilities often have fewer opportunities for experiential learning, an important component of quality STEM education. With continued shifts toward the use of digital media to supplement instruction in STEM classrooms, much of the content remains inaccessible, particularly for students with visual impairments. Tactile graphics, supported by emerging technology, are an effective innovation for providing more complete access to important information and materials. Tactile graphics are images that use raised surfaces to convey non-textual information such as maps, paintings, graphs, and diagrams. Touchscreen-based smart devices allow visual information to be digitally and dynamically represented via tactile, auditory, visual, and kinesthetic feedback. Tactile graphic technology embedded in touchscreen devices can be leveraged to make STEM content more accessible to blind and visually impaired (BVI) students.
This project will develop a learner-centered, perceptually-motivated framework addressing the requirements for students with blindness and visual impairments to access graphical content in STEM. Using TouchSense technology, the investigators will create instructional materials using tactile graphics and test them in a pilot classroom of both sighted and BVI students. The investigators will work with approximately 150 students with visual impairments to understand the kind of feedback that is most appropriate for specific content in algebra (coordinate plane), cell biology, and geography. Qualitative research methods will be used to analyze the video-based data set.
This project is supported by NSF's EHR Core Research (ECR) program and the Discovery Research PreK-12 Program. The ECR program emphasizes fundamental STEM education research that generates foundational knowledge in the field. Investments are made in critical areas that are essential, broad and enduring: STEM learning and STEM learning environments, broadening participation in STEM, and STEM workforce development. The program supports the accumulation of robust evidence to inform efforts to understand, build theory to explain, and suggest intervention and innovations to address persistent challenges in STEM interest, education, learning and participation. The Discovery Research PreK-12 program (DRK-12) seeks to significantly enhance the learning and teaching of science, technology, engineering and mathematics (STEM) by preK-12 students and teachers, through research and development of innovative resources, models and tools (RMTs). Projects in the DRK-12 program build on fundamental research in STEM education and prior research and development efforts that provide theoretical and empirical justification for proposed projects.
2017 — 2018
Giudice, Nicholas
I-Corps: Touchscreen-Based Graphics For the Blind and Visually-Impaired
The broader impact/commercial potential of this I-Corps project is in resolving the long-standing graphical access problem for millions of blind and visually-impaired (BVI) people. Lack of access to these substantial informational components of daily life represents one of the biggest challenges to the independence and productivity of BVI individuals. This project addresses this vexing accessibility issue through a viable information access solution built on commercial, low-cost touchscreen-based smart computing devices such as smartphones and tablets. The solution will assist BVI people by potentially providing dynamic access to digital school books and test materials, graphics in vocational settings, digital maps, and graphical contents of printed materials. The broader impact of this project is that it will promote empowerment of BVI individuals by supporting increased educational advancement, vocational opportunities, enhanced quality of life, and overall greater independence.
This I-Corps project will explore the commercial potential and market fit for a touchscreen-based graphics screen reader. This solution will allow blind people to freely explore and access graphical information via vibration and auditory feedback on a smartphone or tablet. Through a battery of usability, psychophysical, and technical experiments, the technology has been shown to be effective compared to existing solutions and accurate in conveying informative graphical materials such as graphs, shapes, patterns, and indoor maps. The focus of the proposed I-Corps project is to understand the commercial fit and usability of the solution in meeting end-user needs and problems.
2018 — 2021
Doore, Stacy; Dimmel, Justin; Giudice, Nicholas
A Remote Multimodal Learning Environment to Increase Graphical Information Access For Blind and Visually Impaired Students
There are many limitations for students who are blind or visually impaired (BVI) in accessing complex STEM graphical information in the classroom or workplace. This longstanding problem arises from reliance on inaccessible and outdated learning materials, the need for costly specialized devices, and adherence to an outdated educational service model. To address these issues, this project will investigate the development and evaluation of an innovative remote learning system based on the use of multiple sensory channels to strategically present information via auditory, linguistic, touch, and enhanced visual channels. The research will focus specifically on the optimization of multimodal information presentation and perception, separating sensory output based on its unique information-processing characteristics for conveying different types of stimuli. The first project goal is to increase the quality of STEM instruction for BVI students by determining perceptually motivated learning supports that promote non-visual knowledge acquisition of STEM graphical and spatial information (learning goal). The second project goal is to increase access to graphical and spatial STEM content through creation of an innovative remote multimodal interface for communicating the conceptual meaning of visual information (technology goal). The project outcomes will contribute to theories of non-visual learning and multisensory processing and provide a clear translational path to the development of more efficient, intuitive, and usable multimodal interfaces for both blind and sighted users. The application of the results will help address the severe under-representation of BVI individuals in STEM-related disciplines, and the 70% unemployment rate of this demographic, by providing a new, low-cost, and accessible technology platform for communicating non-visual graphical STEM materials.
The researchers will answer the following interconnected questions: 1) What is the best information content to be conveyed by different modal outputs for maximizing perceptual saliency, learnability, interpretation, and representation of STEM graphical materials? Once the system is optimized in the lab, 2) How well does the optimized multimodal learning system perform in a remote deployment environment in conveying graphical STEM materials to BVI learners? And 3) Does the remote learning system increase the level of comprehension of STEM graphical content as compared to traditional BVI instructional methods? Both quantitative and qualitative data about the optimization process and the remote technology system will be collected and analyzed, including user response metrics on speed and accuracy, user experience data, and STEM graphical assessment instruments adapted for BVI students. The first phase of the research will investigate multimodal information processing in order to establish best practices for information delivery and non-visual graphical learning efficiency, with experiments comparing graphical information presented in different modalities for three core STEM graphical themes: graphs, diagrams, and maps. The second phase of experiments will investigate the remote learning system's efficacy as well as evaluate user performance on graphical STEM learning measures and key usability and satisfaction metrics. This project has the broader goals of increasing independence for BVI learners and other students, with or without disabilities, who might benefit from a remote multimodal learning environment, and of developing a new tool for supporting large-scale research and assistive technology evaluation with BVI human subjects, thereby dramatically increasing scientists' ability to recruit and work with a much larger population of BVI users than is currently possible with lab-based studies.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2022
Giudice, Nicholas; Corey, Richard
CHS: Small: Improving User Trust of Autonomous Vehicles Through Human-Vehicle Collaboration
Fully autonomous vehicles (FAVs) represent a key future direction for transportation, especially for populations who are currently disenfranchised by traditional, manually operated vehicles, such as people with visual impairments or older adults. Unfortunately, the benefits of self-driving technology for these demographics have yet to be adequately considered in the design of FAVs. This concern is part of a larger problem in which the majority of people simply do not trust that self-driving cars will meet their needs. This project explores new ways to share decision-making information between people and FAVs. Instead of existing systems where the FAV makes decisions in a "black box" way that users do not see or understand, the project will develop a Human-Vehicle Collaboration (HVC) framework in which effective communication about the vehicle's decision-making process improves users' sense of agency and their decisions about whether to trust the FAV. By studying people's reactions to and understanding of FAV driving decisions, and by developing algorithms and interfaces that help FAVs communicate in ways that address those reactions, the HVC framework will promote appropriate levels of trust in FAVs for all users while providing substantial benefits for improving accessibility for under-served populations.
The HVC framework will be designed and evaluated using a high-fidelity driving simulator that tests new interaction methods for sharing information during key driving events. Research will use this simulator and experimental platform to manipulate a host of variables relating to decision states while driving. Human data will be collected on reaction time, interpretation of vehicle decision-making, physiological measures based on galvanic skin response and heart rate, and pre-post survey measures for assessing trust in fully autonomous vehicles. Results will form the foundation for developing and testing HVC profiles that provide individualized interactions and collaborations during driving events. These profiles and the resulting guidelines represent a key deliverable for the project, as they will be designed from the outset to improve trust, accessibility, and the overall optimization of fully autonomous vehicles.
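As one illustration of how physiological measures like these could be summarized per driving event, the sketch below z-scores galvanic skin response and heart rate against a resting baseline and averages them into a single arousal index. This analysis, and every name and number in it, is an assumption for illustration; the abstract does not specify how the physiological data will be analyzed.

```python
from statistics import mean, pstdev

def zscore(value, baseline):
    """Standardize a reading against a list of baseline samples."""
    sd = pstdev(baseline)
    return 0.0 if sd == 0 else (value - mean(baseline)) / sd

def arousal_index(gsr_event, hr_event, gsr_baseline, hr_baseline):
    """Average of z-scored GSR (microsiemens) and heart rate (bpm) during a driving event."""
    return 0.5 * (zscore(gsr_event, gsr_baseline) + zscore(hr_event, hr_baseline))

if __name__ == "__main__":
    gsr_rest = [2.1, 1.8, 2.4, 2.0, 2.2]     # baseline samples before the simulated drive
    hr_rest = [66, 72, 70, 68, 74]
    # Readings captured while the simulated FAV makes an unannounced lane change
    print(f"arousal index: {arousal_index(3.4, 84, gsr_rest, hr_rest):+.2f}")
```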
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2024
Giudice, Nicholas
Collaborative Research: Investigating Inclusive Data Science Tools to Overcome Statistics Anxiety
The objective of this collaborative project is to design, create, and evaluate the Relatable Online Accessible Data Science (ROADS) platform. ROADS is designed to make online data science easier for undergraduate and graduate students to understand, with a focus on overcoming statistics anxiety. Reasons for statistics anxiety include limited math and computing background, self-esteem, gender, ethnicity, and disability, in addition to the design of statistics platforms. Data science literacy is integral for success in post-secondary science, technology, engineering, and math (STEM) fields, as well as for developing critical, industry-relevant computational thinking skills. In collaboration with partner organizations spanning higher education, accessibility, and data science, the project will evaluate the extent to which ROADS lowers barriers to successful participation. ROADS will be iteratively and inclusively designed, using both formative and summative empirical evaluations at all stages of development to inform the end-user experience.
This project will investigate a new platform that uses relatable data science language, is readily available online through the web, and is accessible through visual, auditory, and touch feedback to students with disabilities. It brings together investigators in Computer Science, Mechanical Engineering, Education, and Cognitive Neuroscience to investigate three critical areas: (1) clarity and human comprehension of data science representations; (2) anxiety reduction; and (3) accessibility to people with disabilities. For evaluation, a series of studies is planned, involving 180 undergraduate students without disabilities and 120 with disabilities. Further, the researchers plan to study the outputs of data science tools, in conjunction with anxiety, involving approximately 1,000 undergraduate or graduate students, with and without disabilities. The project will incorporate regular feedback from an engaged advisory board, including a data science governance group (ACM Data Science Task Force), the Association on Higher Education and Disability, and disabilities services offices at four institutions. Project outcomes will include the development of design implications for inclusive design of data science tools and pedagogy at the undergraduate and graduate level, with a specific focus on reducing anxiety associated with data science.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.