1998 — 2001
Klatzky, Roberta; MacWhinney, Brian (co-PI)
A Model for Research Training in Psychology, from Classroom to Project @ Carnegie-Mellon University
We will provide instruments for a new 3,600-square-foot teaching laboratory located in the Department of Psychology at Carnegie Mellon University, in order to develop state-of-the-art facilities for teaching research methods in psychology. Education will occur both in formal classes, largely conducted in a new classroom facility, and in individual student projects, conducted in independent-study rooms within the same area. Our model is for students to take courses in the teaching laboratory that prepare them for subsequent independent work. Instrumentation will comprise both software and hardware. A principal goal of the planned curriculum development is to enhance the department's research methods courses, which are a cornerstone of our undergraduate program. We currently teach research methods in cognitive psychology, developmental psychology, and social and health psychology. New instrumentation will greatly improve those courses and allow us to add a methods course in cognitive neuroscience. Other curricular developments will occur in courses utilizing brain imaging, courses teaching computational methods, and faculty-supervised independent research projects. Course materials and tools that are developed will be disseminated over the Internet as a model for applying particular tools to the teaching of research methods. A Web site will be created to publicize results from classroom curricular development and, when appropriate, tools developed in student projects. The site will provide information about each methods course, including the syllabus, assignments that use the instrumentation (software and hardware), and faculty-developed tutorials for using the instruments. Local and national outcomes will be assessed.

1998 — 2003
Klatzky, Roberta; Hollis, Ralph
High-Fidelity Haptic Interaction With Three-Dimensional Environments Using Lorentz Magnetic Levitation @ Carnegie-Mellon University
This project (NSF award 9802191) builds on results from the previous NSF grant IRI-9420869, "Magnetic Levitation Haptic Interfaces," which developed a haptic-interface (force/torque feedback) interaction capability for computer users based on Lorentz magnetic levitation. The user interacts with a single moving part, floating above the desktop, with three rotational and three translational degrees of freedom (DOF). Combined with advances in physically based simulation methods, the device gave computer users convincingly real haptic (sense of touch) interaction with computers through 6-DOF position input and 6-DOF force/torque output to the user's hand, at a resolution of approximately 10 microns and a position bandwidth of approximately 75 Hz. The present effort seeks to make several technology enhancements and to measure the psychophysical effectiveness of this approach quantitatively. Haptic interaction with virtual environments is being compared with telemanipulation environments, and both with real environments, to clarify quantitatively the differences among these modes. Methods for robustly "caching" local, haptically relevant regions of the virtual environment are being explored to smoothly synchronize the visual and haptic displays. Psychophysical methods are employed in the design of the human-computer interface, providing a controlled setting for the measurements. Finally, detailed psychophysical measurements are being performed with the developed experimental system to quantify the degree of reality provided. The knowledge gained from this work provides needed information for the engineering science of haptic interface design while helping to elucidate the nature of haptic interaction itself.
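
As a concrete illustration of the interaction style, here is a minimal sketch of an impedance-style servo loop of the kind such 6-DOF devices run: read the flotor pose, compute a spring-damper penalty force against a virtual wall, and command a wrench. The device API (read_pose, set_wrench), gains, and rates are hypothetical placeholders for illustration, not the project's actual software.

```python
# Minimal sketch of an impedance-style haptic servo loop (hypothetical API).
import time

K_WALL = 2000.0   # virtual wall stiffness, N/m (illustrative value)
B_WALL = 5.0      # damping, N*s/m (illustrative value)
WALL_Z = 0.0      # wall surface at z = 0 in device coordinates
RATE_HZ = 1000    # servo rate; haptic loops typically run at ~1 kHz

def wall_force(z, vz):
    """Spring-damper penalty force for a plane at z = WALL_Z."""
    depth = WALL_Z - z
    if depth <= 0.0:          # flotor is above the wall: no contact
        return 0.0
    f = K_WALL * depth - B_WALL * vz
    return max(f, 0.0)        # a wall pushes, never pulls

def servo_loop(device):
    period = 1.0 / RATE_HZ
    z_prev, t_prev = device.read_pose().z, time.monotonic()
    while True:
        pose = device.read_pose()                 # 6-DOF pose input
        t = time.monotonic()
        vz = (pose.z - z_prev) / max(t - t_prev, 1e-6)
        device.set_wrench(fx=0.0, fy=0.0, fz=wall_force(pose.z, vz),
                          tx=0.0, ty=0.0, tz=0.0)  # 6-DOF wrench output
        z_prev, t_prev = pose.z, t
        time.sleep(period)
```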

2004 — 2009
Klatzky, Roberta; Hollis, Ralph
Quantitative Analysis of 3D Haptic Performance and Perception @ Carnegie-Mellon University
The project concerns high-performance haptic (sense of touch) interaction with three-dimensional (3D) computed (virtual) and real environments. The principal research objective is to determine quantitatively how much reality is achievable in 3D haptic/visual virtual and remote real environments. The approach is based on six-degree-of-freedom (6-DOF) haptic interaction technology using Lorentz magnetic levitation. The interaction engages both the proprioceptive (kinesthetic) senses of the fingers, hand, and wrist and the tactile senses in the skin. Lorentz levitation provides higher bandwidths and motion resolutions than are available with traditional technologies. The project includes i) adding direct force-torque sensing, combining favorable aspects of both impedance- and admittance-type devices; ii) the creation of a highly realistic 3D peg-in-hole virtual environment; iii) development of an elastic deformation environment with buckling phenomena; iv) comparison of subjects' interaction with virtual, real, and remote-real environments; and v) performance of a suite of psychophysical experiments to quantitatively measure the degree of reality provided. The quantitative characterization of haptic interaction transparency afforded by this approach contrasts markedly with purely engineering measurements or purely subjective evaluations. The research results provide knowledge for the engineering science of haptic interface design while helping to elucidate the nature of the human haptic interaction process. This could lead to future widespread use of haptic technology for computer-augmented design, medical training, telemanipulation and telepresence systems, vehicle piloting simulation, and the exploration of complex multidimensional data sets. The project also has an important educational impact, including the study of haptics in undergraduate and graduate coursework.
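
The impedance/admittance contrast in item i) can be made concrete with a complementary sketch: where the impedance loop above reads position and outputs force, an admittance-style loop measures the user's applied force directly and integrates virtual dynamics to command a position. Again, the device API (read_force, set_position) and parameter values are hypothetical illustrations.

```python
# Minimal sketch of an admittance-style loop (hypothetical API and values).
import time

M_VIRT = 0.5   # virtual mass, kg (illustrative)
B_VIRT = 2.0   # virtual damping, N*s/m (illustrative)
DT = 0.001     # 1 kHz integration step

def admittance_step(f_user, x, v):
    """One Euler step of the virtual object's dynamics under the user's force."""
    a = (f_user - B_VIRT * v) / M_VIRT
    v += a * DT
    x += v * DT
    return x, v

def admittance_loop(device):
    x, v = 0.0, 0.0
    while True:
        f_user = device.read_force().z    # direct force-torque sensing
        x, v = admittance_step(f_user, x, v)
        device.set_position(z=x)          # device servos to the virtual state
        time.sleep(DT)
```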

2006 — 2007
Klatzky, Roberta; MacWhinney, Brian (co-PI); Behrmann, Marlene (co-PI)
PAC: Embodiment, Ego-Space, and Action @ Carnegie-Mellon University
The majority of research on human perception and action treats these as separate functions. Less often considered is that humans interact with a perceived world in which they themselves are part of the perceptual representation. Evidence has been mounting that self-representation is fundamental to both executing and understanding spatially directed action. It has been theorized to play a role in reaching and grasping, locomotion and navigation, infant imitation, spatial and social perspective taking, and neurological dysfunctions as diverse as phantom limb pain and autism. Behavioral research has revealed a number of tantalizing outcomes that point to a role for the representation of the body in basic human function; neuroscientists have identified multiple sensorimotor maps of the body within the cortex and specific brain areas devoted to the representation of space and place; and developmental researchers have identified neonatal behaviors indicating a representation of self and have traced the course of spatially oriented action across the early years. What is needed is a shared effort to merge the perspectives of behavioral science, neuroscience, and developmental psychology in order to further our understanding of self-representation. With support from the National Science Foundation, the 2006 Carnegie Symposium will provide a forum in which researchers from these various perspectives can come together to share their findings, ideas, aspirations, and concerns.

2010 — 2015
Klatzky, Roberta
HCC: Medium: Collaborative Research: Surface Haptics via Tractive Forces @ Carnegie-Mellon University
Surface haptics, or the creation of virtual haptic effects on physical surfaces, is a topic of rapidly growing importance in human-computer interaction because of the increasingly widespread use of touch screens. Touch is at once an elegant and maddening interface modality. It is elegant in its simplicity: one can make a selection or tap a button or key with no intervening mouse or joystick. Moreover, touch (especially multi-finger touch) supports gestures, such as swiping and expanding, which are satisfyingly natural. It is maddening, however, due to the lack of the tactile and kinesthetic feedback that is so critical to natural touch. Typing on a virtual keyboard, for instance, is typically an experience of visually guided hunt-and-peck with liberal use of the backspace key.
In this research the PIs will further develop a new class of surface haptic devices, called xPaDs, that promise to enrich the use of touch screen and touchpad interfaces for sighted as well as blind users. xPaDs are notable because they provide controllable shear forces between the fingertips and an ordinary sheet of glass. By controlling shear force in response to a measure of fingertip position (which may be obtained using a variety of existing technologies), it is possible to simulate a huge array of virtual effects; examples include toggle switches that flip from one state to another (each state is a "potential well" on the glass surface that pulls the finger to a given location), and contours that can be easily traced.
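
A back-of-the-envelope sketch of the toggle-switch example: model each state as a detent, and render a shear force that pulls the fingertip toward the nearer detent. The stiffness, detent positions, and toy settling dynamics below are illustrative assumptions, not measured device parameters.

```python
# Sketch of a "potential well" toggle rendered as a shear-force field
# over fingertip position x (all values hypothetical).
DETENTS = (-0.01, 0.01)   # two stable finger positions on the glass, meters
K = 400.0                 # well stiffness, N/m (illustrative)

def shear_force(x):
    """Pull the fingertip toward the nearer detent (its 'potential well')."""
    target = min(DETENTS, key=lambda d: abs(x - d))
    return -K * (x - target)

# A finger released just right of center settles into the +1 cm detent
# (toy overdamped dynamics, not device code):
x = 0.001
for _ in range(200):
    x += 0.0005 * shear_force(x)
print(round(x, 4))   # -> 0.01
```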
The heart of the current project lies in the systems engineering that will lead to practical and effective devices capable of controlling a force at one or more fingertips, and in the psychophysical and application-based studies that will teach us how these capabilities may best be used. xPaDs are sophisticated dynamic systems that employ ultrasonic vibrations to modulate friction, synchronized with in-plane vibrations, to produce controllable force vectors. The PIs will address the challenges of controlling force individually at each fingertip, of producing xPaDs with large surface area, and of minimizing energy consumption and audible noise generation. They will use the idea and methodology of "pop-out" experiments to find haptic primitives, that is, features the human perceptual system can extract with minimal or no perceptual load. The PIs will measure the information transmission capacity of surface haptic devices treated as symbolic channels. And they will explore the ability of the perceptual system to "bind" surface haptic features presented to different fingertips into a meaningful, coherent whole. These studies will position the PIs to investigate a set of applications for the blind.
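
The "symbolic channel" measurement has a standard form in psychophysics: estimate information transfer (in bits) as the mutual information between stimulus and response in a confusion matrix. The sketch below uses that textbook estimator with made-up data; it is not the PIs' experimental code.

```python
# Information transfer (mutual information, bits) from confusion counts.
import math

def info_transfer(confusions):
    """I(S;R) in bits from a raw stimulus-by-response count matrix."""
    n = sum(sum(row) for row in confusions)
    p_s = [sum(row) / n for row in confusions]           # stimulus marginals
    p_r = [sum(col) / n for col in zip(*confusions)]     # response marginals
    it = 0.0
    for i, row in enumerate(confusions):
        for j, c in enumerate(row):
            if c:
                p_ij = c / n
                it += p_ij * math.log2(p_ij / (p_s[i] * p_r[j]))
    return it

# Four haptic symbols, mostly identified correctly (hypothetical data):
matrix = [[18, 1, 1, 0],
          [2, 16, 1, 1],
          [0, 2, 17, 1],
          [1, 0, 2, 17]]
print(f"{info_transfer(matrix):.2f} bits per presentation")
```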
Broader Impacts: Computer interfaces for the blind often rely heavily on speech, which necessarily presents information serially. The PIs argue that a haptic surface can augment a speech-based interface with critical spatial information. They will study the editing and reading of mathematical expressions, the locating of key content on a web page, the navigation of intersections, and the planning of routes with tools such as Google Maps. In addition, the PIs will develop a low-cost xPaD development kit, make the plans and code available on the Internet, and develop a high school enrichment unit based upon these materials.

2015 — 2019
Klatzky, Roberta
CHS: Large: Collaborative Research: TextureShop: Tools for the Composition and Display of Virtual Texture @ Carnegie-Mellon University
When we interact with the physical world, touch is a vitally important sensory channel, but when we interact with the digital world that is not yet the case. Historically, this situation may have been due principally to inadequate tactile displays, but that limitation is quickly disappearing. Increasingly, the principal limitation is the lack of tactile content. The goal of this collaborative research, which involves scientists at three institutions, is to empower the content creator by enabling people to perform the same sorts of operations with tactile textures that they routinely perform with photographs. Those operations include "capturing" a texture, building a mathematical representation of it, creating and displaying synthetic versions that feel very much like the original, enhancing it in various ways (e.g., making it rougher or more velvety), and ultimately "composing" novel textures that nonetheless feel realistic and credible. As a tangible step in this direction, an open source, open hardware project begun under prior NSF support will be continued and expanded. That project resulted in the distribution of surface haptic devices to about a dozen different labs, leading to a variety of research studies. In this project, a low-cost surface haptic display and a variety of applications and software tools will be distributed to about 50 early adopters in the research community. Those individuals will be engaged in this research (e.g., by helping to "tag" various textures) and will be empowered to carry out their own research. In addition, workshops will be organized at major human-computer interaction conferences to support the growing surface haptics community.
This work is timely and compelling for a number of reasons. First, scientific understanding of the physical and neuronal bases of texture perception has advanced considerably in recent years. For instance, the relationships between vibrations on the skin (produced when a finger slides across a surface), spike timing in afferent neurons, and high-level percepts such as recognition of a specific texture have recently been elucidated. Second, "surface haptic" technologies for displaying texture to the bare fingertips have also advanced significantly and can now display complex stimuli across the full bandwidth of tactile acuity. Third, the prevalence of touch screen interfaces has created a plethora of applications, such as children's e-books, interfaces for the blind, games, and automobile control panels, which would be well served by high-quality tactile content. The merit of this research is that it will provide a principled foundation for both the creation and manipulation of that content. Contributions will include: the development of a "tactile camera" that is able to capture the relevant frictional and vibratory data from which realistic textures can be recreated; a novel mathematical representation of the salient aspects of texture, as well as algorithms for synthesizing artificial textures on the basis of that representation; a suite of techniques for enhancing aspects of texture by direct operation on the mathematical representation, interpolation between multiple textures, and interaction with audio cues; and finally a set of tools for composing novel textures, including search, texture combination, and scale transformation.
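
One plausible form such a mathematical representation could take, drawn from the haptic-texture literature rather than from this project's specifics, is an autoregressive (AR) model fit to a recorded friction-induced vibration signal and then excited with white noise to synthesize arbitrarily long texture signals. A minimal sketch with toy data:

```python
# Hedged sketch: AR model of a texture vibration signal (toy data).
import numpy as np

def fit_ar(signal, order=10):
    """Least-squares fit of AR coefficients: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([signal[order - k - 1:len(signal) - k - 1]
                         for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_std = np.std(y - X @ coeffs)
    return coeffs, resid_std

def synthesize(coeffs, resid_std, n, rng=np.random.default_rng(0)):
    """Drive the fitted AR filter with white noise to generate vibration."""
    order = len(coeffs)
    out = np.zeros(n + order)
    for t in range(order, n + order):
        out[t] = out[t - order:t][::-1] @ coeffs + rng.normal(0, resid_std)
    return out[order:]

# Toy "recorded" vibration: a decaying noisy oscillation (hypothetical data).
t = np.arange(5000) / 10_000.0
recorded = np.sin(2 * np.pi * 150 * t) * np.exp(-3 * t) + \
           0.05 * np.random.default_rng(1).normal(size=t.size)
a, s = fit_ar(recorded)
texture = synthesize(a, s, n=10_000)   # resynthesized texture vibration
```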

2015 — 2019
Klatzky, Roberta; Hebert, Martial (co-PI); Siewiorek, Daniel (co-PI); Satyanarayanan, Mahadev
CSR: CHS: Large: Wearable Cognitive Assistance @ Carnegie-Mellon University
This research explores the deep technical challenges of a new class of computing systems that integrate a wearable device (such as Google Glass) with cloud-based processing to guide a user step by step through a complex task. Although the concept is easy to describe, many challenges in computer systems, computer vision, and human-computer interaction must be overcome to make it a reality. Human cognition is a remarkable feat of real-time processing. It involves the synthesis of outputs from real-time analytics on multiple sensor stream inputs. An assistive system amplifies human cognition with compute-intensive processing that is so responsive that it fits into the inner loop of the human cognitive workflow. In its most general form, cognitive assistance is a very broad and ambitious concept that could be applied to virtually all facets of everyday life. As a pioneering effort, this research is more narrowly focused on user assistance for well-defined tasks that require specialized knowledge and/or skills, and for which task state and task-relevant actions are fully accessible to computer vision algorithms.
The research is organized into four broad thrusts. The first thrust decouples and cleanly separates low-level mobile computing and cloud computing issues, such as resource management, network latency, placement, provisioning, scalability, and load balancing, from the task-centric foci of the other thrusts. The second thrust focuses on the computer vision research necessary to address the challenges of wearable cognitive assistance. Vision is the dominant sensing modality for the kinds of tasks addressed in this research, but the validation experiments will include proof-of-concept use of other sensing modalities such as audio and location. The third thrust focuses on task description, tracking, sequencing, and user guidance. Its goal is to create a set of generalizable principles and tools that can be applied to a wide range of tasks. Matching task assistance to task demands and user capabilities will be integral to this thrust. The fourth thrust involves continuous integration of research from the first three thrusts and applies it toward end-to-end validation on a series of tasks of increasing sophistication and difficulty. This thrust involves close collaboration with industry partners.
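
As an illustration of the third thrust's task tracking and sequencing, a task can be modeled as a finite-state machine whose transitions fire on symbols emitted by the vision pipeline. The step names, detector labels, and guidance strings below are hypothetical; this is a sketch of the idea, not the project's tooling.

```python
# Hedged sketch: task tracking as a finite-state machine over vision events.
ASSEMBLY_TASK = {
    "start":     {"base_placed": "base_done"},
    "base_done": {"axle_inserted": "axle_done"},
    "axle_done": {"wheel_attached": "finished"},
}
GUIDANCE = {
    "start":     "Place the base plate on the work surface.",
    "base_done": "Insert the axle through the center hole.",
    "axle_done": "Attach the wheel to the axle.",
    "finished":  "Done. Task complete.",
}

def track(detections, task=ASSEMBLY_TASK, state="start"):
    """Advance task state on recognized events; ignore irrelevant ones."""
    yield GUIDANCE[state]
    for event in detections:          # e.g., labels from a vision model
        if event in task.get(state, {}):
            state = task[state][event]
            yield GUIDANCE[state]

for prompt in track(["hand_seen", "base_placed", "axle_inserted",
                     "wheel_attached"]):
    print(prompt)
```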
This research will advance computer science by producing scientific insights, algorithms, system designs, implementation techniques, and experimental validations at the intersection of computer systems (including mobile computing, cloud computing, virtual machines, operating systems, wireless networking, and sensor networks), vision technologies (including computer vision and machine learning), and human-computer interaction. More broadly, society will benefit from wearable cognitive assistance in areas such as health care training, industrial troubleshooting, and consumer product assembly. From an educational viewpoint, this research offers many unique opportunities to train graduate and undergraduate students to approach problems from a broad cross-disciplinary viewpoint.

2021 — 2024
Klatzky, Roberta; Satyanarayanan, Mahadev; Lucia, Brandon
CNS Core: Medium: A User-Centric Adaptation Framework for Edge-Native Applications @ Carnegie-Mellon University
This proposal addresses a new class of applications called "edge-native applications" that have enormous societal value in domains such as assisting handicapped users, enforcing privacy in video streams, and enhancing the productivity of just-in-time manufacturing. Edge-native applications are simultaneously compute-intensive, bandwidth-hungry, and latency-sensitive. These attributes pose a fundamental challenge to scalability. The goal of this proposal is to develop new techniques for efficiently supporting large numbers of users of such applications, without hurting their quality of experience (QoE).
The proposed research is organized into four thrusts. Thrust-1 investigates how on-device processing and adaptive sampling of sensor data can reduce load on edge infrastructure, while minimally impacting QoE. This thrust also creates an API between the operating system and applications for adaptation. Thrust-2 explores how to efficiently and seamlessly move work from overcommitted edge infrastructure to underutilized sites. It investigates both an application-transparent approach that is based on virtual-machine (VM) encapsulation, and an application-optimized approach that seeks to be frugal in data transmission. Thrust-3 creates tools and mechanisms to study QoE. Using machine learning on history-based data that is dynamically collected, it builds models of user-specific and application-specific tradeoffs for mapping application fidelity to QoE. It also creates tools for QoE debugging of edge-native applications. Thrust-4 explores how multi-fidelity applications that dynamically vary QoE can be evaluated without performing user studies. It develops a new evaluation methodology that is based on the concept of synthetic users, also known as "droids".
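
To illustrate the Thrust-1 idea of adaptive sampling, the sketch below maps reported edge utilization to an offered camera frame rate, shedding load gracefully as the infrastructure saturates. The thresholds, rates, and linear backoff policy are illustrative assumptions, not project values.

```python
# Hedged sketch: adaptive frame-rate sampling driven by edge load.
MAX_FPS, MIN_FPS = 30.0, 2.0

def target_fps(edge_load):
    """Map edge utilization in [0, 1] to an offered frame rate."""
    if edge_load < 0.5:
        return MAX_FPS                      # plenty of headroom
    if edge_load > 0.95:
        return MIN_FPS                      # shed nearly all load
    # Linear backoff between the two thresholds.
    frac = (edge_load - 0.5) / 0.45
    return MAX_FPS - frac * (MAX_FPS - MIN_FPS)

for load in (0.3, 0.6, 0.9, 0.99):
    print(f"edge load {load:.0%} -> send {target_fps(load):.1f} fps")
```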
Through close partnership with industry and local government, this research will accelerate the emergence of transformative edge-native applications. Through integration with education and curriculum development, this research will provide unique learning opportunities for students in Computer Science, Electrical and Computer Engineering, and Human-Computer Interaction at the undergraduate and graduate levels. Because of the central role of applications in this research, it offers many research opportunities for a diverse group of individuals, including those from under-represented groups.
Software developed in the course of this research will be released open source via GitHub (http://github.com). Benchmarks and experimental data will be released on an institutional website (http://elijah.cs.cmu.edu). All results generated through this research will be available and actively maintained for at least five years after the conclusion of the project or after the publication of the data, whichever is first.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.