2010 — 2015
Krusienski, Dean
HCC: Medium: Control of a Robotic Manipulator via a Brain-Computer Interface @ Old Dominion University Research Foundation
A brain-computer interface (BCI) is a system that allows users, especially individuals with severe neuromuscular disorders, to communicate and control devices using their brain waves. Over two million people in the United States are afflicted by such disorders, many of whom could greatly benefit from assistive devices controlled by a BCI. Over the past two years, it has been demonstrated that a non-invasive, scalp-recorded electroencephalography (EEG)-based BCI paradigm can be used by a disabled individual for long-term, reliable control of a personal computer. This paradigm allows users to select from a set of symbols presented in a flashing visual matrix by classifying the resulting evoked brain responses. One goal of this project is to establish that the same paradigm and techniques can be implemented directly to generate high-level commands for controlling a robotic manipulator in three dimensions according to user intent. A further goal is to show that such a BCI can provide superior dimensional control over currently available alternative BCI techniques, as well as a wider variety of practical functions for performing everyday tasks.
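As a rough illustration of how selections in such a flashing-matrix paradigm can be scored, the sketch below averages classifier outputs over repeated flashes and picks the strongest row and column. The classifier choice, array shapes, and function names are illustrative assumptions, not the project's actual pipeline.

```python
# Hypothetical scoring of a flashing-matrix (P300-style) selection.
# All function names, shapes, and parameters are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_symbol(epochs, flash_labels, lda, n_rows=6, n_cols=6):
    """Pick the matrix symbol whose row and column flashes evoked the
    strongest target-like responses.

    epochs       : (n_flashes, n_features) array, one vectorized EEG epoch
                   per row/column flash
    flash_labels : (n_flashes,) int array; 0..n_rows-1 index a flashed row,
                   n_rows..n_rows+n_cols-1 index a flashed column
    lda          : LinearDiscriminantAnalysis already fit on labeled
                   target/non-target training epochs
    """
    scores = lda.decision_function(epochs)  # target-likeness of each flash
    group_scores = np.array([scores[flash_labels == g].mean()
                             for g in range(n_rows + n_cols)])
    # Averaging over repeated flashes is essential: a single evoked
    # response is buried in background EEG noise.
    row = int(np.argmax(group_scores[:n_rows]))
    col = int(np.argmax(group_scores[n_rows:]))
    return row, col  # coordinates of the selected symbol in the matrix
```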
Electrocorticography (ECoG), electrical activity recorded directly from the surface of the brain, has been demonstrated in recent preliminary work to be another potentially viable control signal for a BCI. ECoG has been shown to have a superior signal-to-noise ratio and superior spatial and spectral characteristics compared to EEG. However, the types of signals currently used to operate EEG-based BCIs have not been characterized in ECoG. The PI believes ECoG signals can be used to improve the speed and accuracy of BCI applications, including, for example, control of a robotic manipulator. Thus, additional goals of this project are to characterize evoked responses obtained from ECoG, to use them as control signals for a simulated robotic manipulator, and to compare the level of control (speed and accuracy) achievable with the two recording modalities against each other and against competing BCI techniques. Because this is a collaborative effort with the Departments of Neurology and Neurosurgery at the Mayo Clinic in Jacksonville, the PI team will have access to a pool of ECoG grid patients from which to recruit participants for this study.
Broader Impacts: This research will make a number of contributions to the emerging field of BCI and will thus serve as a step toward providing severely disabled individuals with a new level of autonomy for communicating with others and performing everyday tasks, ultimately improving their quality of life dramatically.
2014 — 2017
Shih, Jerry; Krusienski, Dean
EAGER: Investigating the Neural Correlates of Musical Rhythms From Intracranial Recordings @ Old Dominion University Research Foundation
The project will develop an offline, and then a real-time, brain-computer interface to detect musical rhythms imagined in people's heads and translate these rhythms into actual sound. The project builds upon research breakthroughs in electrocorticographic (ECoG) recording technology to convert imagined music into synthesized sound. The researchers will recruit participants from a specialized group: patients with intractable epilepsy who are currently undergoing clinical evaluation of their condition at the Mayo Clinic in Jacksonville, Florida, and are thus uniquely positioned to use brain-computer interfaces based on ECoG recording techniques. This highly multidisciplinary project will make progress toward a "brain music synthesizer" that could have a significant impact in the neuroscience and musical domains and lead to creative outlets, alternative communication devices, and thus life improvements for people with severe disabilities.
Most brain-computer interfaces (BCIs) use surface-recorded electrophysiological measurements such as the electroencephalogram (EEG). However, while some useful signals can be extracted with such surface techniques, it is nearly impossible to accurately decode from them the intricate brain activity involved in processes such as language with the detail needed to achieve a natural, transparent translation of thought to device control. In contrast, intracranial electrodes such as ECoG are closer to the source of the desired brain activity and can produce signals that, compared to surface techniques, have superior spatial and spectral characteristics and signal-to-noise ratios. Research has already shown that intracranial signals can provide superior decoding capabilities for motor and language signals, and for BCI control. Because complex language and auditory signals (both perceived and imagined) have already been decoded from intracranial activity, it is conceivable that perceived and imagined musical content can be decoded from intracranial signals as well; this project will use ECoG to attempt exactly that.
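A minimal sketch of the kind of intracranial feature such decoding typically builds on is the high-gamma amplitude envelope; the band edges, filter order, and function name below are assumptions rather than this project's specification.

```python
# Extracting the high-gamma amplitude envelope, a standard ECoG feature.
# Band edges, filter order, and the assumed sampling rate are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(ecog, fs, lo=70.0, hi=170.0, order=4):
    """ecog: (n_samples, n_channels) array sampled at fs Hz (fs must
    exceed 2*hi); returns the per-channel high-gamma amplitude envelope."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, ecog, axis=0)   # zero-phase band-pass
    return np.abs(hilbert(filtered, axis=0))  # instantaneous amplitude
```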
2016 — 2019
Shih, Jerry; Krusienski, Dean
US-German Data Sharing Proposal: CRCNS Data Sharing: Revealing Spontaneous Speech Processes in Electrocorticography (Response) @ Old Dominion University Research Foundation
The uniquely human capability to produce speech enables swift communication of abstract and substantive information. Currently, nearly two million people in the United States, and far more worldwide, suffer from significant speech production deficits resulting from severe neuromuscular impairments due to injury or disease. In extreme cases, individuals may be unable to speak at all. These individuals would greatly benefit from a device that could alleviate speech deficits and enable them to communicate more naturally and effectively. This project will explore aspects of decoding a user's intended speech directly from the electrical activity of the brain and converting it to synthesized speech that could be played through a loudspeaker in real time to emulate natural speaking from thought. In particular, this project will uniquely focus on decoding continuous, spontaneous speech processes to achieve a more natural and practical communication device for the severely disabled.
The complex dynamics of brain activity and the fundamental processing units of continuous speech production and perception are largely unknown, and these dynamics make it challenging to investigate speech processes with traditional neuroimaging techniques. Electrocorticography (ECoG) measures electrical activity directly from the brain surface and covers an area large enough to provide insights about widespread networks for speech production and understanding, while simultaneously providing localized information for decoding nuanced aspects of the underlying speech processes. Thus, ECoG is instrumental and unparalleled for investigating the detailed spatiotemporal dynamics of speech. The research team's prior work has shown for the first time the detailed spatiotemporal progression of brain activity during prompted continuous speech, and that the team's Brain-to-text system can model phonemes and decode words. However, in pursuit of the ultimate objective of developing a natural speech neuroprosthetic for the severely disabled, research must move beyond studying prompted and isolated aspects of speech. This project will extend the research team's prior experiments to investigate the neural processes of spontaneous and imagined speech production. In conjunction with in-depth analysis of the recorded neural signals, the researchers will apply customized ECoG-based automatic speech recognition (ASR) techniques to facilitate the analysis of the large number of phones occurring in continuous speech. Ultimately, the project aims to define fundamental units of continuous speech production and understanding, illustrate functional differences between these units, and demonstrate that representations of spontaneous speech can be synthesized directly from the neural recordings. A companion project is being funded by the Federal Ministry of Education and Research, Germany (BMBF).
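As a toy illustration of the ASR-style framing described above, the sketch below classifies each neural feature frame as a phone and collapses consecutive repeats into a sequence; it is a hypothetical stand-in, not the team's actual Brain-to-text system.

```python
# Toy frame-wise phone decoding in the spirit of ECoG-based ASR: classify
# each neural feature frame as a phone, then collapse consecutive repeats.
# This is a hypothetical stand-in, not the team's Brain-to-text method.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def decode_phones(train_X, train_phones, test_X):
    """train_X, test_X : (n_frames, n_features) neural feature frames
    train_phones      : (n_frames,) phone label for each training frame"""
    clf = LinearDiscriminantAnalysis().fit(train_X, train_phones)
    frame_labels = clf.predict(test_X)  # per-frame phone hypotheses
    sequence = [frame_labels[0]]
    for phone in frame_labels[1:]:      # "aa aa t t" -> "aa t"
        if phone != sequence[-1]:
            sequence.append(phone)
    return sequence
```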
2019 — 2021
Krusienski, Dean
EAGER: EEG-Based Cognitive-State Decoding for Interactive Virtual Reality @ Virginia Commonwealth University
The increasing availability of affordable, high-performance virtual reality (VR) headsets creates great potential for applications including education, training, and therapy. In many applications, being able to sense a user's mental state could provide key benefits. For instance, VR environments could use brain signals such as the electroencephalogram (EEG) to infer aspects of the user's mental workload or emotional state; this, in turn, could be used to change the difficulty of a training task to make it better suited to each user's unique experience. Such EEG feedback could be valuable not just for training but also for improving people's performance in real applications including aviation, healthcare, defense, and driving. This project's goal is to develop methods and algorithms for integrating EEG sensors into current VR headsets, which provide a logical and unobtrusive framework for mounting these sensors. However, there are important challenges to overcome. For instance, EEG sensors in labs are typically used with a conducting gel, but for VR headsets these sensors will need to work reliably in "dry" conditions without the gel. Further, motion is not an issue in lab settings, but algorithms for processing the EEG data will need to account for people's head and body motion when they are using headsets.
To address these challenges, the project team will build on recent advances in dry EEG electrode technologies and motion-artifact suppression algorithms, focusing on supporting passive monitoring and cognitive-state feedback. Such passive feedback is likely to be more usable in virtual environments than active EEG feedback, both because people will be using other methods to interact with the environment directly and because passive EEG sensing is more tolerant of slow response times and decoding errors than active control. Prior studies have demonstrated the potential of EEG for cognitive-state decoding in controlled laboratory scenarios, but practical EEG integration for closed-loop neurofeedback in interactive VR environments requires addressing three critical questions: (1) can more practical and convenient dry EEG sensors achieve results comparable to wet sensors? (2) can passive EEG cognitive-state decoding be made robust to movement-related artifacts? and (3) can these decoding schemes be generalized across a variety of cognitive tasks and to closed-loop paradigms? To address these questions, classical cognitive tasks and more complex simulator tasks will be implemented and tested as novel, interactive VR environments. Building upon preliminary results that successfully characterized movement artifacts and decoded cognitive workload in interactive VR using active-wet EEG sensors, this work will further explore the practical integration of EEG sensors with room-scale VR headsets to balance data quality, cognitive decoding performance, ease of setup and use, and user comfort.
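As a hypothetical sketch of the passive-monitoring idea described above, the code below regresses headset motion signals out of the EEG and computes a simple frontal-theta to parietal-alpha band-power ratio as a workload index; the channel roles, frequency bands, and linear artifact model are all assumptions.

```python
# Hypothetical passive workload index with motion-artifact regression:
# project headset IMU signals out of the EEG, then compute a frontal-theta /
# parietal-alpha band-power ratio. Channel roles, bands, and the linear
# artifact model are all assumptions.
import numpy as np
from scipy.signal import welch

def workload_index(eeg, motion, fs, frontal_ch, parietal_ch):
    """eeg: (n_samples, n_channels); motion: (n_samples, n_refs) IMU
    reference signals; frontal_ch/parietal_ch: lists of channel indices."""
    w, *_ = np.linalg.lstsq(motion, eeg, rcond=None)  # artifact weights
    cleaned = eeg - motion @ w                        # regress out motion
    f, psd = welch(cleaned, fs=fs, axis=0, nperseg=int(2 * fs))
    theta = psd[(f >= 4) & (f < 8)][:, frontal_ch].mean()
    alpha = psd[(f >= 8) & (f < 13)][:, parietal_ch].mean()
    return theta / alpha  # higher ratio taken as higher workload (assumption)
```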
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2023
Krusienski, Dean; Shih, Jerry
US-German Research Proposal: Adaptive Low-Latency Speech Decoding and Synthesis Using Intracranial Signals (ADSPEED) @ Virginia Commonwealth University
Recent research has demonstrated that it is possible to synthesize intelligible speech sounds directly from invasive measurements of brain activity. However, these approaches exhibit a perceptible delay between brain activity and audible speech output, preventing natural spoken communication. Furthermore, the approaches generally require pre-recorded speech and thus cannot be directly applied to people who are unable to speak and generate such recordings. This project aims to develop methods for synthesizing speech from brain activity without perceptible processing delay and without relying on pre-recorded speech from the user. The ultimate goal is to develop a system that restores natural spoken communication to the millions of people who suffer from severe speech disorders, including those with complete loss of speech.
The project is organized into three research thrusts. The first thrust focuses on asynchronous and acoustics-free model training, where novel surrogates to the user's vocalized speech will be created using approaches based on dynamic time warping and the inference of intended inner-speech acoustics from corresponding textual representations. The second thrust focuses on online validation and user adaptation, where the existing low-latency speech decoding and synthesis scheme, which is not inherently adaptable, will be validated in a closed-loop fashion using online human-subject experiments. This will provide valuable insights into how the user responds and adapts to the artificial, synthesized speech output. The third thrust focuses on the development and testing of low-latency system-user co-adaptation schemes. Co-adaptation, where both the user and system adapt to optimize the synthesized output, is crucial for revealing the elusive representations of inner (i.e., imagined or attempted) speech in the absence of a reliable surrogate for modeling. As a result, this research will simultaneously advance the understanding of the neural representations of inner speech and, in turn, co-adaptive inner speech decoding toward the development of practical closed-loop speech neuroprosthetics.
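Since the first thrust names dynamic time warping (DTW) for aligning speech surrogates with neural activity, a minimal DTW sketch follows; the Euclidean local cost and feature representation are illustrative assumptions, not the project's actual alignment scheme.

```python
# Minimal dynamic time warping (DTW), the alignment technique named in the
# first thrust; the Euclidean local cost and feature choice are assumptions.
import numpy as np

def dtw(a, b):
    """a: (n, d), b: (m, d) feature sequences; returns the accumulated
    cost matrix D, where D[-1, -1] is the total alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j],      # skip a frame of a
                                 D[i, j - 1],      # skip a frame of b
                                 D[i - 1, j - 1])  # match frames
    return D[1:, 1:]
```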
A companion project is being funded by the Federal Ministry of Education and Research, Germany (BMBF).
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.