2021 — 2024
Li, Xin; Wang, Shuo (co-PI)
HCC: Small: Toward Computational Modeling of Autism Spectrum Disorder: Multimodal Data Collection, Fusion, and Phenotyping @ West Virginia University Research Corporation
Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder affecting one out of 54 children in the US. ASD is arguably one of the greatest public health challenges of our time, imposing a significant burden on children and their families, as well as on the current healthcare and educational systems. Despite decades of research, many fundamental issues related to ASD remain unresolved, from early diagnosis to personalized intervention. The heterogeneity of ASD has contributed significantly to the difficulty of identifying the specific traits associated with this disorder (i.e., phenotyping), genetically or behaviorally. Given its apparently increasing prevalence and unknown etiology, modeling the ASD phenotype has remained a long-standing open problem in autism research. An improved understanding of ASD phenotypes can shed novel insight into both more accurate diagnosis and more effective intervention for ASD. This project aims to understand ASD biomarkers based on behavioral measurements and sensor-gathered data, including neural recordings, eye tracking, video/audio capture, and other sensor data. Through multi-disciplinary collaboration, this project will lead to transformative advances in behavioral science and data-driven computational neuroscience for ASD phenotyping. Improved and earlier diagnosis can substantially improve the quality of life of individuals with ASD and their communities. This project will provide an excellent platform to train both graduate and undergraduate students at the intersection of neuroscience and computer science.
This project will address the problem of ASD modeling by taking a multimodal data-driven approach that integrates behavior imaging data (eye tracking, audio/video) with neuroimaging data such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). The research team will carry out multimodal data fusion to extract ASD-relevant biomarkers without feature engineering, and data-driven modeling to obtain an understanding of the neural underpinnings of ASD, especially the relationship between behavioral and sensor-oriented signals. This multimodal data-based modeling will combine complementary information about salient ASD biomarkers, such as dynamic functional connectivity, across different modalities. To avoid heuristics-based feature engineering for ASD phenotyping, the researchers will use two-stream deep learning techniques along with explainable AI (XAI). XAI will provide interpretations of the decisions made by the deep learning algorithms to identify the traits associated with ASD. In addition to ASD diagnosis, multimodal neuroimaging will enable investigations into the richness and complexity of ASD, referred to here as ASD phenotyping.
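The two-stream fusion idea described above can be sketched schematically: each modality (e.g., behavioral features and neuroimaging features) passes through its own feature-extraction stream, and the streams are concatenated before a shared classifier head. This is a minimal illustrative sketch in NumPy, not the project's actual architecture; all dimensions, weights, and variable names here are hypothetical, and a real implementation would use a trained deep network rather than random projections.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(x, w, b):
    """One feature-extraction stream: linear projection followed by ReLU."""
    return np.maximum(x @ w + b, 0.0)

# Hypothetical inputs: a batch of 8 subjects with 64-d behavioral
# (eye-tracking) features and 128-d neuroimaging (fMRI) features.
x_behavior = rng.normal(size=(8, 64))
x_neuro = rng.normal(size=(8, 128))

# Hypothetical (untrained) stream weights projecting each modality to 32-d.
w1, b1 = rng.normal(size=(64, 32)) * 0.1, np.zeros(32)
w2, b2 = rng.normal(size=(128, 32)) * 0.1, np.zeros(32)

h_behavior = stream(x_behavior, w1, b1)   # shape (8, 32)
h_neuro = stream(x_neuro, w2, b2)         # shape (8, 32)

# Late fusion: concatenate the two streams, then a linear classifier head
# producing two logits per subject (e.g., ASD vs. typically developing).
fused = np.concatenate([h_behavior, h_neuro], axis=1)   # shape (8, 64)
w_out, b_out = rng.normal(size=(64, 2)) * 0.1, np.zeros(2)
logits = fused @ w_out + b_out                          # shape (8, 2)
```

The key design point is that each modality keeps its own stream (so modality-specific structure is preserved) while the fusion step lets the classifier exploit complementary information across modalities.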
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.