2021 — 2026
Plis, Sergey; Dotson, Vonetta (co-PI); Calhoun, Vince; Turner, Jessica (co-PI); Morris, Robin
CREST Center for Dynamic Multiscale and Multimodal Brain Mapping Over the Lifespan [D-MAP] @ Georgia State University Research Foundation, Inc.
The Centers of Research Excellence in Science and Technology (CREST) program supports the enhancement of research capabilities of minority-serving institutions through the establishment of centers that effectively integrate education and research. CREST promotes the development of new knowledge, enhancements of the research productivity of individual faculty, and an expanded presence of students historically underrepresented in science, technology, engineering, and mathematics disciplines. With National Science Foundation support, Georgia State University establishes the Center for Dynamic Multiscale and Multimodal Brain Mapping over the Lifespan [D-MAP] to study the links between early brain development, adulthood, and senescence across the lifespan. The Center aims to understand brain structure and connectivity across multiple scales through three synergistic research studies. The proposed work will promote undergraduate development in preparation for STEM education and careers; create opportunities for graduate students to work in multidisciplinary environments; and develop education and training modules that can be integrated into existing graduate and undergraduate curricula.
The study of the links between early brain development, adulthood, and senescence throughout the lifespan is an important and understudied area.

Subproject 1 (Unimodal Brain Dynamics) develops methods to advance understanding of time-varying brain connectivity and the evolution of whole-brain connectivity patterns over time. New methods are needed that can explicitly incorporate spatial information into dynamics, estimate potential nonlinear relationships, and integrate dynamic information across scales. These methods will be applied to study the short- and long-term dynamics of reading acquisition.

Subproject 2 (Multimodal Data Fusion) develops novel multivariate approaches to model linked changes in multimodal measures and their trajectories over the lifespan. Key contributions include the incorporation of network subspaces, flexible approaches to identify links between data with mismatched dimensionality, and the development of multimodal models that leverage deep learning to capture more complex relationships. Initial emphasis will be on multimodal MRI and EEG/MEG data. The focused application is to study the multimodal signatures of cognition and mood.

Subproject 3 (Predictive Neuroimaging) focuses specifically on approaches that leverage lifespan data for individualized prediction. The subproject exploits large open data repositories to develop predictive fingerprints of development and aging along multiple dimensions. Anticipated contributions include novel predictive multimodal models that evolve both within and among individuals, advanced visualization approaches to enhance interpretability, and the development and use of neuroinformatics infrastructure for reproducible large-N brain imaging data analysis of various populations. The focused application will be to use neuroimaging to predict aspects of linguistic processing.
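The abstract does not name a specific estimator for time-varying connectivity; as an illustrative baseline only (not the Center's actual method), dynamic functional connectivity is commonly approximated with sliding-window correlations. The NumPy sketch below uses hypothetical names and toy data:

```python
import numpy as np

def sliding_window_fc(ts: np.ndarray, win: int, step: int) -> np.ndarray:
    """Estimate time-varying functional connectivity as a stack of
    windowed correlation matrices. ts has shape (T, R): T time points,
    R brain regions. Returns shape (n_windows, R, R)."""
    T, R = ts.shape
    starts = range(0, T - win + 1, step)
    return np.stack([np.corrcoef(ts[s:s + win].T) for s in starts])

# Toy example: 200 time points from 5 regions of random "activity".
rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 5))
fc = sliding_window_fc(ts, win=50, step=10)
print(fc.shape)  # → (16, 5, 5): 16 windows of 5x5 correlation matrices
```

Each 5x5 slice of `fc` is one snapshot of connectivity; tracking how these matrices evolve across windows is the simplest form of the "evolution of whole-brain connectivity patterns over time" described above.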
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2022 — 2025
Morris, Robin |
Collaborative Research: Improving Speech Technology For Better Learning Outcomes: The Case of AAE Child Speakers @ Georgia State University Research Foundation, Inc.
The lack of reading proficiency seen in children of underserved school districts has lasting impacts on students' performance across subjects. Low literacy is an especially pressing issue for African American students. Interactive spoken language systems offer a powerful tool for assisting early childhood education, freeing up teachers' time, and engaging students in repeated opportunities for learning. These systems involve both Automatic Speech Recognition and Text-to-Speech components. The goal of this research is to improve the performance of such systems for young speakers of African American English (AAE) so that automated oral literacy assessment can be developed. The research has important societal and technological impacts. It will enhance the usability of speech technology in early education for AAE-speaking children, providing a model for better supporting students with diverse dialects. Many under-resourced children do not have access to adequate reading and language assessments, and the proposed work will address these issues by creating methods for adapting spoken language technology to AAE children, increasing fairness in speech technology on a broader scale. The work has strong outreach and dissemination programs and will train undergraduate and graduate students in interdisciplinary research in Electrical and Computer Engineering, Linguistics, Education, and Psychology.
Challenges facing children's Automatic Speech Recognition (ASR) stem from (1) the scarcity of child speech data, which means current recognition models are trained on data collected from adult speakers, and (2) the wider range of intra- and inter-speaker variability that children display relative to adults. ASR performance is especially poor for children who are non-native English speakers or who at times transition into dialects, such as AAE, that differ from what ASR systems are typically trained on. In addition, most dialog systems built on text-to-speech (TTS) technology are designed using General American English (GAE) voices, which minority children may not identify with. In the high-stakes area of education, these considerations impact the effectiveness of technology for different groups.

The work will utilize a new and continuously growing database of AAE children's speech to research the impact of spoken language systems on children's learning outcomes. On the learning side, the research will highlight the impact of dialect on literacy assessment. On the technology side, the work will yield novel machine learning algorithms for low-resource tasks. Specifically, this project will develop data augmentation techniques that increase the amount of training data available for low-resource tasks, and data normalization techniques that improve ASR performance for AAE child speakers. The work on TTS will explore new methods of disentangling speaker and dialect effects on the spectral realization of phrases, modeling dialect density as a continuous quantity (rather than treating dialect as a categorical variable) and separately accounting for pronunciation and prosodic factors. Methods found effective for TTS will be leveraged in the data augmentation work for ASR and explored as a diagnostic in literacy assessment.
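The abstract does not specify which augmentation techniques will be developed; purely as a hedged illustration, speed perturbation is a standard way to enlarge low-resource ASR training sets. The sketch below (hypothetical function name; NumPy linear interpolation stands in for a production resampler) shows the idea:

```python
import numpy as np

def speed_perturb(waveform: np.ndarray, factor: float) -> np.ndarray:
    """Resample a mono waveform by linear interpolation to simulate a
    speaking-rate change. factor > 1 yields a faster, shorter signal;
    factor < 1 a slower, longer one."""
    n_out = int(round(len(waveform) / factor))
    old_idx = np.arange(len(waveform))
    new_idx = np.linspace(0, len(waveform) - 1, n_out)
    return np.interp(new_idx, old_idx, waveform)

# Augment one utterance at rates often used for ASR (0.9x, 1.0x, 1.1x),
# tripling the training data derived from a single recording.
rng = np.random.default_rng(0)
utterance = rng.standard_normal(16000)  # 1 s of toy audio at 16 kHz
augmented = {f: speed_perturb(utterance, f) for f in (0.9, 1.0, 1.1)}
for f, wav in augmented.items():
    print(f, len(wav))
```

Each perturbed copy is fed to training as if it were a new utterance; the same mechanism generalizes to pitch or vocal-tract-length perturbation, which may be closer to what child-speech augmentation requires.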
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.