2010 — 2014
Leahy, Richard (co-PI); Pantazis, Dimitrios
Automatic Detection of Cortical Networks Across Frequencies in Audiovisual Speech Integration @ University of Southern California
The brain basis of perception is complex, and recent research suggests that neural processing depends on large-scale oscillations of neuronal units. Oscillatory cortical networks detected with electroencephalography and magnetoencephalography recordings often involve several frequency bands, indicating that a multivariate (multi-frequency) analytic approach would have better sensitivity in detecting neural effects than univariate analysis. However, popular connectivity measures, such as coherence and phase synchrony, typically analyze pairs of spatial locations and take into account a single quantity from each location, such as amplitude or phase within a specified frequency band. With funding from the National Science Foundation, Drs. Dimitrios Pantazis, Richard Leahy, and Jintao Jiang will develop robust multivariate statistical methods for detecting brain interactions in electroencephalography. Given the wealth of information in electroencephalography data, analysis using a single-frequency approach requires either prior knowledge of the frequencies at which interactions occur or, alternatively, a large number of tests, one for each possible type of interaction. In this project, the researchers are using canonical correlation analysis, which can find the optimal combinations of frequencies at one cortical site that best correlate with frequencies at another cortical site. In contrast to conventional methods of interaction analysis, this project is automating the identification of frequency bands that contribute significantly to cortical networks. The target application focuses on audiovisual speech integration effects. The multivariate methods developed in this proposal are being used to detect cortical sites of multisensory interaction and to account for different levels of phase-resetting from audiovisual speech stimuli with different stimulus onset asynchronies.
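The multi-frequency interaction analysis the abstract describes can be illustrated with a small canonical correlation analysis (CCA) routine. This is a minimal sketch, not the project's actual code: it assumes band-power features have already been extracted per trial at two cortical sites, and the function names and regularization constant are hypothetical.

```python
# Illustrative CCA between multi-frequency features at two cortical sites.
# X, Y: trials x frequency-band features (assumed precomputed; hypothetical setup).
import numpy as np

def _inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def canonical_correlations(X, Y, reg=1e-6):
    """All canonical correlations between feature sets X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized auto- and cross-covariance estimates
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Singular values of the whitened cross-covariance are the canonical correlations
    M = _inv_sqrt(Cxx) @ Cxy @ _inv_sqrt(Cyy)
    return np.clip(np.linalg.svd(M, compute_uv=False), 0.0, 1.0)
```

For linearly dependent sites the leading canonical correlation approaches 1, while for independent sites it stays near chance level; in practice significance would be assessed with permutation tests, as is standard in MEG/EEG connectivity analysis.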
This research will facilitate the detection of oscillatory cortical networks in both the normal and pathological brain. Changes in oscillatory brain activity have been reported in a wide array of neurological diseases, including epilepsy, schizophrenia, and Alzheimer's disease, and improved methodologies for detecting the presence of, and differences in, oscillatory activity and associated networks will in turn advance the understanding of these diseases and facilitate the development and assessment of therapeutic interventions. This effort brings together engineers and neuroscientists to tackle a broad range of scientific and technological problems, and as a result, the project offers opportunities for integrated interdisciplinary research training of doctoral students. Research results will be disseminated broadly through professional meetings and journals, and the developed research tools will be distributed to the research community through the open-source software BrainStorm.
2011
Pantazis, Dimitrios |
P41
Multivariate Connectivity Analysis @ University of California Los Angeles
This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. Primary support for the subproject and the subproject's principal investigator may have been provided by other sources, including other NIH sources. The Total Cost listed for the subproject likely represents the estimated amount of Center infrastructure utilized by the subproject, not direct funding provided by the NCRR grant to the subproject or subproject staff.
2015 — 2018
Oliva, Aude (co-PI); Torralba, Antonio (co-PI); Pantazis, Dimitrios
NCS-FO: Algorithmically Explicit Neural Representation of Visual Memorability @ Massachusetts Institute of Technology
As Lewis Carroll famously wrote in Through the Looking-Glass, "It's a poor sort of memory that only works backwards." On this side of the mirror, we cannot remember visual events before they happen; however, our work will help predict what people will remember as they see an image or an event. Our team of investigators in cognitive science, human neuroscience, and computer vision brings the synergistic expertise to determine how visual memories are encoded in the human brain at millisecond and millimeter resolution. Cognitive-level algorithms of memory would be a game changer for society, with applications ranging from accurate diagnostic tools to human-computer interfaces that foresee the needs of humans and compensate when cognition fails.
The project capitalizes on the spatiotemporal dynamics of encoding memories while providing a computational framework for determining the representations formed from perception to memory along the scale of the whole human brain. A fundamental function of cognition is the encoding of information, a dynamic and complex process underlying much of our successful interaction with the external environment. Here, we propose to combine three approaches to predict what makes an image memorable or forgettable: neuroimaging recording where encoding happens in the human brain (spatial scale), when it happens (temporal scale), and what types of computation are performed at the different stages of storage (computational scale). Characterizing the spatiotemporal dynamics of visual memorability, and determining the type of computation and representation a successful memorability system performs, is a crucial endeavor for both basic and applied sciences.
2021
Pantazis, Dimitrios |
R01: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
CRCNS: Resolving Human Face Perception With Novel MEG Source Localization Methods @ Massachusetts Institute of Technology
A brief glimpse at a face quickly reveals rich multi-dimensional information about the person in front of us. How is this impressive computational feat accomplished? A recently revised neural framework for face processing suggests that face form information, i.e., invariant face features such as gender, age, and identity, is processed through the ventral visual pathway, comprising the occipital face area, fusiform face area, and anterior temporal lobe face area. However, evidence from fMRI remains equivocal about when, where, and how the specific face dimensions of age, gender, and identity are extracted. A key property of a complex computation is that it proceeds via stages and hence unfolds over time. We recently investigated the computational stages of face perception in a MEG study (Dobs et al., Nature Comms, 2019) and found that gender and age are extracted before identity information. However, this temporal information has yet to be linked to the spatial information available from fMRI because of limitations in current methods for spatial localization of MEG sources. Here, we propose to overcome these limitations and provide the full picture of how face computations unfold over both time and space in the brain by developing novel methods for localizing MEG sources, leveraging our team's expertise in MEG and machine learning. In Aim 1, we will develop a new analytical MEG localization method called Alternating Projections that iteratively fits focal sources to the MEG data. In Aim 2, we will develop a novel data-driven MEG localization method based on geometric deep learning that reconstructs distributed cortical maps by learning statistical relationships in the non-Euclidean space of the cortical manifold. In Aim 3, we will first identify which method is most suitable for modeling human MEG face responses, using fMRI face localizers as ground truth.
We will then extract spatially and temporally accurate face processing maps to characterize the computational steps entailed in extracting age, gender, and identity information along the ventral visual pathway. A computationally precise characterization of the neural basis of face processing would be a landmark achievement for basic research in vision and social perception in humans. Insights into how face perception is accomplished in humans may further yield clues for how to improve AI systems conducting similar tasks. Further, the methods developed here may increase the power of MEG data to answer questions about the spatiotemporal trajectory of neural computation in the human brain.
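The iterative fitting described in Aim 1 can be illustrated with a toy alternating-projections loop: greedily select candidate source locations that best explain the measured field, then revisit each source while projecting out the contribution of the others. This is a schematic sketch under simplifying assumptions (a fixed-orientation discrete lead field, noiseless data); the function names are hypothetical and do not reproduce the proposed method.

```python
# Toy alternating-projections dipole fit (illustrative sketch only).
# B: sensors x time measurements; L: sensors x candidate-locations lead field.
import numpy as np

def residual(B, L, idx):
    """Project B onto the orthogonal complement of the selected lead-field columns."""
    if not idx:
        return B
    Ls = L[:, idx]
    P = np.eye(L.shape[0]) - Ls @ np.linalg.pinv(Ls)
    return P @ B

def alternating_projection_fit(B, L, n_sources, n_iter=10):
    """Return indices of fitted source locations (hypothetical helper)."""
    Ln = L / np.linalg.norm(L, axis=0, keepdims=True)
    sel = []
    # Greedy initialization: add one source at a time against the current residual
    for _ in range(n_sources):
        R = residual(B, L, sel)
        gains = np.linalg.norm(Ln.T @ R, axis=1)
        gains[sel] = -np.inf  # exclude already-selected sites
        sel.append(int(np.argmax(gains)))
    # Alternating refinement: re-fit each source with the others held fixed
    for _ in range(n_iter):
        for i in range(n_sources):
            others = sel[:i] + sel[i + 1:]
            R = residual(B, L, others)
            gains = np.linalg.norm(Ln.T @ R, axis=1)
            for j in others:
                gains[j] = -np.inf
            sel[i] = int(np.argmax(gains))
    return sorted(sel)
```

In this noiseless toy setting the loop recovers the active columns of the lead field exactly; real MEG localization must additionally handle sensor noise, free dipole orientations, and a continuous source space.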