2000 — 2002 |
Poeppel, David E |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally preliminary, short-term projects and are non-renewable. |
A Timing Basis For Auditory Processing Asymmetry @ University of Maryland College Park Campus
The goal of this research program is to develop a theoretically motivated and neurobiologically grounded framework for understanding auditory processing in general, and speech perception in particular, in the context of the cerebral lateralization of auditory perceptual processes. One of the generalizations that has emerged about the cortical basis of speech is that left-hemisphere regions, especially in the temporal and frontal lobes, are differentially better at processing information that changes rapidly in time. Because important aspects of the speech signal are characterized by rapid spectro-temporal changes (e.g., the formant transitions associated with consonant-vowel syllables), it has been proposed that what makes the left hemisphere well suited to the analysis of the speech signal is its sensitivity to temporal signal properties. Two concepts derived from psychophysics and neurophysiology are exploited to develop a physiological account of temporal processing asymmetries: temporal integration windows and neuronal oscillations. Temporal integration windows provide time-based, logistical constraints on central nervous system processing. Oscillations have been implicated in recent years in a variety of neurophysiological contexts, including as potential mechanisms for binding sensory information to yield coherent percepts. It is hypothesized that oscillations reflect the quantization of processing into appropriate temporal windows. The experiments use high-density electroencephalography (EEG) to characterize the auditory evoked responses elicited by complex sounds, including speech. The experiments are designed to explore the idea that the left and right hemispheres differentially analyze sensory information in the time domain.
The overall hypothesis is that the left and right temporal lobes have temporal integration windows of different sizes (25-35 ms and 150-250 ms, respectively), and that this will be reflected in asymmetric oscillatory responses in the gamma versus theta spectral bands. This "asymmetric sampling in time" model will be investigated in the speech domain using continuous spoken language and in the non-speech domain using ripple stimuli. Continuous speech is the most ecologically natural spoken-language stimulus. By comparison, ripples are the auditory analogue of visual gratings and provide well-characterized, dynamic, broadband stimuli in which the relevant temporal parameters can be manipulated. The use of these two types of stimuli permits us to test whether the observed rhythmic activity is conditioned in significant ways by stimulus properties or occurs independently of stimulus-related acoustic variation.
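The predicted gamma-versus-theta asymmetry can be illustrated with a toy band-power analysis. The sketch below is a minimal illustration, not the project's actual pipeline: the sampling rate, the exact band edges, and the synthetic "left" and "right" channels are all assumptions for demonstration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 500  # sampling rate in Hz (assumed)

def band_power(x, lo, hi, fs):
    """Mean power of x after band-pass filtering to [lo, hi] Hz."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    return np.mean(y ** 2)

# Synthetic 10 s channels: the "left" channel carries a 30 Hz (gamma-band)
# component, the "right" a 5 Hz (theta-band) component, plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
left = np.sin(2 * np.pi * 30 * t) + 0.5 * rng.standard_normal(t.size)
right = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

# Laterality index per band: (L - R) / (L + R); positive = left-dominant.
for name, lo, hi in [("theta", 4, 8), ("gamma", 25, 45)]:
    pl = band_power(left, lo, hi, fs)
    pr = band_power(right, lo, hi, fs)
    print(name, (pl - pr) / (pl + pr))
```

On these synthetic signals the gamma index comes out left-dominant and the theta index right-dominant, mirroring the hypothesized pattern; real MEG/EEG data would of course be far noisier.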
|
0.911 |
2002 — 2018 |
Poeppel, David E |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Cortical Mechanisms in Speech Perception: MEG Studies
DESCRIPTION (provided by applicant): Communicating using spoken language feels effortless and automatic to healthy listeners with no hearing deficits or language-processing problems. But the subjective ease belies the number and complexity of the many operations that, in aggregate, constitute speech perception. Transforming the acoustic signals that arrive at the ear into the abstract representations that underpin language processing requires a large number of intermediate steps. When one or several of these intermediate operations malfunction, pathologies of hearing, speech perception, or language processing can be the consequence. Developing a theoretically well-motivated and mechanistic, neurobiologically grounded understanding of this system remains one of the foundational challenges of the cognitive neuroscience of hearing, speech, and language. The research program outlined in this grant proposal strives to further develop a brain-based model of speech perception that is motivated by the insights of linguistic and psychological research, on the one hand, and is sensitive to the physical (acoustic) and neurobiological constraints of speech processing, on the other. The proposed experiments use the noninvasive electrophysiological neuroimaging technique magnetoencephalography (MEG), paired with magnetic resonance imaging (MRI). MEG is particularly useful because it combines very high temporal resolution (necessary because speech processing is fast) with good spatial resolution (necessary to understand the anatomic organization of the system). We investigate the speech processing system in the context of three specific research aims. The focus of the first aim is to understand more precisely the functional architecture of speech processing in the brain. In particular, we want to understand the computational contribution of the critical regions mediating the processing of speech, both in perception and production.
Furthermore, we test whether the same architectural (dual stream) model helps us understand both the perception of speech (old news) and the covert (internal) and overt production of speech (new news). The studies in the second aim test whether intrinsic brain rhythms (neural oscillations) that one observes (in animal and human studies) have a causal role in speech processing, as has recently been hypothesized. For example, the alignment of slow brain rhythms with the input signal may be necessary to understand speech (by parsing the continuous spoken input into the right 'chunk size' for further analysis). In the third aim, we turn to the perennial puzzle of brain asymmetry and its role in speech processing. We evaluate, building on the studies of oscillations, whether left and right auditory regions execute the same or different analyses of the speech input. As a group, these studies serve to further specify the 'parts list' of auditory and speech processing, with a special emphasis on timing and its implications for health and disease.
|
0.958 |
2007 |
Poeppel, David E |
R56 Activity Code Description: To provide limited interim research support based on the merit of a pending R01 application while the applicant gathers additional data to revise a new or competing renewal application. This grant will underwrite highly meritorious applications that, if given the opportunity to revise their application, could meet IC-recommended standards and would be missed opportunities if not funded. Interim funding ends when the applicant succeeds in obtaining an R01 or other competing award built on the R56 grant. These awards are not renewable. |
Cortical Mechanisms in Speech Perception: MEG Studies @ University of Maryland College Park Campus
The representation of speech and other complex auditory signals in the human brain constitutes a major interdisciplinary challenge for cognitive neuroscience. Understanding in a principled manner how acoustic signals are transformed and ultimately recognized as words in a speaker's mental dictionary requires the integration of knowledge across fields ranging from single-cell recording in auditory cortex to linguistic theory. The research program outlined here is focused on two subroutines in speech processing. In the context of the first specific aim, the hypothesis is investigated that speech is analyzed concurrently on two time scales in human auditory cortex, one corresponding to analysis at the syllabic scale, the other at the segmental (phonemic) scale. This multi-time-resolution model, which provides an account of hemispheric asymmetry in audition, is tested in a series of behavioral and electrophysiological studies. The goal is to provide a theoretically motivated and neurobiologically sensible answer to how acoustic signals are fractionated in time and how they map to words stored in the brain. The second aim encompasses both behavioral (often audio-visual) and electrophysiological studies that test how (specifically, how abstractly) speech and words are represented in the human brain. The goal is to test models of the cortical encoding of speech sounds and words. The principal method used in this research program is magnetoencephalography (MEG), typically with parallel behavioral studies performed. Other non-invasive recording modalities are also employed (EEG, fMRI) to validate and extend the data from any single approach. Successfully perceiving speech and recognizing words are processes at the basis of human communication. A mechanistic characterization of the brain structures that mediate these skills is essential to understand the range of disorders associated with problems in speech processing.
Health-related phenomena ranging from dyslexia and autism in childhood to aphasia and Alzheimer's disease in the aging population have been repeatedly linked to problems with the auditory analysis of complex signals and the ability to process words appropriately. The development of innovative diagnostic, interventional, and therapeutic approaches critically depends on our enriched knowledge of the brain basis of the processes underlying human speech.
|
0.911 |
2011 — 2013 |
Buchwald, Adam; Poeppel, David; Marantz, Alec (co-PI) |
N/A (no activity code was retrieved) |
Workshop: Cognitive, Computational, and Neural Processing -- Constraints On Theories of Language Production - New York University - July 2012
Language production is a remarkably complex cognitive ability which requires the successful integration of multiple levels of cognitive/neural processing. Research on the mechanisms underlying language production is performed from a variety of disciplinary perspectives, including psycholinguistics, neurolinguistics, theoretical linguistics, computational linguistics, cognitive neuropsychology, and communication sciences and disorders. However, a complete understanding of language production requires situating our findings in a broader context that addresses the constraints that are placed on theories of language production by general cognitive, neural and computational processing principles. This award provides support for a special workshop session addressing cognitive, computational and neural constraints on theories of language production. The session will be part of the July 2012 meeting of the International Workshop on Language Production at New York University (NYU). The special session will consist of presentations by five leading scientists whose research on cognitive, neural and computational processes can directly constrain theories of language production. Over the past seven years, the International Workshop on Language Production has become the premier meeting focused solely on language production, and is thus the ideal venue to hold a special session of lectures and discussions addressing constraints on language production theories. This special session will inform language production researchers about state-of-the-art findings on the constraints on language production theories, which they can incorporate into their research, and will also provide opportunities to form collaborations between researchers who focus on language production with others who focus on more general cognitive, neural and computational issues.
|
1 |
2012 — 2013 |
Poeppel, David |
N/A (no activity code was retrieved) |
Linking Language and Cognition to Neuroscience Via Computation
The past decade has seen an explosion of information concerning the neuroscience of language and other cognitive processes. We now have quite reasonable 'brain maps' that specify where in the brain the major operations occur that underlie various aspects of language processing. However, the relation between language and the brain is still at best understood at a correlational level. There is no explanatory understanding of how specialized neural circuits account for the implementation of the specific operations that underpin linguistic computations and representations.
The emphasis of this workshop is to bring together experts from various fields (e.g., neuroscience, cognitive science, linguistics, computer science, and psychology) to identify new directions in the computational neurobiology of language. There are two intended outcomes. First, the workshop should stimulate new scientific collaborations that use computation to connect cognition/language and neuroscience. Second, participants will write a "white paper" that (i) summarizes key ideas, problems, and prospects for research in this interdisciplinary area of inquiry and (ii) identifies and recommends promising questions and methodological tools for future work in cognitive neuroscience, with a special emphasis on speech and language processing.
|
1 |
2013 — 2018 |
Ding, Mingzhou (co-PI); Poeppel, David |
N/A (no activity code was retrieved) |
Inspire Track 1: Crowd-Sourcing Neuroscience: Neural Oscillations and Human Social Dynamics
This INSPIRE award is partially funded by the Perception, Action, and Cognition Program, the Cognitive Neuroscience Program, and the Social Psychology Program in the Division of Behavioral and Cognitive Sciences in the Directorate for Social, Behavioral, and Economic Sciences, the Research and Evaluation on Education in Science and Engineering Program in the Division of Research on Learning in Formal and Informal Settings in the Directorate for Education and Human Resources, and the Control Systems Program in the Division of Civil, Mechanical, and Manufacturing Innovation in the Directorate for Engineering.
The goal of the project is to understand naturalistic human social interaction, specifically in group contexts. While neuroscientists are increasingly recording from two participants concurrently, the neural basis of group dynamics remains uninvestigated. Capitalizing on the growing body of knowledge about the role of brain rhythms, the project builds on the hypothesis that coupled neural oscillations between individuals are one candidate mechanism that tracks successful social communication in a dynamic context. This aim is pursued by using novel portable EEG technology to record brain activity from a large number of participants concurrently (between 10 and 20) in ecological situations, specifically a classroom. This will address the significant hardware and software challenges associated with recording data sets from groups. Moreover, the new type and amount of data will also require a novel analytic toolbox, which will form the basis for modeling multiple brains engaged in socially relevant situations.
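One simple way to operationalize "coupled neural oscillations between individuals" is the phase-locking value (PLV) between two recordings. The sketch below is an illustrative toy, not the project's actual analysis: the 6 Hz "shared stimulus" frequency, the phase lag, and the synthetic participant signals are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two signals: near 1 when the phase
    difference is stable over time, near 0 when phases are unrelated."""
    px = np.angle(hilbert(x))
    py = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (px - py))))

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

# Two "participants" entrained to the same 6 Hz rhythm (with a fixed
# phase lag), versus a third oscillating at an unrelated frequency.
a = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 6 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
c = np.sin(2 * np.pi * 7.3 * t) + 0.3 * rng.standard_normal(t.size)

print(plv(a, b))  # high: shared rhythm, stable phase lag
print(plv(a, c))  # low: phase difference drifts continuously
```

Note that a fixed phase lag still yields a high PLV; the measure indexes the stability of the phase relation, not zero-lag identity, which is why it is attractive for tracking communication between brains.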
The research will impact education and technology, and provide significant outreach opportunities. First, the key experiments will be performed in a high school classroom, in collaboration with the science teachers. As such, the project provides a new type of platform to provide hands-on STEM training. Second, the successful implementation of the wearable mobile brain EEG recording system will have significant impact on future neuroscience research, providing a valuable tool for research outside of the lab (e.g., in a crowd: theatres, schools), with populations that are otherwise difficult to reach (e.g., children, patients, the elderly). Finally, by comparing communication between people in the same room to people at a distance (e.g., MOOCs), this project contributes to issues surrounding the relevance of real-life behavioral cues to successful communication and teaching.
|
1 |
2017 — 2020 |
Poeppel, David; Milne, Catherine |
N/A (no activity code was retrieved) |
Brain-to-Brain Synchrony in Stem Learning
The STEM classroom is a highly dynamic social environment where students and teachers interact face-to-face in real time. These interactions are fundamental to the learning process. Yet our understanding of the brain basis of social interactions in the classroom is very limited. This project will use novel portable electroencephalogram (EEG) technology to record brain activity from a teacher and a group of students in a high school science classroom. The goal is to investigate whether similarities and differences in brain activity between teachers and students predict STEM engagement and learning outcomes. The proposed research will contribute to educational practice in a number of ways. For example, the extent to which brainwaves exhibit similar patterns across students can be used as an online, implicit measure of student engagement with STEM content. As such, brainwave synchrony can prove useful in the future as an objective measure of the effectiveness of teaching practices, providing insight into the learning process in real time. In addition, as part of the proposed research, an EEG-based neuroscience curriculum for high schools will be developed and tested. The project is funded by the EHR Core Research (ECR) program, which supports work that advances the fundamental research literature on STEM learning.
This project pursues a novel and potentially transformative approach to studying classroom interactions using cutting-edge portable electroencephalogram (EEG) technology. The project extends previous NSF-funded work that enabled the development of the experimental setup and analytical tools required to simultaneously record brain activity from a teacher and a group of students. The project will advance our understanding of naturalistic classroom interaction by: (a) using state-of-the-art wireless EEG headsets that are expected to provide much more accurate and richer neurophysiological data than the low-grade portable EEG headsets from prior research; (b) recording brain activity not only from students, but also from teachers; (c) collecting data from a sample of students and teachers in different schools and school types; (d) investigating the relationship between brain-to-brain synchrony across students and teachers, learning outcomes, and objective measures of attention; and (e) exploring brain-to-brain synchrony in the context of an unresolved issue in the STEM learning literature, the effectiveness of virtual laboratory environments. This project will take neuroscience research outside of the laboratory and into the classroom, taking an important step in integrating neuroscience research and education practices. By validating novel neuroscience methods and analytic approaches, this research will pave the way for future cognitive neuroscience research on educational practices. Further, the results of this study will illuminate classroom interactions from a novel perspective and provide teachers with a deeper understanding of the relationship between student engagement and learning outcomes.
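A common way to quantify group-level synchrony of the kind described above is a leave-one-out correlation: each student's signal is correlated with the average of everyone else's. The sketch below is a hypothetical illustration on synthetic data; the shared "classroom signal" and the disengaged fifth student are assumptions for demonstration, not this project's measures.

```python
import numpy as np

def student_to_group(signals):
    """For each row (student), the Pearson correlation between that
    student's signal and the mean of all the other students' signals."""
    signals = np.asarray(signals)
    scores = []
    for i in range(signals.shape[0]):
        rest = np.delete(signals, i, axis=0).mean(axis=0)
        scores.append(np.corrcoef(signals[i], rest)[0, 1])
    return np.array(scores)

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2500)
shared = np.sin(2 * np.pi * 1.5 * t)  # hypothetical shared classroom signal

# Students 0-3 track the shared signal (plus noise); student 4 does not.
students = [shared + 0.5 * rng.standard_normal(t.size) for _ in range(4)]
students.append(rng.standard_normal(t.size))
scores = student_to_group(students)
print(scores)  # first four clearly positive, the fifth near zero
```

In a classroom application, the per-student score could serve as the kind of implicit engagement index the abstract describes, with high scores indicating that a student's brain activity follows the same stimulus-driven time course as the rest of the group.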
|
1 |