
Richard N. Aslin, Ph.D. - US grants
Affiliations:
| 1975-1984 | Psychology | Indiana University, Bloomington, IN, United States |
| 1984-2017 | Brain and Cognitive Sciences | University of Rochester, Rochester, NY, United States |
| 2017- | Baby Lab | Haskins Laboratories, New Haven, CT, United States |
Area: cognitive development
Website: https://haskinslabs.org/people/richard-aslin

We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
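The matching scores shown in the table below come from the site's grant-to-scientist matching algorithm, whose internals are not documented on this page. A minimal sketch of one plausible scoring scheme, blending investigator-name and institution similarity; everything here, including the weights and function names, is an illustrative assumption, not the site's actual method:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(grant_pi: str, grant_inst: str,
                researcher: str, affiliations: list[str]) -> float:
    """Hypothetical score: weighted blend of name similarity and the
    best similarity between the grant's institution and any known
    affiliation of the researcher."""
    name_sim = similarity(grant_pi, researcher)
    inst_sim = max(similarity(grant_inst, a) for a in affiliations)
    return 0.7 * name_sim + 0.3 * inst_sim

print(round(match_score(
    "Aslin, Richard N", "University of Rochester",
    "Aslin, Richard N.",
    ["Indiana University", "University of Rochester",
     "Haskins Laboratories"]), 3))  # near 1.0 for an exact match
```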
High-probability grants
According to our matching algorithm, Richard N. Aslin is the likely recipient of the following grants. Activity code descriptions: N/A: no activity code was retrieved (click on the grant title for more information); R01: supports a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies; R21: encourages the development of new research activities in categorical program areas (support is generally restricted in level and in time).

Years | Recipients | Code | Title / Keywords | Matching score
---|---|---|---|---
1977 — 1980 | Aslin, Richard | N/A |
Development of Human Eye Movement Control Systems @ Indiana University |
0.915 |
1979 — 1980 | Aslin, Richard | N/A |
@ Indiana University |
0.915 |
1980 — 1984 | Aslin, Richard | N/A |
Development of the Visual System in Human Infants @ Indiana University |
0.915 |
1985 — 1986 | Aslin, Richard N | R01 |
Perception of Speech and Nonspeech Sounds in Infancy @ University of Rochester
A series of experiments is proposed for investigating the manner in which infants and young children perceive synthetic and natural speech sounds as well as nonspeech stimuli that match the complex temporal and frequency relations present in speech. Discrimination and categorization data will be obtained from 6- to 12-month-old infants using an operant headturning procedure. Discrimination, categorization, and labeling data will be obtained from 2- to 4-year-olds using a two-alternative pointing procedure. Studies of infants will focus on the discrimination of foreign speech contrasts, trading relations, categorization of stop consonant place of articulation and vowel category in CV syllables, and the extraction of phonetic features. Studies of young children will focus on the discrimination of foreign speech contrasts, the discrimination and labeling of vowels, vowel normalization and reduction, context effects, trading relations, feature extraction, and the discrimination and labeling of place of articulation in stops, VOT, and TOT. In addition to these studies of speech perception, experiments with 6- to 12-month-olds will address several aspects of developmental psychoacoustics, including thresholds for high-frequency tones, frequency discrimination, discrimination of frequency modulation, and psychophysical tuning curves (see the sketch below). These psychoacoustic studies will not only establish norms for this age range but also uncover any sensory constraints on speech discrimination. The overall goal of this program of research is to evaluate the mechanisms underlying speech perception in infancy and early childhood. The role of early linguistic experience, particularly in the early stages of language production, will be examined, as well as the onset age for a phonetic mode of analysis. |
1 |
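The psychoacoustic thresholds above (e.g., for high-frequency tones) are typically estimated with adaptive procedures. A minimal sketch of one common choice, a 2-down/1-up staircase; the grant does not specify its psychophysical method, and `respond`, the step size, and the simulated listener are all illustrative:

```python
import random

def two_down_one_up(respond, start=60.0, step=4.0, n_reversals=8):
    """Illustrative 2-down/1-up adaptive staircase (Levitt, 1971):
    converges on the stimulus level yielding ~70.7% correct.
    `respond(level)` returns True if the trial at `level` was correct."""
    level, streak, last_dir, reversals = start, 0, None, []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak < 2:
                continue               # wait for two correct in a row
            streak, direction = 0, -1  # two correct -> make task harder
        else:
            streak, direction = 0, +1  # one error -> make task easier
        if last_dir is not None and direction != last_dir:
            reversals.append(level)    # direction change = reversal
        last_dir = direction
        level += direction * step
    return sum(reversals) / len(reversals)  # mean of reversal levels

# Simulated listener whose true threshold is near level 40
estimate = two_down_one_up(
    lambda lv: random.random() < 1 / (1 + 10 ** ((40 - lv) / 10)))
print(round(estimate, 1))
```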
1985 — 1987 | Aslin, Richard N | R01 |
Sensory Constraints On Oculomotor Development in Infancy @ University of Rochester
The purpose of this proposal is to investigate the development of saccadic, pursuit, and vergence eye movement control in 6-, 12-, and 18-week-old human infants. The overall goal is not only to characterize significant developments in oculomotor control that occur during the first four postnatal months, but also to assess a variety of sensory abilities that may limit the effectiveness with which each eye movement system is guided. Studies of the saccadic system will use an automated eye monitoring system to examine the smallest target displacement that reliably elicits a saccade, and the forced-choice preferential looking (FPL) procedure to examine sensory estimates of the threshold for target displacement. Studies of the pursuit system will use FPL to examine thresholds for motion detection, velocity discrimination, and the motion aftereffect, and the eye monitoring system to examine predictive tracking in young infants. Studies of the vergence system will use the binocular capabilities of the eye monitoring system to examine fusional vergence, lateral phorias, the AC/A ratio, and the resting position of vergence; binocular pupillometry and the visual evoked potential (VEP) to examine monocular suppression; and an automated refraction device to examine the resting position and accuracy of accommodation. These experiments will aid in determining whether the inefficient and inaccurate oculomotor control exhibited by young infants is the result of degraded sensory information or a deficit in motor or sensory-motor programming. Finally, the sensory and oculomotor assessment techniques developed for use with normal infants will be applied to a small sample of clinical patients with ocular anomalies. |
1 |
1987 — 1991 | Aslin, Richard N | R01 |
Perceptual Segmentation of Speech by Infants @ University of Rochester
The purpose of the program of research outlined in this grant proposal is to determine whether human infants in the second six months of life are able to perceptually segment fluent speech into word-length units that could be used to understand their native language. Although many studies in the past 15 years have documented the sophisticated speech discrimination capacities of young infants, those studies have focused on the infant's perception of isolated speech segments. Studies of speech segmentation based on speech production are limited by the poor articulatory control shown by infants once words are produced and by the fact that perceptual segmentation must precede word production. In contrast to the perception of isolated speech segments, the perception of fluent speech requires on-line processing and the extraction of discrete acoustic units from a speech waveform characterized by inconsistent acoustic markers for word boundaries. Two questions are critical: (a) can infants extract a familiar acoustic unit from fluent speech and recognize the equivalence of this unit in various contexts, and (b) is this extraction process facilitated by the entry of acoustic units into the lexicon? Three strategies will be employed to study speech segmentation in 6- to 12-month-olds. First, an operant headturning procedure and a visual habituation procedure will be used to assess infants' extraction of word-length units from fluent speech. Variables thought to influence segmentation (e.g., speaking rate, intonation and stress, coarticulation of adjacent phonetic segments) will be examined on these segment extraction tasks. Second, the speech INPUT to the infant from the mother will be analyzed in great detail to describe the variability and consistency with which acoustic cues to segmentation are presented to the infant. Short-term "training" studies will assess the role of reference in infants' segmentation of words and pseudowords. Third, infants' preferences for variations in speaking rate and stress patterns will be obtained to determine whether these suprasegmental aspects of fluent speech influence infants' attention. Taken together, these studies will clarify an under-studied but essential aspect of language required for lexical and syntactic development. |
1 |
1995 — 1999 | Aslin, Richard | N/A |
Lexical Development in Human Infants @ University of Rochester
The purpose of this research is to reveal the lexical-recognition competencies of normal human infants during the second six months of postnatal development. The focus of the research is on the perceptual and memory abilities required for the segmentation and extraction of words (or word-length units) from fluent maternal speech prior to the onset-age of vocal production. A new familiarization-preference (F-Pref) technique has been developed to assess infants' sensitivities to words. This technique involves a brief familiarization period (30-45 seconds) during which infants are presented with target words (or non-words) embedded in sentences and subsequently tested for recognition of those words in isolation compared to non-familiar words. Post-familiarization recognition is revealed by longer listening times to familiar over non-familiar words (see the sketch of this comparison below). This technique will be used to explore further the conditions under which 7- to 8-month-old infants are able to extract words from fluent speech when the surrounding acoustic, phonetic, and language-specific contexts are varied, when speaking rate is altered, and when the frequency of pauses is manipulated during the familiarization period. In addition to these primary questions, we will also attempt to extend the F-Pref technique to infants in the first six months of postnatal development (using simplified procedures), and to older infants in an attempt to reveal the emergence of semantic priming in infancy. Because the segmentation, extraction, and memory for words embedded in fluent speech is a necessary precursor to the formation of a lexicon, these experiments should reveal a fundamental capacity of infants in the early stage of language acquisition when comprehension is thought to be superior to production. |
0.915 |
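The F-Pref logic above reduces to a paired comparison of listening times. A minimal sketch with invented per-infant data; the grant does not specify its statistical analysis, and SciPy's `ttest_rel` is used here purely for illustration:

```python
from scipy.stats import ttest_rel

# Invented per-infant mean listening times in seconds
familiar = [7.9, 8.4, 6.2, 9.1, 7.0, 8.8]   # familiarized words
novel    = [6.1, 7.0, 5.9, 7.2, 6.4, 7.1]   # non-familiarized words

t, p = ttest_rel(familiar, novel)           # paired comparison
print(f"t = {t:.2f}, p = {p:.3f}")          # longer listening -> recognition
```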
1998 — 2002 | Aslin, Richard; Newport, Elissa (co-PI) [⬀]; Jacobs, Robert (co-PI) [⬀]; Hauser, Marc | N/A |
KDI: Statistical Learning and Its Constraints @ University of Rochester
Both humans and non-human primates show remarkable learning abilities. However, these abilities are often limited to certain domains, developmental periods, or behavioral contexts. For example, nearly all humans acquire one or more complex linguistic systems (that is, languages), but not all humans acquire complex musical systems. Similarly, non-human primates are exceptionally adept at learning to forage for and categorize different types of food, but are severely limited in acquiring complex communication systems. Also, both humans and non-human primates appear to learn best in several domains during early periods of development. Thus, learning is nearly always characterized by specializations, rather than by general-purpose mechanisms. Understanding the constraints on learning will contribute to basic research, by accounting for domain- and species-specializations, and to applied research, by refining our understanding of which domains, ages, and contexts are optimal for human learning. |
0.915 |
1998 — 2002 | Aslin, Richard; Tanenhaus, Michael [⬀] | N/A |
Time Course of Spoken Word Recognition @ University of Rochester
Understanding the mechanisms by which people recognize spoken words in continuous speech is of central importance for theories of how language is processed, how it develops, and how it is affected by brain injury. An understanding of how people recognize words in continuous speech also provides valuable information for researchers developing speech recognition technology and human-computer interaction systems. It is well established that during spoken word recognition listeners evaluate the unfolding input by activating a set of potential lexical candidates which compete for recognition. However, numerous questions remain about how the set of possibilities is established and how it is evaluated during real-time processing. For example, little is known about whether or not people are able to use fine-grained acoustic differences during initial word recognition. Thus, it is not clear whether, as the word "carpet" is heard in continuous speech, the word recognition system considers all words that begin with similar sequences of sounds (e.g., car, card), or whether subtle differences in the length of vowels in one-syllable and polysyllabic words are used to restrict the set of alternatives. Questions like these have important implications for how we understand and model the word recognition system. However, our ability to answer these questions has been limited because few of the experimental methods sensitive to spoken word recognition can be used with continuous speech in natural tasks. This is an important limitation because natural speech often occurs in noisy conditions, there is considerable speaker variability, and linguistic units, such as the beginning and end of a word, are not clearly marked in continuous speech. The proposed research explores how candidate words are retrieved from memory and evaluated during continuous speech using: (a) experimental studies in English with digitized natural speech and synthesized speech; (b) computational modeling; and (c) experimental and computational explorations with artificial languages. The experiments measure eye movements to objects in a circumscribed visual world, extending the methodology pioneered by the PI and his collaborators. Participants will follow spoken instructions to pick up and move (with a mouse) line drawings of concrete objects on a computer monitor (e.g., "Pick up the candy. Now put it above the circle."). Preliminary studies have established that: (a) the pattern and timing of eye movements are remarkably sensitive to the uptake of information, allowing for a detailed mapping of the nature of the candidate set and how it changes over time during continuous speech; and (b) there is a simple quantitative mapping from hypothesized underlying speech recognition processes to the probability of making an eye movement to a target object (see the sketch below), allowing for precise testing of different theories of word recognition. Moreover, the basic task, either with pictures or real objects, can be naturally extended for use with infants, young children, and neurologically impaired populations. The project should result in both methodological advances and in a body of empirical data important for scientists studying normal and impaired language processing, as well as for scientists developing speech recognition systems. |
0.915 |
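One standard way to realize the "simple quantitative mapping" above is a normalized (Luce) choice rule that converts hypothesized lexical activations into fixation probabilities over the displayed pictures. A minimal sketch; the activation values and the scaling constant `k` are invented:

```python
import math

def fixation_probs(activations, k=7.0):
    """Normalized (Luce) choice rule: probability of fixating each
    displayed picture from the activation of its name. `k` controls
    how sharply activation differences translate into gaze."""
    scaled = {w: math.exp(k * a) for w, a in activations.items()}
    total = sum(scaled.values())
    return {w: round(s / total, 3) for w, s in scaled.items()}

# Invented lexical activations midway through hearing "carpet"
print(fixation_probs({"carpet": 0.80, "card": 0.60, "car": 0.55,
                      "circle": 0.10}))
```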
1999 — 2002 | Aslin, Richard N | R01 |
Statistical Language Learning in Human Infants @ University of Rochester
The goal of the project is to reveal how normal human infants, during the second six months of postnatal development, acquire the sound structures that will become words in their native language. This process of early word learning, which occurs before infants begin to produce words in their speech, must involve the segmentation of stretches of fluent adult speech that correspond to words. Infants presumably use both acoustic cues, such as pauses and prosody (e.g., pitch and stress), and distributional cues, such as the statistical patterning of sequences of sounds, to solve the word-segmentation task (see the sketch below). Using a preferential listening technique, preceded by a familiarization phase, 8-month-olds will be tested for their ability to segment multi-syllabic word-like units from artificial language corpora. These corpora will be brief (2-4 minutes) and will be created by a speech synthesizer to control for the presence (or absence) of acoustic cues to word boundaries. The proposed experiments will examine the relative importance of acoustic and statistical cues to word boundaries, the temporal ordering of statistical cues, the limitations on which statistical cues can be used, and the robustness of statistical cues in long-term memory. These studies on word segmentation, therefore, will not only provide important information about a fundamental aspect of early language acquisition, but they will also serve as a model system for the examination of other aspects of statistical language learning. |
1 |
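The "statistical patterning of sequences of sounds" above is classically operationalized as transitional probabilities between adjacent syllables, with low-probability transitions marking candidate word boundaries. A minimal sketch over a made-up syllable stream (the actual stimulus corpora are synthesized speech, not strings):

```python
from collections import Counter

def transitional_probs(syllables):
    """TP(B|A) = count(A followed by B) / count(A) over adjacent pairs."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Stream concatenated from three invented "words": bi-da-ku, pa-do-ti, go-la-tu
stream = ("bi da ku pa do ti go la tu bi da ku "
          "go la tu pa do ti bi da ku").split()

for pair, tp in sorted(transitional_probs(stream).items(), key=lambda x: x[1]):
    print(pair, round(tp, 2))  # TP = 1.0 within words, 0.5 at word boundaries
```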
2002 — 2005 | Aslin, Richard; Newport, Elissa (co-PI) [⬀]; Parker, Kevin; Bavelier, Daphne (co-PI) [⬀]; Zhong, Jianhui (co-PI) [⬀] | N/A |
Acquisition of a Magnetic Resonance Imaging System to Assess Brain Plasticity @ University of Rochester
With support from a National Science Foundation Major Research Instrumentation award, Dr. Richard Aslin and his colleagues at the University of Rochester will establish the Rochester Center for Brain Imaging (RCBI). The overall goal of this new center is to assess the plasticity of the adult and child brain as it adapts to altered and varied experiences. One type of alteration is the loss of sensory input in a single modality (e.g., the loss of vision or hearing because of blindness or deafness). Previous research at Rochester has shown that congenitally deaf individuals who use sign language do so with the same parts of the brain (the left hemisphere) that are usually used for spoken language, despite relying on the visual rather than the auditory modality. Deaf individuals also have greater sensitivity to patterns of movement in the peripheral visual field because they rely more on signed language inputs delivered in the visual modality. These patterns of brain plasticity are the result of altered sensory input during early development and have important implications for the brain's ability to compensate for deprivation and injury, provided that it has time during early development to adapt to these unusual circumstances. Similar mechanisms of plasticity may be present in adults as they learn a new task or compensate for brain injury. The Rochester group will use functional magnetic resonance imaging (fMRI) to study both long-term (developmental) and short-term aspects of brain plasticity in adults, children, and non-human primates. The research will provide important insights into the neural mechanisms of learning and plasticity and the keys to the brain's ability to adapt to novel experiences. |
0.915 |
2003 — 2007 | Aslin, Richard N | R01 |
Visual Statistical Learning in Human Infants @ University of Rochester
The goal of this program of research is to determine how the developing human infant forms representations of the visual world. The focus of the research is on a class of powerful learning mechanisms that have been shown by the Principal Investigator and his colleagues to rapidly extract from sequences of auditory stimuli the statistical properties that form coherent units (e.g., words and melodies). Visual statistical learning will be studied to determine whether a similarly powerful set of mechanisms is present in a modality other than audition. Infants ranging from 3 to 12 months of age, as well as adults, will be tested on a variety of statistical learning tasks in which small visual shapes are arranged into scenes. At issue is how infants and adults learn that some of these shapes appear together (co-occur) across many different scenes, forming the basic building blocks for representing those scenes in memory (see the sketch below). Four different techniques will be used with infants. The primary technique involves the repeated presentation of a sequence of 16-28 different scenes composed of 3-6 different shapes. After a decline in looking time (habituation) to these displays, infants will be presented with test displays containing coherent (high statistical relatedness) and incoherent (unrelated) shapes that were embedded in the scenes. The other three techniques involve forced-choice preferential looking, automated corneal reflection eye-tracking, and anticipatory eye-movements to learned categories. These techniques will be used to determine whether newly learned features activate attention in cluttered scenes, how these features are learned when low-level properties of the scenes compete for attention, how variations over time in the input statistics affect the accuracy of feature learning, and how other perceptual constraints affect feature learning. A key hypothesis of the PI's statistical approach will be tested: that learners represent the largest coherent unit in a complex array of elements, rather than also representing all of the embedded elements that are redundant with this larger unit. Taken together, these proposed studies will reveal how infants and adults learn new information from complex visual scenes and represent that information in a computationally efficient manner. Failure to learn efficiently could lead to deficits in the early phases of learning in infancy and negatively affect the formation of higher-level categories. |
1 |
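A simple way to formalize how shapes that "appear together (co-occur) across many different scenes" become units is to compare observed pair co-occurrence against chance. A minimal sketch with invented scenes; the "lift" statistic is an illustrative choice, not the grant's model:

```python
from collections import Counter
from itertools import combinations

# Invented scenes of shapes; {A, B} is the embedded coherent pair
scenes = [{"A", "B", "C"}, {"A", "B", "D"}, {"A", "B", "E"},
          {"C", "D", "F"}, {"A", "B", "F"}]

pair_counts = Counter(frozenset(p) for s in scenes
                      for p in combinations(sorted(s), 2))
shape_counts = Counter(shape for s in scenes for shape in s)

n = len(scenes)
for pair, c in pair_counts.most_common(3):
    x, y = tuple(pair)
    # "lift": observed co-occurrence rate relative to chance co-occurrence
    lift = (c / n) / ((shape_counts[x] / n) * (shape_counts[y] / n))
    print(set(pair), c, round(lift, 2))  # {A, B} stands out as a unit
```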
2009 — 2020 | Aslin, Richard; Newport, Elissa L (co-PI) [⬀] | R01 |
Statistical Approaches to Linguistic Pattern Learning @ University of Rochester
The purpose of the proposed research is to provide a comprehensive account of the factors that affect how infants, children, and adults learn the categories of their native language from distributional information in linguistic input. The categories of a language consist of sets of words (e.g., noun, verb) that play a functionally equivalent role in grammatical sentences. Distributional information refers to the patterning of elements in a large corpus of sentences and includes how frequently those elements occur, what position they occupy in a sentence, and the context provided by neighboring elements (see the sketch below). Our longstanding program of research on statistical learning in word segmentation (how learners determine which sound sequences form words) has documented the power, rapidity, and robustness of infants', children's, and adults' sensitivity to complex distributional information. Here we extend that program of research to a crucial aspect of learning higher-level structures of language. In our proposed studies, we use a miniature artificial language paradigm that affords us complete control over all the distributional cues in the input, something that is virtually impossible using real languages. Participants listen to a sample of utterances and make judgments about their acceptability. Crucially, during a learning phase, they do not hear all possible utterances that are legal in the artificial language; some are withheld for use in a later post-test. The post-test utterances either conform to the distributional patterns present in the learning phase, or they violate those patterns. The key test is whether participants judge novel-but-legal utterances to be acceptable, thereby showing the ability to generalize correctly beyond the input to which they were exposed. Studies of children provide additional support for learning the distributional cues by pairing utterances with videos of simple events. Studies of adults will be used for comparison, and will also present them with learning materials in the visual-motor domain to assess the detailed time-course of learning and the specificity of the results to auditory linguistic materials. Taken together, the results of these studies of infants, children, and adults will document the key structural variables in language learning that enable a distributional mechanism of category formation to operate and will highlight the ways these mechanisms may differ over age and domain.
PUBLIC HEALTH RELEVANCE: Language development is one of the hallmarks of the human species, yet it is difficult to study because of the huge variation in early exposure to different amounts of linguistic input. The use of artificial languages that are acquired in the lab over a few hours provides a window on the mechanisms of language development. We will study language learning in the lab to gain a unique perspective on how the categories (noun, verb, etc.) are formed from listening to the patterns of words in a small set of sentences. These studies will not only reveal a basic mechanism of language learning, but also establish benchmarks against which language delay can be compared. Moreover, understanding the mechanisms that lead to successful acquisition in normal children can help to identify loci of language disorders and design methods for remediating disorders. |
0.952 |
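Category learning from "the context provided by neighboring elements" can be sketched by representing each word as a distribution over its left and right neighbors and comparing those distributions. An illustrative sketch with a made-up miniature language; the words and utterances are invented:

```python
from collections import Counter, defaultdict
import math

corpus = ["mod dupp tiz", "rud dupp tiz", "mod klor tiz",
          "rud klor biff", "mod dupp biff", "rud klor tiz"]

ctx = defaultdict(Counter)  # word -> counts of (side, neighbor) contexts
for utt in corpus:
    words = utt.split()
    for i, w in enumerate(words):
        ctx[w][("L", words[i - 1] if i > 0 else "<s>")] += 1
        ctx[w][("R", words[i + 1] if i < len(words) - 1 else "</s>")] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse context-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Words with similar contexts belong to the same distributional category
print(round(cosine(ctx["mod"], ctx["rud"]), 2))  # high: same category
print(round(cosine(ctx["mod"], ctx["tiz"]), 2))  # low: different categories
```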
2010 — 2015 | Berger, Andrew [⬀]; Aslin, Richard | N/A |
IDR: Enhanced Near-Infrared Monitoring of Brain Function in Infants @ University of Rochester |
0.915 |
2013 — 2014 | Wu, Rachel [⬀]; Deak, Gedeon; Aslin, Richard | N/A |
Learning to Attend, Attending to Learn: Neurological, Behavioral, and Computational Perspectives @ University of Rochester
Attention and learning are two of the most important aspects of cognition. While studying attention and learning separately has its benefits, it can also be misleading. In the past few years, a new wave of research has emerged demonstrating that attention constrains learning and that learning guides future attention. The studies in these two areas span disparate fields (developmental psychology, cognitive neuroscience, behavioral neuroscience, computational modeling). Although researchers are asking the same questions across different fields, in general they do not attend the same conferences, rarely cite each other, and in most cases do not even know about each other's work. This 2-day workshop will bring together these diverse researchers to catalyze further interaction and promote innovative collaborative research. The workshop also will include moderators, who will encourage constructive critiques and discussion of theoretical and methodological limitations of the different approaches. |
0.915 |
2015 — 2017 | Aslin, Richard | N/A |
@ University of Rochester
Understanding what infants know about objects and words that they encounter in the world has been an important goal in developmental science, but the field understands relatively little about how infants perform either of these two tasks. Several neuroimaging methods have been used to determine how adult brains recognize familiar objects and words, but most of these methods are not suitable for use with infants. The goal of this research project is to deploy two neuroimaging methods that are amenable for use with infants, as a novel way to gain insights into the fundamental brain mechanisms that enable object recognition and word understanding in 3- to 12-month-old infants. One technique, electroencephalography (EEG), involves measuring electrical activity generated by the brain from sensors on the scalp. The other, functional near-infrared spectroscopy (fNIRS), shines near-infrared light through the skull and measures how it is absorbed by the brain at each location as an index of how active that part of the brain is (see the sketch of this conversion below). Recording both these measures while infants watch and listen to stimuli will provide important insights into how the infant brain processes this information. |
0.915 |
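The fNIRS measurement described above converts changes in detected light into hemoglobin concentration changes via the modified Beer-Lambert law. A minimal sketch for two wavelengths; the extinction coefficients, source-detector distance, and differential pathlength factor below are placeholders, not calibrated values:

```python
import numpy as np

def delta_hb(dOD_760, dOD_850, d_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: solve for concentration changes of
    oxy- (HbO) and deoxy-hemoglobin (HbR) from optical-density changes
    at two wavelengths. Extinction coefficients are placeholders; real
    analyses use tabulated values for the exact wavelengths."""
    # rows: wavelengths; cols: [HbO, HbR] extinction coeffs (1/(mM*cm))
    E = np.array([[1.4, 3.8],    # ~760 nm (placeholder values)
                  [2.5, 1.8]])   # ~850 nm (placeholder values)
    dOD = np.array([dOD_760, dOD_850])
    path = d_cm * dpf            # effective optical path length
    return np.linalg.solve(E * path, dOD)  # -> [dHbO, dHbR] in mM

dhbo, dhbr = delta_hb(0.012, 0.018)
print(f"dHbO = {dhbo:.5f} mM, dHbR = {dhbr:.5f} mM")
```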
2016 — 2017 | Aslin, Richard | R21 |
Probabilistic Computation in the Cortex of the Developing Human Brain @ Haskins Laboratories, Inc.
Project Summary: The overall objective of the present proposal is to test a specific hypothesis about how the developing human brain is able to learn new information from the visual and auditory environment in such an efficient manner during early infancy. Extensive behavioral evidence from infants confirms that they can rapidly learn new combinations of features, but it remains unclear what neural mechanism supports this learning. The hypothesis under examination in the present proposal is based on neural recordings from the visual cortex of developing ferrets, which showed that patterns of activity shifted from being stimulus-driven to being predicted by small deviations from background (i.e., non-stimulus-driven) activity. That is, the developing ferret brain created a probabilistic model of the most likely features in the environment and used that model as a baseline against which stimulus-driven activity was compared (see the sketch below). This probabilistic coding model is an efficient way for the brain to represent new visual features because it focuses its activity on the most likely stimuli in the environment and creates patterns of spontaneous activity that are tuned to the environmental mean. The specific aims of the present proposal are to use a newly emerging neuroimaging method, called functional near-infrared spectroscopy (fNIRS), to non-invasively measure the blood oxygenation correlates of neural activity in the visual and auditory regions of the infant brain at four ages: 6 weeks, 3 months, 6 months, and 12 months. Infants will be tested in darkness or silence and in three stimulus conditions in each sensory modality that include both complex features typical of their natural environment and simple features that rarely occur in their natural environment. The probabilistic coding model predicts a gradual progression across post-natal age in the similarity of patterns of neural activity between darkness/silence and natural environmental input, with a corresponding failure to show similarity between darkness/silence and the non-natural stimulus conditions. Should the probabilistic coding model be supported, it would enable assessments of infants from at-risk or special populations, such as Autism Spectrum Disorder, both to establish an early biomarker of brain disorders and to serve as a possible explanation for what property of the neural system is aberrant in these disorders. |
0.915 |
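The developmental prediction above, that spontaneous (darkness/silence) activity patterns grow more similar to natural-stimulus-driven patterns with age, can be quantified with a simple pattern correlation across channels. A sketch on simulated data; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def pattern_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two mean activity patterns
    (one value per fNIRS channel)."""
    return float(np.corrcoef(a, b)[0, 1])

n_channels = 24
natural = rng.normal(size=n_channels)  # simulated evoked pattern
# Simulated spontaneous patterns: older infants track the natural pattern more
spont_6wk = 0.2 * natural + rng.normal(scale=1.0, size=n_channels)
spont_12mo = 0.8 * natural + rng.normal(scale=0.5, size=n_channels)

print(round(pattern_similarity(spont_6wk, natural), 2))   # low
print(round(pattern_similarity(spont_12mo, natural), 2))  # higher
```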
2019 — 2020 | Aslin, Richard N.; Mcmurray, Bob | R21 |
Decoding the Neural Time-Course of Spoken Word Recognition @ Haskins Laboratories, Inc.
Project Summary: Word recognition is crucial not only for comprehending spoken language but for mapping spoken words onto text in reading. Individuals with language and reading deficits (e.g., Specific Language Impairment, Dyslexia, Autism, which together affect up to 16% of children) have been shown to have deficits in word recognition, making it crucial to understand this process. A hallmark of word recognition is that listeners activate neural representations of multiple candidate words that are consistent with the early acoustic input, and these candidates compete for recognition as they unfold in real-time. The overall goal of this proposal is to capitalize on recent developments in multivariate and machine-learning techniques for analyzing signals obtained from the human brain to measure the real-time unfolding of spoken word recognition. Although these techniques have been most widely used with fMRI data, we propose to extend them to EEG data because EEG is easily used with children and clinical populations, and provides access to the time-course of word recognition, thereby revealing underlying cognitive mechanisms of word recognition, such as lexical competition. Our preliminary findings using this EEG-based paradigm have demonstrated that we can decode the recognition of a specific word (among a set of 8-12 alternatives) at each msec time-step after stimulus onset (see the sketch below). The method is sensitive to partial activation of competing words that share some phonological features with the target word, thereby revealing the dynamics of lexical competition as the word-recognition system settles on the final target. Our objectives are to conduct a series of small-scale experiments that achieve three aims. First, we develop and optimize the method with adults (e.g., the experimental procedure and computational implementation). Second, we validate the method with adults by measuring its test/re-test reliability, comparing its estimates of word recognition with traditional behavioral paradigms, and examining how lexical status and semantic and orthographic expectations shape lexical competition revealed by the EEG measure. This will yield a new, non-invasive, and highly reliable method suitable for assessing spoken word recognition in adults, children, and special populations. Third, we will preliminarily extend the method to children to pave the way for future developmental studies. Taken together, accomplishing these three aims will provide an innovative and powerful tool for assessing a crucial component of language processing in a wide variety of typical and atypical populations. |
1 |
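Decoding a word "at each msec time-step" is commonly implemented by training an independent classifier at every time point of the EEG epoch. A minimal sketch with simulated data and scikit-learn; the dimensions, the injected signal, and the classifier choice are illustrative, not the grant's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)  # two candidate words

# Inject a word-specific signal in a late window (simulated recognition)
X[y == 1, :8, 30:] += 0.6

# Train and cross-validate an independent classifier at each time point
acc = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", acc.max().round(2),
      "at time index", int(acc.argmax()))  # rises once the signal appears
```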