
James S. Magnuson - US grants
Affiliations: Psychology, University of Connecticut and Haskins Labs, New Haven, CT, United States
Area: Psycholinguistics
Website: http://magnuson.psy.uconn.edu

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, James S. Magnuson is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score |
---|---|---|---|---|
2002 — 2006 | Magnuson, James S | R01
Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Auditory Lexicon: Similarity, Learning & Processing @ University of Connecticut Storrs DESCRIPTION (provided by applicant): Language deficits have devastating effects on one's ability to function in society. Designing appropriate interventions depends in part on understanding spoken language processing in healthy adults. Indeed, similarity metrics based on spoken word recognition research have allowed the design of more sensitive tests for hearing and language deficits. In this proposal, four projects examine how spoken word recognition is affected by the temporal distribution of similarity in spoken words, by learning, and by top-down knowledge. Time course measures are obtained from eye tracking during visually guided tasks under spoken instructions. The eye tracking is complemented by more traditional paradigms, allowing direct comparisons of the measures and providing data for items not amenable to eye tracking. Natural English words and artificial lexicons are used as stimuli. Real words do not fall into conveniently balanced levels on the dimensions of interest, while artificial lexicons allow precise control over phonological similarity and frequency, and therefore over competition neighborhoods. They also provide a paradigm for studying learning, whether of new words or of changes in the relative frequencies of competitors. The results of the projects are used to refine similarity metrics for spoken words and to develop a computational model of spoken word processing and learning. (A minimal illustration of the phonological neighborhood metrics discussed here appears after this entry.) |
0.982 |
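The grant above turns on phonological similarity neighborhoods, so a small worked example may help. The Python sketch below is purely illustrative (the toy lexicon and phoneme coding are hypothetical, not from the project): it counts a word's one-edit phonological neighbors, the standard way neighborhood density is estimated in spoken word recognition research.

```python
# Minimal sketch of phonological neighborhood density: a word's neighbors are
# all lexicon entries reachable by one phoneme substitution, addition, or deletion.
# The toy lexicon is hypothetical; real studies would use phonemic transcriptions
# of English words or an artificial lexicon designed for the experiment.

def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one edit."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:                                   # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if la < lb else (b, a)   # one addition/deletion
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_density(word, lexicon):
    return sum(is_neighbor(word, other) for other in lexicon)

# Toy artificial lexicon (one character per phoneme for simplicity).
lexicon = ["kat", "bat", "kap", "kot", "dog", "kart"]
for w in lexicon:
    print(w, neighborhood_density(w, lexicon))
```

An artificial lexicon lets the experimenter choose these strings so that density and frequency are balanced across conditions, which is what the abstract means by precise control over competition neighborhoods.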
2007 — 2010 | Fowler, Carol (co-PI); Magnuson, James; Viswanathan, Navin (co-PI) |
N/A
Activity Code Description: No activity code was retrieved. |
Compensation For Coarticulation: Implications For the Basis and Architecture of Speech Perception @ University of Connecticut Language users typically have the impression that understanding speech in their native tongue is instantaneous and effortless. This apparent ease belies a vastly complex chain of processes that must be engaged in order to derive meaning from the acoustic patterns of speech. Unlike computer speech recognition systems, human listeners adapt quickly to tremendous acoustic variability in the speech signal. These extremes of variability can result, for instance, from unusual acoustic environments, new voices or accents, very fast speaking rates, and many other factors. Speech is one of the most difficult perceptual challenges that humans face, so research on its underlying mechanisms will not only further our understanding of human language, but may also help to unlock some of the deepest mysteries about the human mind. This basic knowledge may also serve to improve current speech technologies, and current methods of remediation for impairments in speech comprehension and production. |
0.915 |
2008 — 2014 | Magnuson, James | N/A
Activity Code Description: No activity code was retrieved. |
CAREER: The Time Course of Bottom-Up and Top-Down Integration in Language Understanding @ University of Connecticut Context changes the way we interpret sights and sounds. A shade of color halfway between yellow and green looks more yellow when applied to a picture of a banana, but more green when applied to a lime. An acoustic pattern halfway between "p" and "b" is interpreted as "p" following "sto-" but as "b" following "sta-". But does context actually alter perception of sights and sounds, or only their interpretation? Cognitive scientists have long debated when and how "bottom-up" input signals (such as speech) are integrated with "top-down" information (context, or knowledge in memory). Do early perceptual processes protect a "correct," context-independent record of signals, or do perceptual processes immediately mix bottom-up and top-down information? One view is that accurate perception requires early separation of bottom-up and top-down information and late integration. An alternative is that early mixing of bottom-up and top-down information would make systems more efficient, by allowing context to immediately guide processing. In studies of language comprehension, this timing question is unsettled because of conflicting evidence from two measures of moment-to-moment processing. Studies tracking people's eye movements to objects in response to verbal instructions support immediate integration: helpful information appears to be used as soon as it is available. Studies using ERPs (event-related potentials, which measure cortical activity via scalp electrodes) suggest delayed integration: early brain responses appear to be affected only by bottom-up information. Results from the two measures have been difficult to compare because they have relied on very different experimental designs. In the proposed research the investigator will study the timing of top-down integration in human sentence processing using experimental designs that allow simultaneous comparisons of eye tracking and ERPs, with the goal of determining when and how top-down context is integrated with bottom-up signal information. (A minimal illustration of bottom-up/top-down integration appears after this entry.) |
0.915 |
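The "sto-"/"sta-" example in the entry above can be framed as combining a bottom-up acoustic likelihood with a top-down lexical prior. The sketch below is a minimal, generic illustration of that combination rule and is not the project's model; the numbers are invented.

```python
# Illustrative Bayesian integration of an ambiguous /b/-/p/ sound with lexical context.
# Combination rule: P(phoneme | acoustics, context) is proportional to
# P(acoustics | phoneme) * P(phoneme | context). All probabilities are made up.

def integrate(likelihood, prior):
    unnorm = {ph: likelihood[ph] * prior[ph] for ph in likelihood}
    total = sum(unnorm.values())
    return {ph: v / total for ph, v in unnorm.items()}

# Acoustically ambiguous token: roughly equal support for /b/ and /p/.
likelihood = {"b": 0.5, "p": 0.5}

# Context "sto_" favors /p/ ("stop" is a word, "stob" is not);
# context "sta_" favors /b/ ("stab" is a word, "stap" is not).
prior_sto = {"b": 0.2, "p": 0.8}
prior_sta = {"b": 0.8, "p": 0.2}

print("after 'sto-':", integrate(likelihood, prior_sto))
print("after 'sta-':", integrate(likelihood, prior_sta))
```

Whether this mixing happens early in perception or only at a later interpretive stage is exactly the timing question the grant addresses; the sketch is neutral on that point.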
2012 — 2016 | Magnuson, James S | P01
Activity Code Description: For the support of a broadly based, multidisciplinary, often long-term research program which has a specific major objective or a basic theme. A program project generally involves the organized efforts of relatively large groups, members of which are conducting research projects designed to elucidate the various aspects or components of this objective. Each research project is usually under the leadership of an established investigator. The grant can provide support for certain basic resources used by these groups in the program, including clinical components, the sharing of which facilitates the total research effort. A program project is directed toward a range of problems having a central research focus, in contrast to the usually narrower thrust of the traditional research project. Each project supported through this mechanism should contribute or be directly related to the common theme of the total research effort. These scientifically meritorious projects should demonstrate an essential element of unity and interdependence, i.e., a system of research activities and projects directed toward a well-defined research program goal. |
Speech Production, Speech Perception, and Orthography: Reciprocal Influences @ Haskins Laboratories, Inc. Project II takes a new look at the role of articulation in speech perception and reading. Other theories have proposed that articulatory gestures form the informational basis not just for speech production, but also speech perception, either as the basis for special-purpose cortical mechanisms (Liberman & Mattingly, 1985), or because as-yet undiscovered information in the speech signal directly specifies articulation (Fowler, 1986). In Aim 1, we implement a computational model (an attractor network, with interconnectivity among units serving production, perception, and orthography; a generic sketch of this kind of architecture appears after this entry) in order to concretely formulate the Articulatory Integration Hypothesis (AIH): co-development of speech production, perception, and reading should result in pervasive, interactive linkages that shape representations and performance in each domain. In this framework, articulation is not a form of special internal knowledge; it is just additional information available to the system that may especially facilitate speech perception under noisy or ambiguous conditions. We test these predictions in Aim 2, where we examine how learning to read changes speech production, and in Aim 3, where we examine in what ways the neural bases of speech perception are sensitive to or organized by gestural information. These aims will result in the development of perhaps the first unified computational-theoretical model of speech production, perception, and reading development. Our new look at articulation has the potential to provide new constraints on longstanding, fundamental challenges, in particular, revealing new details about the nature of the functional and neural codes underlying speech perception. Better understanding the bases of speech perception will allow better theories of and interventions for language impairment. RELEVANCE: This program is relevant to understanding the development of spoken and written language competence, which is crucial for successful academic and life outcomes. The comprehensive computational model and cutting-edge empirical investigations in Project II are essential in developing a deeper understanding of perception-production-reading links, which will provide new constraints on theories of language development and new insights into the phonological basis of reading. |
0.933 |
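Since the entry above centers on an attractor network linking production, perception, and orthography, a generic illustration of what "attractor network" means may be useful. The sketch below is a simple Hopfield-style network with three arbitrarily sized pools of units standing in for the three domains; it is emphatically not the project's model, only an example of the general class of architecture.

```python
import numpy as np

# Generic attractor-network sketch (Hopfield-style, Hebbian storage) with three
# pools of units standing in for perception, production, and orthography.
# Pool sizes, patterns, and parameters are arbitrary; this only illustrates how
# partial input to one pool can settle into a complete stored pattern.

rng = np.random.default_rng(0)
n = 60
pools = {"perception": slice(0, 20), "production": slice(20, 40), "orthography": slice(40, 60)}

# Two stored "words", each a joint +1/-1 pattern across all three pools.
patterns = rng.choice([-1.0, 1.0], size=(2, n))

# Hebbian (outer-product) weights; symmetric, with no self-connections.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def settle(state, steps=20):
    """Repeatedly update all units until the network relaxes into an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# Probe with perceptual input only; production and orthography start at zero.
probe = np.zeros(n)
probe[pools["perception"]] = patterns[0, pools["perception"]]
result = settle(probe)

for name, sl in pools.items():
    match = np.mean(result[sl] == patterns[0, sl])
    print(f"{name}: {match:.0%} of units match the stored word")
```

The point of the illustration is the AIH intuition described in the abstract: because the pools are interconnected, information arriving in one domain (here, perception) reshapes activity in the others.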
2012 — 2018 | Fitch, Roslyn Holly (co-PI); Snyder, William (co-PI); Pugh, Kenneth; Magnuson, James; Coelho, Carl (co-PI) |
N/A
Activity Code Description: No activity code was retrieved. |
IGERT: Language Plasticity - Genes, Brain, Cognition and Computation @ University of Connecticut This Integrative Graduate Education and Research Traineeship (IGERT) award establishes a unique interdisciplinary training program that prepares Ph.D. scientists from cognitive (linguistics, psychology, communication disorders) and biological fields (molecular genetics, behavior genetics, neuroscience) to achieve a unified cognitive-biological understanding of human language development. Innovations from this approach will address societal challenges in education, technology, and health. |
0.915 |
2017 — 2019 | Magnuson, James | N/A
Activity Code Description: No activity code was retrieved. |
@ University of Connecticut This award will support the organization of a two-day workshop on future challenges in language science, with integrated discussion of science communication (making language science research accessible to specialist peers, scientists in other fields, and the general public). The workshop will take place in Madison, WI, immediately following the 2018 Cognitive Science Society annual meeting. Language science is an interdisciplinary area drawing on theories and methods from linguistics, cognitive psychology, developmental psychology, and artificial intelligence, among other fields. The primary goal is basic scientific understanding of the human capacity for language and potential longer-term impact on technology, education, and health. |
0.915 |
2017 — 2022 | Snyder, William (co-PI); Magnuson, James; Mccoach, Dorothy (co-PI); Chamberlain, Stormy; Miller, Timothy |
N/A
Activity Code Description: No activity code was retrieved. |
NRT-UtB: Science of Learning, From Neurobiology to Real-World Application: A Problem-Based Approach @ University of Connecticut Learning is the basis of human mental development, and the process of learning continues throughout our lives. Perhaps more than anything else, how and what we learn shapes who we are. This project has the dual aims of achieving a deeper scientific understanding of learning and communicating a deeper understanding of the science of learning to scientists and the public. Over several decades, diverse fields including genetics, neuroscience, linguistics, education, and psychology have generated a wealth of knowledge about myriad aspects of how humans learn, but a grand challenge remains: to integrate that knowledge into a unified understanding of learning based on these diverse fields, and on scales ranging from the gene to the neuron, brain, and human behavior. This National Science Foundation Research Traineeship (NRT) award to the University of Connecticut will help prepare the next generation of researchers to meet this challenge. The traineeship, focusing on the science of learning, will train fifty (50) PhD students, including twenty-five (25) funded trainees, from education, genetics, linguistics, psychology, neuroscience, and speech-language-hearing sciences. By emphasizing a problem-based approach (learning by doing), embracing collaboration across a broad spectrum of disciplines, and integrating hands-on training in communication and leadership skills that enable effective multidisciplinary project design, management, and scientific discovery, this program will offer trainees unique preparation for a range of careers in academia, industry, and the public sector. |
0.915 |
2018 — 2021 | Allopenna, Paul; Magnuson, James; Theodore, Rachel |
N/A
Activity Code Description: No activity code was retrieved. |
@ University of Connecticut One of the greatest mysteries in the cognitive and neural sciences is how humans achieve robust speech perception given extreme variation in the precise acoustics produced for any given speech sound or word. For example, people can produce different acoustics for the same vowel sound, while in other cases the acoustics for two different vowels may be nearly identical. The acoustic patterns also change depending on the rate at which the sounds are spoken. Listeners may also perceive a sound that was not actually produced due to massive reductions in speech pronunciation (e.g., the "t" and "y" sounds in "don't you" are often reduced to "doncha"). Most theories assume that listeners recognize words in continuous speech by extracting consonants and vowels in a strictly sequential order. However, previous research has failed to find evidence for invariant cues in the acoustic signal that would allow listeners to extract the important information. This project uses a new tool for the study of language processing, LEXI (for Linguistic-Event EXtraction and Interpretation), to test the hypothesis that individual acoustic cues for consonants and vowels can in fact be extracted from the signal and can be used to determine the speaker's intended words. When some acoustic cues for speech sounds are modified or missing, LEXI can detect the remaining cues and evaluate them as evidence for the intended sounds and words (a schematic illustration of this kind of cue-based evidence scoring appears after this entry). This research has potentially broad societal benefits, including optimization of human-machine interactions to accommodate atypical speech patterns seen in speech disorders or accented speech. This project supports training of 1-2 doctoral students and 8-10 undergraduate students through hands-on experience in experimental and computational research. All data, including code for computational models, the LEXI system, and speech databases labeled for acoustic cues, will be publicly available through the Open Science Framework; preprints of all publications will be publicly available at PsyArxiv and NSF-PAR. |
0.915 |
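The entry above describes cues being detected and then weighed as evidence for intended sounds and words. The sketch below is a schematic, hypothetical illustration of that general idea; the cue names, candidate words, scoring rule, and numbers are all invented here and do not come from the LEXI system itself.

```python
# Hypothetical cue-based word evidence, in the spirit of the description above:
# detected acoustic cues (with confidences) are scored against the cues each
# candidate word would be expected to produce. Everything here is invented
# for illustration; it is not the LEXI implementation.

# Cues each candidate word is expected to produce (toy inventory).
word_cues = {
    "dont_you": {"alveolar_closure", "nasal_murmur", "palatal_glide", "high_front_vowel"},
    "doncha":   {"alveolar_closure", "nasal_murmur", "affricate_frication", "low_central_vowel"},
}

# Cues actually detected in the signal, with detection confidence (0-1).
detected = {"alveolar_closure": 0.9, "nasal_murmur": 0.8, "affricate_frication": 0.7}

def word_evidence(expected, found):
    """Sum confidence for expected cues that were detected, minus a small
    penalty for expected cues that are absent (they may have been reduced away)."""
    support = sum(found.get(cue, 0.0) for cue in expected)
    missing = sum(1 for cue in expected if cue not in found)
    return support - 0.1 * missing

scores = {w: word_evidence(cues, detected) for w, cues in word_cues.items()}
for word, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {score:.2f}")
```

Because the affricate cue was detected while the glide and vowel cues of the unreduced form were not, the reduced pronunciation wins in this toy case, mirroring the "don't you" / "doncha" example in the abstract.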
2022 — 2025 | Magnuson, James; Myers, Emily (co-PI); Brodbeck, Christian |
N/A
Activity Code Description: No activity code was retrieved. |
@ University of Connecticut More than half of the world's population speaks two or more languages fluently. Speaking more than one language allows you to communicate and interact with individuals in other countries and cultures through conversation and reading. It also has economic benefits, and possibly even cognitive and health benefits. Scientifically, there are many open questions about bilingualism for education (how can we best train people in new languages?), health (how can you best treat a bilingual person with language difficulty after a brain injury?), and technology (how can we make speech recognition systems as flexible as multilingual humans?). To better understand multilingual language processing, researchers will record neural responses to speech in people who know one, two, or three languages while they listen to the languages they know. The project is a transatlantic collaborative effort, with US researchers partnering with Spanish researchers. High school, undergraduate, graduate, and postdoctoral trainees will receive training in cutting-edge computational and cognitive neuroscience and psycholinguistics, including data science and modeling skills transferable to a range of academic and non-academic STEM careers.

One of the fundamental questions that researchers will explore is how two or more languages co-exist within a single language system and how each is represented in the brain. Some prior research suggests there is deep, continuous coactivation of all the languages a person knows even in a single-language context, while other research suggests that under many circumstances only the language relevant in the moment is activated. The project will use the tools of computational neuroscience to develop cognitive theories and implemented models of bilingual and trilingual language processing, which the research team will compare to neuroimaging data with high temporal resolution (magnetoencephalography, or MEG). MEG will be collected while monolingual, bilingual, and trilingual individuals process speech from languages they know under conditions designed to promote attention to a single language (isolated words or continuous speech from only one language) or two languages (random interleaving of isolated words from two languages, or more ecological 'code-switching' between two languages). Researchers will use a state-of-the-art neural network model of human speech recognition developed with previous NSF support. They will use continuous speech tracking to relate neural activity both to theoretically generated hypotheses regarding potential impacts of language co-activation and to the behavior and internal activity of neural network models. In this way, researchers will be able to compare human brain responses and neural network model responses to statistical predictions of the expectation level for each successive speech sound (consonant or vowel) and word during presentation of continuous speech (a minimal illustration of such an expectation, or surprisal, regressor appears after this entry). Comparing different models to neural responses will help researchers address fundamental questions, such as whether all the languages a person knows are active whenever they hear any language, and whether this is helpful or causes interference. This research promises to deepen our understanding of multilingual language development and processing in the human brain.

A companion project is being funded by the State Research Agency, Spain (AEI).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria. |
0.915 |
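The entry above mentions statistical predictions of the expectation level for each successive speech sound. One common way to quantify that expectation is surprisal, illustrated minimally below with a toy phoneme bigram model; the corpus, symbols, and smoothing choice are placeholders rather than the project's materials, and the real project would go on to relate such a regressor to MEG responses via continuous speech tracking.

```python
import math
from collections import Counter, defaultdict

# Toy surprisal regressor: -log2 P(next phoneme | previous phoneme) under a
# bigram model with add-one smoothing. The "corpus" is a made-up phoneme string
# with '#' marking word boundaries; it only illustrates the computation.

corpus = list("##kato##kapo##bato##kato##")

bigrams = Counter(zip(corpus, corpus[1:]))
context_totals = defaultdict(int)
for (prev, cur), count in bigrams.items():
    context_totals[prev] += count
vocab_size = len(set(corpus))

def surprisal(prev, cur):
    """Expectation violation for cur given prev, in bits (higher = more surprising)."""
    p = (bigrams[(prev, cur)] + 1) / (context_totals[prev] + vocab_size)
    return -math.log2(p)

# Surprisal profile over a test "utterance", phoneme by phoneme.
test = list("#kapo#")
for prev, cur in zip(test, test[1:]):
    print(f"P({cur!r} | {prev!r}): surprisal = {surprisal(prev, cur):.2f} bits")
```

In the study described above, regressors of this kind (for each successive speech sound and word) would be related to time-resolved MEG responses and to the internal activity of neural network models; the sketch only shows the expectation computation itself.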