2002 — 2006 |
Magnuson, James S |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Auditory Lexicon: Similarity, Learning & Processing @ University of Connecticut Storrs
DESCRIPTION (provided by applicant): Language deficits have devastating effects on one's ability to function in society. Designing appropriate interventions depends in part on understanding spoken language processing in healthy adults. Indeed, similarity metrics based on spoken word recognition research have allowed the design of more sensitive tests for hearing and language deficits. In this proposal, four projects examine the effects on spoken word recognition of the temporal distribution of similarity in spoken words, learning, and top-down knowledge. Time course measures are obtained from eye tracking during visually guided tasks under spoken instructions. The eye tracking is complemented by more traditional paradigms, allowing direct comparisons of the measures and providing data for items not amenable to eye tracking. Both natural English words and artificial lexicons are used as stimuli. Real words do not fall into conveniently balanced levels on the dimensions of interest, while artificial lexicons allow precise control over phonological similarity and frequency, and therefore over competition neighborhoods. They also provide a paradigm for studying learning, whether of new words or of changes in the relative frequencies of competitors. The results of the projects are used to refine similarity metrics for spoken words and to develop a computational model of spoken word processing and learning.
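As a concrete illustration of the artificial-lexicon logic described above, the sketch below generates a small consonant-vowel-consonant lexicon in which phonological overlap, and hence neighborhood density, is fully under the experimenter's control. The inventory, names, and parameters are hypothetical, not taken from the project.

```python
import itertools
import random

# Hypothetical phoneme inventory for a toy artificial lexicon.
ONSETS = ["p", "b", "t", "d"]
VOWELS = ["a", "i", "u"]
CODAS = ["k", "g", "s"]

def build_lexicon(n_words, seed=0):
    """Sample n_words CVC items; shared onset+vowel yields 'cohort'
    competitors, shared vowel+coda yields 'rhyme' competitors."""
    rng = random.Random(seed)
    all_cvc = ["".join(p) for p in itertools.product(ONSETS, VOWELS, CODAS)]
    return rng.sample(all_cvc, n_words)

def neighbors(word, lexicon):
    """Neighbors: lexicon items differing from `word` in exactly one slot."""
    return [w for w in lexicon
            if w != word and sum(a != b for a, b in zip(w, word)) == 1]

lexicon = build_lexicon(12)
# Neighborhood density per word, known by construction.
density = {w: len(neighbors(w, lexicon)) for w in lexicon}
```

Because the full space of possible items is enumerable, an experimenter can verify exactly how many competitors each word has before training begins, which is the control real English words cannot offer.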
|
0.982 |
2007 — 2010 |
Fowler, Carol (co-PI); Magnuson, James; Viswanathan, Navin (co-PI) |
N/A Activity Code Description: No activity code was retrieved. |
Compensation For Coarticulation: Implications For the Basis and Architecture of Speech Perception @ University of Connecticut
Language users typically have the impression that understanding speech in their native tongue is instantaneous and effortless. This apparent ease belies a vastly complex chain of processes that must be engaged in order to derive meaning from the acoustic patterns of speech. Unlike computer speech recognition systems, human listeners adapt quickly to tremendous acoustic variability in the speech signal. The extremes of this variability can result, for instance, from unusual acoustic environments, new voices or accents, very fast speaking rates, and many other factors. Speech is one of the most difficult perceptual challenges that humans face, so research on its underlying mechanisms will not only further our understanding of human language, but may also help to unlock some of the deepest mysteries about the human mind. This basic knowledge may also serve to improve current speech technologies, and current methods of remediation for impairments in speech comprehension and production.
With the support of the National Science Foundation, Dr. Magnuson is studying a speech perception phenomenon called "compensation for coarticulation" with the goal of refining current theories of speech perception. Compensation for coarticulation is a phenomenon whereby the perception of a sound is affected by the qualities of preceding or following sounds. Traditional explanations of this phenomenon appeal to active mechanisms of perceptual adjustment based on physical properties of the vocal tract and speech articulators. However, there are now three distinct explanations that account for overlapping subsets of results, each of which follows from a different theory of speech perception. Dr. Magnuson and his research team will use acoustic analyses and speech experiments with human speakers and listeners in order to distinguish between these differing explanations of compensation for coarticulation. The results of this project promise to advance our general understanding of the perceptual mechanisms that underlie speech and potentially many sensory experiences.
|
0.915 |
2008 — 2014 |
Magnuson, James |
N/A Activity Code Description: No activity code was retrieved. |
Career: the Time Course of Bottom-Up and Top-Down Integration in Language Understanding @ University of Connecticut
Context changes the way we interpret sights and sounds. A shade of color halfway between yellow and green looks more yellow when applied to a picture of a banana, but more green when applied to a lime. An acoustic pattern halfway between "p" and "b" is interpreted as "p" following "sto-" but as "b" following "sta-". But does context actually alter perception of sights and sounds, or only their interpretation? Cognitive scientists have long debated when and how "bottom-up" input signals (such as speech) are integrated with "top-down" information (context, or knowledge in memory). Do early perceptual processes protect a "correct," context-independent record of signals, or do perceptual processes immediately mix bottom-up and top-down information? One view is that accurate perception requires early separation of bottom-up and top-down information and late integration. An alternative is that early mixing of bottom-up and top-down information would make systems more efficient, by allowing context to immediately guide processing. In studies of language comprehension, this timing question is unsettled because of conflicting evidence from two measures of moment-to-moment processing. Studies tracking people's eye movements to objects as listeners follow spoken instructions support immediate integration: helpful information appears to be used as soon as it is available. Studies using ERPs (event related potentials, which measure cortical activity via scalp electrodes) suggest delayed integration: early brain responses appear to be affected only by bottom-up information. Results from the two measures have been difficult to compare because they have relied on very different experimental designs.
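One standard way to formalize the "early mixing" view described above is Bayesian integration, in which a context-supplied prior multiplies a bottom-up likelihood so that an ambiguous acoustic value is pulled toward the context-consistent category. The sketch below is a generic illustration with invented parameter values, not the investigator's model.

```python
import math

def gaussian(x, mu, sigma):
    """Likelihood of acoustic value x under a Gaussian category model."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def p_b_given_x(x, prior_b, mu_b=0.0, mu_p=1.0, sigma=0.3):
    """Posterior probability of /b/ for acoustic value x (0 = clear /b/,
    1 = clear /p/), given a context-supplied prior for /b/."""
    evid_b = gaussian(x, mu_b, sigma) * prior_b
    evid_p = gaussian(x, mu_p, sigma) * (1.0 - prior_b)
    return evid_b / (evid_b + evid_p)

# At the ambiguous midpoint the bottom-up likelihoods cancel,
# so the context prior decides the percept.
ambiguous = 0.5
after_sta = p_b_given_x(ambiguous, prior_b=0.8)  # context favoring /b/
after_sto = p_b_given_x(ambiguous, prior_b=0.2)  # context favoring /p/
```

With a clear token (x near 0 or 1) the likelihood dominates and context barely matters, which is why such models predict context effects mainly for ambiguous input.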
In the proposed research the investigator will study the timing of top-down integration in human sentence processing using experimental designs that allow simultaneous comparisons of eyetracking and ERPs, with the goal of determining when and how top-down context is integrated with bottom-up signal information.
The proposed work has important implications for the design of language technology. In contrast to computer systems, humans efficiently exploit top-down context, and quickly learn to adapt to new contexts. An obstacle to making computer systems as adaptable as humans is that we do not fully understand how humans balance bottom-up signals and top-down context. The proposed research also has implications for understanding and treating language impairments. For example, understanding how normal perceivers balance and integrate signal and context may help identify subtle bottom-up impairments that lead to unusual reliance on context. The investigator is committed to integrating research and training activities in this CAREER project, and will actively involve undergraduate and graduate students in the research. The investigator will also develop courses designed to prepare students for independent research by providing hands-on training in cognitive theories and time course methodologies.
|
0.915 |
2012 — 2018 |
Fitch, Roslyn Holly (co-PI); Snyder, William (co-PI); Pugh, Kenneth; Magnuson, James; Coelho, Carl (co-PI) |
N/A Activity Code Description: No activity code was retrieved. |
Igert: Language Plasticity - Genes, Brain, Cognition and Computation @ University of Connecticut
This Integrative Graduate Education and Research Traineeship (IGERT) award establishes a unique interdisciplinary training program that prepares Ph.D. scientists from cognitive (linguistics, psychology, communication disorders) and biological fields (molecular genetics, behavior genetics, neuroscience) to achieve a unified cognitive-biological understanding of human language development. Innovations from this approach will address societal challenges in education, technology, and health.
Intellectual Merit: A full understanding of human language, including genetic, neural, cognitive and environmental influences that shape language development and recovery from brain injury, requires linking a vast array of biological and cognitive sciences. Conventional approaches to graduate training produce cognitive and biological language scientists with little technical overlap, a barrier to communication. Trainees will develop expertise in their home domains and receive rigorous interdisciplinary training, providing broad and deep knowledge of the theories and methods of other domains so they can communicate, collaborate and innovate in multidisciplinary teams. The training program uses a team-based model, in an environment designed to foster creativity and innovation, along with integrated coursework and training in how to translate that research to educational, technological, and health applications.
Broader Impacts: This IGERT award will prepare a new generation of leaders to conduct (and train others to conduct) the team-based basic and applied research needed to achieve a unified biological-cognitive science of language. Key components include: training to communicate with educators, policy makers and the public in innovative ways that address societal challenges in technology and education; comprehensive efforts to increase participation among historically underrepresented groups in science; and preparing trainees for the increasingly international scientific community through on-site and internet-based opportunities to connect with leading cognitive and biological centers of language research around the world.
IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. The program is intended to establish new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries, and to engage students in understanding the processes by which research is translated to innovations for societal benefit.
|
0.915 |
2012 — 2016 |
Magnuson, James S |
P01 Activity Code Description: For the support of a broadly based, multidisciplinary, often long-term research program which has a specific major objective or a basic theme. A program project generally involves the organized efforts of relatively large groups, members of which are conducting research projects designed to elucidate the various aspects or components of this objective. Each research project is usually under the leadership of an established investigator. The grant can provide support for certain basic resources used by these groups in the program, including clinical components, the sharing of which facilitates the total research effort. A program project is directed toward a range of problems having a central research focus, in contrast to the usually narrower thrust of the traditional research project. Each project supported through this mechanism should contribute or be directly related to the common theme of the total research effort. These scientifically meritorious projects should demonstrate an essential element of unity and interdependence, i.e., a system of research activities and projects directed toward a well-defined research program goal. |
Speech Production, Speech Perception, and Orthography: Reciprocal Influences @ Haskins Laboratories, Inc.
Project II takes a new look at the role of articulation in speech perception and reading. Other theories have proposed that articulatory gestures form the informational basis not just for speech production, but also speech perception, either as the basis for special purpose cortical mechanisms (Liberman & Mattingly, 1985), or because as-yet undiscovered information in the speech signal directly specifies articulation (Fowler, 1986). In Aim 1, we implement a computational model (an attractor network, with interconnectivity among units serving production, perception, and orthography) in order to concretely formulate the Articulatory Integration Hypothesis (AIH): co-development of speech production, perception, and reading should result in pervasive, interactive linkages that shape representations and performance in each domain. In this framework, articulation is not a form of special internal knowledge; it is just additional information available to the system that may especially facilitate speech perception under noisy or ambiguous conditions. We test these predictions in Aim 2, where we examine how learning to read changes speech production, and in Aim 3, where we examine in what ways the neural bases of speech perception are sensitive to or organized by gestural information. These aims will result in the development of perhaps the first unified computational-theoretical model of speech production, perception, and reading development. Our new look at articulation has the potential to provide new constraints on longstanding, fundamental challenges, in particular, revealing new details about the nature of the functional and neural codes underlying speech perception. A better understanding of the bases of speech perception will allow better theories of and interventions for language impairment.
RELEVANCE (See instructions): This program is relevant to understanding the development of spoken and written language competence, which is crucial for successful academic and life outcomes. The comprehensive computational model and cutting-edge empirical investigations in Project II are essential to developing a deeper understanding of perception-production-reading links, which will provide new constraints on theories of language development and new insights into the phonological basis of reading.
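For readers unfamiliar with attractor networks of the kind named in Aim 1, the sketch below shows the generic mechanism: a Hopfield-style network whose Hebbian weights store patterns, so that a noisy input settles toward a stored state. It is a minimal illustration with arbitrary sizes and values, not the Project II model of production, perception, and orthography.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store 3 random bipolar patterns over 16 units via a Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(3, 16))
W = (patterns.T @ patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)  # no self-connections

def settle(state, sweeps=5):
    """Asynchronous updates: each unit aligns with its net input,
    driving the state toward an attractor (an energy minimum)."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt a few units of a stored pattern; settling pulls the state
# back toward the corresponding attractor.
noisy = patterns[0].copy()
noisy[:3] *= -1
recovered = settle(noisy)
```

The relevant property for the AIH is that units serving different domains can participate in the same weight matrix, so learning in one domain (e.g., orthography) reshapes the attractors that perception and production settle into.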
|
0.933 |
2017 — 2022 |
Snyder, William (co-PI); Magnuson, James; Mccoach, Dorothy (co-PI); Chamberlain, Stormy; Miller, Timothy |
N/A Activity Code Description: No activity code was retrieved. |
Nrt-Utb: Science of Learning, From Neurobiology to Real-World Application: a Problem-Based Approach @ University of Connecticut
Learning is the basis of human mental development and the process of learning continues throughout our lives. Perhaps more than anything else, how and what we learn shapes who we are. This project has the dual aims of achieving a deeper scientific understanding of learning, and communicating a deeper understanding of the science of learning to scientists and the public. Over several decades, diverse fields including genetics, neuroscience, linguistics, education, and psychology have generated a wealth of knowledge about myriad aspects of how humans learn, but a grand challenge remains: to integrate that knowledge into a unified understanding of learning based on these diverse fields, and on scales ranging from the gene to the neuron, brain, and human behavior. This National Science Foundation Research Traineeship (NRT) award to the University of Connecticut will help prepare the next generation of researchers to meet this challenge. The traineeship, focusing on the science of learning, will train fifty (50) PhD students, including twenty-five (25) funded trainees, from education, genetics, linguistics, psychology, neuroscience, and speech-language-hearing sciences. By emphasizing a problem-based approach (learning by doing), embracing collaboration across a broad spectrum of disciplines, and integrating hands-on training in communication and leadership skills that enable effective multidisciplinary project design, management, and scientific discovery, this program will offer trainees unique preparation for a range of careers in academia, industry, and the public sector.
The training program has five major components: an intensive one-year seminar that surveys the science of learning across all participating fields, a hands-on practicum where trainees learn to design and implement multidisciplinary research, faculty-student research interest groups that serve as brainstorming launch pads for new scientific challenges, data stewardship modules, and integrated training in outreach and communication. The seminar provides a cross-disciplinary introduction to the science of learning and the challenge of technical communication, while the practicum emphasizes practical skills crucial in academic and nonacademic careers that graduate education too often lacks: project design and management, budgeting and resource allocation, and external communications. The research interest groups, which will evolve as promising research proposals emerge from the practicum, serve as both a focal point for research and an organizational structure for the participants. From their first day in the program, students face the challenge of how to clearly and effectively share ideas without assuming prior knowledge or relying on technical jargon, a skill that not only enables excellence in research, but empowers trainees to become ambassadors for their work to society as a whole. This program will also promote diversity in careers requiring advanced training through best practices in recruitment and retention.
The NSF Research Traineeship (NRT) Program is designed to encourage the development and implementation of bold, new potentially transformative models for STEM graduate education training. The Traineeship Track is dedicated to effective training of STEM graduate students in high priority interdisciplinary research areas, through comprehensive traineeship models that are innovative, evidence-based, and aligned with changing workforce and research needs.
|
0.915 |
2017 — 2019 |
Magnuson, James |
N/A Activity Code Description: No activity code was retrieved. |
Real-World Language: Future Directions in the Science of Communication and the Communication of Science @ University of Connecticut
This award will support the organization of a two-day workshop on future challenges in language science, with integrated discussion of science communication (making language science research accessible to specialist peers, scientists in other fields, and the general public). The workshop will take place in Madison, WI, immediately following the 2018 Cognitive Science Society annual meeting. Language science is an interdisciplinary area drawing on theories and methods from linguistics, cognitive psychology, developmental psychology, and artificial intelligence, among other fields. The primary goal is basic scientific understanding of the human capacity for language and potential longer-term impact on technology, education, and health.
Invited speakers will 1) provide critical reviews of different theoretical perspectives, methodological approaches, and tools (such as eye tracking, electroencephalography, or functional magnetic resonance imaging); 2) focus on near- and long-term challenges facing language science; or 3) focus on science communication and education. In an effort to spark discussion and collaboration, research interest groups will be formed that will hold videoconference meetings in fall 2018. Plans include strategies for promoting student participation and inclusion of women and members of under-represented groups.
|
0.915 |
2018 — 2021 |
Allopenna, Paul; Magnuson, James; Theodore, Rachel |
N/A Activity Code Description: No activity code was retrieved. |
Collaborative Research: An Integrated Model of Phonetic Analysis and Lexical Analysis Based On Individual Acoustic Cues to Features @ University of Connecticut
One of the greatest mysteries in the cognitive and neural sciences is how humans achieve robust speech perception given extreme variation in the precise acoustics produced for any given speech sound or word. For example, people can produce different acoustics for the same vowel sound, while in other cases the acoustics for two different vowels may be nearly identical. The acoustic patterns also change depending on the rate at which the sounds are spoken. Listeners may also perceive a sound that was not actually produced due to massive reductions in speech pronunciation (e.g., the "t" and "y" sounds in "don't you" are often reduced to "doncha"). Most theories assume that listeners recognize words in continuous speech by extracting consonants and vowels in a strictly sequential order. However, previous research has failed to find evidence for invariant cues in the acoustic signal that would allow listeners to extract the important information. This project uses a new tool for the study of language processing, LEXI (for Linguistic-Event EXtraction and Interpretation), to test the hypothesis that individual acoustic cues for consonants and vowels can in fact be extracted from the signal and can be used to determine the speaker's intended words. When some acoustic cues for speech sounds are modified or missing, LEXI can detect the remaining cues and evaluate them as evidence for the intended sounds and words. This research has potentially broad societal benefits, including optimization of human-machine interactions to accommodate atypical speech patterns seen in speech disorders or accented speech. This project supports training of 1-2 doctoral students and 8-10 undergraduate students through hands-on experience in experimental and computational research. 
All data, including code for computational models, the LEXI system, and speech databases labeled for acoustic cues will be publicly available through the Open Science Framework; preprints of all publications will be publicly available at PsyArxiv and NSF-PAR.
This interdisciplinary project unites signal analysis, psycholinguistic experimentation, and computational modeling to (1) survey the ways that acoustic cues vary in different contexts, (2) experimentally test how listeners use these cues through distributional learning for speech, and (3) use computational modeling to evaluate competing theories of how listeners recognize spoken words. The work will identify cue patterns in the signal that listeners use to recognize massive reductions in pronunciation and will experimentally test how listeners keep track of this systematic variation. This knowledge will be used to model how listeners "tune in" to the different ways speakers produce speech sounds. By using cues detected by LEXI as input to competing models of word recognition, the work provides an opportunity to examine the fine-grained time course of human speech recognition with large sets of spoken words; this is an important innovation because most cognitive models of speech do not work with speech input directly. Theoretical benefits include a strong test of the cue-based model of word recognition and the development of tools to allow virtually any model of speech recognition to work on real speech input, with practical implications for optimizing automatic speech recognition.
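As a toy illustration of cue-based lexical evaluation of the kind described above (not the actual LEXI system), the sketch below scores candidate words by the likelihood of the acoustic cues detected in the signal, so that a modified or missing cue weakens a candidate without eliminating it. The words, cue names, and probabilities are invented for illustration.

```python
import math

# Hypothetical P(cue detected | word) values; toy numbers only.
CUE_MODEL = {
    "beach": {"stop_burst": 0.9, "voicing": 0.9, "high_front_vowel": 0.8},
    "peach": {"stop_burst": 0.9, "voicing": 0.1, "high_front_vowel": 0.8},
}

def score(word, detected_cues, floor=0.05):
    """Log-likelihood of the detected cues under this word's cue model;
    an absent cue contributes log(1 - p), floored so no single cue is fatal."""
    total = 0.0
    for cue, p in CUE_MODEL[word].items():
        total += math.log(p if cue in detected_cues else max(1 - p, floor))
    return total

def best_word(detected_cues):
    """Return the candidate with the highest cue-based evidence."""
    return max(CUE_MODEL, key=lambda w: score(w, detected_cues))
```

With a detected voicing cue the evidence favors "beach"; remove that cue and the same machinery favors "peach", illustrating how graded cue evidence, rather than an all-or-none phoneme sequence, can drive word recognition.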
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |
2022 — 2025 |
Magnuson, James; Myers, Emily (co-PI); Brodbeck, Christian |
N/A Activity Code Description: No activity code was retrieved. |
Crcns Us-Spain Research Proposal: Collaborative Research: Tracking and Modeling the Neurobiology of Multilingual Speech Recognition @ University of Connecticut
More than half of the world's population speaks two or more languages fluently. Speaking more than one language allows you to communicate and interact with individuals in other countries and cultures through conversation and reading. It also has economic benefits, and possibly even cognitive and health benefits. Scientifically, there are many open questions about bilingualism for education (how can we best train people in new languages?), health (how can you best treat a bilingual person with language difficulty after a brain injury?), and technology (how can we make speech recognition systems as flexible as multilingual humans?). To better understand multilingual language processing, researchers will record neural responses to speech in people who know either one, two, or three languages, while they listen to the languages they know. The project is a transatlantic collaborative effort with US researchers partnering with Spanish researchers. High school, undergraduate, graduate, and postdoctoral trainees will receive training in cutting-edge computational and cognitive neuroscience and psycholinguistics, including data science and modeling skills transferable to a range of academic and non-academic STEM careers.

One of the fundamental questions that researchers will explore is how two or more languages co-exist within a single language system and how each is represented in the brain. Some prior research suggests there is a deep continuous coactivation of all the languages a person knows even when they are in a single language context, while other research suggests that under many circumstances, only the language relevant in the moment is activated. The project will use the tools of computational neuroscience to develop cognitive theories and implemented models of bilingual and trilingual language processing, which the research team will compare to neuroimaging data with high temporal resolution (magnetoencephalography or MEG).
MEG will be collected while monolingual, bilingual, and trilingual individuals process speech from languages they know under conditions designed to promote attention to a single language (isolated words or continuous speech from only 1 language) or two languages (random interleaving of isolated words from 2 languages, or more ecological 'code-switching' between 2 languages). Researchers will use a state-of-the-art neural network model of human speech recognition developed with previous NSF support. They will use continuous speech tracking to relate neural activity to both theoretically-generated hypotheses regarding potential impacts of language co-activation and the behavior and internal activity of neural network models. In this way, researchers will be able to compare human brain responses and neural network model responses to statistical predictions of the expectation level for each successive speech sound (consonant or vowel) and word during presentation of continuous speech. Comparing different models to neural responses will help researchers address fundamental questions, such as whether all the languages a person knows are active whenever they hear any language, and whether this is helpful or causes interference. This research promises to deepen our understanding of multilingual language development and processing in the human brain.

A companion project is being funded by the State Research Agency, Spain (AEI).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
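The "expectation level" for each successive speech sound mentioned above is commonly quantified as surprisal, -log2 P(phoneme | context). The sketch below estimates it from bigram counts over a tiny phoneme-transcribed corpus; the corpus and names are hypothetical, and the project's actual predictions come from a neural network model rather than bigrams.

```python
import math
from collections import Counter

# Hypothetical phoneme-transcribed corpus (space-separated phonemes).
corpus = ["k a t", "k a p", "b a t", "k i t"]

def make_surprisal(corpus):
    """Estimate bigram surprisal, -log2 P(cur | prev), from counts."""
    pair_counts, ctx_counts = Counter(), Counter()
    for word in corpus:
        phones = ["<s>"] + word.split()
        for prev, cur in zip(phones, phones[1:]):
            pair_counts[(prev, cur)] += 1
            ctx_counts[prev] += 1
    def surprisal(prev, cur):
        return -math.log2(pair_counts[(prev, cur)] / ctx_counts[prev])
    return surprisal

surprisal = make_surprisal(corpus)
# After "k", "a" is more frequent than "i", so it carries less surprisal;
# a perfectly predictable continuation carries zero surprisal.
```

In the MEG analyses described above, a per-phoneme predictor of this kind (however it is estimated) is regressed against the continuous neural response to test how strongly expectation modulates processing.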
|
0.915 |