2010 — 2012 |
Idsardi, William; Hwang, So-One (co-PI) |
Doctoral Dissertation Research: Perception and Processing Rates in American Sign Language @ University of Maryland College Park
With NSF support and collaboration among researchers at the University of Maryland and Gallaudet University, doctoral student So-One Hwang will examine the perception and production of American Sign Language (ASL) as compared to spoken English, in order to investigate the contributions of modality and linguistic representations to language processing. Psycholinguistic experiments will be conducted to investigate the effect of visual processing in determining the time windows for integrating linguistic information, using behavioral methodology similar to that used to study speech. Although processing speech often feels like a seamless, continuous experience, studies have shown that the acoustic input is actually analyzed piecemeal, according to time windows that are closely linked to linguistic units. This has been shown by the cognitive restoration of locally-reversed speech, where sentences that are distorted in small chunks still remain intelligible. This phenomenon has provided insight into the temporal integration windows and the limits of the auditory system for language processing. Because signs in ASL take longer to produce than words of spoken English, and because the visual system can be more resilient to temporal distortions than the auditory system, it is hypothesized that significant differences will be found in the perception of ASL as compared to speech. The intelligibility of distorted ASL sentences will be determined by asking deaf participants to sign back what they can understand, and their accuracy will be measured.
In the second part of this project, archived recordings of ASL and English will be used to compare production rates of languages that use different articulators. Using the most current understanding of the features of linguistic units in ASL and English, this study will lead to a better understanding of the differences and similarities in the production rates of languages using different modalities. In addition to contributions based on its findings, this research promotes multidisciplinary collaboration among investigators at two institutions, and will lead to an increased awareness of the importance of diversity and cross-modal approaches in language and cognitive science research.
|
2011 — 2015 |
Idsardi, William |
Collaborative Proposal: Neuromagnetic Correlates of American Dialect Perception @ University of Maryland College Park
Humans are remarkably adept at recognizing other human voices, not only for the messages they communicate, but also for social and personal information about the person they are talking with. For example, when we answer the telephone we can easily recognize whether the caller is someone we know (Mom) or someone we don't (a telemarketer). Listeners also make rapid judgments about a speaker's age, gender, and where they are from. This project investigates human behavioral and early brain responses to the recorded spontaneous speech of speakers with the same or different dialects and genders, using a neuromagnetic brain imaging technology, magnetoencephalography (MEG). Investigators will examine the rating, identification, and discrimination of American English dialect stimuli by listeners while their brain activity is passively recorded. These brain activity data will be correlated with dialect differences and behavioral measures. While there has been considerable work on early neural correlates of prosody, very little work has examined the neural basis of within- and across-category perception of speech characteristics that have real-life importance in areas of prejudice and discrimination. Researchers investigating social aspects of language have had limited access to brain imaging technologies, so this collaborative research will contribute to speech perception, neuromagnetic brain research, and sociolinguistics, and will foster new connections between these fields. Moreover, this research has other important social and legal implications, because its results will bear directly on how courts can and will interpret automatic neural processes and higher-order cognitive processing resulting in discrimination and prejudice.
|
2015 — 2020 |
Daume, Hal (co-PI); Phillips, Colin; Idsardi, William; Dekeyser, Robert (co-PI); Newman, Rochelle (co-PI) |
NRT-DESE: Flexibility in Language Processes and Technology: Human- and Global-Scale @ University of Maryland College Park
Language learning, in humans and machines, has far-reaching relevance to global technology, commerce, education, health, and national security. This National Science Foundation Research Traineeship (NRT) award prepares doctoral students at the University of Maryland, College Park with tools to advance language technology and language learning. The program provides trainees with an interdisciplinary understanding of learning models from cross-training in linguistics, computer science, and psychological and neural sciences, and with the tools to work with multi-scale language data. The training program contributes to the public understanding of science through a policy internship program that engages trainees with federal agencies and Washington-area professional organizations. Moreover, by contributing to the development of a free public digital linguistic tool, Langscape, it will provide a valuable resource for researchers, the public, the government, and nongovernmental agencies to discover geographical and linguistic information about languages of the world.
Flexible and efficient language learning, in humans and machines, is the research focus of this NRT program. The research hypothesis is that improvements in learning in machines and in humans will come from the ability to use training data more efficiently at multiple scales. Through interdisciplinary team approaches, trainees will explore efficient use of language data, with a focus on the informativity of data to human and machine learning. Through a suite of training activities that includes intensive summer research workshops, engagement with undergraduates and K-12 schools, and policy internships, trainees will become flexible communicators in writing and speaking and also learn to apply their research to diverse contexts.
|
2017 — 2018 |
Idsardi, William; Newman, Rochelle (co-PI); Heffner, Christopher (co-PI) |
Doctoral Dissertation Research: Categorization and Segmentation Inside and Outside Language @ University of Maryland College Park
Humans hear the speech of others almost every day. Understanding that speech is often quite difficult, as can be seen when interacting with automated speech recognition technologies, and requires the use of complex yet surprisingly effective cognitive abilities. But are the mental tools that humans use to understand speech used for speech only, or are some of them applied to multiple purposes? This project seeks to link language learning and perception to other tasks to determine the extent to which speech perception shares an underlying basis with other cognitive processes. The project will enrich the understanding of cognition. Furthermore, it could open up new avenues for designing technologies that better support speech processing, as well as lead to new methodologies for training people learning a second language.
To study the domain-specificity of speech perception, this project will center on two particular aspects of speech: category learning and segmentation. Accurate comprehension of spoken language demands the segmentation of continuous speech into discrete words, just as the perception of actions demands the segmentation of perceived activity into discrete events. And listeners must learn to deal with the variability in speech sounds in order to treat some sounds as belonging to the same category, just as they must group, say, disparate dog sounds as belonging to a single "barking" category. One experiment will investigate the extent to which rate information can affect the segmentation of events, while another will assess the extent to which biases that seem to be present in phonetic category learning can also be found in non-speech category learning. A third experiment will use magnetoencephalography (MEG) to probe the acquisition of certain types of speech sound categories. All told, the research will illuminate whether and which processes in language and in other domains parallel each other, which relates to the notion of modularity, the idea that the brain houses separate components that have evolved to perform individual functions in the world.
|