2008 — 2012
Deak, Gedeon; Makeig, Scott (co-PI); Poizner, Howard (co-PI); Creel, Sarah
Activity Code: N/A
DHB: From Social Routines to Early Language: Tracking Neural, Cognitive, and Family Influences From Infancy Into Preschool @ University of California-San Diego
Human infants must learn complex skills to interact effectively with parents and other humans, but these social skills emerge at somewhat different ages in different infants. How can we explain this variability? How do infants attend to their social world, and thereby learn routines to interact effectively with other people? This project follows a group of 45 healthy toddlers who have been tested extensively from 3 to 18 months of age on a variety of changing cognitive and emotional responses to social stimuli. The same infants have been observed regularly at home in interactions with their parents. The current project asks how these toddlers' emerging social skills reflect both their individual differences in cognition and emotion as infants and the different social input provided by their parents. The project focuses on changes in language and imitation skills from 18 to 24 months of age, and the brain dynamics that underlie these skills. The toddlers who were tested and observed starting at 3 months of age will be invited to participate again at 20 to 24 months of age. New sessions will use a unique system at UC San Diego: a Mobile Brain/Body Imaging (MoBI) facility for recording EEG (electroencephalographic) and body motion-tracking data simultaneously from two people. The project will use this system to record toddlers and parents as they engage in three types of interactions: 1) toddlers following a parent's pointing (or line of gaze), 2) toddlers reacting to words spoken by parents, and 3) toddlers imitating parents' simple actions. These interactions represent important social achievements for toddlers. Advanced EEG analysis will be performed on electrical potentials measured on toddlers' and parents' scalps. At the same time, special cameras will record the positions of their heads and arms.
This design will therefore yield a continuous record of changes in the toddlers' and parents' brain electrophysiology (reflecting their thinking and emotional reactions) and body positions as they interact. In addition, toddlers will complete a battery of behavioral and language tests. This project will pioneer a new paradigm for studying the social development of young children, and yield the most complex and complete data available on how early social-attention behaviors relate to early language and imitation, and brain processes underlying these relations. The results will have implications for early childhood education, treatment of developmental disabilities, and parenting practices.
2011 — 2017
Creel, Sarah |
Activity Code: N/A
Career: Speaker Variability and Spoken Language Comprehension @ University of California-San Diego
One of the core goals in developmental science is to understand how children make sense of their highly variable sensory input. For instance, how does a child know that a cup viewed from the top and a cup viewed from the side--two very different visual images--are the same object? How does a child know that her mother saying "cat" and her sibling saying "cat"--two very different-sounding versions--are the same word? Adults do these things with trivial ease. However, the child has to figure out what sound patterns matter: which sounds should be linked to a representation of furry animals, and which should be linked to the person talking. Moreover, because spoken language happens rapidly, one word after another, the child must compute all of this information very quickly.
Surprisingly little is known about the learning mechanisms that sort out the various sound patterns in spoken language. Children in the first year of life rapidly learn to tune out sound patterns that are not present in their native language, such as the difference between French nasalized and non-nasalized vowels. However, it is not known how children process sound patterns that are not directly related to meaning. The goal of this research is to understand how young language learners process talker-related sound variability--sound differences that do not change the meaning of a word, but vary with the vocal, social, and emotional characteristics of the person speaking. The research explores how children deal with talker variability: how it influences their learning of new vocabulary; what allows them to tune it out when recognizing words, but pay attention when recognizing talkers; how well they recognize voices and properties of voices such as gender and accent; and how this changes over development.
This research has the potential to transform the way researchers think about language acquisition. Is it a process of tuning out much of the sound variability present, or is it instead a process of accumulating finely-detailed acoustic knowledge? More broadly, the knowledge gleaned will help to improve learning of new sound patterns, such as words in second languages and speech in unfamiliar accents. By more fully exploring normal language development, it will contribute to the picture of what is missing or disrupted in child language deficits. It may suggest improvements to automatic voice recognition systems, which currently do not cope well with the natural acoustic variability among talkers. Finally, the research will help uncover how listeners learn to make inferences about people based on the way they talk. This award will support a variety of students (graduate, undergraduate, high school) interested in scientific fields, contributing to the future science and technology workforce. It will also enable the investigator to share experimental materials with other researchers, streamlining the research process and quickening the pace of scientific advancement.
2012 — 2016
Creel, Sarah |
Activity Code: N/A
Influences of High-Level Knowledge and Low-Level Perception in Accented-Speech Processing Across Development @ University of California-San Diego
We live in an increasingly interconnected world, and as a result, we often interact with speakers who sound different from us--they have different accents. Accent differences can result if two speakers of the same language grew up in different places (California vs. New Zealand), or if two speakers have different native languages (US English vs. Mexican Spanish). Because different accents use different sets of sounds to produce the same language, confusion can result. For instance, if a Spanish-accented speaker reports seeing a "sheep" outside, do they mean a wool-covered mammal or a large ocean-going vessel? This proposal examines two factors that may positively affect accented-speech comprehension in young children and in adults. One factor is perceptual learning, storing new mental representations of how Spanish-accented words sound. For instance, children may learn, at a subconscious level, that Spanish-accented versions of the English "ih" vowel are very acoustically similar to the English "ee" vowel. A second factor is language context, such as hearing an accented word in a semantically-plausible sentence like "We sailed across the ocean in a large _."
The potential positive impacts of this project are both scientific and social. Scientifically, the project will increase understanding of perceptual-category learning in children and adults. It will clarify how perceptual learning and high-level knowledge like sentence context jointly influence accented-speech comprehension, which will not only advance understanding of human perception but also suggest potential improvements to how computers recognize speech. The project will also generate a set of high- and low-probability Sentences to study Accent Understanding in Child Experiments (SAUCE). This sentence set will be made publicly available for other scientists to use, accelerating the pace of experimentation in speech comprehension in areas as diverse as accent perception, speech processing in noisy environments, and speech perception with cochlear implants (artificial hearing devices). Spanish-English bilingual researchers, a group that tends to be underrepresented in STEM fields, will be recruited for their language expertise, and will be mentored as they assist in conducting the research, increasing overall scientific participation. Socially speaking, the project will suggest how listeners can best improve their accent comprehension, facilitating communication in schools, workplaces, and the media.
2019 — 2020
Creel, Sarah C |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.
Using Eye Tracking to Understand Speech Perception-Production Relationships in Young Children @ University of California, San Diego
Abstract
In a few short years of life, children go from almost no knowledge of the sounds of their language to a rich knowledge, yet their speech production may not be adultlike until age 8 or later. How well does production approximate perception, and what contributes to perceptual representations? Most studies focus on adult input, but children also receive substantial child input (as much as 45% of total input), including their own vocalizations. The need to account for a major source of speech input, and the lack of clarity about perception-production relationships, represent major theoretical gaps. We hypothesize that initially, adult-based percepts shape child production. Errorful child productions, due to motor difficulty in matching targets, are then learned and may guide production and recognition.
Goal. The goal of the proposal is to develop a test of perception-production relationships in children that is both sensitive to recognition difficulty and natural for young children.
Method. We introduce a paradigm to test a child's understanding of their own speech. We audio-record the child naming familiar pictures, then show them sets of pictures as they hear their recorded labels. Eye movements to pictures provide a sensitive, natural measure of recognition effort.
Specific Aims. SPECIFIC AIM 1 is to establish a new paradigm to test comprehension of one's own speech. SPECIFIC AIM 2 is to test the Articulatory Error Hypothesis, that children's inaccurate productions result from imperfect motor realizations of perceptual representations. If so, children should comprehend adult speech better than they comprehend their own speech (STUDY 1). SPECIFIC AIM 3 is to test the Multiple Representations Hypothesis, that children simultaneously possess not only adult but also self-speech representations. If so, then the child should understand their own speech better than another listener understands that child's speech (STUDIES 2-3).
Significance.
We introduce a methodology to assess perception-production relationships, and begin to account for an underexplored source of speech input. Findings may suggest a new route, perceptual training on child speech, for intervention in speech sound disorders, which affect 4% of young children and which impair communication, academics, and social interaction.