2007 — 2011
Yu, Chen
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Cross-Situational Statistical Word Learning: Behaviors, Mechanisms and Constraints @ Indiana University Bloomington
DESCRIPTION (provided by applicant): There are infinitely many possible word-to-world pairings in naturalistic learning environments. Previous attempts to solve this mapping problem have focused on linguistic, social, and representational constraints operating at a single moment. The proposed research asks whether the indeterminacy problem may also be solved in another way: not in a single trial but across trials, not in a single encounter with a word and a potential referent but cross-situationally. We argue that a cross-situational learning strategy based on computing distributional statistics across words, across referents, and, most importantly, across the co-occurrences of the two can ultimately map individual words to the right referents despite the logical ambiguity of individual learning moments. The proposed research therefore focuses on: (1) documenting cross-situational learning in infants from 10 to 16 months of age, (2) investigating the mechanisms that underlie this learning through both theoretical simulations and experimental studies, and (3) studying how statistical learning builds on itself cumulatively. Understanding these mechanisms, and how they might go wrong or be bolstered, is fundamental to understanding the origins of developmental language disorders that delay or alter early lexical learning. Implementing procedures to benefit children with developmental disorders typically involves altering or highlighting aspects of the learning environment, which requires a principled understanding of the structure and regularities of that environment and of the processes of statistical learning.
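To make the core idea concrete, the minimal sketch below shows how simply counting word-referent co-occurrences across individually ambiguous naming situations can recover the correct mappings. The toy trials, the counting scheme, and the winner-take-all mapping rule are illustrative assumptions, not the proposal's actual model or stimuli.

```python
from collections import defaultdict

def cross_situational_learn(trials):
    """Accumulate word-referent co-occurrence counts across ambiguous trials
    and map each word to the referent it co-occurred with most often."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in trials:      # each trial pairs a few words with a few objects
        for w in words:
            for r in referents:
                counts[w][r] += 1
    # winner-take-all: each word gets its most frequent co-occurring referent
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Each trial is ambiguous on its own; only the aggregate statistics disambiguate.
trials = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]
print(cross_situational_learn(trials))   # {'ball': 'BALL', 'dog': 'DOG', 'cup': 'CUP'}
```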
2009 — 2013
Smith, Linda (co-PI); Yu, Chen
N/A Activity Code Description: No activity code was retrieved.
The Sensorimotor Dynamics of Naturalistic Child-Parent Interaction and Word Learning
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Children begin to comprehend words at 9 months. They say their first word at around 12 months. The pace of vocabulary learning then accelerates so that by 24 to 30 months, children add words at the staggering rate of 5 to 9 new words per day. Many studies have documented developmental progress in early language acquisition, and most theories of learning derived from those studies rely on macro-level descriptions that sound like explanations, such as "the mother tried to elicit the child's attention by waving the toy." These descriptions may capture higher-level human behaviors, but they fall short of a mechanistic account of how word learning works in real time. Toddlers learn words through millisecond-by-millisecond, second-by-second, and minute-by-minute events that are generated by actively engaging in the world, with objects, and with their social partners. But very little is known about how any of this works in real time and in the cluttered context of the real-world interactions of toddlers and parents, contexts typically characterized by many interesting objects, shifts in attention by each participant, and goals (beyond teaching and learning words). In light of this, the series of experiments in this project will provide a systematic study of child-parent interaction and learning as coupled complex systems. The child's actions (head and eye movements, hand movements, picking up objects) create within the child dynamic dependencies of looking, seeing, touching and feeling. Each moment of perceptual and motor activity by the learner determines the next -- a head turn determines what is seen next, which may determine what is reached for and brought close to the eyes, which selects and generates the next view. Thus, the learner is a dynamic complex system. But the toddler is not alone when learning new words. Instead, a mature partner -- who is also a complex multimodal system -- offers words, gestures and actions. Critically, the streams of touches, sights and sounds from the two participants are closely coupled, with one agent shaping the experiences and behaviors of the other. The study will measure the dynamic multimodal behavioral patterns within and across social partners as children and parents actively engage with and talk about objects in everyday contexts. The project will collect multiple streams of high-resolution, high-quality video and speech data from both participants. These dense and rich streams of multimodal data are useful only to the degree that one can find meaningful patterns in them that bring new insights into real-time learning events. To this end, the project will develop new methods of data analysis, visualization and data mining to quantify fine-grained behavioral patterns within an individual's cognitive, perceptual and motor systems and across social partners. This constitutes a significant advance in theoretical approaches to early word learning and one that also has broad applications. Measuring interaction patterns within and between complex systems is a critical problem across science -- from cells, to brains, to coupled physical systems, to human-computer interaction, to groups of animals, to teams of people. Thus, this research will bring new methods and analytic tools for measuring the information in coupled interactive systems.
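One way to make "measuring the information in coupled interactive systems" concrete is a lag analysis between two behavioral streams. The sketch below computes an approximate normalized cross-correlation between a child's and a parent's attention time series at a range of lags; the binary attention coding, the lag range, and the toy data are assumptions for illustration, not the project's actual measures or pipeline.

```python
import numpy as np

def attention_cross_correlation(child, parent, max_lag=10):
    """Approximate normalized cross-correlation between two attention series
    (e.g., 1 when a partner is looking at the target object, 0 otherwise),
    at lags of the parent stream relative to the child stream."""
    child = (child - child.mean()) / (child.std() + 1e-12)    # z-score each stream
    parent = (parent - parent.mean()) / (parent.std() + 1e-12)
    n = len(child)
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            c, p = child[-lag:], parent[:lag]
        else:
            c, p = child[:n - lag], parent[lag:]
        corr[lag] = float(np.mean(c * p))                      # mean product of z-scores
    return corr

# Toy example: the parent's attention follows the child's with a 3-sample delay.
rng = np.random.default_rng(0)
child = (rng.random(500) > 0.5).astype(float)
parent = np.roll(child, 3)
corr = attention_cross_correlation(child, parent)
best_lag = max(corr, key=corr.get)
print(best_lag, round(corr[best_lag], 2))   # peak near lag = 3: parent lags child
```

The lag at which the correlation peaks gives a simple index of who leads whom and by how much, which is one coarse way to summarize coupling between the two partners' streams.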
Understanding learning mechanisms in the context of a dynamic, everyday learning environment is essential to understanding typical development, individual differences, and atypical development. Designing effective procedures to benefit children with developmental delays requires a principled understanding of that dynamic environment as it relates to the cognitive learning system. Thus, the work will provide scientists, educators, and parents with an understanding of children's early cognitive processes and with general principles to facilitate child-parent social interaction and early language learning. Moreover, building anthropomorphic machines that can acquire language automatically may be best accomplished by emulating how toddlers learn language. Artificial intelligence systems with human-like language skills have important uses in real-world applications. Finally, this approach is methodologically novel. Not only will it provide new findings, but the research will be a proving ground for the development and invention of these new techniques -- techniques that may be applied in many different domains of social and behavioral studies, such as typical and atypical cognitive development, collaboration and joint problem solving, and adult social interactions.
2013 — 2017
Yu, Chen
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Sensorimotor Dynamics of Parent-Child Interactions Build Word Learning Skills @ Indiana University Bloomington
DESCRIPTION (provided by applicant): Everyday social activities such as toy play with parents are the context for learning as it unfolds in real time. A well-coordinated child-caregiver interaction seems likely to lead to better learning, while a decoupled or uncoordinated interaction may disrupt learning and development. Both parent and child play an active role in early communication and word learning: children signal their communicative choices and determine what environmental information is most relevant to their own developmental needs, and parents react to those signals in a sensitive manner and provide relevant information to ease the challenge of matching linguistic symbols to their referents. The goal of the proposed research is to achieve a deeper understanding of the sensorimotor basis of early social coordination and its potentially critical roles in later language learning and other developmental milestones. Toward this goal, the proposed research has three key components: 1) a set of longitudinal and cross-sectional experiments will collect multiple streams of sensorimotor data from child-parent toy play to discover fine-grained patterns characteristic of early developmental changes in child-parent social interactions, providing new evidence on the developmental origins of these skills; 2) we will link sensorimotor dynamics in child-parent interaction with standardized, widely used, highly reliable behavioral measures, with the goal of understanding how children's moment-to-moment interactions with social partners may build generalizable word learning skills; 3) we will link social coordination in toy play with parental responsiveness and individual differences in developmental milestones, providing deeper insight into the consequential, longer-term role of early parent-child interactions in developmental processes.
2015 — 2018
Smith, Linda; Yu, Chen (co-PI)
N/A Activity Code Description: No activity code was retrieved.
Comp Cog: Collaborative Research On the Development of Visual Object Recognition
Human visual object recognition is fast and robust. People can recognize a large number of visual objects in complex scenes, from varied views, and in less than optimal circumstances. This ability underlies many advanced human skills, including tool use, reading, and navigation. Artificial intelligence devices do not yet approach the level of skill of everyday human object recognition. This project will address one gap in current knowledge, an understanding of the visual experiences that allow skilled object recognition to develop, by capturing and analyzing the visual experiences of 1- to 2-year-old toddlers. This is a key period for understanding human visual object recognition because it is the time when toddlers learn a large number of object categories, when they learn the names for those objects, and when they instrumentally act on and use objects as tools. Two-year-old children, unlike computer vision systems, rapidly learn to recognize many visual objects. This project seeks to understand how the training experiences (everyday object viewing) of toddlers may be optimal for building robust visual object recognition. The project aims to (1) understand the visual and statistical regularities in 1- to 2-year-old children's experiences of common objects (e.g., cups, chairs, trucks, dogs) and (2) determine whether a training regimen like that experienced by human toddlers supports visual object recognition by state-of-the-art machine vision.
Considerable progress in understanding adult vision has been made by studying the visual statistics of "natural scenes." However, there is concern about possible artifacts in these scenes because they are typically photographs taken by adults and thus are potentially biased by the already developed, mature visual system that holds the camera and frames the pictures. Also, photographed scenes differ systematically from the scenes sampled by people as they move about and act in the world. Accordingly, there is increased interest in egocentric views collected from body-worn cameras, the method used in the present work. Toddlers will wear lightweight head cameras as they go about their daily activities, allowing the investigators to capture the objects the toddlers see and the perspectives and contexts in which they see them. The research will analyze the frequency, views, visual properties, and range of seen objects for the first 100 object names normatively learned by young children, providing a first description of the early learning environment for human visual object recognition. These toddler-perspective scenes will be used as inputs to machine learning models to better understand how the visual information in the scenes supports and constrains the development of visual object recognition. Machine-learning experiments will determine which properties and statistical regularities are most critical for learning to recognize common object categories in multiple scene contexts. Data collected will be shared through Databrary, an open data library for developmental science.
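As a rough illustration of what "used as inputs to machine learning models" could look like, the sketch below fine-tunes a standard pretrained image classifier on head-camera frames sorted into object-category folders. The directory layout, the choice of ResNet-18, and the hyperparameters are assumptions made for illustration, not the project's actual models or data.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: toddler_frames/<category>/<frame>.jpg, one folder per object category.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("toddler_frames", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from ImageNet-pretrained weights and replace the output layer
# with one unit per object category present in the toddler-view frames.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                     # short fine-tuning run for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Comparing recognition accuracy after training on toddler-perspective frames versus adult-photographed images is one simple way such a setup could probe which properties of the toddler's visual diet matter.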
2018 — 2021
Yu, Chen
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
What You See Is What You Learn: Visual Attention in Statistical Word Learning @ Indiana University Bloomington
PROJECT SUMMARY: Individual differences in the quantity and quality of parent talk and individual differences in infant visual attention predict later vocabulary development, which in turn has cascading consequences for later cognitive development and school achievement. The proposed research studies how infants begin to learn object names prior to their first birthday, and does so with a unique approach, focusing on how visual information from the infant's perspective coincides with parent naming, on how infant looking behavior selects the data to be aggregated, and on how that selected data changes incrementally in statistical learning. Toward this goal, we will collect a corpus of infant-perspective scenes from 8- to 12-month-old infants as they play with their parent in a toy room and as parents naturally name objects during play. We will analyze the referential ambiguity of the scenes that co-occur with parent naming events by showing the scenes to infants and tracking their gaze direction in free viewing. We will use the gaze data to quantify ambiguity in terms of the uncertainty, correctness, and informativeness of the scenes with respect to the intended object referent. We will then construct training sets for cross-situational learning experiments from the collected scenes by manipulating the mix of high- and low-ambiguity trials. We will test a series of hypotheses about how infants aggregate information to learn multiple object names. Moreover, we will feed the trial-by-trial gaze data of individual infants to models to predict final learning outcomes, with the goal of specifying the attentional and memory processes that support learning. The overarching aim of the project is to show that the infant-perspective scenes co-occurring with early naming events have properties that guide and train infant visual attention and, in so doing, support the learning of names and their referents through the aggregation of information across multiple naming events.
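For intuition about how gaze data could quantify referential ambiguity, the sketch below computes two simple per-naming-event measures: uncertainty as the entropy of looking-time proportions over the visible objects, and correctness as the proportion of looking directed to the named target. The specific formulas and toy numbers are illustrative assumptions, not the project's operational definitions.

```python
import math

def ambiguity_measures(gaze_proportions, target):
    """Given the proportion of infant looking time to each visible object during
    a naming event, return uncertainty (entropy over objects, in bits) and
    correctness (proportion of looking directed to the named target)."""
    probs = [p for p in gaze_proportions.values() if p > 0]
    uncertainty = -sum(p * math.log2(p) for p in probs)   # high = many objects compete
    correctness = gaze_proportions.get(target, 0.0)        # high = gaze favors the referent
    return {"uncertainty": uncertainty, "correctness": correctness}

# A low-ambiguity naming event: looking concentrated on the named object ("cup").
print(ambiguity_measures({"cup": 0.8, "ball": 0.1, "book": 0.1}, "cup"))
# A high-ambiguity event: looking spread almost evenly across three objects.
print(ambiguity_measures({"cup": 0.34, "ball": 0.33, "book": 0.33}, "cup"))
```

Scores like these could then be used to sort naming events into high- and low-ambiguity trials when assembling cross-situational training sets of the kind described above.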