Chen Yu - US grants
Affiliations: Indiana University, Bloomington, Bloomington, IN, United States
We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Chen Yu is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score
---|---|---|---|---
2007 — 2011 | Yu, Chen | R01 | Cross-Situational Statistical Word Learning: Behaviors, Mechanisms and Constraints @ Indiana University Bloomington. DESCRIPTION (provided by applicant): There are an infinite number of possible word-to-world pairings in naturalistic learning environments. Previous studies of this mapping problem focus on linguistic, social, and representational constraints at a single moment. The proposed research asks whether the indeterminacy problem may also be solved in another way: not in a single trial but across trials, not in a single encounter with a word and a potential referent but cross-situationally. We argue that a cross-situational learning strategy based on computing distributional statistics across words, across referents, and, most importantly, across the co-occurrences of the two can ultimately map individual words to the right referents despite the logical ambiguity of individual learning moments. The proposed research thus focuses on: (1) documenting cross-situational learning in infants from 10 to 16 months of age, (2) investigating the kinds of mechanisms that underlie this learning through both theoretical simulations and experimental studies, and (3) studying how statistical learning builds on itself cumulatively. Understanding these mechanisms, and how they might go wrong or be bolstered, is fundamental to understanding the origins of developmental language disorders that delay or alter early lexical learning. Implementing procedures to benefit children with developmental disorders typically involves altering or highlighting aspects of the learning environment, which requires a principled understanding of the structure and regularities of that environment and of the processes of statistical learning. | 1
2009 — 2013 | Smith, Linda (co-PI); Yu, Chen | N/A | The Sensorimotor Dynamics of Naturalistic Child-Parent Interaction and Word Learning @ Indiana University. This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5). | 1
2013 — 2017 | Yu, Chen | R01 | Sensorimotor Dynamics of Parent-Child Interactions Build Word Learning Skills @ Indiana University Bloomington. DESCRIPTION (provided by applicant): Everyday social activities such as toy play with parents are the context for learning as it unfolds in real time. A well-coordinated child-caregiver interaction seems likely to lead to better learning, while a decoupled or uncoordinated interaction may disrupt learning and development. Both parent and child play an active role in early communication and word learning: children signal their communicative choices and determine what environmental information is most relevant to their own developmental needs, and parents react to those signals sensitively, providing relevant information that eases the challenge of matching linguistic symbols to their referents. The goal of the proposed research is to achieve a deeper understanding of the sensorimotor basis of early social coordination and its potentially critical role in later language learning and other developmental milestones. Toward this goal, the proposed research has three key components: (1) a set of longitudinal and cross-sectional experiments will collect multiple streams of sensorimotor data from child-parent toy play to discover fine-grained patterns characteristic of early developmental changes in child-parent social interactions, providing new evidence on the developmental origins of these skills; (2) we will link sensorimotor dynamics in child-parent interaction with standardized, highly reliable, and widely used behavioral measures, with the goal of understanding how children's moment-to-moment interactions with social partners may build generalizable word-learning skills; (3) we will link social coordination in toy play with parental responsiveness and individual differences in developmental milestones, providing deeper insight into the consequential and longer-term role of early parent-child interactions in the developmental process. | 1
2015 — 2018 | Smith, Linda; Yu, Chen (co-PI) | N/A | Comp Cog: Collaborative Research On the Development of Visual Object Recognition @ Indiana University. Human visual object recognition is fast and robust. People can recognize a large number of visual objects in complex scenes, from varied views, and in less than optimal circumstances. This ability underlies many advanced human skills, including tool use, reading, and navigation. Artificial intelligence devices do not yet approach the level of skill of everyday human object recognition. This project will address one gap in current knowledge, an understanding of the visual experiences that allow skilled object recognition to develop, by capturing and analyzing the visual experiences of 1- to 2-year-old toddlers. This is a key period for understanding human visual object recognition because it is the time when toddlers learn a large number of object categories, when they learn the names for those objects, and when they instrumentally act on and use objects as tools. Two-year-old children, unlike computer vision systems, rapidly learn to recognize many visual objects. This project seeks to understand how the training experiences (everyday object viewing) of toddlers may be optimal for building robust visual object recognition. The project aims to (1) understand the visual and statistical regularities in 1- to 2-year-old children's experiences of common objects (e.g., cups, chairs, trucks, dogs) and (2) determine whether a training regimen like that experienced by human toddlers supports visual object recognition by state-of-the-art machine vision. | 1
2018 — 2021 | Yu, Chen | R01 | What You See Is What You Learn: Visual Attention in Statistical Word Learning @ Indiana University Bloomington. PROJECT SUMMARY: Individual differences in the quantity and quality of parent talk, and individual differences in infant visual attention, predict later vocabulary development, which in turn has cascading consequences for later cognitive development and school achievement. The proposed research studies how infants begin to learn object names before their first birthday, taking a unique approach that focuses on how visual information from the infant's perspective coincides with parent naming, on how infant looking behavior selects the data to be aggregated, and on how that selected data changes incrementally during statistical learning. Toward this goal, we will collect a corpus of infant-perspective scenes from 8- to 12-month-old infants as they play with a parent in a toy room and as parents naturally name objects during play. We will analyze the referential ambiguity of the scenes that co-occur with parent naming events by showing the scenes to infants and tracking their gaze direction during free viewing. We will use the gaze data to quantify ambiguity in terms of the uncertainty, correctness, and informativeness of the scenes with respect to the intended object referent. We will then construct training sets for cross-situational learning experiments from the collected scenes by manipulating the mix of high- and low-ambiguity trials, and we will test a series of hypotheses about how infants aggregate information to learn multiple object names. Moreover, we will feed the trial-by-trial gaze data of individual infants to models that predict final learning outcomes, with the goal of specifying the attentional and memory processes that support learning. The overarching aim of the project is to show that the infant-perspective scenes co-occurring with early naming events have properties that guide and train infant visual attention and, in so doing, support the learning of names and their referents through the aggregation of information across multiple naming events. | 1
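The first grant listed above (Cross-Situational Statistical Word Learning) describes the core mechanism as the accumulation of word-referent co-occurrence statistics across individually ambiguous naming moments. The grant itself publishes no code, so the following is only a minimal illustrative sketch of that idea; the function name, toy data, and winner-take-all decision rule are our own assumptions, not the project's actual model:

```python
from collections import defaultdict

def cross_situational_learner(trials):
    """Accumulate word-referent co-occurrence counts across trials.

    Each trial is a (words, referents) pair: the words heard and the
    objects in view at that learning moment. No single trial identifies
    the mapping; the counts aggregated across trials do.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in trials:
        for word in words:
            for referent in referents:
                counts[word][referent] += 1
    # Illustrative decision rule: map each word to the referent it
    # co-occurred with most often.
    return {word: max(refs, key=refs.get) for word, refs in counts.items()}

# Hypothetical toy data: each trial is ambiguous on its own, but the
# correct pairings dominate once the counts are aggregated.
trials = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"}, {"DOG", "CUP"}),
]
print(cross_situational_learner(trials))
# {'ball': 'BALL', 'dog': 'DOG', 'cup': 'CUP'} (key order may vary)
```

Each toy trial above is ambiguous on its own, yet the correct pairings win in the aggregated counts; published models of this mechanism differ mainly in how they weight, normalize, or prune such co-occurrence statistics.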