Area: Object recognition, neuroimaging, action, development
We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches. Signing in also lets you see low-probability grants and correct any errors in the linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Karin Harman James is the likely recipient of the following grants.
Years: 2009 — 2013
Recipients: James, Karin Harman
Code: R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.)
Title / Keywords: The Role of Action in the Development of Visual Object Recognition @ Indiana University Bloomington
DESCRIPTION (provided by applicant): Processes of visual object recognition play a central role in multiple aspects of human cognition. However, very little is known about the development of these processes beyond the earliest stages of infancy. A long literature suggests a deep and fundamental coordination between vision and action in adults. One way in which action should matter critically to visual development is that one's own actions on objects generate the dynamic visual experiences that one receives about objects. Self-generated actions literally create one's views of objects. In the proposed research, we seek to understand the role of developmental changes in self-generated action in the development of visual object representations and recognition. The participants will be children between 18 and 24 months of age -- an age range during which prior work indicates that significant developmental change in object recognition occurs. This age range is also a period of rapid language expansion in typically developing children, and a period in which atypical patterns of language, motor, and more diffuse cognitive development are often first identified. Thus, the study of self-generated dynamic views of object structure will provide a new window into mechanisms and early diagnosis of atypical cognitive development.

We focus on 3 aspects of visual object representations, suggested to be important in studies of high-level vision in adults, which our preliminary studies indicate have roots in the 18- to 24-month period: 1) Preferred views -- the causes and consequences of the emergence of a preference for viewing the planes of objects (sides, front, back, top, and bottom) rather than views of areas between planar surfaces; 2) Axes of elongation -- the causes of developmental changes in how children hold and move objects relative to the objects' axes of elongation, and the consequences for children's perception and representation of those objects; and 3) Global structure -- the ability to recognize objects from sparse characterizations of their geometric structure. We will use both cross-sectional and training studies to investigate the roles of action in each of these aspects of typically developing children's visual object recognition.

PUBLIC HEALTH RELEVANCE: Children with perceptual and motor disorders are at risk for learning, language and attentional disorders. This work studies the cascading consequences of visual-motor development in typically developing infants (18- to 24-month-olds), a necessary scientific foundation for effective early diagnoses and therapies.
Matching score: 1.009
Years: 2014 — 2017
Recipients: James, Karin
Code: N/A (Activity Code Description: No activity code was retrieved; click on the grant title for more information.)
Title / Keywords: Collaborative Research: The Role of Gesture in Word Learning
When people talk, they gesture. Gesturing helps speakers to organize their thoughts and words, and also helps listeners to more fully understand the information being conveyed. Instructional techniques have begun to incorporate use of speech-accompanying gestures to facilitate learning. However, it is unclear how and why gestures aid learning. The goal of this research is to investigate whether, and if so how, observing gestures differs from observing actions performed on objects in facilitating young children's word learning. The research employs both behavioral measures of learning and brain imaging techniques to address this issue. The research will ultimately help to optimize learning environments.
This project addresses the hypothesis that gesture's facilitative effect on learning is distinct from the effects of observing actions because gestures are more abstract forms of representation, in that they highlight the important components of an action without being tied to a specific learning context. The project explores the ways in which gestures versus actions facilitate word learning in 4- to 5-year-old children. The research goals include (1) comparing the impact that learning a word through gesture, compared to action, has on the word's learning trajectory; (2) evaluating how well words are generalized and retained over time after they are learned through gesture or action; and (3) exploring the neural mechanisms underlying each of these types of learning experiences. The results will clarify whether and how gesture promotes learning that goes beyond the particular and extends over time.
Matching score: 0.915