We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the
NIH Research Portfolio Online Reporting Tools and the
NSF Award Database.
The grant data on this page is limited to grants awarded in the United States and is therefore incomplete. Even so, it can be used to explore how funding patterns shape mentorship networks and vice versa, a question with significant implications for how research is conducted.
You can help! If you notice any inaccuracies, please
sign in and mark grants as correct or incorrect matches.
Sign in to see low-probability grants and correct any errors in linkage between grants and researchers.
High-probability grants
According to our matching algorithm, Seana Coulson is the likely recipient of the following grants.
Years | Recipients | Code | Title / Keywords | Matching score
1998 | Coulson, Seana M | F32 | Event Related Brain Potentials--Semantics and Pragmatics | 0.922
(F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas.)
Keywords: neural information processing; semantics; evoked potentials; brain electrical activity; syntax; brain interhemispheric activity; behavioral/social science research tag; clinical research; human subject
2009 — 2012 | Coulson, Seana | N/A | Understanding Multi-Modal Discourse: Cognitive Resources and Speech-Gesture Integration @ University of California-San Diego | 1
(Activity Code Description: No activity code was retrieved; click on the grant title for more information.)

This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).

Anyone who has had a phone conversation about how to change the oil in a car or how to make lasagna knows the importance of explanatory gestures for communication. Multimodal discourse involves the use of both visual (gestures) and auditory (speech) information during communication. Scientists still have much to learn about how people combine information in gestures with information in speech and whether people vary in their ability to use gestural information. This project evaluates the importance of working memory for combining linguistic information in speech with gestural information. Participants' brain activity will be recorded as they watch videotapes of a person talking about concrete topics such as the shape of objects, their relative sizes, and other spatial information that is difficult to convey through speech alone. The impact of gestural information will be measured by comparing brain activity recorded when the same person watches videos in which the speaker does gesture with those in which he does not. The importance of the different working memory systems will be assessed by seeing how language comprehension suffers when participants are asked to remember irrelevant words (verbal working memory), dot patterns (visuospatial working memory), and pictures of meaningless body positions (sensorimotor working memory). The impact of differences in learning style and cognitive abilities on gesture comprehension will also be examined.

By discovering the relative importance of verbal, visuospatial, and sensorimotor working memory for understanding face-to-face communication, this research will aid the design of more effective teaching and training methods. The project may guide the development of teaching methods that are specially adapted for people with different learning styles and could help children and adults with communicative deficits by maximizing the effects of gestural information. The project supports an early career scientist and provides summer jobs for college students, including those from minorities under-represented in science.