
Ken Nakayama - US grants
Affiliations: Harvard University, Cambridge, MA, United States | Medical Research Institute of San Francisco | Smith-Kettlewell Eye Research Institute, San Francisco, CA, United States
Area: Vision
Website: http://visionlab.harvard.edu/Members/Ken/nakayama.html

We are testing a new system for linking grants to scientists.
The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
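For readers curious what a grant-to-scientist matching score could look like in practice, the sketch below computes a weighted combination of name and affiliation similarity. It is purely illustrative: the record shapes, weights, and use of difflib are assumptions, not the matching algorithm actually used by this site.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(scientist, grant, name_weight=0.7, affil_weight=0.3):
    """Hypothetical match score: weighted name plus best-affiliation similarity.
    NOT the site's algorithm; only a sketch of the general idea."""
    name_sim = similarity(scientist["name"], grant["pi_name"])
    affil_sim = max(
        (similarity(a, grant["institution"]) for a in scientist["affiliations"]),
        default=0.0,
    )
    return name_weight * name_sim + affil_weight * affil_sim

# Example with made-up record shapes:
scientist = {"name": "Nakayama, Ken",
             "affiliations": ["Harvard University",
                              "Smith-Kettlewell Eye Research Institute"]}
grant = {"pi_name": "Nakayama, Ken", "institution": "Harvard University"}
print(round(match_score(scientist, grant), 2))  # close to 1.0 for a clean match
```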
High-probability grants
According to our matching algorithm, Ken Nakayama is the likely recipient of the following grants. Each entry lists the award years, recipients, activity code, title and institution, and matching score.
1985 — 1991 | Nakayama, Ken | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.)
Visual Image Motion Processing in Humans @ Smith-Kettlewell Eye Research Institute
Matching score: 1
1988 | Nakayama, Ken | S10 (Activity Code Description: To make available to institutions with a high concentration of NIH extramural research awards, research instruments which will be used on a shared basis.)
@ Smith-Kettlewell Eye Research Institute
The Smith-Kettlewell Eye Research Foundation is a center for research in vision and oculomotility, currently awarded $3.2M in PHS grants (recommended and committed). These grants fund studies of eye movements in humans and monkeys, and studies of human vision that could be improved or extended with eye movement monitoring. However, except for a scleral search coil system (the coil is only wearable for short periods, requires an available physician, and is not tolerated by some subjects), our Foundation has no facilities for accurate 2-dimensional recording of human eye movements. Indeed, apart from the Purkinje-image Eyetracker, no non-contacting 2-D eye position recording method with reasonable spatial and temporal resolution is available. The Eyetracker Experimental Station would provide a centralized facility for 2-dimensional, binocular human eye position monitoring, eye-position-contingent display generation, and data collection, analysis, storage, and distribution. The monitoring, computation, and display system would have 1 min arc spatial resolution and 1 msec synchronous temporal resolution over a 24 deg by 24 deg binocular field. Powerful data analysis would be supported, as well as data transmission to other systems. The system would consist of: an SRI binocular Double Purkinje-image Eyetracker; high-resolution, fast-phosphor vector display scopes; a Masscomp MC-5500 multiprocessor lab computer with RTR experiment control software (both supplied by SKERF); a Masscomp graphics terminal with data analysis and display software; and Ethernet links to three remote systems. Projects scheduled to use the system include: eye movements and velocity discrimination; vergence and conjunctive search; visual motion processing and smooth pursuit; pursuit plasticity; naso-temporal asymmetries in motion processing; nystagmus and contrast sensitivity; convergence and stereo matching; and binocular saccadic plasticity.
Matching score: 1
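The phrase "eye-position-contingent display generation" in the abstract above refers to updating the display on each frame based on the most recent measured gaze position. The following schematic sketch illustrates the idea; the function names and driver call are hypothetical placeholders, not the actual Eyetracker Experimental Station software.

```python
import time

FRAME_MS = 1.0     # update budget, echoing the 1 msec temporal resolution cited above
FIELD_DEG = 24.0   # 24 deg x 24 deg usable binocular field cited above

def read_gaze():
    """Placeholder for a call into an eyetracker driver (hypothetical API)."""
    return 0.0, 0.0  # (x_deg, y_deg) gaze position in degrees

def draw_stimulus(x_deg, y_deg):
    """Placeholder: redraw the display contingent on the current gaze position."""
    pass

def gaze_contingent_loop(duration_s=1.0):
    """Schematic loop: sample gaze, clamp to the usable field, redraw."""
    half = FIELD_DEG / 2
    t_end = time.time() + duration_s
    while time.time() < t_end:
        x, y = read_gaze()
        x = max(-half, min(half, x))   # keep the stimulus inside the field
        y = max(-half, min(half, y))
        draw_stimulus(x, y)
        time.sleep(FRAME_MS / 1000.0)  # a real system would use hardware timing
```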
1998 — 1999 | Nakayama, Ken | R03 (Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable.)
Visual Attention Shifts Caused by Direction of Gaze Cues @ Harvard University
DESCRIPTION (Adapted from applicant's description): In the normal course of development, around the end of the first year of life, human infants respond to a change in an adult's alignment of gaze by turning in the same direction, as if to see where the adult is looking. This phenomenon, known as joint attention, is believed to play an important role in referential communication. Deficits in joint attention have also been associated with autism. Using computerized displays of real faces, the investigators have demonstrated in pilot work that infants as young as 3 months are sensitive to eye gaze and that this influences their orienting behavior to peripheral probes. This proposal follows up on the pilot work with a set of experiments to determine why orienting to eye gaze is so elusive in young infants, what effect dissociating eye and head movements has, and what contextual role the face plays in gaze monitoring.
Matching score: 1
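The orienting effect described in this abstract is typically quantified by comparing responses to peripheral probes on the gazed-at (congruent) side with responses on the opposite (incongruent) side. A minimal worked example of that comparison, using invented latencies rather than any data from the project:

```python
# Hypothetical orienting latencies (ms) to peripheral probes, by cue condition.
congruent = [412, 398, 430, 405, 420]     # probe on the side the face looked toward
incongruent = [455, 462, 440, 470, 451]   # probe on the opposite side

mean = lambda xs: sum(xs) / len(xs)
cueing_effect = mean(incongruent) - mean(congruent)
print(f"Gaze-cueing effect: {cueing_effect:.1f} ms")  # positive = faster orienting to the cued side
```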
2001 — 2012 | Nakayama, Ken | R01 (Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.)
Visual Psychophysics of Human Face Processing @ Harvard University
DESCRIPTION (provided by the applicant): It has been assumed in the study of vision that there are distinct sub-systems or modules. In the field of visual object recognition this is most clearly evident in the question: Are faces special? This can be divided into two research questions. (1) Does the processing of faces operate according to the same or different computational principles as the processing of other objects? (2) Are there separate anatomical regions in the visual system for the processing of facial images, distinct from those for the processing of other visual objects? To address these fundamental issues, we turn to visual psychophysics, which has proven itself both as a method to isolate functional subsystems and as a way to link such systems to underlying neural substrates. We rely on two new procedures to evaluate holistic processing: spatial summation and categorical perception (CP) noise. We will use psychophysical procedures to investigate the range and scope of holistic processing in the recognition of personal identity and facial emotion. We will examine holistic processing under a range of stimulus conditions to better determine its characteristics, including varying spatial frequency and fragmenting the facial image. We will also determine whether holistic processing is present in highly practiced subordinate recognition tasks. We will also delineate the functional anatomy of face processing using fMRI, relying on variants of the face inversion effect to more clearly identify those parts of the brain that participate in face-specific processes, and we ask whether these areas coincide with putative face and object areas. We will also use event-related fMRI to examine the co-localization of detection- and recognition-specific processes with face and object areas. We will also study face detection, developing a measure of holistic processing, thus enabling a comparison of holistic processing in both recognition and detection. We will look for a dissociation of face detection and recognition processes in prosopagnosic patients and for evidence of a subcortical component of face detection in blindsight patients.
Matching score: 1
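The face inversion effect mentioned in the abstract is simply a disproportionate performance cost when faces are turned upside down, compared with the cost for other objects. A tiny worked example with invented accuracy values (not data from the project):

```python
# Hypothetical recognition accuracies (proportion correct).
upright_faces, inverted_faces = 0.92, 0.71
upright_objects, inverted_objects = 0.88, 0.84

face_inversion_cost = upright_faces - inverted_faces        # 0.21
object_inversion_cost = upright_objects - inverted_objects  # 0.04

# A much larger cost for faces than for objects is taken as a signature of
# holistic, face-specific processing.
print(round(face_inversion_cost, 2), round(object_inversion_cost, 2))
```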
2004 — 2008 | Nakayama, Ken | N/A (Activity Code Description: No activity code was retrieved; click on the grant title for more information.)
Collaborative Proposal: HSD-DHB-MOD: The Grammars of Human Behavior @ Harvard University
The PIs will undertake an experimental study and computational modeling of the internal representations and associated processes that underlie action perception and understanding by observers, and action planning and execution by actors. To facilitate both careful experimentation and formal theory, the PIs will approach the behavior representation problem primarily through the visual system, asking: how do we understand the actions of others using our vision? That is, how do we perform mappings from image sequences depicting simple actions to the corresponding internal representations that allow action recognition, imitation, and so on? The PIs will further explore higher-level cognitive representations and mechanisms used to categorize, reason about, and judge the movements and actions of others. The approach is based on a novel formal theory of the mental representations and processes subserving action understanding and planning, which the PIs believe provides a compact but powerful and extensible computational approach to the analysis and synthesis of complex actions (and action sequences) based on a very small set of atomic postural elements ("key frames" or "anchors") and corresponding probabilistic, grammatical rules for their combination. This probabilistic "pose grammar" approach to action representation is similar to state-of-the-art techniques used for speech recognition (e.g., hidden Markov models), but with key postural silhouettes taking the place of phonemes; such augmented transition grammars also nicely reflect sophisticated new control-theoretic techniques in robotics for robust anthropomorphic movement. The action representational system is not monolithic, but rather occupies a spectrum of informational structures at hierarchical levels corresponding to different behavior "spaces": mechatronic space, used in movement planning and production; cognitive space, involving representations for action recognition, analysis, and evaluation; visual motion space, which encodes and organizes visual motion caused by human action; and linguistic motion space, comprising conceptual/symbolic action encoding. Excluding the latter space, the PIs' theoretic, computational, and experimental efforts seek to clarify and formally describe both the nature of the representations in these spaces and, crucially, the mapping of representations across spaces. Notably, they explore a candidate action representation, referred to as a visuo-motor representation, which, in facilitating the understanding of observed actions, may recapitulate and resonate with the actual motor representations used to generate movement. Moreover, they present a promising approach for obtaining this representation from discrete action elements or anchors.
Matching score: 1
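The abstract likens the proposed "pose grammar" to hidden Markov models from speech recognition, with key postural silhouettes playing the role of phonemes. The sketch below illustrates that analogy with a toy Markov model over invented pose symbols; the pose alphabet, transition probabilities, and action names are assumptions for illustration only, not the PIs' actual model.

```python
import math

# A toy sequence of recognized "key poses" extracted from an image sequence.
POSES = ["stand", "crouch", "extend", "stand"]

# Two toy action models: transition probabilities between key poses.
ACTIONS = {
    "pick_up": {("stand", "crouch"): 0.6, ("crouch", "extend"): 0.7, ("extend", "stand"): 0.8},
    "wave":    {("stand", "extend"): 0.7, ("extend", "stand"): 0.7, ("stand", "crouch"): 0.1},
}

def log_score(pose_seq, transitions, floor=1e-3):
    """Log-probability of a key-pose sequence under a simple Markov action model."""
    return sum(math.log(transitions.get(pair, floor))
               for pair in zip(pose_seq, pose_seq[1:]))

best = max(ACTIONS, key=lambda name: log_score(POSES, ACTIONS[name]))
print(best)  # -> "pick_up": the model whose grammar best explains the pose sequence
```

A full pose grammar would add emission probabilities (mapping observed silhouettes to pose symbols) and hierarchical rules for composing actions into sequences, but the scoring idea is the same.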
2009 — 2012 | Nakayama, Ken; Pepperberg, Irene | N/A (Activity Code Description: No activity code was retrieved; click on the grant title for more information.)
Comparative Vision and Attention @ Harvard University
Many researchers study how nonhuman animals see the world. To date, however, only certain types of comparative studies have been possible, given the limitations of animal learning and response capabilities. With funding from the National Science Foundation, Drs. Nakayama and Pepperberg at Harvard and Brandeis Universities will address questions about visual processing in Grey parrots. Taking advantage of Grey parrots' ability to mimic human speech, Pepperberg was able to train a Grey parrot to verbally respond to simple optical illusions (e.g., the Müller-Lyer illusion, in which two lines appear to humans to vary in length but in reality do not). The parrot's responses indicated that it also perceived the illusions. Drs. Nakayama and Pepperberg will train additional birds to label various colors and shapes using the sounds of English speech. The current project will then examine whether parrots, like people, can (a) complete the shape of a partially covered object (e.g., see a square partially occluded by a circle as still being a square), and (b) "see" objects that aren't actually there, like the triangle that seems to appear (to humans and primates) between three pac-man-like partial circles arranged in a triangular manner, something formally known as an "illusory contour" or "Kanizsa figure." One might expect a parrot to be able, for example, to infer the presence of a predator that isn't fully observable, but no one has been able to ask any nonhuman such questions directly. Future research will involve more complex tasks designed to study how birds pay attention to objects in their visual environment.
Matching score: 1
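For readers unfamiliar with the "Kanizsa figure" referred to above, the sketch below draws the classic three-pac-man arrangement that produces an illusory triangle. It is only a stimulus illustration (using matplotlib), not part of the training software used with the parrots.

```python
import math
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

# Vertices of an equilateral triangle centered on the origin; each "pac-man"
# disc has a 60-degree notch facing the centroid, which induces the illusory triangle.
verts = [(0.0, 1.0), (-0.87, -0.5), (0.87, -0.5)]
fig, ax = plt.subplots(figsize=(4, 4))

for x, y in verts:
    toward_center = math.degrees(math.atan2(-y, -x))  # angle pointing at (0, 0)
    ax.add_patch(Wedge((x, y), 0.3,                    # disc of radius 0.3...
                       toward_center + 30,             # ...spanning 300 degrees,
                       toward_center - 30 + 360,       # leaving a 60-degree mouth
                       facecolor="black"))

ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```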
2014 — 2017 | Nakayama, Ken; Cox, David | N/A (Activity Code Description: No activity code was retrieved; click on the grant title for more information.)
RI: Medium: Deep Annotation: Measuring Human Vision to Improve Machine Vision @ Harvard University
Machine learning is the science of designing computational systems that can learn from data, much as humans do. However, while many machine learning approaches rely on humans to provide labels for training examples, human-provided labels represent just a tiny fraction of the information that can be gleaned from humans. This project brings together a multidisciplinary team with expertise spanning computer science, neuroscience, and psychology to pioneer a new paradigm in machine learning that seeks to better mimic human performance by incorporating new kinds of information about human behavior.
Matching score: 1
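One concrete (and purely hypothetical) reading of "incorporating new kinds of information about human behavior" is to weight training examples by an auxiliary human signal, such as annotator confidence or response speed, rather than treating every label equally. The sketch below shows that idea with scikit-learn's sample weighting; the data, the confidence signal, and the choice of model are assumptions, not the project's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 2-D features with a linearly separable binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical behavioral signal: per-example labeling confidence in (0, 1],
# e.g. derived from reaction times or agreement across annotators.
confidence = rng.uniform(0.2, 1.0, size=len(y))

plain = LogisticRegression().fit(X, y)                                # every label counts equally
weighted = LogisticRegression().fit(X, y, sample_weight=confidence)   # behavior-aware weighting

print(plain.score(X, y), weighted.score(X, y))
```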
2016 — 2020 | Nakayama, Ken; Gajos, Krzysztof (co-PI); Enos, Ryan (co-PI); Li, Na | N/A (Activity Code Description: No activity code was retrieved; click on the grant title for more information.)
@ Harvard University
STEM education provides both technical training and the development of cognitive skills, such as designing experiments, testing hypotheses, and analyzing data. While traditional STEM training is essential for developing a highly skilled technical workforce, the cognitive skills developed through this training are beneficial in almost every type of career. To provide cognitive-skills training to undergraduates in psychology, who typically do not receive this type of education, this project will develop a computer program, named TELLab, that allows psychology students to design experiments and gather data using the internet. Using this program, students will have the opportunity to experience firsthand the challenges of doing science, learn skills and concepts, and, most importantly, formulate and solve problems of personal interest to them.
Matching score: 1
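As a schematic analogue of the experiment-building exercise described above (the abstract describes an internet-based tool; the file name and stimulus set here are invented for illustration), a student-built experiment boils down to a randomized trial list plus logged responses:

```python
import csv
import random
import time

# Hypothetical stimulus conditions for a simple recognition experiment.
conditions = ["upright_face", "inverted_face"] * 10
random.shuffle(conditions)

def present_and_collect(condition):
    """Placeholder for stimulus presentation and response collection;
    in a web-based tool this would happen client-side in the browser."""
    start = time.time()
    response = random.choice(["same", "different"])  # stand-in for a key press
    return response, (time.time() - start) * 1000    # reaction time in ms

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["trial", "condition", "response", "rt_ms"])
    for i, cond in enumerate(conditions, start=1):
        resp, rt = present_and_collect(cond)
        writer.writerow([i, cond, resp, round(rt, 1)])
```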