2006 — 2012 | Ghazanfar, Asif
CAREER: The Neuro-Cognitive Evolution of Speech-Reading
With a CAREER award from the National Science Foundation, Dr. Asif Ghazanfar at Princeton University will further develop a primate model system to investigate the neural bases for integrating communication signals across sensory modalities. Previous work from his group and others suggests that many perceptual processes related to social communication in monkeys are similar to those exhibited by human infants and adults. Like humans, macaque monkeys adopt unique facial expressions when producing different vocal signals, and they can perceptually match the appropriate facial expression to a vocalization. The eye movement patterns that monkeys use to process these "multisensory" social inputs are also similar to those used by human adults and children when they view faces producing speech.
Building upon these findings, the major aim of this project will be to understand the role that brain areas in the macaque temporal lobe play in integrating faces and voices. Specifically, Dr. Ghazanfar's team will investigate how dynamic facial expressions are integrated with vocal expressions in the auditory cortex and high-level visual cortex. By examining the roles of facial postures and dynamics, eye movements and social experience, they hope to uncover principles of visual-auditory neuronal interactions related to social cognition.
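One field-standard way to make "visual-auditory neuronal interaction" concrete (a conventional measure from the multisensory literature, not a method stated in this abstract) is a multisensory enhancement index: the percent change of a neuron's audiovisual response relative to its best unisensory response. A minimal Python sketch with illustrative firing rates:

def enhancement_index(av_rate, a_rate, v_rate):
    """Percent change of the audiovisual (face + voice) response relative
    to the best unisensory response: positive values indicate multisensory
    enhancement, negative values indicate suppression."""
    best_unisensory = max(a_rate, v_rate)
    return 100.0 * (av_rate - best_unisensory) / best_unisensory

# Illustrative example: a neuron firing at 12 spikes/s to a face + voice
# pairing, 8 spikes/s to the voice alone and 5 spikes/s to the face alone.
print(enhancement_index(av_rate=12.0, a_rate=8.0, v_rate=5.0))  # -> 50.0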
In addition to providing new insights into normal communication processes, this research could help us better understand disabling abnormalities in the development of social skills. Although temporal lobe dysfunction in humans contributes to a variety of debilitating communication disorders, the underlying neural mechanisms remain relatively unexplored by neurobiologists. Autistic children, for example, fail to develop skills related to social signal processing. The hallmark of autism is an inability to behave in a socially appropriate manner: people with autism do not process the sensory cues necessary for normal social interactions with other individuals. In both the auditory and visual domains, autistic children have great difficulty interpreting facial and vocal signals and fail to properly integrate the two modalities. This deficit is a specific impairment of face and voice processing and does not extend to other types of visual or auditory signals. The goals of this research thus have direct relevance to understanding the neurobiology of communication disorders in general and autism in particular.
2007 — 2017 | Ghazanfar, Asif | R01
Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Multisensory Integration of Faces and Voices in the Primate Temporal Lobe
DESCRIPTION (provided by applicant): The major aim of our research is to understand how the dynamic visual and auditory components of vocal expressions (e.g., speech) are combined behaviorally and physiologically to enhance communication. Consider holding a conversation among a group of individuals at a party, surrounded by the sounds of voices, laughter and music. In this mixture of sounds, your brain is confronted with the problem of deftly detecting when a person is saying something and discriminating what she is saying. To make this task easier, the brain does not rely entirely on the person's voice, but also takes advantage of the movement of her face while she speaks. The motions of the mouth provide spatial and temporal cues, and these multidimensional cues enhance the detection and discrimination of voices. The focus of our work will be on what role the auditory cortex plays in integrating faces and voices, and how its role may differ from that of more traditional association areas such as the superior temporal sulcus.

We have four main hypotheses. First, we hypothesize that the magnitude of the behavioral advantage, in terms of multisensory benefits on reaction times, will relate to the response magnitude and response latency of auditory cortical neurons. To address this, we will record from the lateral belt auditory cortex during the performance of an audiovisual vocal detection task in noise. Second, we predict that the auditory cortex will show a rhythm preference for normal speech relative to slowed or sped-up speech, and that this preference will manifest as greater spiking output, greater spike-speech phase locking, or both. Third, we hypothesize that the role of this rhythm is to chunk the auditory signal into manageable units, allowing further, more efficient processing of the fine structure of vocalizations. We will then test the possibility that a rhythmic visual signal can compensate for disruptions in the rhythmicity of the auditory component of vocalizations; we will test this both behaviorally and at the level of auditory cortical signals. Fourth, we hypothesize that processes occurring in the superior temporal sulcus during the same detection and discrimination tasks will differ from those occurring in the auditory cortex, primarily because the superior temporal sulcus receives supra-threshold inputs from both the auditory and visual modalities, whereas the auditory cortex receives only a modulatory, sub-threshold influence from the visual modality.

Our work has the potential to illuminate the neurophysiological mechanisms that go awry in a number of communication disorders. First, relative to typically-developing children, children with autism spectrum disorders exhibit impaired neural processing and impaired detection of audiovisual speech in noisy backgrounds. Second, a recent theory of dyslexia suggests that dyslexics are impaired at linking phonological sounds with vision. Third, relative to typical individuals, schizophrenic patients are particularly impaired at discriminating audiovisual versus auditory-only speech in noisy backgrounds. One likely substrate for these impairments is the temporal lobe, where faces and voices are first combined neurophysiologically.
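As an illustration of the spike-speech phase locking invoked in the second hypothesis, the sketch below computes a standard vector-strength measure: each spike is assigned the instantaneous phase of the vocalization's slow amplitude envelope, and locking is the length of the mean resultant vector. The 3-8 Hz band, the sampling rate and all names are illustrative assumptions, not specifics from the grant.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_envelope_phase_locking(spike_times, audio, fs, band=(3.0, 8.0)):
    """Vector strength of spikes relative to the phase of the band-passed
    amplitude envelope of a vocalization. spike_times: seconds;
    audio: 1-D waveform; fs: sampling rate in Hz."""
    # Amplitude envelope of the sound, then isolate its slow rhythm.
    envelope = np.abs(hilbert(audio))
    b, a = butter(2, band, btype="bandpass", fs=fs)
    slow = filtfilt(b, a, envelope)
    # Instantaneous phase of the slow envelope.
    phase = np.angle(hilbert(slow))
    # Envelope phase at each spike time (nearest sample).
    idx = np.clip((np.asarray(spike_times) * fs).astype(int), 0, len(audio) - 1)
    spike_phases = phase[idx]
    # Vector strength: 1 = perfect locking, near 0 = no locking.
    return np.abs(np.mean(np.exp(1j * spike_phases)))

# Illustrative use: 1 s of white noise as a stand-in vocalization and random
# spike times; real data would be a recorded call and a recorded spike train.
fs = 24414
audio = np.random.randn(fs)
spikes = np.sort(np.random.uniform(0.0, 1.0, 50))
print(spike_envelope_phase_locking(spikes, audio, fs))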
2018 — 2021 | Ghazanfar, Asif | R01
Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
The Social Neurobiology of Vocal Production and Perception
PROJECT SUMMARY

Vocal communication works not just through a shared understanding of the semantics and syntax of a common language, but also through the temporal coordination of behavior between individuals. This temporal coordination emerges spontaneously in any given conversation and is known as vocal turn-taking. Given the central importance of vocal turn-taking in everyday human social interactions, it is natural to ask what its neural bases are. Understanding these bases may provide insight not only into the basic mechanisms of a ubiquitous form of social interaction but also into the mechanisms that go awry in disorders that include social dysfunction. In several disorders, such as Parkinson's disease and autism, there are problems with vocal turn-taking that ultimately may lead to a lack of social connectedness. Before we can develop a framework for investigating the neural causes of the behavioral impairments that limit patients' capacities to become communicatively engaged with other individuals, we need basic knowledge of the neural mechanisms of vocal production and perception, and of their coordination during turn-taking.

We will quantitatively characterize turn-taking behavior in an animal model system and investigate its neurobiology. Specifically, we will investigate the roles played during vocal production, perception and real-time turn-taking by a medial frontal cortical structure, the anterior cingulate cortex (ACC), and by a subcortical network known as the "social behavior network" (SBN). A handful of studies have established the ACC's important role in vocal production, but none has investigated its role in vocal perception. The SBN is a set of interconnected subcortical areas, common across all vertebrates, that regulates a range of social behaviors (feeding, aggression, reproduction and parental care); it includes structures such as the basal ganglia, amygdala, periaqueductal gray and hypothalamus, among other regions. Its putative role in communication is unknown, but many of its nodes overlap with those involved in vocal production. Our aims are to 1) develop a computational model of vocal turn-taking to generate specific neural hypotheses; 2) use new imaging technology to investigate the large-scale network underlying vocal production and perception; and 3) use microstimulation and electrophysiology to directly test hypotheses gleaned from the model and the imaging data.
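The abstract does not say what form the Aim 1 computational model will take; published models of primate vocal turn-taking have used coupled oscillators, and the following is a minimal sketch under that assumption: two callers modeled as phase oscillators with repulsive coupling, which drives them toward antiphase so their simulated calls alternate. All parameter values are arbitrary illustrations.

import numpy as np

def simulate_turn_taking(duration=200.0, dt=0.01, rates=(0.10, 0.12), k=0.5):
    """Two callers as phase oscillators that emit a 'call' each time their
    phase wraps past 2*pi. Repulsive (anti-phase) coupling pushes the two
    oscillators half a cycle apart, so the calls alternate -- a minimal
    stand-in for vocal turn-taking. rates: intrinsic calling rates (Hz);
    k: coupling strength (rad/s)."""
    omega = 2 * np.pi * np.asarray(rates)   # intrinsic angular frequencies
    theta = np.array([0.0, 1.0])            # arbitrary initial phases
    call_times = ([], [])
    for step in range(int(duration / dt)):
        # Repulsive Kuramoto coupling: the minus sign makes the anti-phase
        # (turn-taking) configuration the stable one.
        dtheta = omega - k * np.sin(theta[::-1] - theta)
        new_theta = theta + dt * dtheta
        for i in range(2):
            if new_theta[i] >= 2 * np.pi:   # phase wrapped: emit a call
                call_times[i].append(step * dt)
                new_theta[i] -= 2 * np.pi
        theta = new_theta
    return call_times

calls_a, calls_b = simulate_turn_taking()
# After a brief transient the two call trains interleave (A, B, A, B, ...).
print([round(t, 1) for t in calls_a][:5])
print([round(t, 1) for t in calls_b][:5])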