Face perception is crucial for normal social interaction, and human observers excel at rapidly and reliably categorizing visual patterns as faces or non-faces. Even subtle impairments of this ability can have devastating consequences, as seen in autism and other developmental disorders. Although the neural mechanisms underlying face perception have been a major focus of primate electrophysiology and brain-imaging research, the computational mechanisms by which the brain processes faces remain far from clear. With support from the National Science Foundation, Dr. Ming Meng of Dartmouth College is addressing this question by synthesizing techniques from psychophysics, human brain imaging, statistical data mining, and computer vision.

First, this project is measuring brain activation in response to image sets compiled using computer vision. These images vary in image-level facial similarity, ranging from non-faces to genuine faces. Computer vision systems may falsely categorize many of the face-like non-faces as faces; by contrast, categorical perception enables human observers to make unambiguous perceptual judgments about whether these images are faces or non-faces.

Second, to understand the face-processing neural network, it is more important to understand causal relationships among brain regions than the average response activation of each region in isolation. This project is applying state-of-the-art data mining techniques to provide the basis for dynamic causal modeling of the direct and indirect relationships among factors such as low-level visual features, face semblance, and face/non-face categorization, linking neural processing stages from primary visual analysis to perceptual decision.
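The contrast between a graded, image-level similarity signal and an all-or-none categorical judgment can be illustrated with a toy model. This is only a minimal sketch: the similarity scores, threshold, and sigmoid steepness below are illustrative assumptions, not values or methods from the project.

```python
import math

def categorical_judgment(similarity, threshold=0.5, steepness=20.0):
    """Map a continuous face-similarity score (0..1) to a categorical
    face/non-face decision via a steep sigmoid, mimicking the sharp
    perceptual boundary characteristic of categorical perception."""
    p_face = 1.0 / (1.0 + math.exp(-steepness * (similarity - threshold)))
    return ("face" if p_face > 0.5 else "non-face"), p_face

# Stimuli spanning the non-face-to-face continuum (hypothetical scores
# that a computer-vision similarity metric might assign).
for s in [0.10, 0.45, 0.55, 0.90]:
    label, p = categorical_judgment(s)
    print(f"similarity={s:.2f} -> {label} (p={p:.3f})")
```

In this sketch, the steep sigmoid compresses a continuum of similarity values into two sharply separated response classes, which is one simple way to formalize how observers can report unambiguous face/non-face judgments despite graded input.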
Finally, to determine bottom-up versus top-down modulation of categorical face perception, sophisticated psychophysical paradigms are being used to investigate potential interactions between visual awareness and the neural correlates of categorical face analysis. Through this approach, stimulus-driven, feed-forward models of face processing that are assumed to be independent of visual awareness can be compared with models that involve cognitive feedback and visual awareness.
The results of this project are expected to lead to a better understanding of how the human brain processes visual information in the context of face perception, a domain that provides one of the most compelling examples of sensory organization. Understanding how the human visual system analyzes faces may ultimately inform the design of artificial face-recognition systems. Moreover, the interplay between continuous, image-level facial-similarity analysis and binary categorical judgment underlies the crucial social ability to rapidly recognize a face. Teasing these two analyses apart may help characterize the neural pathology of developmental disorders that involve face-perception deficits. Based on the results, cognitive therapies could be strategically designed to target such deficits and thereby help children and adults with these disorders.