2009 — 2010
Huber, David Ernest
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally for preliminary short-term projects and are non-renewable.
A Stochastic Judgment Model of Recall: Separating Measurement, Memory and Correla @ University of California San Diego
DESCRIPTION (provided by applicant): This competitive revision application is submitted in response to notice NOT-OD-10-032, titled "NIH Announces the Availability of Recovery Act Funds for Competitive Revision Applications (R01, R03, R15, R21, R21/R33, and R37) through the NIH Basic Behavioral and Social Science Opportunity Network (OppNet)". The parent R03 developed a new analysis tool for relating confidence judgments (e.g., "I'm certain I was correct") to memory responses (e.g., "I believe I saw that face before"). Because confidence judgments are given separately from memory responses, there may be 1) measurement error in the confidence judgment; 2) measurement error in the memory response; or 3) these two behavioral responses might be based on different kinds of information. The analysis tool uses patterns of data across two or more conditions to separately quantify these three mechanisms, each of which can lead to confidence judgments that appear to be inaccurate. This revision seeks to advance this analysis tool through its application to confidence judgments of perceptual responses (e.g., "that's the face I just got a brief glimpse of"). A preliminary study suggests that confidence judgments may act in a fundamentally different manner for perception as compared to memory. As expected under the assumption that confidence judgments introduce an additional source of measurement error, confidence judgments of memory were found to be less accurate than suggested by forced-choice memory decisions. However, the opposite was found for confidence judgments of perception. In its original form, the analysis tool cannot account for this pattern of data. Instead, these results suggest that the measurement error associated with perceptual confidence is coupled to the error of perceptual information. Besides validating this extension of the analysis tool through simulation studies, three behavioral experiments will test predictions of this account.
These experiments will investigate the interplay between memory and perception as it relates to confidence judgments. In particular, these experiments will ascertain whether confidence judgments of perception use long-term memorial information (e.g., "that's a familiar face") to optimally adjust the reported level of confidence.

PUBLIC HEALTH RELEVANCE: Perceptual disorders are often measured through the inability to rapidly identify visual objects such as faces or words, but these deficits may reflect a change in the certainty associated with potential responses rather than a deficit in the underlying perceptual information. By collecting confidence judgments as well as perceptual identification responses, the proposed work will develop a set of measurement tools that can differentiate between the possible sources of perceptual deficits. These measurement tools will be relevant to differential diagnoses of perceptual disorders as well as clinical therapies designed to alleviate perceptual difficulty.
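The core idea of a separate measurement-error stage for confidence can be illustrated with a toy simulation: the same latent memory evidence drives both responses, but the confidence judgment is read out through a second, noisy stage. This is only a minimal sketch under assumed Gaussian evidence and an assumed amount of judgment-stage noise, not the fitted model from the parent R03.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
d_prime = 1.0          # separation of old vs. new evidence (assumed)
judgment_noise = 0.8   # extra noise at the confidence stage (assumed)

old = rng.normal(d_prime, 1.0, n)   # evidence for studied items
new = rng.normal(0.0, 1.0, n)       # evidence for unstudied items

# Forced-choice memory decision: compare the latent evidence directly
forced_choice_acc = np.mean(old > new)

# Confidence judgment: the same evidence plus judgment-stage noise
old_conf = old + rng.normal(0.0, judgment_noise, n)
new_conf = new + rng.normal(0.0, judgment_noise, n)
confidence_acc = np.mean(old_conf > new_conf)

print(forced_choice_acc, confidence_acc)
```

With these assumed parameters, accuracy inferred from the confidence readout comes out lower than forced-choice accuracy, matching the memory pattern described above; the perceptual finding (the opposite ordering) is what this simple independent-noise version cannot produce.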
2009 — 2012
Huber, David |
N/A Activity Code Description: No activity code was retrieved for this award.
Collaborative Research: Modeling Perception and Memory: Studies in Priming @ University of California-San Diego
David E. Huber, University of California-San Diego; Richard M. Shiffrin, Indiana University
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
It is said that "seeing is believing", and we take it for granted that vision operates efficiently and accurately. This suggests that vision is easy. However, failed attempts at producing computer vision demonstrate exactly the opposite: vision is perhaps the most difficult operation performed by the brain, requiring one third of the neocortex. The NSF-funded research project being conducted by David Huber at the University of California, San Diego and Richard Shiffrin at Indiana University focuses on an important question in visual perception: How is it that we can keep separate what we are currently viewing from that which came immediately before? In truth, vision is constantly "blurring" together information over time, such as when viewing the smooth motion at the cinema that is produced by a sequence of still images shown in rapid succession. However, while reading, our eyes constantly move from one word to the next, and yet unlike a movie, we see each word separately and do not confuse it with the previous words. To accomplish this, the brain must have a trick for deciding when the previous image should be combined with the next image and when each should be kept separate. Huber and Shiffrin hypothesize that the process of identifying each word or each movie image causes it to be suppressed so as to reduce inappropriate blending with the next word or image. In the case of a movie, the images appear too briefly, and the blending produces apparent movement. In the case of reading, our eyes dwell on each word exactly the right amount of time to fully identify and suppress each word so as to reduce confusion with the next word. Huber and Shiffrin investigate this ability to separate visual images in a variety of tasks, including reading, face identification, and rapid detection of change, to name just a few examples. If their hypothesis is correct, manipulating the timing of stimuli should produce analogous behavioral effects in all of these situations.
Beyond laboratory studies, this hypothesis may also improve computer vision systems in situations requiring rapid identification. For instance, computer controlled cameras at the airport might be used to identify faces of suspects, but this requires separating one face from another when there is a crowd of faces moving quickly past the camera. The results of this research may also be relevant to disorders such as autism, schizophrenia, and dyslexia, which often involve a component of distorted or abnormal perception. For instance, one account of dyslexia suggests that reading difficulties arise from an inappropriate blending of letters and words. Understanding the manner in which the brain separates visual information over time may help with the diagnosis, interpretation, and treatment of these perceptual deficits.
The human perceptual system receives a constant stream of continually changing information. For example, the eyes move several times each second, providing different views of different objects or words. This project investigates the dynamic process of separating in time and space information pertaining to previous sources (e.g., a previously viewed word) from information pertaining to the current source (e.g., the currently viewed word). Behavioral studies will address the process of discounting that serves to reduce perceptual separation errors due to source confusion. This discounting process can be understood at multiple levels of description, and the proposed experiments test complementary and related mathematical models at the causal and neural levels of analysis. Two causal models use Bayesian statistical techniques and focus on optimizing perception in a noisy world perceived with a limited-capacity processing system; discounting is implemented as "explaining away" between competing sources. The neural model implements discounting through habituation that arises with the transient depletion of synaptic resources. In combination, these models demonstrate why perceptual discounting exists and the particular manner in which it is implemented. A wide variety of experimental paradigms involve the rapid presentation of visual objects and the proposed studies use these models to investigate whether perceptual source confusion and discounting may provide a unified account of these phenomena. Besides visual short-term priming with words, the proposed studies examine the popular perceptual and cognitive paradigms of repetition blindness, flanker effects, the attentional blink, negative priming, semantic satiation, and affective priming. All of these paradigms involve presenting a picture, word, or symbol on a computer screen followed by a second presentation that is either identical, positively related, or negatively related to the first presentation.
An important goal of this endeavor is to provide a unified account of these perceptual phenomena that are currently considered in isolation by researchers.
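The habituation-through-depletion mechanism described above can be sketched in a few lines: a unit's output is its input activity scaled by a pool of synaptic resources that depletes with firing and slowly recovers. The rates below are illustrative assumptions, not values from the published model.

```python
# One-unit sketch of habituation via transient synaptic resource depletion.
# Rates are assumed for illustration only.
dt = 1.0
depletion_rate = 0.05
recovery_rate = 0.01

resources = 1.0
outputs = []
for t in range(200):
    activity = 1.0 if t < 150 else 0.0   # stimulus on for 150 steps, then off
    output = activity * resources        # postsynaptic drive = activity x resources
    resources += dt * (recovery_rate * (1.0 - resources)
                       - depletion_rate * output)
    outputs.append(output)

# The response starts strong, then habituates toward a lower steady state,
# limiting how much a prolonged input blends with what follows.
print(outputs[0], round(outputs[149], 3))
```

A prolonged stimulus therefore becomes self-limiting: the initial response is full strength, but sustained input drives the output down, which is the property the models use to keep a previous image from blending into the next one.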
2017
Cowell, Rosemary Alice; Huber, David Ernest
RF1 Activity Code Description: To support a discrete, specific, circumscribed project to be performed by the named investigator(s) in an area representing specific interest and competencies based on the mission of the agency, using standard peer review criteria. This is the multi-year funded equivalent of the R01 but can be used also for multi-year funding of other research project grants such as R03, R21 as appropriate.
Using fMRI to Measure the Neural-Level Signals Underlying Population-Level Responses @ University of Massachusetts Amherst
Project Summary: The goal of this proposal is to advance our ability to accurately infer the properties of neural-level responses from the more coarse-grained information obtained with non-invasive imaging in humans. To achieve this goal, the project will capitalize on feature-selective cortical responses. For example, many neurons in visual cortex exhibit a tuning function such as a response profile in which firing rate is greatest for one orientation of a line, and falls off for orientations progressively less similar to that orientation. Promising new methods for analyzing functional Magnetic Resonance Imaging (fMRI) data reveal analogous feature-tuning in the blood oxygenation level-dependent (BOLD) signal. Because these voxel-level tuning functions (VTFs) are superficially analogous to the neural tuning functions (NTFs) observed with electrophysiology, it is tempting to interpret VTFs as mirroring the characteristics of the underlying NTFs that contribute to them. However, this interpretation is not justified because there are multiple alternative accounts by which changes in the NTFs could produce a given change in the VTF. To distinguish between these accounts, we need a means of mapping VTFs back to NTFs. That is, for fMRI to provide insights into neural-level mechanisms, the inverse problem of mapping voxel-level fMRI signals back to neural-level responses must be solved. The proposed work will tackle this inverse problem by considering a plausible set of neural-level changes that may give rise to an observed change in voxel-level fMRI responses, and determining which model of neural-level change is most likely using either model recovery or hierarchical Bayesian estimation, followed by model selection.
The goal will be accomplished via three specific aims: (1) Determine the conditions that allow us to distinguish between alternative models of neural-level modulation for a simple modulation of orientation-selective VTFs (stimulus contrast); (2) Identify the neural-level mechanisms underlying modulations of orientation-selective VTFs induced by other manipulations of perceptual or cognitive state; and (3) Identify the neural-level mechanisms underlying modulation of two further classes of VTF. The approach for all three aims entails: i) collecting optimal fMRI data, ii) applying alternative models of neural-level modulation to the fMRI data to account for voxel-level modulations, iii) performing model selection based upon model recovery or hierarchical Bayesian estimation, iv) comparing the outcome of model selection with "ground truth" from electrophysiology. The project will thereby develop an experimental and model selection procedure for revealing the neural-level mechanisms that underlie modulations in feature-selective voxel responses observed with fMRI. Moreover, it will enable the comparison of data from animal studies investigating fine-grained neural mechanisms with data from non-invasive imaging in humans, for a range of perceptual and cognitive phenomena.
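The forward problem the proposal describes can be sketched directly: a voxel's tuning function is a weighted sum of many neural tuning functions, and distinct neural-level changes can modulate that sum in ways that are hard to tell apart. The Gaussian tuning shape, sampling weights, and gain/width values below are illustrative assumptions, not the proposal's models.

```python
import numpy as np

rng = np.random.default_rng(1)
orientations = np.linspace(0.0, 180.0, 37)                  # probe orientations (deg)
preferred = np.linspace(0.0, 180.0, 180, endpoint=False)    # neural preferred orientations

def ntf(theta, pref, width=20.0, gain=1.0):
    """Circular-Gaussian neural tuning function (assumed shape)."""
    d = np.abs(theta - pref)
    d = np.minimum(d, 180.0 - d)        # circular orientation distance
    return gain * np.exp(-0.5 * (d / width) ** 2)

# A voxel samples the neural population with uneven, unknown weights
weights = rng.gamma(2.0, 1.0, size=preferred.size)

def vtf(width=20.0, gain=1.0):
    """Voxel tuning function: weighted sum of the NTFs inside the voxel."""
    return np.array([np.sum(weights * ntf(th, preferred, width, gain))
                     for th in orientations])

baseline = vtf()
gain_change = vtf(gain=1.5)       # multiplicative neural gain change
width_change = vtf(width=30.0)    # broadened neural tuning
```

Both `gain_change` and `width_change` raise the voxel-level response relative to `baseline`, which is the inverse problem in miniature: the voxel-level modulation alone does not say which neural-level change produced it, motivating the model-recovery and hierarchical Bayesian model-selection approach described above.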