Yael Niv, PhD - US grants
Affiliations: | 2008- | Psychology and Neuroscience Institute | Princeton University, Princeton, NJ |
Website: http://www.princeton.edu/~nivlab

We are testing a new system for linking grants to scientists. The funding information displayed below comes from the NIH Research Portfolio Online Reporting Tools and the NSF Award Database. The grant data on this page is limited to grants awarded in the United States and is thus partial. It can nonetheless be used to understand how funding patterns influence mentorship networks and vice versa, which has deep implications for how research is done.
You can help! If you notice any inaccuracies, please sign in and mark grants as correct or incorrect matches.
High-probability grants
According to our matching algorithm, Yael Niv is the likely recipient of the following grants.

Years | Recipients | Code | Title / Keywords | Matching score
---|---|---|---|---
2011 | Niv, Yael | R03 | Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies, which are generally preliminary short-term projects and are non-renewable. |
Fmri Investigations of How We Learn What Is Relevant For a Decision @ Princeton University DESCRIPTION (provided by applicant): Substance abuse is a disorder that has been associated with a compromise of reward learning and decision making mechanisms in the brain, notably, the midbrain dopamine system and its striatal targets. Prominent theories suggest that drugs of abuse interact with dopamine, "hijacking" normal learning and instead directing behavior towards the procurement and consumption of the drug. In recent years, the computational framework of reinforcement learning has been leveraged to make great strides in understanding the role of dopamine in reward learning and decision making, and how slight modifications of its signals could lead to such detrimental effects. Reinforcement learning models of decision making describe how basal-ganglia structures learn to evaluate different stimuli in terms of their future reward value, and how dopaminergic activity affects such learning processes. But how does the brain identify which, of all the available stimuli, are the relevant ones to represent and evaluate? This "representation learning" problem has been largely ignored in both the experimental and the computational literature, and may lie at the heart of substance abuse disorder. Substance abusers are not wholly irrational decision makers; in fact, research shows that they apply normal economic decision making to the purchase of drugs of abuse. Nevertheless, a clear abnormality is their fixation on stimuli predicting drug rewards (and their sometimes great creativity in learning to manipulate these to obtain drugs) to the exclusion of consideration of predictors of alternative rewards such as salary from holding a job and the support of one's family. This proposal is motivated by the hypothesis that this skewed attention may be the result of an abnormal representation learning process that causes an over-representation of drug-reward predicting cues.
The goal of this proposal is to carry out behavioral and fMRI investigations of the computational and neural basis of representation learning and its interaction with reward learning in the human brain. The studies will employ a novel decision making task that has been specifically designed to highlight this interaction. The hypothesis to be tested is that the prefrontal cortex constructs representations of the world, identifying and directing attention to stimulus dimensions that are relevant for the task at hand, and constructing representations that can be used by the basal ganglia in the process of reinforcement learning. Moreover, we hypothesize that dopaminergic circuitry and related prediction error signals mediate the interaction between representation learning in the prefrontal cortex and reinforcement learning in the basal ganglia. The research proposed will provide initial testing of these hypotheses by detecting neural signals related to representation learning and uncovering the computational strategies by which representation learning proceeds in humans. Understanding representation learning processes, their realization in neural circuitry, and how they are influenced by drug-sensitive neuromodulators such as dopamine, will inform theories of what it is that goes awry in drug-influenced decision making, and provide new directions for diagnosis and treatment of substance abuse. PUBLIC HEALTH RELEVANCE: Substance abuse is a serious problem of public health that centers around the brain's decision-making mechanisms for obtaining rewards. While much research has concentrated on how drugs of abuse might alter reward learning mechanisms, little attention has been devoted to the more fundamental (and perhaps more fragile) process of learning which of all the available stimuli are relevant to a decision and should be attended to and learned about. 
This project proposes to study this "representation learning" process and how it interacts with reward learning, both neurally and computationally. It is hoped that better understanding of representation learning strategies and their neural implementation will allow us to identify how these are affected by drugs of abuse, and will help in developing new treatment strategies targeted at these additional aspects of the disorder. |
1 |
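The reinforcement-learning framework this abstract builds on centers on the reward prediction error that dopamine is hypothesized to report. A minimal sketch of that update rule (the learning rate and reward values are illustrative assumptions, not from the grant):

```python
# Rescorla-Wagner / temporal-difference style value update, the core of the
# reward-learning models referenced in the abstract. Parameters are invented.

def rw_update(value, reward, alpha=0.1):
    """Update a stimulus value using a reward prediction error."""
    prediction_error = reward - value   # dopamine is hypothesized to report this
    return value + alpha * prediction_error

value = 0.0
for _ in range(50):                     # repeated pairings with reward = 1
    value = rw_update(value, reward=1.0)
# value converges toward 1.0 as the prediction error shrinks
```

The "hijacking" hypothesis mentioned above can be read as drugs of abuse pharmacologically inflating the `prediction_error` term, so that drug-predicting cues keep accruing value even when outcomes are fully predicted.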
2012 — 2015 | Niv, Yael; Botvinick, Matthew |
N/A | Activity Code Description: No activity code was retrieved. |
Crcns: Collaborative Research: Neural Correlates of Hierarchical Reinforcement Learning @ Princeton University Research on human behavior has long emphasized its hierarchical structure: Simple actions group together into subtask sequences, and these in turn cohere to bring about higher-level goals. This hierarchical structure is critical to humans' unique ability to tackle complex, large-scale tasks, since it allows such tasks to be decomposed or broken down into more manageable parts. While some progress has been made toward understanding the origins and mechanisms of hierarchical behavior, key questions remain: How are task-subtask-action hierarchies initially assembled through learning? How does learning operate within such hierarchies, allowing adaptive hierarchical behavior to take shape? How do the relevant learning and action-selection processes play out in neural hardware? |
1 |
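The task-subtask-action hierarchy described above can be sketched as a recursive expansion; the task names here are invented purely for illustration:

```python
# A goal expands into subtasks, which expand into primitive actions --
# the kind of decomposition hierarchical reinforcement learning studies.

HIERARCHY = {
    "make-coffee": ["boil-water", "brew"],     # high-level goal -> subtasks
    "boil-water": ["fill-kettle", "heat"],     # subtask -> primitive actions
    "brew": ["add-grounds", "pour"],
}

def expand(task):
    """Recursively flatten a task into its primitive action sequence."""
    if task not in HIERARCHY:
        return [task]                          # primitive action: no expansion
    return [a for sub in HIERARCHY[task] for a in expand(sub)]

print(expand("make-coffee"))
# -> ['fill-kettle', 'heat', 'add-grounds', 'pour']
```

The learning questions the project raises amount to asking how such a table is assembled from experience, and how credit for a high-level outcome is passed down to the subtasks that produced it.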
2012 | Niv, Yael | R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural and Computational Mechanisms of Selective Attention in Experience-Based Decision Making @ Princeton University DESCRIPTION (provided by applicant): Neural and computational mechanisms of selective attention in experience-based decision making In order to make correct decisions, we must learn from our past experiences. Learning has long been conceptualized as the formation of associations between stimuli and outcomes. But how should we define these stimuli in real-world decision making environments that are complex and multidimensional? It would seem most optimal to learn about all available stimulus features (height, color, shape, etc.). However, in natural environments only a few dimensions are relevant to performance of any given task. Attending to and learning about only those dimensions that are relevant to the task at hand (and ignoring all others) improves performance, speeding learning and simplifying generalization to future stimuli that are slightly different. How do we know what dimensions are relevant to a given task, and should be attended to and learned about? Considerable behavioral work in cognitive psychology has explored the dynamics of attention learning (how we learn what to attend to) within the context of categorization and concept formation. However, little is known about the neural basis of attention learning, and how attention interacts with implicit trial-and-error reinforcement learning processes. The goal of this project is to study the neural and computational substrates of attention learning in humans, and to understand how attention mechanisms interact with learning mechanisms in the brain.
We propose to use a combination of computational modeling, behavioral experiments and functional neuroimaging in order to 1) determine the neural substrates of attention learning in the human brain, 2) track learning-driven changes in attention to different dimensions of a stimulus directly, and 3) establish individual differences in attention for learning separately from attention for decision. The overarching neural hypothesis to be tested is two-fold: we hypothesize that neural mechanisms for reinforcement learning in the basal ganglia operate on an attentionally-filtered representation of the environment that is conveyed to the striatum by fronto-parietal cortical afferents. Moreover, we hypothesize that this attentional filter is dynamically adjusted according to the outcomes of ongoing decisions. Throughout, we will not assume that attention learning consists of one unitary process but rather investigate the possibility that individuals use different strategies to varying extents. In particular, building on our previous research and on findings in the categorization literature, we will focus on two computational strategies for attention learning (a serial hypothesis-testing strategy and a gradually focusing parallel attention strategy) that are differentially indicated in different individuals. Our results will significantly advance the basic scientific understanding of cognitive decision making processes, elucidating the neural mechanisms underlying a critical component of decision making. From a practical perspective, understanding the computational and neural underpinnings of individual differences in attention learning will potentially allow tailoring of learning tasks to different individuals. Moreover, the neural processes underlying attention learning are likely to be involved in clinical disorders such as schizophrenia, attention deficit disorder and drug abuse disorder.
In the long term, the proposed research will potentially impact the study and treatment of these disorders. PUBLIC HEALTH RELEVANCE: The proposed work will make use of an interdisciplinary combination of computational methods with neuroscientific and behavioral data to advance basic scientific knowledge about the interaction between learning and attention in everyday decision-making scenarios. From a broad perspective, our results will not only shed light on basic principles of decision making, but will also have implications for attention-related disorders such as schizophrenia and attention-deficit disorder, and for tailoring learning and decision-making tasks for specific individuals. |
1 |
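The proposal's central hypothesis (reinforcement learning operating on an attentionally filtered stimulus representation) can be sketched as an attention-weighted value update. The attention weights, learning rate, and dimensions below are illustrative assumptions, not the project's actual model:

```python
import numpy as np

def attention_rl_step(values, features, attention, reward, alpha=0.3):
    """One trial: predict reward from attended features, then update values
    with a prediction error that credits attended dimensions more strongly."""
    prediction = float(attention @ (values * features))
    error = reward - prediction
    values += alpha * error * attention * features  # attended dims learn faster
    return values, error

values = np.zeros(3)                    # one learned value per stimulus dimension
attention = np.array([0.8, 0.1, 0.1])   # mostly attending dimension 0
features = np.array([1.0, 1.0, 1.0])    # all dimensions present on this trial
for _ in range(100):
    values, err = attention_rl_step(values, features, attention, reward=1.0)
# the attended dimension ends up with a much larger learned value
```

The project's second hypothesis, that the filter itself is adjusted by decision outcomes, would correspond to updating `attention` from `error` as well; a serial hypothesis-testing strategy would instead place nearly all weight on one dimension at a time and switch after failures.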
2013 — 2014 | Niv, Yael | R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural and Computational Mechanisms of Selective Attention in Decision Making @ Princeton University DESCRIPTION (provided by applicant): Neural and computational mechanisms of selective attention in experience-based decision making In order to make correct decisions, we must learn from our past experiences. Learning has long been conceptualized as the formation of associations between stimuli and outcomes. But how should we define these stimuli in real-world decision making environments that are complex and multidimensional? It would seem most optimal to learn about all available stimulus features (height, color, shape, etc.). However, in natural environments only a few dimensions are relevant to performance of any given task. Attending to and learning about only those dimensions that are relevant to the task at hand (and ignoring all others) improves performance, speeding learning and simplifying generalization to future stimuli that are slightly different. How do we know what dimensions are relevant to a given task, and should be attended to and learned about? Considerable behavioral work in cognitive psychology has explored the dynamics of attention learning (how we learn what to attend to) within the context of categorization and concept formation. However, little is known about the neural basis of attention learning, and how attention interacts with implicit trial-and-error reinforcement learning processes. The goal of this project is to study the neural and computational substrates of attention learning in humans, and to understand how attention mechanisms interact with learning mechanisms in the brain.
We propose to use a combination of computational modeling, behavioral experiments and functional neuroimaging in order to 1) determine the neural substrates of attention learning in the human brain, 2) track learning-driven changes in attention to different dimensions of a stimulus directly, and 3) establish individual differences in attention for learning separately from attention for decision. The overarching neural hypothesis to be tested is two-fold: we hypothesize that neural mechanisms for reinforcement learning in the basal ganglia operate on an attentionally-filtered representation of the environment that is conveyed to the striatum by fronto-parietal cortical afferents. Moreover, we hypothesize that this attentional filter is dynamically adjusted according to the outcomes of ongoing decisions. Throughout, we will not assume that attention learning consists of one unitary process but rather investigate the possibility that individuals use different strategies to varying extents. In particular, building on our previous research and on findings in the categorization literature, we will focus on two computational strategies for attention learning (a serial hypothesis-testing strategy and a gradually focusing parallel attention strategy) that are differentially indicated in different individuals. Our results will significantly advance the basic scientific understanding of cognitive decision making processes, elucidating the neural mechanisms underlying a critical component of decision making. From a practical perspective, understanding the computational and neural underpinnings of individual differences in attention learning will potentially allow tailoring of learning tasks to different individuals. Moreover, the neural processes underlying attention learning are likely to be involved in clinical disorders such as schizophrenia, attention deficit disorder and drug abuse disorder.
In the long term, the proposed research will potentially impact the study and treatment of these disorders. |
1 |
2016 — 2020 | Niv, Yael | R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Orbitofrontal Cortex as a Cognitive Map of Task States @ Princeton University Project summary: The orbitofrontal cortex as a cognitive map of task states The orbitofrontal cortex (OFC) has remained one of the most mysterious areas in the prefrontal cortex, with suggested functions ranging from inhibition of prepotent actions to valuation in economic decision making. OFC dysfunction is implicated in a wide range of decision-related disorders, chief among them compulsive disorders such as addiction and OCD. Recently, we hypothesized that the OFC represents the current state of the task within a "cognitive map" of task space, providing a summary of task-relevant information to decision-making and learning areas elsewhere in the brain (Wilson et al., 2014, Neuron). In particular, theoretical considerations and previous empirical data suggest that the OFC is especially important for representing task states that are "partially observable": states that include information that is not directly available in the environment, such as internal information from working memory. This hypothesis offers a unifying theoretical framework for interpreting a wide variety of existing findings, and has already gained considerable traction in the field (e.g., the paper has been cited over 50 times and was mentioned in over half the talks in a recent conference on the OFC). However, the theory has not yet been tested directly, as previous data can also be explained by alternative interpretations. Here we propose to test the hypothesis that the OFC represents task states, and to contrast and differentiate this function from the dominant competing hypothesis according to which the OFC represents reward expectancies. In Aim 1, we will test whether the OFC codes the states of an age-judgment task that requires encoding of unobservable information as a critical part of the task state, and that does not involve rewards.
We will use fMRI to measure OFC activity in humans, and utilize multivariate analysis methods to test whether the task states can be decoded in OFC, and whether this state representation correlates with and predicts task performance. In Aim 2, we will differentiate the state coding and value coding functions of the OFC by adding rewards to the age-judgment task and testing whether rewards are decodable in OFC when they are instrumental to task performance versus incidental to task performance. Our theory predicts that rewards will be represented in OFC only if they are required as part of the task state. Throughout, we will also analyze representations in related brain areas such as the dorsolateral prefrontal cortex, the hippocampus and high-level visual cortices, to determine the unique function of the OFC, and to establish the relationship between task states in the OFC and task-relevant information encoded elsewhere in the brain. Our findings will impact the current understanding of the role of OFC in both normal and aberrant learning and decision making, and will help explain why the OFC is important for some tasks but not others. Moreover, our work will establish the utility of decoding internal task states from non-invasive brain imaging data for predicting behavior and for analyzing individual differences in task representations. This is especially relevant to understanding the precise nature of decision-making deficiencies in disorders such as substance abuse and other compulsive disorders where the OFC is strongly implicated. |
1 |
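The multivariate decoding analysis proposed in Aim 1 can be illustrated on synthetic data. A nearest-centroid classifier stands in here for the project's actual MVPA methods; voxel counts, trial counts, and noise levels are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden "task states", each with its own activity pattern across voxels;
# each trial is the state's pattern plus noise. Can we decode the state back?
n_voxels, n_trials = 20, 100
state_patterns = rng.normal(size=(2, n_voxels))
labels = rng.integers(0, 2, size=n_trials)
activity = state_patterns[labels] + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Nearest-centroid decoder: assign each trial to the closer mean pattern.
centroids = np.stack([activity[labels == s].mean(axis=0) for s in (0, 1)])
dists = np.linalg.norm(activity[:, None, :] - centroids[None], axis=2)
decoded = dists.argmin(axis=1)
print(f"decoding accuracy: {(decoded == labels).mean():.2f}")
```

In the real analysis the labels would be the (partially observable) task states of the age-judgment task, and the interesting comparison is whether decoding succeeds in OFC but not in control regions, and whether it predicts behavior.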
2019 — 2021 | Niv, Yael | R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
A Computational Psychiatry Investigation of the Effects of Mood On Reward Learning and Attention @ Princeton University A Computational Psychiatry Investigation of the effects of Mood on Reward Learning and Attention The relationship between mood and reward processing is bidirectional. On the one hand, mood is affected by the experience of rewards and punishments, such that mood tends to improve after better-than-expected outcomes and deteriorate after outcomes that are worse than expected. On the other hand, mood itself biases reward processing via its effects on cognitive processes such as attention and reinforcement learning (RL). As such, pathological mood states in mood disorders such as major depressive disorder and bipolar disorder may be the result of aberrant patterns of interaction between mood, reward learning, and attention. Recently, we and others have begun to use computational models to unravel the complex patterns of reciprocal interaction between mood, reward learning, and attention (e.g., Eldar & Niv, 2015; Eldar et al., 2016). However, these models' critical predictions regarding the neurocomputational substrates of mood disorders have not yet been tested. In particular, we predict that bipolar disorder and major depression can be distinguished from one another at both a behavioral and a neural level, in terms of different patterns of abnormal interaction between mood, RL, and attention. Here, we propose to test this prediction using convergent methodologies from computational psychiatry including human patient studies, large-scale online data collection and functional magnetic resonance imaging. In Aim 1, we will test whether bipolar disorder and major depression are characterized by distinct patterns of interaction between mood, RL, and attention. We will use behavioral experiments with two custom-designed tasks to measure the strength of the mood-RL interaction and the mood-attention interaction, respectively. 
Computational models will be fit to data from these tasks in both subjects with mood disorders and in matched controls. In Aim 2, we will assess the utility of mood-RL and mood-attention interactions as markers of vulnerability to mood disorders in the general population. We will use web-based data collection with the same two tasks as in Aim 1 to explore links between mood-RL and mood-attention interactions and the subclinical expression of mood disorders in a general population sample. Finally, in Aim 3 we will identify the neural circuits mediating the effect of mood on RL. We will acquire fMRI data on the mood-RL task from healthy subjects and from patients with bipolar disorder and major depressive disorder, and will use these data to describe the neurocomputational interactions of mood and reward in health and disease. This project will use state-of-the-art tools from computational psychiatry to test and refine a neurocomputational model of mood. Guided by the predictions of this model, we will assess patterns of interaction between mood, reinforcement learning, and attention in three different contexts: a psychiatric behavioral sample, a large-scale online sample of the general population, and a sample with fMRI data to help us assess the neural substrates of mood-cognition interactions. Taken together, these aims will allow us to assess a neurocomputational model of mood that has the capacity to transform the clinical understanding of mood disorders including bipolar disorder and major depression. |
1 |
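A hedged sketch of the bidirectional mood-reward interaction described above, loosely in the spirit of Eldar & Niv (2015): mood integrates recent prediction errors, and in turn biases how outcomes are perceived. All parameter values are illustrative, not from the cited models:

```python
# Mood-biased reinforcement learning: better-than-expected outcomes lift mood,
# and elevated mood inflates how good subsequent outcomes seem (and vice versa).

def mood_rl_step(value, mood, reward, alpha=0.1, eta=0.1, bias=0.5):
    perceived = reward + bias * mood     # mood colors the perceived outcome
    error = perceived - value            # reward prediction error
    value += alpha * error               # reward learning
    mood += eta * (error - mood)         # mood tracks recent prediction errors
    return value, mood

value, mood = 0.0, 0.0
for _ in range(30):
    value, mood = mood_rl_step(value, mood, reward=1.0)
```

The clinically relevant regime is when `bias` is too large: positive surprises then feed back into inflated perceived rewards, a candidate mechanism for the escalating mood dynamics the proposal associates with bipolar disorder.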
2019 — 2020 | Niv, Yael | R21 | Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Quantifying the Latent-Cause Inference Process in Humans @ Princeton University Quantifying the Latent-Cause Inference Process in Humans Events in our lives can occur in a seemingly random manner, yet we tend to make meaning by inferring underlying hidden, or latent, causes for events. When this process goes well, we accurately attribute our circumstances to their true cause such that we can behave appropriately and learn for the future. When this process goes wrong, we might make false attributions, overgeneralizing any negative outcomes to our own behavior, or inventing idiosyncratic accounts for every situation. We use latent cause inference in all aspects of our lives, from interpreting our visual world to complex social decision-making. In my lab, we have developed computational models of latent-cause inference and used this framework to successfully predict learning (Gershman et al., 2013), memory (Gershman et al., 2014), and even social evaluation (Shin & Niv, under review). However, this previous work has focused on the conceptual level, only testing qualitative predictions of our framework. This is because there has been no task that allows the measurement and quantification of latent cause inference in individuals. Developing a precise, quantitative model of this process will be critical for understanding the neurobiological circuits that support successful inference as well as when and why they can fail. Computationally, the process of latent cause inference can be parameterized in a formal Bayesian model that relies on three parameters: how likely a new cause is to occur, how variable or homogeneous the events that a cause tends to create are, and how long each cause remains active. Different people may have different settings for these parameters, corresponding to fundamental tendencies in interpreting the world that may vary across individuals and situations.
Here, we develop a novel paradigm in which participants view ambiguous stimuli and cluster them according to their perceptual features. This allows us to quantify, for each individual, the parameters that they are using when making inferences. Thus, our task will allow precise quantification of the subprocesses involved in latent cause inference for the first time. In Aim 1, we will characterize latent-cause inference in a large online sample of human subjects and relate parameters of the inference process to transdiagnostic dimensional constructs of mental illness. In Aim 2, we will establish test-retest reliability of our measurements and determine whether parameters of the process correspond to stable individual traits, or rather follow the symptom state of the individual. This project will use state-of-the-art methods for characterizing complex cognitive processes using precise, quantitative models and collecting large quantities of data by running experiments through an online platform. We will use the model to quantify the process of latent cause inference in individual subjects and map model parameters to self-report measures of mental-health related constructs. By using a variety of transdiagnostic questionnaires, we are well equipped to discover key factors that can be predicted by parameters of latent cause inference. Testing across a large, heterogeneous population (Aim 1) and across time points within subjects (Aim 2) will allow us to quantify the range of human latent-cause inference behavior and the reliability of these measures across situations (states) and individuals (traits). These findings will also inform our future fMRI studies using the same task to investigate the circuitry underlying this process. |
1 |
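The "how likely a new cause is to occur" parameter above can be illustrated with a Chinese-restaurant-process style prior, a common formalization in latent-cause models. The concentration value is an invented example, and the other two parameters (cause variance and cause duration) are omitted for brevity:

```python
import random

# Each observation is assigned either to an existing latent cause (with
# probability proportional to how often that cause was used) or to a brand-new
# cause (with probability proportional to the concentration parameter alpha).

def crp_assign(counts, alpha=1.0, rng=random.Random(0)):
    """Sample a cause index under a Chinese-restaurant-process prior.
    The shared default rng makes repeated calls deterministic."""
    total = sum(counts) + alpha
    weights = [c / total for c in counts] + [alpha / total]  # last = new cause
    r, acc = rng.random(), 0.0
    for k, w in enumerate(weights):
        acc += w
        if r < acc:
            return k
    return len(counts)

counts = []                     # observations per inferred cause
for _ in range(50):
    k = crp_assign(counts)
    if k == len(counts):
        counts.append(1)        # a new latent cause was inferred
    else:
        counts[k] += 1
print(len(counts), "causes inferred for 50 observations")
```

A larger `alpha` yields more, smaller causes: in the proposal's terms, an individual who invents idiosyncratic accounts for every situation rather than grouping events under a shared cause.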
2020 — 2021 | Niv, Yael | R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
@ Princeton University Adolescence is characterized by changes in decision-making, accompanied by the progressive development of the prefrontal cortex and reconfiguration of brain networks that support goal-directed decision-making. Adolescence is also the typical age of clinical onset and peak prevalence for many forms of mental illness. Recent advances in computational modeling of cognitive processes have enabled the quantification of parameters that govern learning and decision making, and characterization of how they differ in mental illnesses. There are several differentiating properties of learning and decision making processes in the brain: learning can be model-free (based on past trial and error) vs. model-based (learning the structure of a task and computing a best course of action given that structure), Pavlovian (with innate sensitivities to different motivationally relevant outcomes) vs. instrumental (arbitrarily adaptive), and learning occurs from positive and negative consequences. Furthermore, responses can be biased toward action or inaction, and can be more or less exploratory (variable). We will use three reinforcement-learning tasks that, together with computational models, index these multiple differentiable features of learning and decision making, in order to jointly define an individual "computational phenotype" of learning and decision processes. In Aim 1 this computational phenotype will be defined in a large online sample aged 10-25 in order to map changes in symptom dimensions across adolescent development. In Aim 2 we will use neuroimaging to characterize the relationship between decision-making phenotypes and neural connectivity in children, adolescents, and young adults. In Aim 3 we will characterize the relation between decision-making phenotypes and clinical symptomatology in a diagnostically heterogeneous sample of adolescents with generalized anxiety, depression, ADHD or OCD.
Throughout, computational modeling of task behavior and self-reported symptom dimensions will build on state-of-the-art hierarchical modeling of multimodal and multi-task data. The research activities described in this proposal hold the potential to improve our understanding of the cognitive and neural mechanisms that underpin adolescent psychopathology, a question of broad societal impact given the prevalence and cost of mental illness, and the super-additive benefits of early detection and treatment. |
1 |
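One axis of the proposed computational phenotype, the balance between model-free and model-based learning, is commonly formalized as a weighted mixture of the two value estimates; the weight `w` is the kind of per-individual parameter such modeling estimates. Action names and numbers below are invented for illustration:

```python
# Hybrid valuation: blend cached model-free values with values computed from a
# learned task model. w = 1 is purely model-based, w = 0 purely model-free.

def hybrid_value(q_model_free, q_model_based, w):
    """Blend MF and MB action values for each action."""
    return {a: w * q_model_based[a] + (1 - w) * q_model_free[a]
            for a in q_model_free}

q_mf = {"left": 0.6, "right": 0.4}   # cached values from past trial and error
q_mb = {"left": 0.2, "right": 0.8}   # values computed from task structure
print(hybrid_value(q_mf, q_mb, w=0.7))
```

Fitting `w` (alongside learning rates, Pavlovian biases, action/inaction biases, and exploration parameters) per participant is what lets such studies ask how the phenotype shifts across development and across symptom dimensions.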
2020 — 2021 | Niv, Yael | R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
@ Princeton University Reward learning is a fundamental cognitive function, and the brain has a dedicated neuromodulatory system, based on dopamine, that supports this process. Changes to the dopamine system that are triggered by exposure to drugs of abuse are thought to underlie the behavioral changes observed in addiction. Here we propose to use a treasure trove of previously recorded neural data from throughout the mesocorticostriatal circuitry that supports reward learning, to elucidate the computational role of each component of the circuit, their interactions, and how these components are affected by cocaine. Our brains constantly generate predictions about what rewards might be available, and compare these predictions to actual outcomes. The neuromodulator dopamine is thought to report these "prediction error" signals, the result of the ongoing comparison between expected and obtained rewards, that are key to updating predictions so they are more accurate in the future. Predicting the timing of rewards, and not just their identity or value, is an important component of this process, but it remains a mystery how the brain forms and uses predictions about time in reward learning. Based on a novel theoretical model we recently developed, we will test the computational role of three key brain areas that comprise the brain circuit critical for reward learning, using state-of-the-art methods from machine learning to jointly decode the learning processes that drive neural activity from multiple brain areas along with behavior as rats perform a reward learning task. In Aim 1, we hypothesize that neural activity in the orbitofrontal cortex is uniquely important for representing high level "task states" and will test for patterns in OFC neural activity that follow the hidden structure of the task. In Aim 2, we will decode the representation of reward predictions about the amount and timing of rewards, and test whether they are separable in ventral striatum (VS) neural activity.
In Aim 3, we will test how activity in VS and OFC controls dopamine activity, and in particular how each input component enables prediction errors to be temporally precise. In Aim 4, we will test how exposure to cocaine changes neural activity that represents reward predictions in the VS, and the impact of this disruption on dopamine prediction errors in the ventral tegmental area (VTA). This innovative multi-level study will leverage extensive existing neural and behavioral data from rats performing a well-validated reward-learning task, to reveal the computational, neural and behavioral mechanisms of the reward prediction and learning circuitry in the brain, and the source of their disruption in addiction. |
1 |
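The "prediction error" signal described above is the temporal-difference error. A minimal sketch (trial length, rates, and reward timing are illustrative) shows how late-trial errors shrink with training as earlier states come to predict the reward, which is why reward timing matters for the signal:

```python
# TD(0) learning over the states of a single trial: a cue at state 0 predicts
# a reward delivered at the final transition a few timesteps later.

def run_trial(values, alpha=0.2, gamma=1.0):
    """One trial through states 0..len(values)-1; reward at the last step.
    Returns the prediction error at each timestep."""
    errors = []
    for t in range(len(values) - 1):
        reward = 1.0 if t == len(values) - 2 else 0.0
        delta = reward + gamma * values[t + 1] - values[t]  # TD error
        values[t] += alpha * delta
        errors.append(delta)
    return errors

values = [0.0] * 5
for _ in range(200):
    errors = run_trial(values)
# after training, early states predict the reward and within-trial errors
# approach zero; mistimed rewards would regenerate errors at the reward time
```

In the proposal's terms, Aim 3 asks which VS and OFC inputs give the dopamine `delta` signal this temporal precision, and Aim 4 asks how cocaine degrades the `values` (prediction) side of the computation.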