2017 — 2019 |
Drugowitsch, Jan |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
CRCNS: Leveraging Decision-Making Variability to Identify Underlying Computations
Decisions based on uncertain perceptual evidence are a ubiquitous component of everyday behavior. Much research has focused on the computational and neural basis of how our nervous system accumulates this uncertain evidence to make efficient decisions. While extremely successful at explaining average behavior, this work has mostly ignored variability around this average, or attributed it to sensory noise or stochastic action selection. As we have recently shown, however, a large fraction of behavioral variability actually arises from approximations in the core computations leading to these decisions. This is a critical finding, as mental diseases such as schizophrenia and OCD are known to involve impairments in handling uncertain information. Thus, misattributing the locus of behavioral variability leads to misinterpreting the key computational determinants of decision errors, which in turn might lead to misidentifying the decision-making computations altered in mental disease. We will avoid this pitfall by leveraging behavioral variability to investigate the computational and neural mechanisms that drive human behavior under uncertainty. Based on this principle, we will investigate each component of the decision-making process: how the central nervous system processes noisy and/or ambiguous sensory signals to extract decision-relevant evidence, the format of the evidence that is subsequently accumulated, and the variability in evidence accumulation itself. We will do so through a combination of computational modeling, and behavioral and MEG experiments in healthy
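The distinction this abstract draws, between sensory noise and variability in the accumulation computation itself, can be illustrated with a minimal bounded-accumulation sketch. This is only an illustrative toy, not the authors' actual model; all parameter values and names are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_choice(drift, n_steps=200, dt=0.01, bound=1.0,
                    sensory_noise=1.0, accumulation_noise=0.3):
    """Accumulate noisy momentary evidence to a decision bound.

    `sensory_noise` corrupts each evidence sample; `accumulation_noise`
    corrupts the accumulation step itself, standing in for the
    'computational' variability the abstract refers to. Both values
    are illustrative, not fitted.
    """
    x = 0.0
    for t in range(n_steps):
        sample = drift * dt + sensory_noise * np.sqrt(dt) * rng.normal()
        x += sample + accumulation_noise * np.sqrt(dt) * rng.normal()
        if x >= bound:                       # commit to the upper choice
            return +1, (t + 1) * dt
        if x <= -bound:                      # commit to the lower choice
            return -1, (t + 1) * dt
    return (1 if x > 0 else -1), n_steps * dt  # forced choice at timeout

choices = [simulate_choice(drift=1.5)[0] for _ in range(1000)]
print("P(choosing the drift direction) =", np.mean(np.array(choices) == 1))
```

Because both noise sources enter the accumulated total identically here, average behavior alone cannot distinguish them; the abstract's point is that richer features of variability can.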
2020 |
Drugowitsch, Jan; Uchida, Naoshige |
R01 |
Distributional Reinforcement Learning in the Brain.
Project Summary: The field of artificial intelligence (AI) has recently made remarkable advances, resulting in new and improved algorithms and network architectures that have proven empirically efficient in silico. These advances raise new questions in neurobiology: are these new algorithms used in the brain? The present study focuses on a new algorithm developed in the field of reinforcement learning (RL), called distributional RL, which outperforms other state-of-the-art RL algorithms and is regarded as a major advance in RL. In environments in which rewards are probabilistic in their occurrence and size, traditional RL algorithms learn to predict a single quantity: the average over all potential rewards. Distributional RL, by contrast, learns to predict the entire distribution over rewards (or values) by employing multiple value predictors that together encode all possible levels of future reward concurrently. Remarkably, theoretical work has shown that a class of distributional RL, called "quantile distributional RL", can arise from a simple modification of traditional RL that introduces structured variability in dopamine reward prediction error (RPE) signals. This project will test the hypothesis that the brain utilizes distributional RL to predict future rewards. Aim 1 will explore the characteristics of distributional RL theoretically and make predictions that allow for testing distributional RL in the brain. Theoretical investigations and simulations will be used to determine how value representations in distributional RL differ from pre-existing population coding schemes for representing probability distributions (probabilistic population codes, distributed distributional codes, etc.). Aim 2 will examine the activity of neurons thought to signal RPEs and reward expectation, and test various predictions of distributional RL.
Specifically, the activity of dopamine neurons in the ventral tegmental area and neurons in the ventral striatum and orbitofrontal cortex will be compared to key predictions of distributional RL. Aim 3 will use optogenetic manipulation to causally demonstrate the relationship between RPE signals and distributional codes.
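The "simple modification" mentioned above, scaling positive and negative RPEs asymmetrically so that each value predictor converges to a different statistic of the reward distribution, can be sketched as follows. This is an illustrative toy, not the project's model; the reward distribution and learning rates are assumptions, and strictly this linear asymmetric update converges to expectiles rather than quantiles.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_reward():
    # Illustrative bimodal reward: 0 or 10 with equal probability.
    return 10.0 if rng.random() < 0.5 else 0.0

# Each predictor scales positive RPEs by tau and negative RPEs by
# 1 - tau; the asymmetry tau determines which expectile of the
# reward distribution its value estimate settles on.
taus = np.array([0.1, 0.5, 0.9])
values = np.zeros_like(taus)
base_lr = 0.02

for _ in range(20000):
    r = sample_reward()
    rpe = r - values                           # one RPE per predictor
    lr = base_lr * np.where(rpe > 0, taus, 1 - taus)
    values += lr * rpe                         # asymmetric update

print(values)  # pessimistic (low tau) predictors low, optimistic high
```

At convergence the population of value estimates spans the reward distribution (here roughly 1, 5, and 9 for the three predictors), which is the signature the project proposes to look for in dopamine and striatal/orbitofrontal recordings.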
2021 |
Drugowitsch, Jan; Wilson, Rachel (co-PI) |
R34 Activity Code Description: To provide support for the initial development of a clinical trial or research project, including the establishment of the research team; the development of tools for data management and oversight of the research; the development of a trial design or experimental research designs and other essential elements of the study or project, such as the protocol, recruitment strategies, procedure manuals and collection of feasibility data. |
The Encoding of Uncertainty in the Drosophila Compass System
Summary: Strategic behaviors often take uncertainty into account. For example, if we are presented with two conflicting pieces of information, we give less weight to the more uncertain source of information, i.e., the source that leads to lower accuracy overall. Notably, even insects behave as if they make strategic use of their own uncertainty. However, the neural correlates of uncertainty are essentially unknown. In this collaborative project, we will use modeling and neural imaging to identify the neural correlates of uncertainty. We will focus on the "compass" in the Drosophila brain. The intrinsic neurons of the compass (EPG neurons) form a topographic map of heading direction. At any given moment, there is a "bump" of neural activity in the EPG population that rotates like a compass needle as the fly turns. The position of the bump is influenced by internal self-motion cues, external visual cues, and external wind direction cues. In previous theoretical work, the EPG ensemble has been modeled as a ring attractor network. In general, ring attractors do not represent uncertainty in the variable they encode. Most experiments characterizing compass neuron activity have been performed either under conditions of extreme certainty (e.g., a bright visual cue) or extreme uncertainty (e.g., complete darkness). It therefore remains unclear how the system behaves under moderate uncertainty, and whether, under such conditions, it can still be well described by standard ring attractor networks. Ideally, the compass network would represent not only the fly's estimated heading direction but also the uncertainty associated with that estimate, so that behavioral strategies could be adjusted accordingly. In this project, we will investigate (1) how uncertainty is represented, and (2) how it affects spatial learning.
We will use a combination of algorithmic modeling, network modeling, and in vivo imaging experiments combined with virtual reality environments.
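A standard ring attractor of the kind described above can be sketched as a threshold-linear rate network with cosine connectivity: local excitation plus global inhibition sustains a persistent activity bump whose position encodes heading. Consistent with the abstract's point, the sketch below carries no explicit uncertainty signal; it is an illustrative toy, not one of the project's models, and all parameters are assumed.

```python
import numpy as np

N = 64                                           # neurons around the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Cosine connectivity: broad inhibition (J0 < 0) plus local excitation (J1),
# with a small tonic drive I0. Values chosen so a bump self-sustains.
J0, J1, I0 = -2.0, 6.0, 0.2
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

r = np.maximum(0.0, np.cos(theta - np.pi / 2))   # seed a bump at pi/2
dt, tau = 0.1, 1.0
for _ in range(1000):
    h = W @ r + I0                               # recurrent + tonic input
    r += dt / tau * (-r + np.maximum(0.0, h))    # threshold-linear dynamics

# Decode the bump position as the population-vector angle.
decoded = np.angle(np.sum(r * np.exp(1j * theta)))
print(decoded)  # bump persists where it was seeded, near pi/2
```

The bump's position is the only quantity this network represents; whether and how a real compass circuit additionally encodes the reliability of that position (e.g., in bump amplitude or width) is exactly the open question the project targets.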