2016 — 2019
Wilbrecht, Linda (co-PI); Dahl, Ronald; Collins, Anne
SL-CN: Science of Learning in Adolescence: Integrating Developmental Studies in Animals and Humans @ University of California-Berkeley
This Science of Learning Collaborative Network brings together researchers from across the University of California-Berkeley and the University of California-San Francisco to advance scientific understanding of developmental changes that occur in the learning processes of children and adolescents. Growing evidence shows that learning processes and the underlying brain systems go through important developmental changes. These changes begin during infancy and early childhood, but they also extend much later, into pubertal maturation and adolescent development. A team-science approach will be used to address these complex issues. The collaborative research network includes expertise in the developmental science of adolescence, and in the science of learning in both human and animal models. A deeper understanding of the developmental changes in specific learning processes in adolescence will inform educational methods and interventions. With greater developmental precision, it should be possible to design more effective education for specific age groups. The long-term goals are to help transform the adolescent "window of vulnerability" (when so many youth become bored and disengaged from school) into a "window of opportunity" (a natural period of curiosity, exploration, and unique learning opportunities).
This collaborative research network builds upon (and helps to integrate) four distinct areas in the science of learning: a) the developmental science of adolescence; b) animal models of brain development in adolescence; c) animal models of learning; and d) computational modeling of learning in humans and animals. The network members will work together to develop new methods, tasks, and analyses that better isolate specific learning variables undergoing transition at adolescence. By tracking pubertal measures as well as age, the work is expected to illuminate the role of puberty onset in developmental transitions in learning, independently of age. The use of mouse models will enable experiments that delineate the role of specific aspects (and timing) of puberty in relation to these specific changes in learning. The integration of human and animal models in parallel experiments will establish a bridge between the fields of developmental science, computational neuroscience, cognitive neuroscience, and systems neurobiology. Scientists and trainees will participate in 'cross-training' opportunities through network meetings, contributing to building a stronger interdisciplinary culture of interaction and collaboration. Undergraduate trainees from underserved backgrounds will also participate in the network.
The award is from the Science of Learning-Collaborative Networks (SL-CN) Program, with funding from the SBE Division of Behavioral and Cognitive Sciences (BCS), the SBE Office of Multidisciplinary Activities (SMA), and the CISE Division of Computer and Network Systems (CNS).
2020 — 2021
Collins, Anne G.E.
R01: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
The Neural Computations Supporting Hierarchical Reinforcement Learning @ University of California Berkeley
Project Summary. This project explores how humans learn at multiple hierarchical levels in parallel, and how this supports human intelligence. Human decisions are typically hierarchically structured: we make high-level decisions (making a cup of coffee), which constrain lower-level decisions (grinding coffee beans, boiling water, etc.), which themselves constrain progressively simpler decisions and motor actions. This hierarchy in decisions is paralleled by a hierarchy in our representation of the environment: some sensory signals trigger simple decisions (a red light signals a stop), while others signal a broader, more abstract behavioral change (rain signals a set of adaptations when driving). Thus, complex hierarchical structure underlies the way we respond to our environment in seemingly simple, everyday tasks. This ability is supported by the prefrontal cortex, which represents states and decisions at multiple degrees of hierarchical abstraction. My previous work shows that hierarchical representations support transfer and generalization during learning, an ability in which artificial agents still struggle to match human performance. However, how we learn to form these hierarchical representations is poorly understood, despite being crucial for human intelligence. The proposed work will examine how multiple, parallel hierarchical loops between the prefrontal cortex and the basal ganglia support reinforcement learning at multiple hierarchical levels in parallel, and how this promotes flexible behavior. To this end, we will address three aims: 1. We will show that the same reinforcement learning computations occur in parallel at multiple levels of abstraction, as hypothesized by our computational model of prefrontal-subcortical networks. 2. We will demonstrate that humans partition learning problems into multiple sequential subgoals so they can learn several simple strategies instead of one complex strategy, and that reusing these simple strategies promotes fast exploration and learning. 3. We will show that hierarchical learning does not rely exclusively on rewards, but that novelty signals are crucial for identifying subgoals and learning through curiosity. Across all three aims, we will use behavioral experiments in conjunction with computational modeling to characterize how humans learn hierarchically. In addition, we will use EEG and fMRI to identify the neural computations underlying the cognitive systems inferred from behavior and modeling. This project will provide new insights into the computational mechanisms that give rise to learning, and thus a better handle on the sources of learning dysfunction observed in many psychiatric diseases, including schizophrenia, depression, anxiety, ADHD, and OCD. Additionally, it will provide new tools, in the form of experimental protocols and precise computational models, for studying learning across populations and species.
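The first aim, that the same reinforcement learning computations run in parallel at multiple levels of abstraction, can be illustrated with a toy simulation. The sketch below is purely hypothetical and is not the investigators' model: the task structure, parameters, and two-level tabular Q-learner are all illustrative assumptions. A context cue determines which abstract "task set" is rewarded, each task set maps stimuli to low-level actions, and the identical delta-rule update is applied at both levels of the hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.3, 5.0          # learning rate, softmax inverse temperature

def softmax_choice(q):
    """Sample an option with probability proportional to exp(beta * q)."""
    p = np.exp(beta * q - np.max(beta * q))
    p /= p.sum()
    return rng.choice(len(q), p=p)

# Hypothetical two-level task: a context cue selects which abstract
# "task set" is rewarded; each task set maps a stimulus to one action.
n_contexts, n_sets, n_stim, n_actions = 2, 2, 2, 3
Q_high = np.zeros((n_contexts, n_sets))           # values of task sets
Q_low = np.zeros((n_sets, n_stim, n_actions))     # values of actions per set

correct_set = [0, 1]                              # context -> rewarded set
correct_action = [[0, 1], [2, 0]]                 # set, stimulus -> action

for trial in range(2000):
    context, stim = rng.integers(n_contexts), rng.integers(n_stim)
    ts = softmax_choice(Q_high[context])          # high-level decision
    a = softmax_choice(Q_low[ts, stim])           # low-level decision
    r = float(ts == correct_set[context] and a == correct_action[ts][stim])
    # The same delta-rule update runs in parallel at both levels.
    Q_high[context, ts] += alpha * (r - Q_high[context, ts])
    Q_low[ts, stim, a] += alpha * (r - Q_low[ts, stim, a])
```

After training, the high-level values separate the rewarded task set from the unrewarded one in each context, even though a single learning rule was reused at both levels.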
2020 — 2023
Collins, Anne |
The Roles of Working Memory in Reinforcement Learning @ University of California-Berkeley
Humans learn how to make rewarding choices in multiple ways, often in parallel. Some learning mechanisms are fast and flexible, but mentally effortful; others are slow and inflexible, but effortless, learning the value of different choices in different situations. This project investigates the possibility that these mechanisms are not independent. Specifically, the investigators test the hypothesis that effortful processes help identify what the slow and effortless "reinforcement learning" processes learn the value of: which features of the situation are relevant, and which aspects of the choices matter. In that sense, the simpler reinforcement learning process, usually thought to be automated and instinctive, may be improved by the exertion of cognitive effort such as attention and short-term memory, and impaired by a lack thereof. A better understanding of the role of cognitive effort in effortless reinforcement learning processes should strengthen our ability to identify sources of learning impairment and to optimize learning in the many everyday situations where we need to learn, be it new software, parenting, or interacting with others in new environments. A better understanding of the mechanisms that support human learning will also provide inspiration for improved artificial intelligence algorithms.
Learning in humans is the result of a carefully orchestrated set of processes interacting in parallel. Some processes, like working memory, rely on executive function to store information in a flexible format that is effortful to maintain and use. Other processes, like reinforcement learning, store information in a less flexible but more robust and virtually effortless format, encoding the value of choices. In this project, the investigators study how executive functions may additionally support reinforcement learning processes. The project uses novel experimental protocols to examine how weakening executive functions affects learning, and applies novel computational models to disentangle the learning processes. A goal is to establish a computational architecture that explains how executive function supports reinforcement learning processes. This project will significantly advance our understanding of the computational mechanisms that underlie learning in humans. It will highlight the importance of considering how the different processes that contribute to learning interact, and the fact that even learning processes considered to be mostly automated depend on "intelligent" executive functions. The project has important broader implications for learning in everyday life, as well as for the use of artificial intelligence in technological advances. Future findings could help design more effective pedagogical approaches and lead to more adaptive, individualized teaching, with impact in many domains where learning is essential, including education, public health, and software design, and with significant implications for individuals with learning impairments. Several young scientists will be trained during this project, in particular in highly sought-after computational modeling skills.
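As a rough illustration of how an effortful, fast-decaying working-memory store could interact with a slow, effortless reinforcement learning process, here is a minimal sketch loosely inspired by published RL-plus-working-memory style models. It is not the investigators' actual model: the mixture weight, decay rate, learning rate, and task are all illustrative assumptions. Working memory encodes the last outcome in one shot but decays toward uniform; reinforcement learning updates incrementally; choices mix the two policies.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_actions = 3, 3
alpha = 0.1            # slow, incremental RL learning rate
decay = 0.2            # per-trial WM decay toward uniform
w = 0.8                # assumed reliance on WM relative to RL
beta = 8.0             # softmax inverse temperature

Q = np.full((n_stim, n_actions), 1.0 / n_actions)    # RL values
WM = np.full((n_stim, n_actions), 1.0 / n_actions)   # one-shot WM store

correct = rng.permutation(n_actions)                 # stimulus -> action

def softmax(q):
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

for trial in range(300):
    s = rng.integers(n_stim)
    # Policy is a weighted mixture of effortful WM and effortless RL.
    p = w * softmax(WM[s]) + (1 - w) * softmax(Q[s])
    a = rng.choice(n_actions, p=p)
    r = float(a == correct[s])
    Q[s, a] += alpha * (r - Q[s, a])                 # incremental RL update
    WM += decay * (1.0 / n_actions - WM)             # all WM entries decay
    WM[s, a] += 1.0 * (r - WM[s, a])                 # one-shot WM encoding
```

In this toy version, working memory drives accurate choices early, which in turn feeds better training signal to the slow RL values, capturing the idea that effortful processes can shape what the effortless process learns.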
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.