1993 — 1995 |
Buonomano, Dean V |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Cortical Conditioning of Somatotopic Receptive Fields @ University of California San Francisco |
0.929 |
2000 — 2003 |
Buonomano, Dean |
N/A Activity Code Description: No activity code was retrieved. |
Temporal Processing and Short- and Long-Term Plasticity @ University of California-Los Angeles
PI: Buonomano
Abstract:
Information contained in the temporal patterns of neuronal activity is fundamental to most forms of sensory processing, perhaps most notably speech. How is temporal information decoded by the nervous system? What are the neural mechanisms that permit neurons to develop selective responses to temporal patterns on the time scale of tens to hundreds of milliseconds? The hypothesis guiding the current proposal is that local cortical networks are intrinsically capable of decoding temporal information. Specifically, short-term plasticity is proposed to produce time-dependent changes in the balance of opposing excitatory and inhibitory events, and these time-dependent changes in network state allow neurons to respond differentially to the distinct temporal features of stimuli. Neuronal responses to time-varying stimuli are determined not only by the strength of their excitatory inputs, but also by the balance of excitatory and inhibitory inputs, each of which is modulated by short-term forms of plasticity. The projects described here are aimed at understanding how multiple synaptic and cellular mechanisms interact, and the learning rules that govern the long-term plasticity of each process, including short-term plasticity itself. Together, these studies should contribute to the understanding of how the brain times events, as well as to understanding cognitive deficits that may involve temporal processing (such as some forms of dyslexia), and to the generation of artificial systems capable of complex pattern recognition.
|
1 |
2001 — 2005 |
Buonomano, Dean V |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Plasticity and the Decoding of Temporal Information @ University of California Los Angeles
DESCRIPTION (provided by the applicant): Information contained in the temporal patterns of neuronal activity is fundamental to neural processing. The origin of the temporal information is generally in the nature of the stimuli themselves, such as the temporal features of speech, but may also be neurally generated at early stages of sensory processing. How is temporal information decoded by the nervous system? What are the neural mechanisms that permit neurons to develop selective responses to temporal patterns on the time scale of tens to hundreds of milliseconds? The hypothesis guiding the current proposal is that local cortical networks are intrinsically capable of decoding temporal information. Specifically, short-term plasticity is proposed to produce time-dependent changes in the balance of opposing excitatory and inhibitory events, and these time-dependent changes in network state allow neurons to respond differentially to the distinct temporal features of stimuli. Neuronal responses to time-varying stimuli are determined not only by the strength of their excitatory inputs, but also by the balance of excitatory and inhibitory inputs, each of which is modulated by short-term forms of plasticity. The projects described here are aimed at understanding how multiple synaptic and cellular mechanisms interact, and the learning rules that govern the long-term plasticity of each process, including short-term plasticity itself. Towards this goal, the Specific Aims of the current proposal will include: (1) Characterizing the interaction between short- and long-term associative plasticity of excitatory synapses; (2) Determining whether the same protocols that induce plasticity of EPSPs produce long-term plasticity of IPSPs and/or changes in presynaptically mediated forms of short-term plasticity; (3) Computational analysis of whether the orchestrated regulation of multiple synapse types in parallel - as observed in Aims 1 & 2 - may underlie the generation of interval-selective neurons; (4) Experimental analysis of the dynamics of neocortical circuits and the ability of individual neurons embedded in these circuits to exhibit temporally selective responses. Together, these studies should contribute to the understanding of how the brain times events, as well as to understanding cognitive deficits that may involve temporal processing (such as some forms of dyslexia), and to the generation of artificial systems capable of complex pattern recognition.
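To illustrate the kind of mechanism invoked above, here is a minimal sketch (Python/NumPy) of how short-term plasticity can make a synaptic response interval-dependent. It uses a generic Tsodyks-Markram-style model combining depression and facilitation; the function name and all parameter values are illustrative assumptions, not quantities taken from the proposal.

```python
import numpy as np

def paired_pulse_response(interval_ms, tau_rec=200.0, tau_facil=50.0, U=0.45):
    """Relative amplitude of a second EPSP as a function of the inter-pulse
    interval, under a simple Tsodyks-Markram-style model combining depression
    (resource recovery, tau_rec) and facilitation (decaying utilization
    increment, tau_facil).  Parameter values are illustrative placeholders."""
    u1 = U
    x_after = 1.0 - u1                                   # resources left after pulse 1
    # Recovery of resources and decay of facilitation during the interval.
    x2 = 1.0 - (1.0 - x_after) * np.exp(-interval_ms / tau_rec)
    u2 = U + u1 * (1.0 - U) * np.exp(-interval_ms / tau_facil)
    # Amplitude of pulse 2 relative to pulse 1 (pulse 1 amplitude = U * 1).
    return (u2 * x2) / (U * 1.0)

intervals = np.array([25, 50, 100, 200, 400, 800])       # ms
for isi, ratio in zip(intervals, paired_pulse_response(intervals)):
    print(f"ISI {isi:4d} ms -> paired-pulse ratio {ratio:.2f}")
```

Because the amplitude of the second response depends systematically on the interval, a neuron receiving a mixture of depressing and facilitating inputs can, in principle, respond differentially to different intervals.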
|
0.958 |
2006 — 2010 |
Buonomano, Dean; O'Dell, Thomas |
N/A Activity Code Description: No activity code was retrieved. |
Spike Timing-Dependent Plasticity in the Hippocampus @ University of California-Los Angeles
At many excitatory synapses in the brain the strength of synaptic transmission is not fixed but instead can be regulated by patterns of synaptic stimulation. These long lasting changes in synaptic strength, known as long-term potentiation (LTP) and long-term depression (LTD), have a crucial role in the storage of new information during memory formation and are a prominent feature of synaptic transmission in brain regions that have an important role in learning and memory, such as the hippocampus. Although it is not yet clear how physiological patterns of synaptic activity might induce LTP and LTD in vivo, a number of studies indicate that the induction of LTP and LTD is critically dependent on the precise timing and order of single pre- and postsynaptic action potentials, a phenomenon known as spike timing-dependent plasticity or STDP.
STDP not only represents a physiologically realistic basis for understanding how LTP and LTD can be induced by normal patterns of neuronal activity but also provides a computationally powerful formulation for how these forms of synaptic plasticity might be involved in complex computational tasks such as coincidence detection and sequence learning. Importantly, STDP has been primarily studied in the immature brain and recent studies suggest that the properties of STDP are very different in the adult hippocampus. Thus, the functional role of STDP in the adult hippocampus is unclear. In this project a combination of electrophysiological, pharmacological, and modeling approaches will be used to study three fundamental aspects of STDP at excitatory synapses in the adult hippocampus. First, studies of cells in the highly interconnected neuronal network found in the hippocampal CA3 region will be used to determine whether STDP has a crucial role in allowing interconnected cells to form networks that avoid runaway excitation or epileptic-like activity. Second, studies in the hippocampal CA1 region, where cells are not highly interconnected but instead form simple feed-forward networks, will be used to investigate whether STDP exists in a latent or hidden form at synapses in feed-forward networks. Finally, experiments will be done to examine whether modulatory neurotransmitters known to have an important role in learning and memory can modulate the induction of STDP in the hippocampus. Together, the results of these experiments will provide key information regarding how STDP may be involved in hippocampal information processing and hippocampal-dependent forms of learning in the adult brain. Moreover, the project will provide a unique opportunity for the multidisciplinary training of both undergraduate and graduate students in a highly interactive project using a combination of cellular neurophysiological and computational approaches.
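For readers unfamiliar with the phenomenon, the following is a minimal sketch (Python/NumPy) of a canonical pair-based STDP window, in which the sign and size of the weight change depend on the relative timing of pre- and postsynaptic spikes. The amplitudes and time constants are generic, textbook-style assumptions, not the adult-hippocampus values this project sets out to measure.

```python
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.010, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair under a canonical pair-based
    STDP window: pre-before-post (delta_t > 0) potentiates, post-before-pre
    (delta_t < 0) depresses.  Amplitudes and time constants are illustrative."""
    dt = float(delta_t_ms)
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)       # LTP side of the window
    return -a_minus * np.exp(dt / tau_minus)         # LTD side of the window

for dt in (-40, -10, 10, 40):
    print(f"t_post - t_pre = {dt:+4d} ms  ->  dw = {stdp_dw(dt):+.5f}")
```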
|
1 |
2007 — 2008 |
Buonomano, Dean V |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable. |
Computations and Unsupervised Learning in Recurrent Neural Networks @ University of California Los Angeles
DESCRIPTION (provided by applicant): The human brain in general, and the cerebral cortex in particular, remains the most sophisticated computational system known to man. Elucidating the mechanisms underlying the cerebral cortex's ability to process information is critical for understanding both normal cortical processing and the myriad of neurological disorders produced by abnormal cortical function. The current research begins with the assumption that a necessary step towards understanding cortical processing will be to elucidate two related problems: temporal processing and neural dynamics. First, temporal processing (the decoding of temporal information) is of fundamental importance for most forms of sensory processing, perhaps most notably speech. Yet the neural mechanisms underlying even simple forms of temporal computation are not known. Second, neural dynamics (evolving changes in the spatial-temporal patterns of neural activity) is thought to play a pivotal role in many forms of neural computation, including temporal processing. However, given the inherently complex and highly nonlinear nature of neural activity in recurrent networks composed of spiking neurons, we do not yet understand (1) how dynamics emerges and is controlled (e.g., to avoid epileptic activity), and (2) how dynamics is tuned in an experience-dependent manner to allow learning. It is generally assumed that synaptic plasticity (as well as plasticity at other loci) ultimately underlies the control and behavior of cortical networks. However, while a number of forms of plasticity have provided powerful learning rules when implemented in feed-forward networks, the implementation of effective learning rules in recurrent networks has proven largely intractable. In the current proposal, we will examine whether multiple learning rules in parallel, operating synergistically, can lead to effective learning in recurrent networks. We will pay particular attention to the simultaneous control of excitatory and inhibitory synapses, which is hypothesized to be critical in recurrent networks. The proposal consists of two Aims. In the first Aim, we will use a set of synaptic learning rules to train a large-scale recurrent network to develop stable and robust responses to spoken phonemes - in other words, to encode complex stimuli in a spatial-temporal pattern of spikes. The second Aim will use a novel learning rule to teach output units to respond selectively to classes of phonemes - that is, to decode the patterns of the cortical network. If progress is made in this direction, the research described here will enhance not only our understanding of normal cortical processing but also our understanding of the abnormal cortical states that arise in neurological disorders characterized by abnormal temporal processing and neural dynamics; such diseases include autism, Fragile X, dyslexia, and epilepsy. Behavior and cognition are ultimately an emergent property of the dynamic interaction of hundreds of thousands of neurons embedded in complex networks. While significant progress has been made towards understanding cellular and synaptic properties in isolation, elucidating how the activity of hundreds of thousands of neurons underlies cortical computations remains an elusive and fundamental goal in neuroscience. The research described here will use large-scale simulations of cortical networks to define the synaptic learning rules that allow computations to emerge from recurrent cortical networks.
Specifically, the ability of artificial neural networks to learn to discriminate spoken phonemes will be examined. Understanding how computations emerge from massive networks of interconnected elements is a necessary step towards understanding normal cortical function, as well as the pathological cortical abnormalities observed in a myriad of neurological disorders.
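As a toy illustration of the decoding step in the second Aim, the sketch below trains output units with a simple delta rule to classify fixed "network state" vectors into classes. The data are random stand-ins for the spatiotemporal patterns a cortical network model would produce, and the rule is a generic perceptron-style update, not the novel learning rule proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_classes = 120, 3                     # hypothetical state dimension and phoneme classes
prototypes = rng.normal(0, 1, (n_classes, n_states))

def make_trial(c, noise=0.4):
    """A noisy network-state vector belonging to class c (stand-in data)."""
    return prototypes[c] + noise * rng.normal(size=n_states)

W = np.zeros((n_classes, n_states))              # readout weights, one output unit per class
eta = 0.05
for _ in range(2000):                            # online delta-rule training
    c = rng.integers(n_classes)
    x = make_trial(c)
    y = np.exp(W @ x) / np.exp(W @ x).sum()      # softmax output of the readout units
    target = np.eye(n_classes)[c]
    W += eta * np.outer(target - y, x)           # delta rule: move outputs toward the target

test = [(c, make_trial(c)) for c in rng.integers(n_classes, size=200)]
acc = np.mean([np.argmax(W @ x) == c for c, x in test])
print("readout accuracy on new noisy states:", acc)
```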
|
0.958 |
2007 — 2011 |
Buonomano, Dean V |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Experience-Dependent Plasticity and Dynamics in Vitro @ University of California Los Angeles
DESCRIPTION (provided by applicant): Cortical computations are the result of neural dynamics: the spatial-temporal patterns created by the flow of activity through cortical networks. It is from neural dynamics that the computations that underlie cognition emerge. Additionally, it is ultimately not the effect of genes or cells in isolation that underlies cognitive abnormalities, such as those observed in neurofibromatosis or autism, but how they alter function at the network level. Over the past decades, significant progress has been made regarding the learning rules governing synaptic strength, as well as in the description of experience-dependent changes in cortical processing using in vivo and imaging approaches. However, less progress has been made in bridging these levels of analyses; there is an explanatory gap in the ability of synaptic and cellular properties to account for the emergent properties of neural networks. Indeed, the mechanisms by which the properties of millions of synapses and thousands of neurons are adjusted to produce not only a controlled (as opposed to epileptic) flow of activity but also a computation as a result of neural dynamics are not understood. The goal in the current proposal is to use cortical networks in vitro as a 'reduced preparation' to study the fundamental principles underlying neural processing and dynamics within local cortical networks. Cortical neurons develop selective responses to stimuli in an experience-dependent manner. This selectivity plays a fundamental role in sensory processing and pattern recognition, and appears to develop independently of whether the stimuli are auditory, somatosensory or visual in nature. The learning rules responsible for the emergence of selectivity are presumably not coherently engaged in traditional in vitro preparations, since these normally 'develop' in the absence of any input structure, much like the visual or auditory system being deprived of patterned input. We will chronically expose cortical networks in vitro to simple input patterns to address a number of fundamental issues, including: 1) whether neurons in vitro can develop stimulus-selective responses in an 'experience-dependent' fashion, and 2) what computational role spontaneous network dynamics plays. By advancing our understanding of how sensory experience shapes network function and dynamics, this research will contribute to the elucidation of both normal and pathological cortical function. Furthermore, the development of an in vitro model of cortical function will prove valuable to the development of experimental models for diseases affecting cortical function, such as neurofibromatosis and Alzheimer's disease.
|
0.958 |
2011 — 2014 |
Buonomano, Dean |
N/A Activity Code Description: No activity code was retrieved. |
Ri: Small: Temporal and Spatiotemporal Processing in Recurrent Neural Networks With Unsupervised Learning @ University of California-Los Angeles
The brain's ability to perform complex forms of pattern recognition, such as speech discrimination, far exceeds that of the best computer programs. One of the strengths of human pattern recognition is its seamless processing of the temporal structure and temporal features of stimuli. For example, the phrase "he gave her cat food" can convey two different meanings depending on whether the speaker emphasizes the pause between "her" and "cat," or "cat" and "food." Attempts to emulate the brain's ability to discriminate such patterns using artificial neural networks have had only limited success. These models, however, have traditionally not captured how the brain processes temporal information. Indeed most of these models have treated time as equivalent to a spatial dimension, in essence assuming that the same input is buffered and played at different delays. Similarly, more traditional approaches to pattern recognition, which generally rely on discrete time bins, also do not capture how the brain processes temporal information. The goal of the current research is to use a framework, referred to as state-dependent networks or reservoir computing, to simulate the brain's ability to process both the spatial and temporal features of stimuli. A critical component of this framework is that temporal information is automatically encoded in the state of the network as a result of the interaction between incoming stimuli and internal states of recurrent networks.
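The sketch below illustrates the state-dependent (reservoir) idea in its simplest form: a fixed random recurrent network is driven by two stimuli that differ only in the interval between two pulses, and because the second pulse arrives while the network is in a different state, the resulting network states are separable by a simple trained linear readout. The network size, parameters, and ridge-regression readout are illustrative assumptions, not the model developed in this project.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                                  # recurrent units
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n)) * 1.2        # fixed random recurrent weights
w_in = rng.normal(0, 1.0, n)                             # fixed input weights
tau, dt = 10.0, 1.0                                      # ms

def run(pulse_times, T=200, noise=0.05):
    """Return the network state at time T; the two input pulses perturb the
    ongoing dynamics, so the state at T implicitly encodes their timing."""
    x = np.zeros(n)
    for t in range(T):
        pulse = 3.0 if any(p <= t < p + 5 for p in pulse_times) else 0.0
        x = x + (dt / tau) * (-x + np.tanh(W @ x + w_in * (pulse + noise * rng.normal())))
    return x

# Two stimuli that differ only in the interval between two pulses.
X = np.array([run({10, 60}) for _ in range(20)] + [run({10, 110}) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)
# Ridge-regression readout trained on these trials (a toy demo, not a benchmark).
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n), X.T @ (2 * y - 1))
print("fraction of training trials separated by the readout:", np.mean((X @ w_out > 0) == y))
```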
This project will develop a general model of spatiotemporal pattern recognition focusing on speech discrimination. The model will incorporate plasticity, a critical characteristic of the brain that has eluded previous state-dependent network models. Plasticity is a cardinal feature of the brain's computational power. For example, in the context of speech recognition, even at the age of 6 months, the brains of babies are tuned to recognize sounds of their native language. This ability is an example of experience-dependent cortical plasticity and it relies in part on synaptic plasticity and cortical reorganization. Incorporating synaptic plasticity into recurrent networks has proven to be a very challenging problem as a result of the inherent nonlinear and feedback dynamics of recurrent networks. The current project will use a novel unsupervised form of synaptic plasticity--based on empirically observed forms of plasticity referred to as homeostatic synaptic plasticity--to endow state-dependent networks with the ability to adapt and self-tune to the stimulus set the network is exposed to. This project interfaces recent advances in theoretical neuroscience and novel approaches in machine learning. The results will help develop artificial neural networks that capture the brain's ability to process temporal information and reorganize in response to experience.
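The unsupervised rule referred to above is based on homeostatic plasticity. As a point of reference, here is a minimal sketch of multiplicative homeostatic synaptic scaling, in which each unit nudges its incoming excitatory weights toward a firing-rate set-point. The set-point, learning rate, and matrix sizes are illustrative placeholders, not the form of the rule used in the project.

```python
import numpy as np

def homeostatic_scaling(W_exc, mean_rates, target_rate=5.0, eta=0.01):
    """Multiplicatively scale each unit's incoming excitatory weights so that
    units firing above the set-point weaken their inputs and units firing
    below it strengthen them.  A toy form of homeostatic synaptic scaling."""
    scale = 1.0 + eta * (target_rate - mean_rates) / target_rate
    return W_exc * scale[:, None]              # one scale factor per postsynaptic unit

W = np.abs(np.random.default_rng(1).normal(0.5, 0.1, (4, 4)))
rates = np.array([2.0, 5.0, 8.0, 12.0])        # Hz, hypothetical mean firing rates
W_new = homeostatic_scaling(W, rates)
print(np.round(W_new / W, 3))                  # rows above the set-point shrink, below it grow
```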
|
1 |
2012 — 2013 |
Buonomano, Dean V |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Abnormal Network Dynamics and 'Learning' in Neural Circuits From Fmr1-/- Mice @ University of California Los Angeles
DESCRIPTION (provided by applicant): Learning, behavior, and cognition are ultimately an emergent property of the dynamic interactions of millions of neurons embedded within complex networks. Similarly, pathological brain states, such as mental retardation and autism, are ultimately expressed as a result of abnormal network function. The cognitive deficits that characterize some neurological diseases may not be caused by isolated molecular or cellular abnormalities, but rather by the effects of multiple interacting molecular and cellular abnormalities on network function. Indeed, the complexity and diversity of the neural and behavioral phenotypes associated with some diseases have led to the suggestion that mental retardation and autism should also be studied from the perspective of abnormal network function. However, despite significant advances towards understanding the molecular, synaptic, and cellular mechanisms of neuronal function, as well as towards identifying the anatomical and systems-level abnormalities associated with diseases, relatively little progress has been made in bridging these levels of analysis. That is, relatively little is known about how normal or abnormal behavior, learning, and cognition emerge from the interaction of neurons embedded in complex networks. The research described here is aimed at bridging this gap by studying the abnormal network properties in mice lacking the gene that causes Fragile X syndrome. The overarching hypothesis is that in Fragile X syndrome there is a deficit in the ability to coordinate the many different cellular and synaptic properties that govern network dynamics. We will extend preliminary findings on network abnormalities in cortical circuits from Fmr1-/- mice, and determine if the network dynamics is appropriately regulated in response to chronic patterned activity. Additionally, we will determine if a form of in vitro 'learning' is altered in cortical circuits from the mouse model of FXS. If we confirm that deficits in in vitro 'learning' are present in isolated cortical networks from Fmr1-/- mice, we will provide an important link between cortical circuit function and cognitive abnormalities. Furthermore, establishing deficits in what can be considered a form of in vitro learning offers a strategy for rational therapeutic approaches for treatment. PUBLIC HEALTH RELEVANCE: Some neurological diseases - including mental retardation and autism - may not arise simply from isolated abnormalities at the molecular or cellular level, but from how multiple interacting factors at the molecular level alter processing at the level of networks of neurons. The current project is aimed at understanding network-level abnormalities in what is among the most common causes of mental retardation and autism, Fragile X syndrome. Understanding the network abnormalities that are causally related to cognitive deficits will provide a strategy for rational therapeutic approaches for treatment.
|
0.958 |
2012 — 2013 |
Buonomano, Dean V |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable. |
Learning Temporal Patterns: Computational and Experimental Studies of Timing @ University of California Los Angeles
DESCRIPTION (provided by applicant): The human brain remains the most sophisticated computational system known to man. Elucidating the mechanisms underlying the cerebral cortex's ability to generate behavior and cognition is critical for understanding both normal cortical processing and a myriad of neurological disorders produced by abnormal cortical function. A necessary step towards this goal will be to understand how the brain tells time and processes temporal information. Here we focus on the problem of generating and learning complex spatiotemporal patterns. The studies proposed here are based on the hypothesis that the internal dynamics of recurrent neural networks underlies some forms of timing in the range of hundreds of milliseconds to a few seconds, and on the recently proposed paradigm that time is encoded in the continuously changing activity pattern of a neuronal population. The proposal consists of two aims. In the first aim, we will use a novel human psychophysical task to study the learning of temporal patterns and test explicit theoretical predictions of our hypothesis. In the second aim, we will develop a computational model of timing as an implementation of the proposed paradigm, to determine whether it can account for the experimental results. PUBLIC HEALTH RELEVANCE: The ability to tell time and process temporal information is of fundamental importance to sensory and motor processing, behavior, learning, and cognition. It is increasingly clear that the cognitive abnormalities in a number of neurological diseases - including learning disabilities, Parkinson's disease, and schizophrenia - are associated with deficits in the ability to normally process temporal information. Thus, elucidating both normal and pathological brain function will require that we unveil the mechanisms that allow the brain to tell time. The current project focuses on this problem, not only by directly studying temporal processing, but also by taking the important step towards understanding how complex computations emerge from the dynamics of recurrent neural circuits.
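To make the "population clock" paradigm mentioned above concrete, the sketch below builds a toy population whose state changes continuously over a trial (Gaussian "time fields") and decodes elapsed time as the template time point closest to an observed noisy state. The tuning curves, noise level, and decoder are hypothetical illustrations, not the model to be developed in the second aim.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_steps = 50, 100                     # hypothetical population size and trial length

# Toy population: each unit has a Gaussian temporal tuning curve ("time field"),
# so the population state changes continuously over the course of the trial.
centers = np.linspace(0, n_steps, n_units)
t = np.arange(n_steps)
template = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 8.0) ** 2)

def decode_time(noisy_state):
    """Estimate elapsed time as the template time bin nearest to the observed state."""
    return int(np.argmin(np.linalg.norm(template - noisy_state, axis=1)))

true_t = 42
observed = template[true_t] + 0.1 * rng.normal(size=n_units)
print("true time bin:", true_t, " decoded:", decode_time(observed))
```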
|
0.958 |
2012 — 2019 |
Buonomano, Dean V |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Timing and Learning in in Vitro Cortical Networks @ University of California Los Angeles
DESCRIPTION (provided by applicant): The ability to tell time, predict the movement of other animals, and decode complex temporal patterns - such as those in speech - are among the brain's primary functions. As such, we propose that it will not be possible to understand the fundamental principles underlying brain function without understanding how the brain tells time and processes temporal information. We have proposed that, precisely because timing is such a fundamental component of brain function, neural circuits in general are capable of processing temporal information - that is, timing does not rely on specialized or centralized circuits. This theory posits that timing emerges from the neural dynamics of recurrent neural circuits, and predicts that even neural circuits in vitro may be able to learn to tell time. We have recently provided evidence that timing is a general computation of cortical networks. Specifically, by chronically exposing slices in the incubator to patterned stimuli (mimicking sensory experience), we have shown that the neural dynamics reproduces the temporal features of the experienced stimuli. Here we will use novel electrical and optogenetic methods to demonstrate that cortical circuits in vitro can learn temporal patterns, and elucidate the underlying neural mechanisms of timing and cortical computations. We propose that studying network-level forms of learning in reduced preparations is necessary to understand cortical computations because of the limitations of in vivo studies. Additionally, the ability to study network behavior and 'learning' in vitro will provide a means to study pathological circuit-level computations using tissue from animal models of cognitive disorders.
|
0.958 |
2014 — 2017 |
Buonomano, Dean |
N/A Activity Code Description: No activity code was retrieved. |
Ri: Small: Dynamic Attractor Computing: a Novel Computational Approach Applied Towards Temporal Pattern and Speech Recognition @ University of California-Los Angeles
Harnessing the brain's computational strategies has been a long-sought objective of computational neuroscience and machine learning. One reason this goal has remained elusive is that most neurocomputational frameworks have not effectively captured a fundamental computational feature of the brain: the ability to seamlessly encode, represent, and process temporal information. The current project seeks to address this shortcoming by using the neural dynamics inherent to recurrent neural networks to generate temporal patterns and process temporal information.
The ability to generate the fine motor patterns necessary to play the piano and the ability to parse the complex temporal structure of speech are but two examples of the human brain's sophisticated ability to generate and process complex temporal patterns. Notably, both these examples also illustrate an additional feature of the brain's computational abilities: "temporal warping." We can play the same musical piece at different speeds, or understand speech spoken at slow or fast rates. The mechanisms underlying the brain's ability to process temporal information in a flexible and temporally invariant fashion are a key focus of the current proposal. Recent theoretical and experimental studies have favored the view that the brain does not have sampling rates, time bins, or explicit delay lines, but rather encodes time and the temporal features of stimuli through the internal dynamics of recurrent neural networks. The computational potential of these recurrent neural network models, however, has been limited for two reasons: 1) while it is well established that the recurrent connections of neural circuits are plastic, it has proven challenging to incorporate plasticity into simulated recurrent neural network models; 2) the dynamic regimes with the most computational potential are precisely those that also exhibit chaos--voiding much of their computational potential because the dynamics is not reproducible across trials. Building on a novel framework, this project tunes the weights of the recurrent connections in a manner that "tames" the chaotic dynamics of recurrent networks. The approach creates locally stable trajectories (dynamic attractors), which provide a novel and potentially powerful computational approach that can elegantly encode temporal information and retain internal memories of recent events. Of particular relevance to the current project is demonstrating that these networks can produce families of similar neural trajectories that flow at different speeds, thus allowing the network to generate the same complex motor pattern at different speeds. This project will also determine if the principles of dynamic attractors and time warping can be applied in the domain of sensory processing, using speech recognition as a test bed for the brain's ability to discriminate complex spatiotemporal stimuli.
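A minimal sketch of the temporal-warping idea described above: the same recurrent-network trajectory is traversed at different speeds by scaling the effective integration rate. The network here is a generic random rate model kept in a contracting (non-chaotic) regime so the comparison is clean; it illustrates time warping only, not the dynamic-attractor training framework itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
W = rng.normal(0, 0.9 / np.sqrt(n), (n, n))      # random recurrent weights (contracting regime)
x0 = rng.normal(0, 1.0, n)                       # shared initial condition defines the trajectory

def trajectory(speed, T_ms=400, dt=1.0, tau=25.0):
    """Integrate the rate network while scaling the effective time constant by
    `speed`; larger values traverse the same neural trajectory proportionally faster."""
    x, states = x0.copy(), []
    for _ in range(int(T_ms / dt)):
        x = x + speed * (dt / tau) * (-x + np.tanh(W @ x))
        states.append(x.copy())
    return np.array(states)

slow, fast = trajectory(speed=1.0), trajectory(speed=2.0)
# The 2x-speed run visits roughly the same states in half the time:
err = np.linalg.norm(fast[:200] - slow[1::2], axis=1).mean()
print("mean state distance, fast vs. time-compressed slow trajectory:", round(float(err), 4))
```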
|
1 |
2016 — 2020 |
Buonomano, Dean V; Masmanidis, Sotiris Constantinos |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Crcns: Network Mechanisms of the Learning and Encoding of Timed Motor Responses @ University of California Los Angeles
The brain is an inherently dynamic system: it evolved under strong selective pressures to allow animals to interact with the environment in real time and to predict and prepare for future events. For these reasons, understanding neural dynamics, and how the brain tells and encodes time, is fundamental to understanding brain function. The importance of neural dynamics and timing to brain function emphasizes the need for techniques that allow for the collection and analysis of massively parallel single-neuron recordings across multiple structures in behaving animals. This project will combine novel electrophysiological, behavioral, analytical, and computational methods to reverse engineer the neural circuits underlying learning and timing. The first aim is to combine large-scale neural recordings with computational approaches to determine how time is represented in the striatum and prefrontal cortex, two interacting brain areas that are closely implicated in temporal processing. We will specifically examine whether encoding of time relies on absolute, relative, or stimulus-specific coding mechanisms. Recordings will be carried out in awake, head-fixed mice trained on a classical trace reward conditioning task in which two cues predict reward with a different delay period. When animals learn the cue-reward association, they engage in robust anticipatory licking that precedes the reward presentation; moreover, the timing of this behavior is dependent on the cue-reward delay time. The second aim is to combine electrophysiology and optogenetics to determine if temporal coding in the striatum and prefrontal cortex is perturbed by transiently disrupting network activity. The hypothesis is that if the dynamics of the timing circuits are perturbed, then the ensuing activity patterns will be irreversibly altered, thus reducing the accuracy or precision of timed behavioral responses. The third aim is to develop a novel computational framework based on recurrent neural network models that can predict future patterns of neural ensemble activity based on present patterns. The ultimate goal of this work is to integrate highly innovative electrophysiological and computational methods for reverse engineering brain circuit function at the level of networks of hundreds of neurons in the striatum and prefrontal cortex. RELEVANCE (See instructions): Learning to produce appropriately timed actions is fundamental to many aspects of behavior, and disruption of the brain circuits underlying this process is implicated in many neurological and psychiatric disorders. This project will develop an integrated approach to studying the mechanisms of timed motor behavior by combining large-scale neural recordings from multiple brain areas and computational modeling of neural networks.
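As a sketch of what distinguishing "absolute" from "relative" temporal coding could look like at the single-unit level, the toy analysis below compares a unit's short-delay response to its long-delay response either over the same absolute time or after linearly rescaling time. The simulated tuning curves and the correlation-based comparison are hypothetical illustrations, not the analyses specified in the proposal.

```python
import numpy as np

def scaling_index(rate_short, rate_long):
    """Compare two toy codes for a unit recorded on short- and long-delay trials:
    'absolute' (the long-delay profile matches the short one over the same
    absolute time) vs 'relative' (it matches after linearly rescaling time).
    Returns the correlation under each alignment; purely illustrative."""
    n_s, n_l = len(rate_short), len(rate_long)
    # Absolute alignment: compare the first n_s bins directly.
    r_abs = np.corrcoef(rate_short, rate_long[:n_s])[0, 1]
    # Relative alignment: rescale the long-delay profile onto the short trial's bins.
    rescaled = np.interp(np.linspace(0, n_l - 1, n_s), np.arange(n_l), rate_long)
    r_rel = np.corrcoef(rate_short, rescaled)[0, 1]
    return r_abs, r_rel

# Hypothetical unit whose firing peaks at a fixed fraction of the delay (a relative code).
t_short, t_long = np.arange(100), np.arange(200)
rate_short = np.exp(-0.5 * ((t_short - 50) / 10.0) ** 2)
rate_long = np.exp(-0.5 * ((t_long - 100) / 20.0) ** 2)
print("absolute vs relative correlation:", scaling_index(rate_short, rate_long))
```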
|
0.958 |
2020 |
Buonomano, Dean V; Golshani, Peyman (co-PI) |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Multiplexing Working Memory and Timing: Encoding Retrospective and Prospective Information in Transient Neural Trajectories. @ University of California Los Angeles
Abstract: A general principle of brain function is the ability to store information about the past to better predict and prepare for the future. Working memory and timing are two computational features that evolved to allow the brain to use recent information about the past to accomplish short-term goals. Working memory refers to the ability to transiently store information in a flexible manner, while timing refers to the ability to generate well-timed motor responses, modulate attention in time, and predict when external events will occur. To date, working memory and timing have primarily been treated as independent processes. Here we propose that, because the brain seeks to use information about the past to predict the future, working memory and timing are often multiplexed. Specifically, the neural patterns of activity recorded during the delay period of many working memory tasks encode both retrospective information about the past and prospective predictions about the future. To test this hypothesis, we have developed a novel variant of the delayed-nonmatch-to-sample task, in which the first cue predicts the duration of the delay, that is, how long an item must be held in working memory. This task will allow us to determine if network-level population responses encode both retrospective information about the past and prospective information about delay duration. Preliminary results from a supervised recurrent neural network model predict that the temporal structure of the neural patterns of activity elicited by both cues will be different. This prediction will be tested using large-scale Ca2+ imaging to characterize the spatiotemporal patterns of activity in brain areas associated with working memory and timing. Additionally, optogenetic perturbation experiments and longitudinal characterization of the emergence of neural patterns of activity will be performed. These experiments will, in turn, be used to ground computational studies aimed at understanding the neuronal and circuit-level learning rules that underlie the emergence of patterns that encode working memory and time.
|
0.958 |
2020 — 2023 |
Buonomano, Dean |
N/A Activity Code Description: No activity code was retrieved. |
Ri: Small: Neural Sequences as a Robust Dynamic Regime For Spatiotemporal Time Invariant Computations. @ University of California-Los Angeles
The temporal dimension is of fundamental importance to understanding the brain because one of the brain's primary functions is temporal in nature: the brain uses information about the past (memories) to predict the future. As a result of the inherently temporal nature of brain function, the brain has evolved mechanisms to tell time, encode time, and perform time-dependent computations. These computations endow animals with the ability to quickly learn to anticipate external events (for example, when a red light should change), and to recognize and generate complex temporal patterns (such as those that underlie speech or Morse code). One feature of the brain's computational abilities is referred to as "temporal scaling," for example, the ability to talk, play music, or tap a Morse code message at different speeds. The neural mechanisms underlying timing and temporal scaling remain poorly understood. Furthermore, although dramatic advances have taken place in the field of machine learning, current machine learning approaches do not capture how the brain performs temporal computations or achieves temporal scaling. Emerging experimental data suggest that the brain may encode time and implement temporal scaling through a number of different dynamic regimes, including ramping (increasing firing rates with time) or neural sequences (transient sequential activation of neurons). This project seeks to understand how time-dependent computations are performed in recurrent neural networks, and proposes that neural sequences provide an optimal solution to the problem of temporal scaling. This project will contribute to advances in the ability of artificial systems to capture the computational power of the brain. Associated education and outreach efforts are closely related to the research.
Two main approaches will be used. First, machine-learning-based supervised recurrent neural networks will be trained on a number of different timing tasks--including a Morse code task that requires producing a complex temporal pattern at different speeds--in order to determine if neural sequences represent a general solution to the problems of encoding time and temporal scaling. Second, neuronal and synaptic properties that are mostly absent from current machine learning approaches will be used to develop a model of how neural sequences emerge and undergo temporal scaling in a biologically plausible fashion. Specifically, cortical synapses exhibit short-term synaptic plasticity, in which the strength of synapses changes in a use-dependent manner over the course of hundreds of milliseconds; these dynamics can, in turn, be modulated--accelerating or slowing short-term synaptic plasticity. It is hypothesized that this modulation of short-term synaptic plasticity is one way the brain implements temporal scaling. Overall, this project will lead to novel biological principles being applied towards machine learning, and further advance the ability to emulate the brain's computational strategies.
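A minimal sketch of the neural-sequence regime discussed above: a chain of rate units in which each unit excites the next, so activity sweeps along the chain, and a single speed factor scales how quickly the sequence unfolds. The chain architecture and the way speed is implemented (scaling the effective time constant) are illustrative simplifications; the proposal's biologically based model instead modulates short-term synaptic plasticity.

```python
import numpy as np

def run_sequence(speed=1.0, n_units=20, T=400, dt=1.0, tau=20.0, w=3.0):
    """Feedforward chain of rate units: unit i excites unit i+1, so a pulse of
    activity sweeps along the chain as a neural sequence.  Scaling `speed`
    scales the effective time constant, so the same sequence unfolds
    proportionally faster or slower (a toy form of temporal scaling)."""
    x = np.zeros(n_units)
    x[0] = 1.0                                   # kick the first unit in the chain
    rates = np.zeros((T, n_units))
    for t in range(T):
        drive = np.zeros(n_units)
        drive[1:] = w * x[:-1]                   # feedforward excitation along the chain
        x = x + speed * (dt / tau) * (-x + np.tanh(drive))
        rates[t] = x
    return rates

slow, fast = run_sequence(speed=1.0), run_sequence(speed=2.0)
# Time bin at which each of the first few units peaks: doubling speed roughly halves them.
print("slow peak times:", slow.argmax(axis=0)[:6])
print("fast peak times:", fast.argmax(axis=0)[:6])
```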
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
1 |