2007 — 2009 |
Olshausen, Bruno (co-PI); Sommer, Friedrich |
Activity Code: N/A |
CRCNS Data Sharing: Central Facility and Services @ University of California-Berkeley
Proposal No. 0749049; PI: Friedrich T. Sommer
Award Abstract:
This award supports services and infrastructure for sharing of computational neuroscience data as part of an exploratory activity aimed at catalyzing rapid and innovative advances in computational neuroscience and related fields. The core facility will provide transparent access to shared resources in a manner that scales up to large data sets. Services will be designed to lessen the burden on contributors to make their data or other resources available and to optimize the ability of the user community to identify and use those resources. Community- and market-oriented mechanisms will be developed to identify resources of particular significance for the field, and to solicit feedback from relevant communities. It is anticipated that the availability of high quality data will offer unprecedented opportunities for new types of discoveries, development of new methods, and development of new interdisciplinary collaborations. This new activity will also assist and drive teaching in computational neuroscience, through the exchange of datasets, stimuli, and analysis and modeling tools among modelers, experimentalists, and students.
2007 — 2011 |
Sommer, Friedrich |
Activity Code: N/A |
RI: Exploring Neurobiological Strategies of Visual Scene Analysis Using Oscillations in Recurrent Neural Circuitry @ University of California-Berkeley
Recurrent connections and intrinsic rhythmic neural activity are ubiquitous in biological visual pathways, where they first appear in the retina. Although the functional role of these properties of the retinal circuit is not fully resolved, recent work suggests that they might help convey information about visual context or even the gist of a scene. Further, new experimental work shows that the oscillatory patterns of activity generated by retinal networks are relayed through the thalamus to the cortex. To date, however, retinal oscillations have usually been ignored in computer vision systems, even those based on neural principles. If artificial systems matched human performance in visual perception, this lack of attention to the biological circuitry might not be worthy of note. But this is not the case: humans do a far better job than computers in routine tasks like analyzing cluttered scenes or recognizing objects against noisy backgrounds or under different lighting conditions. This project therefore aims to develop biologically inspired models that include oscillating networks to improve scene analysis in artificial vision.
The new models will incorporate local and distributed connections in the retinal circuit and also take advantage of the scheme of efficient sparse coding, a powerful new concept for understanding sensory processing. By taking both local and spatially extensive circuits into account, the expectation is that the model will be able to encode two complementary types of information about the stimulus. Past work has shown that changes in spike rate with respect to the stimulus encode information about local features. This type of rate (or stimulus-locked) coding can be modeled by small-scale circuits. The new models will include ongoing oscillatory activity that is generated internally by large recurrent networks but is also modulated by sensory input. Further, visually evoked changes in the temporal structure of these intrinsic oscillations occur at finer time scales than visually evoked changes in rate. Thus, in principle, visual information could be encoded by spike timing with respect to intrinsic rhythms. Moreover, since the retinal oscillations are generated by distributed networks, it is likely that they provide information about global features of the stimulus. Thus, if successful, the new models will be able to capture, at once, information about local detail and the gist of a scene.
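The rate-versus-phase distinction above can be made concrete with a toy simulation. All parameters here (a 30 Hz oscillation, the encoding and decoding rules) are invented for illustration and are not taken from the proposed models: a stimulus value is written into spike timing relative to an intrinsic rhythm and read back from the mean spike phase.

```python
import numpy as np

# Hypothetical setup: an intrinsic 30 Hz "retinal" oscillation sampled at 1 kHz.
fs = 1000.0          # sampling rate (Hz)
f_osc = 30.0         # oscillation frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
phase = (2 * np.pi * f_osc * t) % (2 * np.pi)

def encode(stimulus):
    """Toy phase code: a stimulus value in [0, 1) becomes a preferred firing
    phase; a spike is emitted each time the oscillation passes that phase."""
    target = 2 * np.pi * stimulus
    d = np.angle(np.exp(1j * (phase - target)))       # wrapped phase difference
    crossings = np.where((d[:-1] < 0) & (d[1:] >= 0))[0]  # upward zero-crossings
    return crossings / fs                             # spike times (s)

def decode(spike_times):
    """Read the stimulus back from the circular mean of spike phases."""
    ph = (2 * np.pi * f_osc * spike_times) % (2 * np.pi)
    mean = np.angle(np.mean(np.exp(1j * ph))) % (2 * np.pi)
    return mean / (2 * np.pi)

spikes = encode(0.25)
est = decode(spikes)
print(est)  # close to the encoded value 0.25
```

One spike per oscillation cycle suffices here because the information sits entirely in timing relative to the rhythm, not in the spike count.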
Numerous applications would benefit from a deeper understanding of how information is encoded in the early visual system. For example, the models that will result from the research proposed here have value for the development of visual prosthetics as well as for technical applications that involve image processing, from new methods for image compression adapted to sensory perception to automated object segmentation, scene analysis, and recognition.
2009 — 2015 |
Olshausen, Bruno (co-PI); Sommer, Friedrich |
Activity Code: N/A |
CI-ADDO-NEW: CRCNS.org - Online Repository for High-Quality Neuroscience Data and Resources for Computational Neuroscience @ University of California-Berkeley
This project will develop and operate a community infrastructure, CRCNS.ORG, to enable the sharing of data needed by the computational neuroscience community, to enhance and foster collaborations among theoretical and experimental researchers, and to further the development and testing of computational theories of brain function. This infrastructure will widen the spectrum of techniques applied to brain data, enabling discoveries that go beyond the scope of individual laboratories.
The infrastructure targets the communities of neuroscience and related fields such as computer science, physics, mathematics, statistics, and engineering in which investigators seek access to high-quality neurophysiology data, including electrical, magnetic, and optical recordings from single neurons, neural ensembles, and brain regions. Development activities are aimed at lowering the barriers to contributing, accessing, and using neurophysiology data. Standardized methods will be developed for storing and annotating data in a self-describing, hierarchical format, and enabling flexible on-line access. Scalable methods will be developed to enable users to find potentially useful data and to provide means for online visualization and some on-line analysis. Operations activities will support users and data contributors as well as community outreach activities. Three summer training courses will be held to introduce students and researchers to methods and conventions concerning organization, visualization, and analysis of neuroscience data, and how to use the specific resources of the repository.
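The "self-describing, hierarchical" idea can be illustrated with plain JSON: every level of a record carries the attributes needed to interpret its contents, so a shared file can be understood without external documentation. The field names below are invented for illustration and are not an actual CRCNS.org schema.

```python
import json

# Hypothetical sketch of a self-describing, hierarchical data record.
record = {
    "dataset": "example-session-01",
    "attributes": {"species": "rat", "task": "linear track"},
    "groups": {
        "lfp": {
            "attributes": {"sampling_rate_hz": 1250, "units": "mV"},
            "data_file": "lfp.dat",
        },
        "spikes": {
            "attributes": {"n_units": 42, "units": "seconds"},
            "data_file": "spike_times.dat",
        },
    },
}

serialized = json.dumps(record, indent=2)   # what would be shared on-line
restored = json.loads(serialized)

# A consumer can discover how to read the LFP without a separate manual.
print(restored["groups"]["lfp"]["attributes"]["sampling_rate_hz"])  # 1250
```

A production repository would use a binary hierarchical format rather than JSON, but the annotation principle is the same.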
2012 — 2017 |
Sommer, Friedrich |
Activity Code: N/A |
RI: Small: Making Sense of Incomplete Sensor Data @ University of California-Berkeley
This project focuses on how to reveal the whole from only partial measurements. Often, physical variables can only be incompletely measured. For example, the state-of-the-art techniques to record brain activity (multi-electrode recordings of local field potentials or fMRI measurements) give only a partial account of the activity patterns of large numbers of neurons. In the course of this project, Fritz Sommer, Ph.D., and his collaborators at the University of California investigate and develop methods for recovering complex multi-dimensional data structure from incomplete measurements. In the general case, when measurements are made (or "subsampled") at a rate lower than a mathematical limit called the Nyquist limit, the original signal cannot be fully reconstructed. However, the recent theory of compressed sensing (CS) demonstrated that subsampling below the Nyquist limit can be lossless if the signal to be compressed has sparse structure. The established theory of CS explains when full recovery from incomplete measurements is possible and provides efficient algorithms for full data reconstruction. This result is extremely relevant because many important classes of sensor signals, such as natural images and sounds, have an approximately sparse structure.
The current project explores whether the principle of CS can allow reconstruction of data structure from measurements that subsample an unknown signal in an unknown fashion. Standard CS cannot reconstruct the signal in such situations because the algorithm requires knowledge of how the signal was subsampled and what its structure is. Dr. Sommer and his team plan to develop methods for data reconstruction that can be performed with the subsampled data alone. The idea is to combine CS, a principle about measuring a signal with sparse structure, with sparse coding (SC), a principle for learning efficient representations of signals with sparse structure. Preliminary results suggest that this combination of methods, called adaptive compressed sensing (ACS), can indeed "learn" the map for recovering the full data from the subsamples alone (Isely et al. 2011). The team will investigate under what conditions a similar result holds for the large class of real-world sensor data that are not exactly sparse but can be well-approximated by sparse representations. Methods for drawing inferences and making decisions from incomplete measurements are also being developed. In particular, a pilot investigation in collaboration with Dr. Bosco Tjan's lab at the University of Southern California will explore whether ACS can be applied to improve the decoding of fMRI data.
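Standard compressed sensing, the known-measurement-matrix case that ACS relaxes, can be sketched in a few lines. The greedy solver below is orthogonal matching pursuit, one common CS recovery algorithm; the dimensions and sparsity level are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Recover a k-sparse signal x from m < n random linear measurements y = Phi @ x.
n, m, k = 256, 64, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # known measurement matrix

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = Phi @ x                                      # the incomplete measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily build the support of x."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # least-squares fit on the selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(float(np.max(np.abs(x_hat - x))))  # near zero when recovery succeeds
```

ACS's distinctive step, learning Phi's effect and the sparse structure from the subsamples alone, is not shown here; this sketch only fixes the baseline that the abstract contrasts against.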
2015 — 2017 |
Sommer, Friedrich T |
R25 Activity Code Description: For support to develop and/or implement a program as it relates to a category in one or more of the areas of education, information, training, technical assistance, coordination, or evaluation. |
Berkeley Course on Mining and Modeling of Neuroscience Data @ University of California Berkeley
DESCRIPTION (provided by applicant): This proposal is to administer and further develop a successfully established two-week summer training course titled Mining and Modeling of Neuroscience Data which is held at UC Berkeley. The course teaches methods for analyzing neurophysiology data, that is, measurements of neural activity over time, co-registered with behavior or stimuli. With the Obama BRAIN initiative in full swing, rich neurophysiology data will become available at a high rate. The goal of the course is to help build a workforce for leveraging these data, and it is designed to fill a significant gap in training opportunities at the intersection between neuroscience and computational methods (computer science, mathematics, statistics, physics, engineering). Specifically, attendees of the course will be individuals either with a quantitative background and interest in neuroscience or with a background in neuroscience who wish to learn cutting-edge approaches for the analysis of neuroscience data. To recruit students from computer science and mathematics, the project is partnered with the Simons Institute for the Theory of Computing and the Mathematical Sciences Research Institute. The training provided by this course will help increase the pool of researchers who can apply existing methodology and develop novel methods for analyzing and modeling large, complex neurophysiology data sets. Increasing this type of quantitative knowledge in neuroscience will be essential to enhancing the understanding of the brain and developing approaches to treat disorders of the brain.
2015 — 2018 |
Sommer, Friedrich |
Activity Code: N/A |
US-German Data Sharing: Integrating Distributed Data Resources to Enable New Research Approaches in Neuroscience @ University of California-Berkeley
This project seeks to develop new methods of describing and managing neuroscience data in order to accelerate scientific progress in many fields of neuroscience and deepen understanding of the brain. The project will produce software tools to enable annotation and integration of distributed data and help leverage the wealth of data emerging from current large-scale projects such as the Human Brain Project in Europe and the BRAIN Initiative in the US. These results will impact medical application areas such as brain-machine interfaces and sensory prosthetic devices, as well as application areas such as computer vision. Further, the methods developed might be generalizable to other domains of biology and medicine where traditional rigid approaches for organizing data are inapplicable. This could lead to the discovery of causes and treatments of diseases that would not have been made otherwise.
Neurophysiology data, which contain recordings of brain activity, are becoming more commonly shared on the web but they are still very hard to use. To improve the usability of shared neurophysiology data sets, a standardized and expandable system will be developed for annotating the data with the metadata required for their understanding. Furthermore, semantic web technology will be employed to represent, index, and integrate data and metadata across distributed locations on the web. Improving the organization of metadata for shared neurophysiology data will be key for enabling studies that integrate across data sets, such as new types of meta-analyses or data mining methods. This project builds on existing online resources for neurophysiology data created by the project partners, CRCNS.org and G-NODE.org, and on previous work by the INCF neurophysiology data sharing task force. Training of international students and researchers in annual summer courses at UC Berkeley and LMU Munich will improve career opportunities and allow individuals across disciplines to make discoveries and advances in neuroscience.
A companion project is being funded by the Federal Ministry of Education and Research, Germany (BMBF).
2017 — 2020 |
Saremi, Saeed; Olshausen, Bruno (co-PI); Sommer, Friedrich |
Activity Code: N/A |
RI: Small: Extracting and Understanding Sparse Structure in Spatiotemporal Data in Neuroscience and Other Applications @ University of California-Berkeley
Sparse coding and manifold learning are two methods that, each in its own right, have proven essential for understanding the structure in complex high-dimensional data. The goal of this project is to combine these two methods to yield a qualitatively more powerful approach to analyzing data. The investigators will develop the mathematics of sparse coding of spatiotemporal data and combine it with approaches from manifold learning. The tools emerging from this research will bring benefits to society since they are applicable to many areas of technology and medicine, such as signal processing, image and video coding, medical imaging, neural data analysis, and neuroprosthetics; they can also be expected to have implications for understanding information processing in the visual cortex.
Sparse coding is a concept originally developed in neuroscience to account for sensory representations in the brain, which now sees widespread use in many image and signal processing and data analysis tasks. However, there are critical limitations with current approaches to sparse coding. One major issue is that sparse representations can be brittle, changing abruptly over time or in response to small changes in the input, and they can be quite sensitive to parameter settings, initial conditions, and the particular choice of sparse solver. Another limitation is that if the data lie on a low-dimensional manifold, such as sound waveforms or images, the connection between the sparse codes of the data and the geometry of the underlying low-dimensional space is lost. The team conjectures that both of these limitations should be addressed together. Building on previous work and their own preliminary studies, they will develop a theoretical framework for sparse coding to reveal conditions under which the results of sparse coding are unique. Based on these theoretical insights, they will design novel algorithms for robustly revealing persistent sparse structure in spatiotemporal data. Finally, they will develop a new signal transform, called the sparse manifold transform, that combines traditional sparse coding with manifold learning.
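As a concrete reference point for the sparse coding discussed above, here is a minimal sketch; the dictionary, sparsity level, and the ISTA solver are illustrative choices, not the project's algorithms. A sparse code is inferred by l1-regularized regression against a fixed dictionary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse coding: infer a sparse code a for signal x over dictionary D
# by solving min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA.
n, p = 32, 64                      # signal dimension, number of dictionary atoms
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms

a_true = np.zeros(p)
a_true[rng.choice(p, 3, replace=False)] = [1.5, -2.0, 1.0]
x = D @ a_true                     # signal with a 3-sparse code

def ista(D, x, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for the l1-regularized objective."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

a_hat = ista(D, x)
print(int(np.sum(np.abs(a_hat) > 0.1)))   # only a handful of active coefficients
```

The brittleness the abstract describes can be seen in such toy settings by perturbing x slightly or changing `lam` and watching the active set change, which is the behavior the proposed theory aims to characterize.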
2019 — 2020 |
Sommer, Friedrich T |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Building Analysis Tools and a Theory Framework for Inferring Principles of Neural Computation From Multi-Scale Organization in Brain Recordings @ University of California Berkeley
Summary: The BRAIN Initiative is enabling ground-breaking techniques for brain recordings that will permit a unique view into the dynamics of neural activity. However, inferring brain function from multi-channel physiological recordings is challenging. A key difficulty is that individual neurons and mesoscopic, often rhythmic, cell populations interact in complicated and recurrent ways. Such complex neuronal dynamics are hard to analyze but very likely important to the functioning of the brain. This proposal will address this problem by developing (1) tools for analyzing brain activity and (2) a theoretical framework for expressing underlying computations and generating experimental predictions. The starting point of the project is our earlier discovery that phase structure in oscillatory local field potentials (LFP) of hippocampal areas CA1/CA3 carries location information in exquisite detail (Agarwal et al. 2014). We will release software tools that make the methods for phase decoding and for extracting meaningful LFP components available to the broader community. Further, in collaboration with experimental labs, we will research the mechanistic underpinnings of this discovery in hippocampus (Buzsaki, NYU; Foster, UC Berkeley) and explore how similar approaches can leverage phase diversity in cortical gamma oscillations (Fries, MPI Frankfurt). The research goal is to develop analysis tools for decoding and extraction of functional components (Aims 1 and 2), applicable to a broad range of multivariate brain recordings of hippocampal and cortical activity. Further, we will develop a flexible two-level theory framework with software tools (Aim 3) to help neuroscientists, in particular experimenters, formulate putative abstract computations underlying a brain function under study and build a concrete mechanistic circuit model of those computations.
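As a toy illustration of phase decoding, a constant phase offset carrying "information" can be read out of a noisy oscillation by quadrature projection. The signal, frequency, and estimator below are invented for the example and are far simpler than the methods of Agarwal et al.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: a quantity of interest (e.g. position along a track)
# is carried by the phase offset of an 8 Hz "theta" component of an
# LFP-like signal.
fs, f_theta = 1000.0, 8.0
t = np.arange(0, 2.0, 1.0 / fs)

true_offset = 1.2  # radians; the "information" carried by phase
lfp = (np.cos(2 * np.pi * f_theta * t + true_offset)
       + 0.1 * rng.standard_normal(t.size))          # additive noise

# Project onto the complex exponential at theta frequency; the angle of
# the projection estimates the phase offset (a simple stand-in for
# Hilbert-based instantaneous-phase analysis).
projection = np.mean(lfp * np.exp(-1j * 2 * np.pi * f_theta * t))
offset_est = float(np.angle(projection))
print(offset_est)  # close to 1.2
```

Real LFPs mix many components and frequencies, which is why the proposal pairs phase decoding with tools for extracting meaningful LFP components first.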
The computational description level will leverage ideas from vector symbolic architectures, a class of connectionist models originally proposed for describing cognitive reasoning (Plate, 1995; Kanerva, 1996). Models produced by the software tool will concisely encapsulate assumptions about the computation underlying a brain function and its implementation, and will produce predictions that can be tested in a next generation of recording experiments. The proposed theory framework will be tested by building models for navigation in hippocampus and for visual processing in cortical areas V1 and V4.
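A minimal sketch of the vector-symbolic idea follows (a MAP-style architecture with bipolar vectors; the roles and items are invented for illustration): structured information is encoded by binding role and filler vectors and bundling the results, then recovered by unbinding.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 10000  # hypervector dimension

def hv():
    """Random bipolar hypervector; binding = elementwise product,
    bundling = sum, as in MAP-style vector symbolic architectures."""
    return rng.choice([-1.0, 1.0], size=d)

role_color, role_shape = hv(), hv()
red, green, circle, square = hv(), hv(), hv(), hv()

# Encode "red circle" as a single vector: bind role*filler, then bundle.
scene = role_color * red + role_shape * circle

# Query: what is the color? Unbind with role_color (bipolar binding is its
# own inverse) and find the most similar item vector.
query = scene * role_color
items = {"red": red, "green": green, "circle": circle, "square": square}
sims = {name: float(query @ v) / d for name, v in items.items()}
print(max(sims, key=sims.get))  # 'red'
```

The appeal for modeling is that an entire structured hypothesis lives in one fixed-width vector, while crosstalk from the other bound pairs stays near zero at high dimension.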
2022 — 2025 |
Sommer, Friedrich |
Activity Code: N/A |
Collaborative Research: IIS: RI: Medium: Lifelong Learning With Hyperdimensional Computing @ University of California-Berkeley
The use of artificial intelligence (AI) has enabled computers to solve some problems that were out of reach just a decade ago, such as recognizing familiar objects in images, or translating between languages with reasonable accuracy. In each case, a specific task (such as "translate spoken Mandarin into spoken Spanish") is defined, data is collected (consisting, say, of utterances in the two languages), and an AI system is trained to achieve this functionality. To further expand the scope of AI, it is important to build systems that are not just geared towards highly-specific and static predefined tasks, but are able to take on new tasks as they arise (new words, new accents, and new dialects, for instance). This is often called "lifelong learning", and it means, basically, that the systems are adaptive to change. This project develops an approach to lifelong learning using a brain-inspired framework for distributed computing, yielding machines that potentially can solve tasks more flexibly and consume significantly less power than traditional AI systems. It will: (1) advance the ability of AI systems to handle changing environments, (2) enable a host of new low-power AI systems with applications such as environmental sensing, (3) strengthen mathematical connections between computer science and neuroscience, and (4) serve as the basis for educational and outreach activities.

This project will develop lifelong learning within the framework of "hyperdimensional computing", a neurally-inspired model of computation in which information is encoded using randomized distributed high-dimensional representations, often with limited precision (e.g., with binary components), and processing consists of a few elementary operations such as vector summation. We will build HD algorithms for some fundamental statistical primitives -- similarity search, density estimation, and clustering -- and then use these as building blocks for various forms of lifelong learning.
These will rest on mathematical advances in (1) the analysis of sparse codes produced by expansive random maps and (2) algorithmic exploitation of kernel properties of high-dimensional randomized representations. Our algorithms will be implemented in hardware, deployed on a network of low-power sensors, and evaluated experimentally in a lifelong learning task involving air quality sensing.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
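To make the hyperdimensional-computing primitives concrete, here is a small sketch with binary vectors, majority-vote prototypes, and Hamming-distance similarity search; the dimensions, noise level, and class setup are arbitrary choices for illustration, not the project's algorithms. A new class can be added at any time without touching existing prototypes, which is the lifelong-learning property in miniature.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 8192   # hypervector dimensionality

def random_hv():
    """Random binary hypervector (limited-precision representation)."""
    return rng.integers(0, 2, size=d, dtype=np.uint8)

def noisy(hv, flip=0.2):
    """A noisy example of a class: flip a fraction of the bits."""
    mask = rng.random(d) < flip
    return hv ^ mask.astype(np.uint8)

def majority(hvs):
    """Class prototype: bitwise majority vote over training examples."""
    return (np.mean(hvs, axis=0) > 0.5).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

class_a, class_b = random_hv(), random_hv()
prototypes = {
    "a": majority([noisy(class_a) for _ in range(5)]),
    "b": majority([noisy(class_b) for _ in range(5)]),
}

# A new class arrives later; just add its prototype -- no retraining.
class_c = random_hv()
prototypes["c"] = majority([noisy(class_c) for _ in range(5)])

# Classify a noisy probe by nearest prototype in Hamming distance.
probe = noisy(class_c)
pred = min(prototypes, key=lambda k: hamming(prototypes[k], probe))
print(pred)  # 'c'
```

At high dimension, unrelated hypervectors sit near 50% Hamming distance while noisy copies of the same class stay far closer, so nearest-prototype search remains reliable even with substantial bit-flip noise.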