1995 — 1996 |
Olshausen, Bruno A. |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Efficient Visual Coding Strategies @ Cornell University Ithaca
This project is an attempt to understand how the visual cortex extracts and represents the structure present in natural scenes. The approach is to formulate visual coding strategies based on theoretical considerations of efficiency and optimality, and to use these codes as the basis for understanding known cortical cell response properties and predicting heretofore unknown properties. There are three parts to this project. The first part will investigate the statistical regularities that occur in natural images and attempt to relate these to the feature-selective properties of cortical cells. The second part of the project will be to formulate a neurobiologically plausible model for the development of position- and size-invariant representations of spatial structure. The third part will involve a collaboration with an ongoing neurophysiological investigation in order to formulate and test models of image stabilization (position invariance) in area V1.
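The efficient-coding program described above was later made concrete in sparse coding models of natural images. The following is a minimal illustrative sketch, not the project's actual method: synthetic "patches" are generated from a random dictionary, and sparse coefficients are inferred with ISTA; all sizes, names, and parameter values are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image patches": sparse combinations of a ground-truth dictionary.
n_pixels, n_basis, n_patches = 64, 64, 200
D = rng.standard_normal((n_pixels, n_basis))
D /= np.linalg.norm(D, axis=0)               # unit-norm basis functions
A_true = rng.standard_normal((n_basis, n_patches)) * (rng.random((n_basis, n_patches)) < 0.1)
X = D @ A_true

def sparse_codes(X, D, lam=0.1, n_iter=200):
    """Infer sparse coefficients by ISTA: minimize ||X - D a||^2 + lam * |a|_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = A - (D.T @ (D @ A - X)) / L      # gradient step on reconstruction error
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
    return A

A = sparse_codes(X, D)
recon_err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
sparsity = np.mean(A != 0)                   # fraction of active coefficients
```

The point of the sketch is only the trade-off itself: a small reconstruction error achieved while most coefficients stay exactly zero.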
|
0.948 |
1998 — 2002 |
Olshausen, Bruno A. |
R29 |
Efficient Coding in Visual Cortex @ University of California Davis
DESCRIPTION (Adapted from applicant's abstract): A major limitation to our understanding of visual cortical function is the lack of computational theories capable of making useful, testable predictions for what the cortex should be doing. The purpose of this study is to investigate what may be learned about information processing in visual cortex from efficient coding principles. Methods will be developed for representing the structure in images based on probabilistic inference, and these will be related to known neurobiological substrates in a detailed manner in order to make predictions about visual cortical function. Understanding how the cortex processes visual information is an important step in developing therapies for patients who have lost aspects of visual function due to cortical damage, as well as in the development of visual prostheses capable of providing appropriate cortical stimulation from artificial vision devices. The aspects of visual cortical function that the study aims to shed light on are the properties of feature selectivity, form-invariance, and the role of feedback connections in shaping neural response properties and in mediating visual perception. These issues will be addressed as part of five specific aims. The first is to develop a functional model of the horizontal connections in area V1 based on the statistical structure of natural images. This model will be related to the structure of long-range horizontal fibers in order to make predictions about the role of this form of feedback within V1. The second aim is to develop a model neural system capable of learning the structure of objects independent of variations in position, size, or other geometric transformations. The model will be used to help understand how form-invariance is established in cortical neurons. The third aim is to formulate a model of occlusion in images, which will be used to shed light on how figure-ground segregation could be performed by cortical mechanisms. 
The fourth aim is to develop a functional model of top-down cortical feedback based on a hierarchical image model. The existence of such a system that utilizes top-down feedback to solve practical problems in vision will help to elucidate a possible role for two-way information processing in the cortical hierarchy. The fifth aim is to test these models through psychophysical experiments. The results of these studies will lead to advances in our understanding of information processing in visual cortex, and possibly shed light on the nature of cortical information processing in general.
|
1 |
2002 — 2005 |
Olshausen, Bruno A. |
R13 Activity Code Description: To support recipient-sponsored and directed international, national or regional meetings, conferences and workshops. |
Sensory Coding and the Natural Environment @ Gordon Research Conferences
DESCRIPTION: (provided by applicant) This is a proposal to support a biennial international meeting on the topic of Sensory Coding and the Natural Environment, along with a web resource that will provide a directory of people and publications in the field, as well as a medium for exchanging data and algorithms. The theme of the meeting is highly interdisciplinary, drawing upon expertise in systems and cognitive neuroscience, perceptual psychology, statistics, signal processing, and computer science. The aim is to model and understand sensory processes in relation to the statistical structure of the natural environment. This approach is broadly applicable to any sensory modality of any organism. A number of studies over the past decade have shown that the sensory coding strategies of many animals may be understood in terms of efficient coding strategies applied to natural scenes, especially in the visual and auditory domains of both vertebrates and invertebrates. This approach is thought to have great potential for shedding light on neural information processing strategies, as well as advancing the development of neural prostheses capable of transforming natural images and sound into a format interpretable by the brain. Two previous meetings have been held on this topic, in 1997 and 2000, and the number of investigators now working in this field, not to mention those entering it, has outgrown these small, informal meetings. More importantly, there is a need to educate both students and current investigators about the techniques, methodologies, and types of results emerging from this field. Funding from this conference grant will enable us to invite experts in the field to a biennial Gordon Research Conference, as well as to provide travel grants and registration fee subsidies to students and post-docs interested in attending the meeting and learning about the field.
The web site will complement this effort by providing continuity between the meetings as well as bringing work in the field to the attention of a wider audience.
|
0.906 |
2005 — 2008 |
Wu, Shyhtsun Rowe, Jeffrey Olshausen, Bruno Chuah, Chen-Nee (co-PI) [⬀] Levitt, Karl (co-PI) [⬀] Yoo, S.J. Ben |
N/A |
Collaborative Research: Nets-Nbd: Intelligent and Adaptive Networking For the Next Generation Internet @ University of California-Davis
This project investigates the Next Generation Network Technology and Systems capable of understanding and learning the high-level perspective of the network. The proposed approach pursues a new cognitive intelligent networking paradigm that preserves the success of today's Internet while also incorporating cognitive intelligence in the network--a new networking technique that gives the network the ability to know what it is being asked to do, so that it can progressively take care of itself as it learns more. In particular, we explore new networking architectures and network elements that will lead to a future network with (a) improved robustness and adaptability, (b) improved usability and comprehensibility, (c) improved security and stability, and (d) reduced human intervention for operation and configuration. This project pursues a set of comprehensive studies that seek innovations through the design and modeling of a new brain-reflex cognitive intelligence architecture, an intelligent programmable network elements architecture, and an intelligent network control and management design.
Broader Impact: The team approach covering neuroscience, data mining, computer science, systems engineering, artificial intelligence, and networking will provide rich opportunities for students to learn beyond their primary fields of study. New courses developed by the faculty members will disseminate the new material covering neuroscience and information technology.
|
0.915 |
2006 — 2007 |
Olshausen, Bruno |
N/A |
Sger Collaborative Research: Hierarchical Models of Time-Varying Natural Images @ University of California-Berkeley
Title: Collaborative Research: Hierarchical Models of Time-Varying Natural Images
PIs: Bruno Olshausen and David Warland
The long-term goal of this research is to develop a computational model of visual perception that achieves the same degree of robust intelligence exhibited in biological vision systems. The proposed research will advance the state of the art in the analysis of time-varying images by building models that capture the robust intelligence of the mammalian visual system. These models will allow the invariant structure (form, shape) to be modeled independently of its variations (position, size, rotation) and will be composed of multiple layers that capture progressively more complex forms of scene structure in addition to modeling its transformations. Mathematically, these multi-layer models have a powerful bilinear form and their detailed structure is learned from natural time-varying images using the principles of sparse and efficient coding.
The early measurements and models of natural image structure have had a profound impact on a wide variety of disciplines including visual neuroscience (e.g. predictions of receptive field properties of retinal ganglion cells and cortical simple cells in visual cortex) and image processing (e.g. wavelets, multi-scale representations, image denoising). The approach taken by this project extends this interdisciplinary work by learning higher-order scene structure from sequences of natural time-varying images. Given the evolutionary pressures on the visual cortex to process time-varying images efficiently, it is plausible that the computations performed by the cortex can be understood in part from the constraints imposed by efficient representation. Modeling the higher order structure will also advance the development of practical image processing algorithms by finding good representations of the scene for the image-processing task at hand. Completion of the specific goals of this project will provide new generative models of time-varying image formation and tools with which to analyze the statistics of natural scenes.
Most image processing problems are greatly simplified by finding a good representation of the data. As a result, this research has practical applications for deriving improved means for representing, indexing, and accessing digital content such as 2D images and video. The models developed as part of this project are also broadly applicable to advancing image processing algorithms such as denoising of movies, movie compression, and scene analysis and classification. In addition, these models have a mathematical form that makes them generally applicable to research areas other than vision, such as analysis of auditory signals, dynamic routing of network signals, and general data mining of complex data sets.
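The bilinear form described above, in which "form" and "transformation" variables interact multiplicatively, can be pictured with a toy example. Here, purely for illustration and not as the project's actual model, the transformation variables select among circular-shift matrices, so a one-hot choice of t translates a fixed pattern s.

```python
import numpy as np

# Bilinear image model: each output pixel is a sum of products of a "form"
# coefficient s_i and a "transformation" coefficient t_j. With shift matrices
# M_j as the mixing tensor, a one-hot t selects a translation of the pattern.
n = 8
shift_mats = [np.roll(np.eye(n), j, axis=0) for j in range(n)]  # M_j: shift by j

s = np.array([1., 2., 3., 0., 0., 0., 0., 0.])   # "form": a small pattern
t = np.zeros(n)
t[2] = 1.0                                        # "transformation": shift by 2

# Bilinear synthesis: y = sum_j t_j * (M_j @ s). Linear in s for fixed t,
# linear in t for fixed s, but the two sets of variables multiply each other.
y = sum(t[j] * (shift_mats[j] @ s) for j in range(n))
```

Learning in the actual models replaces the hand-built shift matrices with parameters fit to time-varying natural images, but the multiplicative separation of shape from transformation is the same.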
|
0.915 |
2007 — 2009 |
Olshausen, Bruno Sommer, Friedrich [⬀] |
N/A |
Crcns Data Sharing: Central Facility and Services @ University of California-Berkeley
Proposal No: 0749049 PI: Friedrich T. Sommer
Award Abstract:
This award supports services and infrastructure for sharing of computational neuroscience data as part of an exploratory activity aimed at catalyzing rapid and innovative advances in computational neuroscience and related fields. The core facility will provide transparent access to shared resources in a manner that scales up to large data sets. Services will be designed to lessen the burden on contributors to make their data or other resources available and to optimize the ability of the user community to identify and use those resources. Community- and market-oriented mechanisms will be developed to identify resources of particular significance for the field, and to solicit feedback from relevant communities. It is anticipated that the availability of high quality data will offer unprecedented opportunities for new types of discoveries, development of new methods, and development of new interdisciplinary collaborations. This new activity will also assist and drive teaching in computational neuroscience, through the exchange of datasets, stimuli, and analysis and modeling tools among modelers, experimentalists, and students.
|
0.915 |
2007 — 2011 |
Olshausen, Bruno |
N/A |
Ri: Collaborative Research: Hierarchical Models of Time-Varying Natural Images @ University of California-Berkeley
Abstract
Title: Collaborative Research: Hierarchical Models of Time-Varying Natural Images PIs: Bruno Olshausen, University of California-Berkeley and David Warland, University of California-Davis
The goal of this project is to advance the state of the art in image analysis and computer vision by building models that capture the robust intelligence exhibited by the mammalian visual system. The proposed approach is based on modeling the structure of time-varying natural images, and developing model neural systems capable of efficiently representing this structure. This approach will shed light on the underlying neural mechanisms involved in visual perception and will apply these mechanisms to practical problems in image analysis and computer vision.
The models that are to be developed will allow the invariant structure in images (form, shape) to be described independently of its variations (position, size, rotation). The models are composed of multiple layers that capture progressively more complex forms of scene structure in addition to modeling their transformations. Mathematically, these multi-layer models have a bilinear form in which the variables representing shape and form interact multiplicatively with the variables representing position, size or other variations. The parameters of the model are learned from the statistics of time-varying natural images using the principles of sparse and efficient coding.
The early measurements and models of natural image structure have had a profound impact on a wide variety of disciplines including visual neuroscience (e.g. predictions of receptive field properties of retinal ganglion cells and cortical simple cells in visual cortex) and image processing (e.g. wavelets, multi-scale representations, image denoising). The approach outlined in this proposal extends this interdisciplinary work by learning higher-order scene structure from sequences of time-varying natural images. Given the evolutionary pressures on the visual cortex to process time-varying images efficiently, it is plausible that the computations performed by the cortex can be understood in part from the constraints imposed by efficient processing. Modeling the higher order structure will also advance the development of practical image processing algorithms by finding good representations for image-processing tasks such as video search and indexing. Completion of the specific goals described in this proposal will provide (1) mathematical models that can help elucidate the underlying neural mechanisms involved in visual perception and (2) new generative models of time-varying images that better describe their structure.
The explosion of digital images and video has created a national priority of providing better tools for tasks such as object recognition and search, navigation, surveillance, and image analysis. The models developed as part of this proposal are broadly applicable to these tasks. Results from this research program will be integrated into a new neural computation course at UC Berkeley, presented at national multi-disciplinary conferences, and published in a timely manner in leading peer-reviewed journals. Participation in the proposed research is open to students at both the graduate and undergraduate levels, and the PI will advise Ph.D. students in both neuroscience and engineering as part of this project.
URL: http://redwood.berkeley.edu/wiki/NSF_Funded_Research
|
0.915 |
2009 — 2013 |
Gastpar, Michael (co-PI) [⬀] Olshausen, Bruno A. Theunissen, Frederic E. [⬀] |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Crcns:Ethological Theories of Optimal Auditory Processing @ University of California Berkeley
Project Summary/Abstract Using as a starting point the postulate that sensory systems have evolved to perform optimal transformations on behaviorally relevant or natural stimuli, we are using signal analysis methods and information theoretic principles to develop a theory of auditory processing. The purpose of our theory is not just to describe but to understand the neural representation of acoustic communication signals, including speech and music. First, we plan on analyzing the statistics of natural sounds, and of speech, music and birdsong in particular. We propose to search for theoretical representations of sounds based on principles of statistical independence and sparse representation. Our derived representations will also attempt to maximize differences between acoustic features that mediate the qualitatively different acoustical percepts of rhythm, timbre and pitch. Second, we will test the validity of these theoretically derived representations in psychophysical experiments in humans, and behavioral experiments in songbirds. These experiments will test the effect on perception of systematically removing acoustic features along the particular dimensions that were derived in the statistical analysis. Third, we will develop information theoretic tools that will allow us to estimate the amount of redundancy in a neuronal ensemble response. These measures will be used to quantify how the neural representation changes as one ascends the auditory processing stream and to test whether the neural representation becomes more sparse and independent, as we theorized. Finally, we will record the neural responses in primary and secondary auditory areas in songbirds to playback of song and filtered song.
The data from these neurophysiological experiments will be used to: 1) test the utility of the statistically derived representations to predict responses of single auditory neurons, 2) correlate neural responses and behavioral responses, 3) assess the nature of non-linearities in the response, and 4) test the assumptions of independence at the ensemble level. Our studies will give us insight into how speech, music and other complex sounds are processed by the auditory system. These studies could be instrumental in the development of novel methods for sound processing for hearing aids and auditory neural prosthetics, as well as diagnostic tools for classifying language and learning disorders.
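One common way to operationalize the "more sparse" hypothesis above is to compare the kurtosis of response distributions: heavy-tailed (sparse) codes have higher kurtosis than Gaussian (dense) ones. A small synthetic sketch; the Laplacian stand-in for a sparse code is an illustrative assumption, not data from the project.

```python
import numpy as np

rng = np.random.default_rng(4)

def excess_kurtosis(x):
    """Fourth standardized moment minus 3: 0 for Gaussian, > 0 for sparse
    (heavy-tailed) response distributions."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

n = 100_000
gaussian_like = rng.standard_normal(n)     # dense code: Gaussian-distributed responses
sparse_like = rng.laplace(0.0, 1.0, n)     # sparse code: mostly small, occasionally large

k_dense = excess_kurtosis(gaussian_like)   # near 0
k_sparse = excess_kurtosis(sparse_like)    # near 3 for a Laplacian
```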
|
1 |
2009 — 2013 |
Gray, Charles M Olshausen, Bruno A. Rozell, Christopher John (co-PI) [⬀] |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Crcns_:Neural Population Coding of Dynamic Natural Scenes @ University of California Berkeley
This project aims to achieve a fundamental advance in our understanding of how neural populations process and represent information within visual cortex. By combining pioneering recording technology with new analytical tools and theoretical frameworks, this research effort will provide the first glimpse at how large numbers of neurons interact within the cortex during the processing of dynamic natural scenes. Silicon polytrodes will be used to record simultaneously from populations of 100+ neurons in primary visual cortex. The activity of these populations will be characterized in terms of response precision, sparsity, correlation, and LFP coherence. In order to elucidate the causal factors that contribute to stimulus-evoked responses in the cortex, the joint activity and stimuli will be fit with predictive models that attempt to capture the stimulus-response relationships of large neuronal ensembles. Finally, we will attempt to account for these relationships by building functional models that achieve theoretically-motivated information processing objectives for perception and cognition. The project is highly interdisciplinary in nature, combining the expertise of neurophysiologists, theoreticians, and engineers to answer questions that are beyond the scope of any one discipline. RELEVANCE: The data obtained and models developed in this work will open a new window into the operation of cortical circuits, providing a first glimpse of the simultaneous activity of large numbers of neurons responding to dynamic natural scenes. These new insights will pave the way for the development of neural prosthetic devices (cortical implants) and new forms of treatment for visual disorders.
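Population sparsity of the kind to be characterized here is often quantified with the Treves-Rolls measure. A brief sketch on simulated firing rates; the choice of measure and the synthetic populations are illustrative, not the project's protocol.

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Treves-Rolls sparseness of a nonnegative response vector r:
    close to 0 when only a few units are active, close to 1 when all
    units fire at similar rates."""
    r = np.asarray(r, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)

rng = np.random.default_rng(1)

dense = rng.random(100)                    # all 100 units similarly active
sparse = np.zeros(100)                     # only 5 of 100 units active
sparse[rng.choice(100, 5, replace=False)] = rng.random(5)

s_dense = treves_rolls_sparseness(dense)
s_sparse = treves_rolls_sparseness(sparse)
```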
|
1 |
2009 — 2015 |
Olshausen, Bruno Sommer, Friedrich [⬀] |
N/A |
Ci-Addo-New: Crcns.Org - Online Repository For High-Quality Neuroscience Data and Resources For Computational Neuroscience @ University of California-Berkeley
This project will develop and operate a community infrastructure, CRCNS.ORG, to enable the sharing of data needed by the computational neuroscience community, to enhance and foster collaborations among theoretical and experimental researchers, and to further the development and testing of computational theories of brain function. This infrastructure will widen the spectrum of techniques applied to brain data, enabling discoveries that go beyond the scopes of individual laboratories.
The infrastructure targets the communities of neuroscience and related fields such as computer science, physics, mathematics, statistics, and engineering in which investigators seek access to high-quality neurophysiology data, including electrical, magnetic, and optical recordings from single neurons, neural ensembles, and brain regions. Development activities are aimed at lowering the barriers to contributing, accessing, and using neurophysiology data. Standardized methods will be developed for storing and annotating data in a self-describing, hierarchical format, and enabling flexible on-line access. Scalable methods will be developed to enable users to find potentially useful data and to provide means for online visualization and some on-line analysis. Operations activities will support users and data contributors as well as community outreach activities. Three summer training courses will be held to introduce students and researchers to methods and conventions concerning organization, visualization, and analysis of neuroscience data, and how to use the specific resources of the repository.
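The "self-describing, hierarchical format" mentioned above can be pictured as nested groups of datasets, each carrying its own attributes. In practice a binary container such as HDF5 plays this role; the group and attribute names below are invented for illustration and are not CRCNS conventions.

```python
import json

# A toy hierarchical, self-describing record: groups contain sub-groups or
# datasets, and every group carries an "attrs" dict of annotations.
recording = {
    "attrs": {"species": "mouse", "region": "V1", "sampling_rate_hz": 30000},
    "units": {
        "unit_001": {"attrs": {"depth_um": 450},
                     "spike_times_s": [0.012, 0.431, 0.902]},
    },
    "stimulus": {"attrs": {"type": "natural_movie"},
                 "frame_times_s": [0.0, 0.033]},
}

def walk(group, path=""):
    """Enumerate every dataset path, so generic tools can discover content
    without knowing the layout in advance -- the "self-describing" property."""
    for name, child in group.items():
        if name == "attrs":
            continue
        full = f"{path}/{name}"
        if isinstance(child, dict):
            yield from walk(child, full)
        else:
            yield full

paths = sorted(walk(recording))
serialized = json.dumps(recording)   # the whole record round-trips as one document
```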
|
0.915 |
2009 — 2013 |
Olshausen, Bruno Koepsell, Kilian [⬀] |
N/A |
Small: Ri: Multivariate Phase Models For Image and Signal Processing @ University of California-Berkeley
This project aims to advance neural data analysis and image processing by exploiting the structure in multivariate phase representations. Combining insights from neural computation with advances in multivariate statistics, mathematical signal analysis, and machine learning, the project aims to build multivariate statistical models of angular variables that capture the dependencies between complex and hypercomplex phase variables. Recursive estimation techniques will be developed to allow for optimal estimation of distributions from noisy data and prediction of their temporal evolution. The models developed in this proposal will be applied to current problems in neuroscience and image processing.
As a foundation for the application domains, a recently developed method for estimating the parameters of stationary multivariate phase distributions will be generalized to situations in which the parameters are time-varying and the measurements are noisy, linear mixtures of the underlying sources. A recursive filtering model, similar to the classical Kalman filter, will be developed that produces an optimal online estimate of latent phase variables in response to a sequence of noisy measurements.
In the first application domain, this model will be used to infer connectivity and temporal interactions among populations of neural oscillators from physiological measurements. The model will also be used to detect transient changes in connectivity by utilizing a mixture model of phase dynamics. These estimation techniques will be evaluated on simulated data and then applied to the analysis of neurophysiological recordings to better elucidate network dynamics.
In the application domain of natural image statistics, the model will be extended to handle hypercomplex phase variables in order to model the phase and orientation of edges in natural images, which contain rich information that may be exploited for image analysis and object recognition. The project aims to develop a model that describes the spatial dependencies among these variables as a nonlinear mixture of multivariate phase distributions.
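Coupling between phase variables of the sort modeled above is commonly summarized by the mean resultant length of pairwise phase differences (the phase-locking value). A small synthetic sketch; the von Mises noise model and all parameter values are illustrative assumptions, not the project's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Two noisy oscillators whose phase difference is concentrated (a coupled
# pair), plus a third, independent oscillator as a control.
theta1 = rng.uniform(0, 2 * np.pi, n)
theta2 = theta1 + 0.3 + rng.vonmises(0.0, 4.0, n)   # locked to theta1
theta3 = rng.uniform(0, 2 * np.pi, n)               # independent

def phase_locking(a, b):
    """Mean resultant length of the phase difference:
    near 1 for locked phases, near 0 for independent ones."""
    return np.abs(np.mean(np.exp(1j * (a - b))))

plv_coupled = phase_locking(theta1, theta2)
plv_independent = phase_locking(theta1, theta3)
```

A full multivariate phase model additionally captures how many such pairwise dependencies interact, which is what the proposed estimation methods address.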
https://redwood.berkeley.edu/wiki/NSF_Funded_Research
|
0.915 |
2011 — 2016 |
Olshausen, Bruno |
N/A |
Ri: Large: Collaborative Research: 3d Structure and Motion in Dynamic Natural Scenes @ University of California-Berkeley
How does a vision system recover the 3-dimensional structure of the world -- such as the layout of the environment, surface shape, or object motion -- from the dynamic 2-dimensional images received by the sensors in a camera, or the retinas in our eyes? This problem is fundamental to both computer and biological vision. Computer vision has developed a variety of algorithms for estimating specific aspects of a scene, such as the 3-dimensional positions of points whose correspondence over time can be established, but obtaining complete and robust scene representations for complex natural scenes and viewing conditions remains a challenge. Biological vision systems have evolved impressive capabilities that suggest they have detailed and robust representations of the 3-dimensional world, but the neural representations that subserve this are poorly understood, and neurophysiological studies thus far have provided little insight into the computational process. This project will pursue an interdisciplinary approach by attempting to understand the universal principles that lie at the heart of 3-dimensional scene analysis.
Specifically, the project will 1) develop a novel class of computational models that recover and represent 3-dimensional scene information, 2) collect high quality video and range data of dynamic natural scenes under a variety of controlled motion conditions, and 3) test the perceptual implications of these models in psychophysical experiments. The computational models will utilize non-linear decomposition - i.e., the ability to explain complex, time-varying images in terms of the non-linear interaction of multiple factors, such as the interaction between observer motion, the 3-dimensional scene layout, and surface patterns. Importantly, the components of these models will be adapted to the statistics of natural motion patterns that arise from observer motion through natural scenes and movement around points of fixation.
The project is a collaboration between three laboratories that have played a leading role in developing theoretical models of natural image statistics, visual neural representations, and perceptual processes. The investigators seek to combine their efforts to develop new models, data sets, and characterizations of 3-dimensional natural scene structure that go beyond previous studies of natural image statistics, and that can be tested in neurophysiological and psychophysical experiments. This project has the potential to bring about fundamental advances in neuroscience, visual perception, and computer vision by developing new classes of models that robustly infer representations of the 3-dimensional natural environment. It will create a set of high quality databases that will be made available to help other investigators study these issues. It will also open up new possibilities for generating realistic stimuli that can guide novel investigations of neural representation and processing.
|
0.915 |
2012 |
Gray, Charles M Olshausen, Bruno A. Rozell, Christopher John (co-PI) [⬀] |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Crcns: Neural Population Coding of Dynamic Natural Scenes @ University of California Berkeley
This project aims to achieve a fundamental advance in our understanding of how neural populations process and represent information within visual cortex. By combining pioneering recording technology with new analytical tools and theoretical frameworks, this research effort will provide the first glimpse at how large numbers of neurons interact within the cortex during the processing of dynamic natural scenes. Silicon polytrodes will be used to record simultaneously from populations of 100+ neurons in primary visual cortex. The activity of these populations will be characterized in terms of response precision, sparsity, correlation, and LFP coherence. In order to elucidate the causal factors that contribute to stimulus-evoked responses in the cortex, the joint activity and stimuli will be fit with predictive models that attempt to capture the stimulus-response relationships of large neuronal ensembles. Finally, we will attempt to account for these relationships by building functional models that achieve theoretically-motivated information processing objectives for perception and cognition. The project is highly interdisciplinary in nature, combining the expertise of neurophysiologists, theoreticians, and engineers to answer questions that are beyond the scope of any one discipline. RELEVANCE: The data obtained and models developed in this work will open a new window into the operation of cortical circuits, providing a first glimpse of the simultaneous activity of large numbers of neurons responding to dynamic natural scenes. These new insights will pave the way for the development of neural prosthetic devices (cortical implants) and new forms of treatment for visual disorders.
|
1 |
2016 — 2019 |
Rabaey, Jan (co-PI) [⬀] Olshausen, Bruno Salahuddin, Sayeef [⬀] |
N/A |
E2cda: Type I: Collaborative Research: Energy Efficient Learning Machines (Enigma) @ University of California-Berkeley
The project will aim to develop computing hardware and software that improve the energy efficiency of learning machines by many orders of magnitude. In doing so it will enable broad societal adoption of such machines, paving the way for new applications in diverse areas such as manufacturing, healthcare, agriculture, and many others. For example, machines that learn the behavioral trends of individual human beings by collecting data from myriads of sensors may be able to design the most appropriate drugs. Similarly, one may envision machines that learn trends in the weather and thereby assist in planning optimal preparations for the next crop cycle. The possibilities are literally endless. However, the canonical learning machines of today require huge amounts of energy, significantly hindering their adoption for widespread applications. The goal of this project will be to explore, evaluate, and innovate new hardware and software paradigms that could reduce energy dissipation in learning machines by a significant amount. The team of researchers consists of experts in mathematics, neuroscience, electronic devices and materials, and computer circuit and system design, and will foster a unique platform for both innovative research and interdisciplinary training of graduate students.
We are witnessing a fundamental shift in the computing paradigm. For a vast number of applications, cognitive functions such as classification, recognition, synthesis, decision-making, and learning are gaining rapid importance in a world that is infused with sensing modalities, often grouped under the common term "Big Data," that are in critical need of efficient information extraction. This is in sharp contrast to the past, when the central objective of computing was to perform calculations on numbers and produce results with extreme numerical accuracy. We aim to approach this problem by exploiting cognitive models that have shown efficacy in "one-shot" learning. In this approach, information is represented by means of high-dimensional (HD) vectors. These vectors follow a set of predetermined mathematical operations that ensure that the vector resulting from such operations is unique. This uniqueness can in turn serve as "learning," and the predefined nature of the mathematical operations makes the learning "one shot." When paired with traditional artificial neural network or deep learning algorithms, such one-shot learning could significantly reduce the number of necessary computing operations, leading to orders-of-magnitude reductions in energy dissipation. We shall explore the entire computing hierarchy, starting from materials and devices all the way up to system design and optimization, to exploit the unique capabilities afforded by HD computing, with the ultimate objective of realizing energy-efficient learning machines (ENIGMA).
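The HD operations described above can be sketched with bipolar hypervectors, using the standard choices from the HD computing literature: binding by elementwise multiplication and bundling by majority vote. The dimensionality and the toy record below are illustrative assumptions, not the project's design:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality; random vectors are quasi-orthogonal in high D

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

bind = lambda a, b: a * b                         # reversible; result dissimilar to inputs
bundle = lambda *vs: np.sign(np.sum(vs, axis=0))  # majority vote; result similar to inputs
sim = lambda a, b: a @ b / D                      # normalized dot product in [-1, 1]

# One-shot encoding of the record {color: red, shape: square} as a single hypervector
color, shape, red, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, square))

# Unbinding with a key recovers a noisy copy of its value (bind is its own inverse
# for bipolar vectors); a cleanup step matches it against a codebook of known values.
noisy = bind(record, color)
codebook = {"red": red, "square": square}
best = max(codebook, key=lambda k: sim(noisy, codebook[k]))
print(best)  # → red
```

Note that no iterative training occurred: the record was "learned" in one shot by a fixed algebra of vector operations, which is the property the proposal exploits.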
|
0.915 |
2017 — 2020 |
Saremi, Saeed Olshausen, Bruno Sommer, Friedrich [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Ri: Small: Extracting and Understanding Sparse Structure in Spatiotemporal Data in Neuroscience and Other Applications @ University of California-Berkeley
Sparse coding and manifold learning are two methods that, each in its own right, have proven essential for understanding the structure in complex high-dimensional data. The goal of this project is to combine these two methods to yield a qualitatively more powerful approach to data analysis. The investigators will develop the mathematics of sparse coding of spatiotemporal data and combine it with approaches from manifold learning. The tools emerging from this research will bring benefits to society since they are applicable to many areas of technology and medicine, such as signal processing, image and video coding, medical imaging, neural data analysis, and neuroprosthetics, and can be expected to have implications for understanding information processing in the visual cortex.
Sparse coding is a concept originally developed in neuroscience to account for sensory representations in the brain, and it now sees widespread use in many image and signal processing and data analysis tasks. However, current approaches to sparse coding have critical limitations. One major issue is that sparse representations can be brittle, changing abruptly over time or in response to small changes in the input, and they can be quite sensitive to parameter settings, initial conditions, and the particular choice of sparse solver. Another limitation is that if the data lie on a low-dimensional manifold, as sound waveforms and images do, the connection between the sparse codes of the data and the geometry of the underlying low-dimensional space is lost. The team conjectures that both of these limitations should be addressed together. Building on previous work and their own preliminary studies, they will develop a theoretical framework for sparse coding that reveals conditions under which its results are unique. Based on these theoretical insights, they will design novel algorithms for robustly revealing persistent sparse structure in spatiotemporal data. Finally, they will develop a new signal transform, called the sparse manifold transform, that combines traditional sparse coding with manifold learning.
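As a minimal sketch of the traditional sparse coding that this project builds on (not of the proposed sparse manifold transform), the following solves an L1-penalized reconstruction with ISTA, a standard sparse solver. The random dictionary, synthetic signal, and penalty value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random dictionary with unit-norm columns (stand-in for a learned dictionary)
n_features, n_atoms = 64, 128
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthesize a signal from 5 active atoms plus a little noise
true_code = np.zeros(n_atoms)
true_code[rng.choice(n_atoms, 5, replace=False)] = 3 * rng.standard_normal(5)
x = D @ true_code + 0.01 * rng.standard_normal(n_features)

def ista(x, D, lam=0.1, n_iter=500):
    """Iterative shrinkage-thresholding for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the quadratic term's gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - (D.T @ (D @ a - x)) / L                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

a = ista(x, D)
print("nonzero coefficients:", np.count_nonzero(a))
print("relative error:", np.linalg.norm(x - D @ a) / np.linalg.norm(x))
```

The brittleness the abstract describes is visible in exactly this setting: perturbing `x` slightly, changing `lam`, or swapping ISTA for another solver can swap which atoms are active even when reconstruction error barely changes.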
|
0.915 |
2021 — 2023 |
Rabaey, Jan (co-PI) [⬀] Olshausen, Bruno Kanerva, Pentti (co-PI) [⬀] |
N/AActivity Code Description: No activity code was retrieved: click on the grant title for more information |
Eager: Hyperdimensional Computing With Geometric Algebra @ University of California-Berkeley
In the modern era of big data, a crucial challenge is to discover useful information that is buried in highly redundant, seemingly irrelevant, incomplete, or even corrupted data sets. Such information is often contained in certain low-dimensional structures hidden within the high-dimensional space of the data, or may depend on only a small subset of the data. How to extract this information efficiently and automatically remains an open problem. This project brings together two emerging areas of research, hyperdimensional (HD) computing and geometric algebra (GA), to tackle this problem from a new standpoint by investigating the representation and the intrinsic geometry of the data. This research is also the first in a systematic quest to uncover the potential of using the high-dimensional generalization of complex numbers in analyzing and discovering patterns in large-scale sensing data. The success of this research can help advance the capability of other machine learning models, such as deep neural networks, which today are mostly based on real numbers. It also brings a powerful mathematical tool (GA), mainly known in the physics community, into the machine learning community.
HD computing is a brain-inspired framework for machine learning and artificial intelligence based on representing quantities or symbols as high-dimensional vectors and manipulating those vectors with simple operations. In recent work by the investigators, it was shown that by using complex-valued vectors in HD computing it is possible to encode images in such a way that patterns can be effectively recognized by factorizing HD vectors. Building on this direction, they are exploring the use of geometric algebras, which generalize complex numbers to any n-dimensional space. The following thrusts form the core of this research: (1) explore ways of mapping data into the geometric-algebra space; (2) investigate how to integrate geometric algebra with the operations of HD computing; (3) apply these methods to real application domains, such as multi-microphone speech recognition or distributed sensing, to evaluate their efficacy and computational efficiency.
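The complex-valued encoding-and-factorization idea can be sketched with unit-magnitude phasor vectors, where binding multiplies components (adds phases) and unbinding uses the conjugate. The codebooks below are illustrative assumptions, and the brute-force cleanup is used purely to show that the bound factors are recoverable; practical factorization methods avoid enumerating all combinations:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4096  # dimensionality of the phasor hypervectors

def phasor():
    """Random unit-magnitude complex vector (Fourier holographic-representation style)."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, D))

bind = lambda a, b: a * b                     # phases add; the inverse is the conjugate
sim = lambda a, b: np.abs(np.vdot(a, b)) / D  # ~0 for random pairs, 1 for identical

# Two small codebooks, e.g. "shape" and "position" factors of an image encoding
shapes = {name: phasor() for name in ["circle", "square", "triangle"]}
positions = {name: phasor() for name in ["left", "center", "right"]}

# A composite encoding: one shape bound to one position
composite = bind(shapes["square"], positions["right"])

# Factorization by cleanup: score every (shape, position) pair against the composite
best = max(((s, p) for s in shapes for p in positions),
           key=lambda sp: sim(composite, bind(shapes[sp[0]], positions[sp[1]])))
print(best)  # → ('square', 'right')
```

In this picture, the geometric-algebra thrusts generalize the phasor components (2D rotations) to rotations of arbitrary n-dimensional spaces while keeping the same bind/compare algebra.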
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.915 |