2011 — 2017
Lewicki, Michael
RI: Large: Collaborative Research: 3D Structure and Motion in Dynamic Natural Scenes @ Case Western Reserve University
How does a vision system recover the 3-dimensional structure of the world -- such as the layout of the environment, surface shape, or object motion -- from the dynamic 2-dimensional images received by the sensors in a camera, or the retinas in our eyes? This problem is fundamental to both computer and biological vision. Computer vision has developed a variety of algorithms for estimating specific aspects of a scene, such as the 3-dimensional positions of points whose correspondence over time can be established, but obtaining complete and robust scene representations for complex natural scenes and viewing conditions remains a challenge. Biological vision systems have evolved impressive capabilities that suggest they maintain detailed and robust representations of the 3-dimensional world, but the neural representations that subserve this ability are poorly understood, and neurophysiological studies have thus far provided little insight into the computational process. This project will pursue an interdisciplinary approach by attempting to understand the universal principles that lie at the heart of 3-dimensional scene analysis.
Specifically, the project will 1) develop a novel class of computational models that recover and represent 3-dimensional scene information, 2) collect high-quality video and range data of dynamic natural scenes under a variety of controlled motion conditions, and 3) test the perceptual implications of these models in psychophysical experiments. The computational models will utilize non-linear decomposition -- i.e., the ability to explain complex, time-varying images in terms of the non-linear interaction of multiple factors, such as the interaction between observer motion, the 3-dimensional scene layout, and surface patterns. Importantly, the components of these models will be adapted to the statistics of natural motion patterns that arise from observer motion through natural scenes and movement around points of fixation.
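The kind of factorial structure described above can be illustrated with the standard motion-field equations for a translating pinhole camera, where image velocity is a non-linear function (a ratio) of observer motion and scene depth. This is a minimal sketch for intuition only; the function name and toy values are assumptions, not the project's actual models, and rotational motion is omitted.

```python
import numpy as np

def translational_flow(x, y, Z, T, f=1.0):
    """Image motion (u, v) at image coordinates (x, y) induced by observer
    translation T = (Tx, Ty, Tz) through a rigid scene with depth Z, for a
    pinhole camera with focal length f.  Note the non-linear interaction:
    the same observer motion produces different image motion at different
    depths (motion parallax)."""
    Tx, Ty, Tz = T
    u = (-f * Tx + x * Tz) / Z
    v = (-f * Ty + y * Tz) / Z
    return u, v

# Toy example: two scene points at the same image location, different depths.
x, y = 0.2, 0.0
T = (0.1, 0.0, 1.0)                       # observer translating mostly forward
u_near, _ = translational_flow(x, y, Z=2.0, T=T)
u_far, _ = translational_flow(x, y, Z=10.0, T=T)
assert abs(u_near) > abs(u_far)           # nearer point moves faster
```

Inverting this relationship -- recovering depth and observer motion jointly from the observed image motion -- is one face of the non-linear decomposition problem the abstract describes.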
The project is a collaboration between three laboratories that have played a leading role in developing theoretical models of natural image statistics, visual neural representations, and perceptual processes. The investigators seek to combine their efforts to develop new models, data sets, and characterizations of 3-dimensional natural scene structure that go beyond previous studies of natural image statistics, and that can be tested in neurophysiological and psychophysical experiments. This project has the potential to bring about fundamental advances in neuroscience, visual perception, and computer vision by developing new classes of models that robustly infer representations of the 3-dimensional natural environment. It will create a set of high-quality databases that will be made available to help other investigators study these issues. It will also open up new possibilities for generating realistic stimuli that can guide novel investigations of neural representation and processing.
2015 — 2018
Lewicki, Michael; Mandal, Soumyajit
SHF: Small: Bio-Inspired Ultra-Broadband RF Scene Analysis @ Case Western Reserve University
The detection and analysis of structured signals in noisy and cluttered environments is a fundamental problem in areas ranging from radio communications to image processing and speech recognition. Biological sensory systems have been optimized by millions of years of evolution to solve this problem with exquisite precision and efficiency; man-made communication and signal processing systems do not achieve anywhere near the same level of performance or even share similar fundamental design principles. This project will try to bridge this gap by understanding the universal information processing principles used by the auditory system to analyze natural sounds, and then adapting them to analyze man-made radio frequency (RF) signals. In particular, it will focus on developing electronics and algorithms that emulate some of the amazing capabilities of the biological cochlea (inner ear) and auditory pathway. Graduate and undergraduate students including members of underrepresented groups will be trained as part of this research, thus enlarging the technologically trained workforce of the future.
The bio-inspired approach of this project was motivated by two observations. Firstly, the process by which the auditory system, beginning with the cochlea, analyzes the fine time-frequency content of sounds is both extremely precise and also highly efficient from an algorithmic viewpoint. Secondly, audio and RF scenes are generated by similar physics (wave propagation, absorption, scattering, diffraction, and interference), even though the relevant velocities and time delays differ by a factor of about a million. Thus audio and RF scenes share many of the same characteristics, which makes it interesting to consider models of cochlear mechanics, signal transduction, and auditory coding that are scaled to operate at much higher frequencies. The first research goal is to build a single-chip cochlear model that analyzes RF signals in the GHz range and encodes frequency, amplitude, and phase information into parallel event-driven outputs that are analogous to auditory nerve fibers. The second goal is to allow higher-level properties, such as source locations and categories, to be efficiently extracted from input signals by developing a robust coding framework to create compressed representations of the cochlear outputs.
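The cochlear front end sketched above -- a bank of bandpass filters whose outputs are reduced to discrete, spike-like events analogous to auditory nerve firings -- can be illustrated at audio scale with a gammatone filter, a standard model of cochlear frequency tuning. The names (`gammatone_ir`, `events`), parameter choices, and thresholding scheme below are illustrative assumptions rather than the project's actual design; the RF version would rescale the center frequencies into the GHz range.

```python
import numpy as np

def gammatone_ir(fc, fs, dur=0.02, order=4, b=None):
    """Gammatone impulse response centered at fc Hz, sampled at fs Hz --
    a standard model of cochlear bandpass filtering.  Bandwidth b defaults
    to an ERB-based value for the given center frequency."""
    if b is None:
        b = 1.019 * (24.7 + 0.108 * fc)   # equivalent rectangular bandwidth
    t = np.arange(int(dur * fs)) / fs
    g = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def events(signal, ir, thresh=0.5):
    """Event-driven encoding: sample indices of positive-going threshold
    crossings of the filtered signal (a crude auditory-nerve analogue).
    Event timing preserves the phase of the underlying carrier."""
    y = np.convolve(signal, ir)[:len(signal)]
    y = y / np.max(np.abs(y))
    return np.flatnonzero((y[1:] >= thresh) & (y[:-1] < thresh))

fs = 16000
t = np.arange(fs // 10) / fs              # 100 ms of signal
tone = np.sin(2 * np.pi * 500 * t)        # 500 Hz test tone
ev = events(tone, gammatone_ir(500, fs))
# Events recur roughly once per cycle of the tone (about every 32 samples
# at fs = 16 kHz), so the event train encodes frequency and phase.
```

Because the event train carries frequency, amplitude (via event density across channels), and phase information in a sparse, parallel form, it is the kind of representation from which the higher-level compressed codes described in the second research goal could be learned.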