2001 — 2003 |
Itti, Laurent |
R03 Activity Code Description: To provide research support specifically limited in time and amount for studies in categorical program areas. Small grants provide flexibility for initiating studies which are generally for preliminary short-term projects and are non-renewable. |
Attentional Modulation in Early Sensory Processing @ University of Southern California
DESCRIPTION: (Applicant's Abstract) Understanding how attention modulates early sensory processing has become a high-priority research topic in visual neuroscience. Although much progress has recently been made in observing the conditions under which attention may affect sensation, the computational mechanism at the origin of attentional modulation remains largely unknown and controversial. This exploratory study seeks to develop new methodologies, combining human psychophysics, functional neuroimaging and computational modeling, to provide further quantitative understanding of how attention modulates early visual processing. Psychophysical experiments will use a dual-task paradigm to split attention between a central and a near-peripheral visual discrimination task being performed simultaneously. This will allow the acquisition of psychophysical data under "fully" and "poorly" attended conditions, for 15 subjects and five different peripheral spatial pattern discrimination tasks. The central task will thus be used for the sole purpose of engaging attention away from the tasks of interest in the poorly attended condition. The five tasks will consist of discriminating contrast, orientation, spatial frequency, and contrast under two masking conditions, for simple visual patterns. A subset of 10 subjects will be selected, based on the stability of their psychophysical thresholds and on their ability to carry out dual tasks, for a subsequent high-field (4 Tesla) functional magnetic resonance imaging (fMRI) experiment. This experiment will use an event-related paradigm to evaluate attentional modulation in primary visual cortex (area V1), for a performance-matched subset of the five pattern discrimination tasks. Using a control fMRI experiment consisting of viewing simple Gabor patches under full attention, the hemodynamic responses measured with fMRI will be calibrated against a detailed computational model of one hypercolumn in V1.
This model will be further applied to test whether a single computational effect of attention, namely a strengthening of competition among the neurons within a V1 hypercolumn, can simultaneously explain the psychophysical and fMRI data. This exploratory study will demonstrate how combining psychophysics, imaging and modeling may yield better quantitative and computational understanding of higher brain function.
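The hypothesized effect, stronger competition within a hypercolumn, can be illustrated with a standard descriptive contrast-response model. The sketch below uses a Naka-Rushton nonlinearity with invented parameter values; neither the function choice nor the numbers are taken from the project, they only show one way sharpened competition could appear in a response curve:

```python
def naka_rushton(c, n=2.0, c50=0.3, rmax=1.0):
    """Naka-Rushton contrast-response function, a standard descriptive
    model of V1 contrast responses (parameter values are illustrative,
    not fitted to any data from this study)."""
    return rmax * c**n / (c**n + c50**n)

# One way "strengthened competition" could manifest: a larger response
# exponent n, which steepens the contrast-response curve around the
# semi-saturation contrast c50 and sharpens discrimination there.
c, eps = 0.3, 1e-4
slope_poor = (naka_rushton(c + eps) - naka_rushton(c - eps)) / (2 * eps)
slope_full = (naka_rushton(c + eps, n=2.6) - naka_rushton(c - eps, n=2.6)) / (2 * eps)
print(slope_full > slope_poor)  # True: the steeper curve discriminates better at c50
```

A steeper psychometric slope under full attention is exactly the kind of signature that both the threshold measurements and the calibrated fMRI responses could then be tested against.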
|
1 |
2001 — 2005 |
Itti, Laurent Poggio, Tomaso Koch, Christof |
N/A |
Itr/Sy: a Neuromorphic Vision System For Every-Citizen Interfaces @ California Institute of Technology
This project aims to extend an existing simple saliency-based visual attention system to animated color video sequences so as to enable it to cue the object recognition module towards interesting locations in live video streams, and simultaneously to extend an existing model for object recognition to on-line adaptability through top-down signals and task- and object-dependent learning of features. The PIs will then integrate these attention and recognition models by developing feedforward and feedback interactions between localization of regions of interest and object recognition in those regions. This will require substantial elaboration of both models, as well as specific work on their integration. The result will be a complete model of object localization and recognition in primates, with direct applicability to computer vision challenges. The PIs will next implement and deploy the combined model on a cluster of CPUs linked by very fast interconnect (just installed at USC) to allow for real-time processing, and will demonstrate its utility in a prototype video-conferencing application in which the on-line adaptive attentional component of the integrated system will quickly locate regions in the monitored environment where something interesting is happening (e.g., a user raising her hand in a conference room). The recognition part of the system will then be trained and refined on-line to recognize relatively simple hand signs (e.g., a finger pointing up, meaning that the user wishes to become the center of interest in a video-conference).
This work will demonstrate two points: that a biologically-inspired approach to traditionally hard computer vision problems can yield unusually robust and versatile vision systems (which work with color video streams and quickly adapt to various environmental conditions, users, and tasks); and that computational neuroscience models of vision can be extended to yield real, useful and widely applicable computer vision systems, and are not restricted to testing neuroscience hypotheses under simple laboratory stimuli.
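The saliency-based attention component rests on center-surround contrast computed across spatial scales. The toy single-channel sketch below conveys the idea only: it substitutes block averaging for the model's Gaussian pyramids, uses intensity alone, and omits the color, orientation, and motion channels and nonlinear normalization of the full system:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (a crude stand-in for one
    level of a Gaussian pyramid)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def saliency(img, levels=3):
    """Toy center-surround saliency for a single intensity channel."""
    base = img.astype(float)
    pyramid = [base]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    sal = np.zeros_like(base)
    for coarse in pyramid[1:]:
        # Upsample the coarse (surround) map back to full size by pixel
        # repetition, then take the center-surround difference.
        ry, rx = base.shape[0] // coarse.shape[0], base.shape[1] // coarse.shape[1]
        up = np.kron(coarse, np.ones((ry, rx)))[:base.shape[0], :base.shape[1]]
        sal += np.abs(base - up)
    return sal / sal.max() if sal.max() > 0 else sal

# A bright square on a dark background should be the most salient region.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
s = saliency(img)
ys, xs = np.unravel_index(np.argmax(s), s.shape)
print(12 <= ys < 20 and 12 <= xs < 20)  # True: peak lies on the square
```

In the integrated system, the peak of a map like this is what would cue the recognition module towards a candidate region of interest in the video stream.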
|
0.945 |
2002 — 2004 |
Itti, Laurent Landauer, Christopher Arbib, Michael Bellman, Kirstie |
N/A |
Biological Information Technology Systems - Bits: Neural Computing, Round 3 @ University of Southern California
EIA-0130900, Michael Arbib, Biological Information Technology Systems - BITS: Neural Computing, Round 3
There have been two main rounds of neural computing to date, the first focusing on adaptation and self-organization, the second on compartmental modeling of the neuron. This project will catalyze a third round of neural computing: analyzing the architecture of the primate brain to extract neural information processing principles and translate them into biologically-inspired operating systems and computer architectures. This project will focus on analyzing and further developing computational neuroscience models concerned with grasping, recognizing and executing actions, and describing those actions with language, in terms of basic information processing principles. The intention is to create a new research effort, applying the latest advances in computational neurobiology to the design of a new generation of machines. In particular, the proposed research will catalyze research and development of unusually robust, versatile, and adaptive computer architectures that can easily adapt, correct themselves, and blend diverse styles of processing.
|
1 |
2004 — 2008 |
Raine, Adrian (co-PI) Itti, Laurent Biederman, Irving Arbib, Michael (co-PI) Lu, Zhong-Lin (co-PI) |
N/A |
Acquisition of An Fmri Basic Research Imaging System At the University of Southern California @ University of Southern California
With support from a National Science Foundation Major Research Instrumentation Award, Professor Irving Biederman and his colleagues at the University of Southern California will purchase a state-of-the-art three-Tesla functional Magnetic Resonance Imaging (fMRI) system for the scientific investigation of how cognitive, emotional, perceptual, memory, linguistic, and motor capacities emerge from activity of the human brain.
Joining Professor Biederman and his Co-PIs Z.-L. Lu, L. Itti, A. Raine, and M. Arbib as users of the fMRI system will be members of a variety of academic units including the Neuroscience Program, the Departments of Psychology, Computer Science, Biology, Gerontology, Biomedical Engineering, Kinesiology, Electrical Engineering, and the House Ear Institute. Currently, the community of interested users includes approximately 30 faculty and over 100 graduate and post-doctoral students. This on-campus facility will not only allow these research programs to proceed but will provide the capability for the development of imaging expertise within this community. The magnet will be available to researchers from other institutions as well.
The ability to probe the activity, not just the structure, of the intact human brain has been one of the great methodological advances of neuroscience in the past decade. The instrument will provide high-resolution images of brain structures, but its primary use will be to assess functioning of the brain as subjects experience various stimuli or perform various tasks while the system measures neural activity at specific brain loci on the order of a few millimeters. Among the first of the research projects that will be launched once the system is installed is one focusing on regions of the prefrontal cortex known to modulate restraint and an appreciation of the consequences of one's own actions for individuals with and without a propensity for impulsive violence. Other studies are designed to understand how an image of a scene, never perceived previously, could be comprehended in a fraction of a second. Another will assess whether brain-produced opiates in areas that mediate comprehension provide the perceptual and cognitive pleasure associated with novel but interpretable experiences. Another study is motivated by the finding that neurons in monkey cortex involved in the production of certain motor movements, such as grasping, also fire when the monkey views the grasp of another organism. This research will evaluate whether such "mirror" neurons might be the core imitative capacity fundamental to the evolution of language. Still another investigation will focus on where and how "episodic memory", the mental diary of our lives, is produced and stored in the brain.
Plans for the operation of the magnet, to be housed in the Dana and David Dornsife Cognitive Neuroscience Imaging Center, will include instructional courses designed to give hands-on training and research experience to undergraduate as well as graduate students. Special outreach programs are designed to involve qualified high school students from the local community as part of an effort to provide opportunities for underrepresented minorities to be counted among the next generation of scientists advancing our knowledge of cognitive and behavioral neuroscience.
|
1 |
2005 — 2008 |
Munoz, Doug Itti, Laurent |
N/A |
Crcns: Collaborative Research: Characterizing Bayesian Surprise in Humans and Monkeys @ University of Southern California
The concept of surprise is central to sensory processing, adaptation and learning, attention, and decision making. Yet, no widely accepted mathematical theory currently exists to quantitatively characterize surprise elicited by a stimulus or event, for observers that range from single neurons to complex natural or engineered systems. This project develops a formal Bayesian definition of surprise that is the only consistent formulation under minimal axiomatic assumptions. Surprise quantifies how data affects natural or artificial observers, by measuring the difference between posterior and prior beliefs of the observers. Preliminary human eye-tracking experiments demonstrated that participants gaze towards surprising image regions, significantly more than expected by chance, while watching complex video clips including TV and video games. What are the underlying neural mechanisms responsible for this behavior? This cross-disciplinary proposal addresses this question as follows:
- Theory and modeling, to investigate how surprise relates to previous notions of saliency and novelty, and may advantageously complement Shannon information when analyzing neural function and behavior.
- Monkey electrophysiology with simple surprising stimuli, to investigate how single neurons along the sensorimotor processing stream (primary visual cortex, frontal eye fields, superior colliculus) may be modulated by surprise.
- Parallel human/monkey psychophysics/electrophysiology, to investigate, in natural situations and with more complex stimuli including TV programs, whether stimuli which attract human and monkey attention may carry more Bayesian surprise.
The theory of surprise developed here is applicable across different modalities, data types, tasks, and abstraction levels. It has the potential to impact science and engineering, and especially education in computer science, mathematics, information theory, statistics, psychology and biology. The research involves undergraduate, graduate, and postdoctoral trainees in a collaboration between a theory lab (Baldi), a modeling and psychophysics lab (Itti), and a monkey electrophysiology lab (Munoz).
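Concretely, Bayesian surprise measures the difference between posterior and prior beliefs as a Kullback-Leibler divergence. The sketch below works through one discrete update; the two "models" and all probabilities are invented for illustration and are not taken from the project's experiments:

```python
import math

def bayesian_surprise(prior, likelihood):
    """Surprise carried by one observation: the KL divergence between
    the posterior and prior beliefs over candidate models, in bits.
    `prior` and `likelihood` are aligned lists: P(M) and P(data | M)."""
    evidence = sum(p * l for p, l in zip(prior, likelihood))
    posterior = [p * l / evidence for p, l in zip(prior, likelihood)]
    return sum(q * math.log2(q / p) for q, p in zip(posterior, prior) if q > 0)

# Hypothetical toy setup: two models of a video patch, "static" vs
# "flickering", with a strong prior belief in "static".
prior = [0.9, 0.1]
expected_data   = [0.8, 0.2]   # a frame that fits the prior belief
unexpected_data = [0.05, 0.95] # a frame that strongly favors "flickering"
low  = bayesian_surprise(prior, expected_data)
high = bayesian_surprise(prior, unexpected_data)
print(low < high)  # True: belief-changing data carries more surprise
```

Note that the frame need not be improbable to be unsurprising; what matters is how much it moves the observer's beliefs, which is what distinguishes surprise from Shannon information.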
|
1 |
2006 — 2011 |
Itti, Laurent Mataric, Maja (co-PI) Schaal, Stefan Sukhatme, Gaurav (co-PI) |
N/A |
Acquisition of An Assistive Humanoid Robot Platform For a Human Centered Robotics Laboratory @ University of Southern California
This project, acquiring a mobile humanoid robotics platform as the centerpiece of a Human-Centered Robotics Lab, aims at assisting a broad population in need, based on the belief that the most suitable form of multi-purpose assistive machine for humans will be human-like. Unlike the highly accurate, stationary, single-task machines with limited sensing typical of industrial applications, this new kind of robot is richly equipped with multi-modal sensing, a high level of dexterity, compliance for safe operation, and mobility. Endowed with the appearance and behavior of a social system appropriate for human environments, it can perform a large number of assistive tasks, autonomously or in collaborative instruction with humans. A humanoid robot instigates a variety of original research. Developing humanoid behavior advances robotics and automation technology while promoting interdisciplinary interaction with the natural sciences.
|
1 |
2007 — 2009 |
Itti, Laurent |
N/A |
Crcns Data Sharing: Human Eye Movements Under Natural Free Viewing @ University of Southern California
Proposal No: 0747477 PI: Laurent Itti
Award Abstract:
This award supports the preparation and sharing of computational neuroscience data as part of an exploratory activity aimed at catalyzing rapid and innovative advances in computational neuroscience and related fields. The data to be shared in this project are recordings of eye movements of subjects watching video clips under natural free viewing conditions. Data will be made available in both raw and processed forms, along with the corresponding video stimuli. Code will be provided for calibration of traces. Code, training data, and validation data will be provided to facilitate the development of prediction algorithms. These data were originally collected for development of an information-theoretic model of visual saliency and visual attention. It is anticipated that they will be useful for a broad range of questions in neuroscience, cognitive psychology, and computer vision. Saliency maps and raw feature maps tied to the information-theoretic model will also be made available, to allow users interested in quantifying which low-level visual features may more strongly attract human attention and gaze to easily perform quantitative analyses.
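One common way users of such data quantify how well a saliency map predicts recorded gaze is the Normalized Scanpath Saliency (NSS). The sketch below uses synthetic data; the metric choice, map, and fixation coordinates are illustrative only, not a prescription of this dataset's evaluation protocol:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the predicted map, then
    average its values at human fixation locations (row, col). Scores
    above 0 mean fixated locations are predicted better than chance.
    NSS is one common metric; AUC and KL-based scores are also used."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[y, x] for y, x in fixations]))

rng = np.random.default_rng(0)
smap = rng.random((48, 64))
smap[20:28, 30:38] += 2.0          # model predicts one salient region
on_target  = [(22, 32), (25, 35)]  # fixations landing in that region
off_target = [(5, 5), (40, 60)]    # fixations elsewhere
print(nss(smap, on_target) > nss(smap, off_target))  # True
```

With the shared raw traces, calibration code, and saliency maps described above, a score like this is computable per video clip, which is what makes the dataset usable as a benchmark for new gaze-prediction algorithms.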
|
1 |
2008 — 2009 |
Itti, Laurent |
N/A |
Crcns 2008 P.I. Meeting @ University of Southern California
The PIs and Co-PIs of grants supported through the NSF-NIH Collaborative Research in Computational Neuroscience (CRCNS) program meet annually. This will be the fourth meeting of CRCNS investigators. The meeting brings together a broad spectrum of computational neuroscience researchers supported by the program, and includes poster presentations, talks and plenary lectures. The meeting is scheduled for June 1-3, 2008 and will be held at the University of Southern California.
|
1 |
2008 — 2012 |
Munoz, Doug Itti, Laurent |
N/A |
Neural Basis of Active Perception in Natural Environment @ University of Southern California
Understanding how animals perceive and act upon complex natural environments is one of the most pressing challenges in neuroscience, with applications that have potential to revolutionize not only our understanding of the brain, but also machine vision, artificial intelligence, and robotics. Until now, studying the neural basis of active vision - how visual stimuli give rise to eye movements under diverse task conditions - has largely been restricted to simplified laboratory stimuli, presented to overtrained animals performing stereotypical tasks. With funding from the National Science Foundation, the Canadian Institute of Health Research, and the National Geospatial Intelligence Agency, Dr. Douglas Munoz at Queen's University in Canada and Dr. Laurent Itti at the University of Southern California will combine neurophysiology and computational modeling to investigate free viewing in natural environments. Using multi-electrode arrays, this project will record in a deep brain structure called the superior colliculus (SC). The SC is a layered structure comprising several well-understood neural maps, from purely sensory representations in the superficial layers, to sensorimotor representations linked to the control of eye movements in the deeper layers. The project will start by characterizing responses of neurons in the SC under simple stimulus conditions: when the animal is simply looking at a central fixation cross on a display while small isolated patterns are presented at other visual locations; when the animal searches for an oddball item among an array of distracting items; and when the animal inspects natural images and video clips. The project will extend the investigators' salience map theories and models, and develop a new model of the SC. The complete model will predict, from any image or video clip, which visual locations are more salient, task-relevant, and candidate targets for eye movements.
The project leverages a cross-disciplinary collaboration between a neurophysiology lab (co-PI Douglas P. Munoz) and a computational modeling lab (PI Laurent Itti). This will allow, through the combination of experiments and modeling, the interpretation of an otherwise undecipherable mass of data collected during natural viewing. Conversely, the theories will guide further experiments. Coupling multi-unit recording with modeling during free-viewing of natural videos has never been attempted before, and it is expected that it will lead to new understanding of how percepts map into actions under natural conditions. The project will support undergraduate and graduate students, and post-doctoral researchers, who will benefit from exposure to combined physiological and computational techniques; the investigators' teaching will benefit as well. In addition to publications, all theory and algorithm source code will be freely distributed, and data will be available through the CRCNS data sharing web site. This research is hence expected to lead to new and broadly accessible fundamental advances in the understanding of how animals use visual information to guide behavior, and how one could build machines that act in similar ways when faced with the complex natural world.
|
1 |
2012 — 2015 |
Itti, Laurent |
N/A |
Goali/Collaborative Research: Advanced Driver Assistance and Active Safety Systems Through Driver's Controllability Augmentation and Adaptation @ University of Southern California
The research objective of this award is to investigate the next generation of proactive, driver-assist active safety control systems (ASCS) for commercial passenger vehicles. The main novel ingredient over existing methods is the adaptation of the ASCS specifications and operation to the individual driver's habits and driving skills (e.g., aggressive or timid) and his/her current cognitive state (e.g., attentive or not). By using recently developed techniques from the field of computational neuroscience and adaptive control theory, this research will develop algorithms that will capture the state of the driver, the vehicle and the environment from automotive sensors and behavioral (e.g., eye movement) measurements; this information will subsequently be used to adapt and customize the ASCS to particular situations so as to achieve maximum performance (e.g., minimum stopping distance during emergency braking, etc.). This research will take advantage of recent advances in sensor technology, which has led to the reliable fusion of data, so as to provide situational awareness for the vehicle and the persistent monitoring of the (re)actions of the driver.
If successful, this research will enable new levels of performance for the current active safety systems for passenger vehicles, thus leading to decreased accident rates, increased comfort and improved fuel economy. Graduate and undergraduate engineering students as well as local high school teachers will benefit from their involvement in this research through NSF's REU and RET projects and through Georgia Tech's PURA and Dash undergraduate research fellowship programs. Undergraduate and high-school minority students will actively participate in data collection and analysis. Under-represented groups will be particularly targeted for participation in the research activities under this award, directly through active recruitment and indirectly through the collaboration with the industry partner, Ford Motor Company, e.g., in the form of summer internships.
|
1 |
2013 — 2018 |
Itti, Laurent |
N/A |
Collaborative Research: Visual Cortex On Silicon @ University of Southern California
The human vision system understands and interprets complex scenes for a wide range of visual tasks in real-time while consuming less than 20 Watts of power. This Expeditions-in-Computing project explores holistic design of machine vision systems that have the potential to approach and eventually exceed the capabilities of human vision systems. This will enable the next generation of machine vision systems to not only record images but also understand visual content. Such smart machine vision systems will have a multi-faceted impact on society, including visual aids for visually impaired persons, driver assistance for reducing automotive accidents, and augmented reality for enhanced shopping, travel, and safety. The transformative nature of the research will inspire and train a new generation of students in inter-disciplinary work that spans the neuroscience, computing and engineering disciplines.
While several machine vision systems today can each successfully perform one or a few human tasks, such as detecting human faces in point-and-shoot cameras, they are still limited in their ability to perform a wide range of visual tasks, to operate in complex, cluttered environments, and to provide reasoning for their decisions. In contrast, the mammalian visual cortex excels in a broad variety of goal-oriented cognitive tasks, and is at least three orders of magnitude more energy efficient than customized state-of-the-art machine vision systems. The proposed research envisions a holistic design of a machine vision system that will approach the cognitive abilities of the human cortex, by developing a comprehensive solution consisting of vision algorithms, hardware design, human-machine interfaces, and information storage. The project aims to understand the fundamental mechanisms used in the visual cortex to enable the design of new vision algorithms and hardware fabrics that can improve power, speed, flexibility, and recognition accuracies relative to existing machine vision systems. Towards this goal, the project proposes an ambitious inter-disciplinary research agenda that will (i) understand goal-directed visual attention mechanisms in the brain to design task-driven vision algorithms; (ii) develop vision theory and algorithms that scale in performance with increasing complexity of a scene; (iii) integrate complementary approaches in biological and machine vision techniques; (iv) develop a new-genre of computing architectures inspired by advances in both the understanding of the visual cortex and the emergence of electronic devices; and (v) design human-computer interfaces that will effectively assist end-users while preserving privacy and maximizing utility. These advances will allow us to replace current-day cameras with cognitive visual systems that more intelligently analyze and understand complex scenes, and dynamically interact with users.
Machine vision systems that understand and interact with their environment in ways similar to humans will enable new transformative applications. The project will develop experimental platforms to: (1) assist visually impaired people; (2) enhance driver attention; and (3) augment reality to provide enhanced experience for retail shopping or a vacation visit, and enhanced safety for critical public infrastructure. This project will result in education and research artifacts that will be disseminated widely through a web portal and via online lecture delivery. The resulting artifacts and prototypes will enhance successful ongoing outreach programs to under-represented minorities and the general public, such as museum exhibits, science fairs, and a summer camp aimed at K-12 students. It will also spur similar new outreach efforts at other partner locations. The project will help identify and develop course material and projects directed at instilling interest in computing fields for students in four-year colleges. Partnerships with two Hispanic serving institutes, industry, national labs and international projects are also planned.
|
1 |
2015 — 2018 |
Itti, Laurent |
N/A |
Cps: Synergy: Collaborative Research: Adaptive Intelligence For Cyber-Physical Automotive Active Safety - System Design and Evaluation @ University of Southern California
The automotive industry finds itself at a crossroads. Current advances in MEMS sensor technology, the emergence of embedded control software, the rapid progress in computer technology, digital image processing, machine learning and control algorithms, along with an ever increasing investment in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies, are about to revolutionize the way we use vehicles and commute in everyday life. Automotive active safety systems, in particular, have been used with enormous success in the past 50 years and have helped keep traffic accidents in check. Still, more than 30,000 deaths and 2,000,000 injuries occur each year in the US alone, and many more worldwide. The impact of traffic accidents on the economy is estimated to be as high as $300B/yr in the US alone. Further improvement in terms of driving safety (and comfort) necessitates that the next generation of active safety systems are more proactive (as opposed to reactive) and can comprehend and interpret driver intent. Future active safety systems will have to account for the diversity of drivers' skills, the behavior of drivers in traffic, and the overall traffic conditions.
This research aims at improving the current capabilities of automotive active safety control systems (ASCS) by taking into account the interactions between the driver, the vehicle, the ASCS and the environment. Beyond solving a fundamental problem in automotive industry, this research will have ramifications in other cyber-physical domains, where humans manually control vehicles or equipment including: flying, operation of heavy machinery, mining, tele-robotics, and robotic medicine. Making autonomous/automated systems that feel and behave "naturally" to human operators is not always easy. As these systems and machines participate more in everyday interactions with humans, the need to make them operate in a predictable manner is more urgent than ever.
To achieve the goals of the proposed research, this project will use the estimation of the driver's cognitive state to adapt the ASCS accordingly, in order to achieve seamless operation with the driver. Specifically, new methodologies will be developed to infer long-term and short-term behavior of drivers via the use of Bayesian networks and neuromorphic algorithms to estimate the driver's skills and current state of attention from eye movement data, together with dynamic motion cues obtained from steering and pedal inputs. This information will be injected into the ASCS operation in order to enhance its performance by taking advantage of recent results from the theory of adaptive and real-time, model-predictive optimal control. The correct level of autonomy and workload distribution between the driver and ASCS will ensure that no conflicts arise between the driver and the control system, and that safety and passenger comfort are not compromised. A comprehensive plan will be used to test and validate the developed theory by collecting measurements from several human subjects while operating a virtual reality-driving simulator.
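Inferring a driver's attentional state from streaming eye-movement observations can be sketched as a recursive Bayesian filter. The two-state model below and every probability in it are invented for illustration; the project's actual Bayesian networks and neuromorphic algorithms would be far richer:

```python
def update_attention_belief(belief, obs_on_road,
                            p_obs=(0.9, 0.4), p_stay=0.95):
    """One step of a two-state recursive Bayes filter for driver state
    (attentive vs. distracted) from a binary 'eyes on road' observation.
    belief: current P(attentive). p_obs: P(eyes on road | attentive),
    P(eyes on road | distracted). All values are illustrative only."""
    # Transition model: the driver tends to stay in the current state.
    prior = belief * p_stay + (1 - belief) * (1 - p_stay)
    # Measurement update via Bayes' rule.
    l_att, l_dis = p_obs if obs_on_road else (1 - p_obs[0], 1 - p_obs[1])
    return prior * l_att / (prior * l_att + (1 - prior) * l_dis)

belief = 0.5
for obs in [False, False, False]:   # three consecutive off-road glances
    belief = update_attention_belief(belief, obs)
print(belief < 0.5)                 # True: belief in attentiveness drops
```

A belief signal of this kind is the sort of quantity the ASCS could consume, for example by tightening intervention thresholds as the estimated probability of attentiveness falls.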
|
1 |