1992 — 1996
Perona, Pietro
RIA: Deformable Kernel Filtering For Early Visual Processing @ California Institute of Technology
In the first step of visual analysis, early vision, simple image properties such as brightness, color, texture, stereoscopic disparity, and motion patterns are analyzed, and boundaries, lines, and other salient visual structures of the image are measured and extracted. A number of theoretical and empirical arguments point to the possibility that all early vision tasks may be accomplished by algorithms sharing a common computational structure: convolution with kernels of different orientations, scales, and shapes, followed by simple quasi-local nonlinear operations. Such kernels may be generated as deformations (rotations, scalings, stretchings) of a template kernel which is synthesized from task specifications. In the last two years a method based on singular value decomposition (SVD) has been proposed to make such continuous-parameter filtering feasible; it has been demonstrated for rotations and scaling in two dimensions. This research endeavours to (1) demonstrate the method in new situations, including 3D rotations, scalings, stretchings, and changes of curvature, (2) apply the method to generating filters for various early vision tasks, including texture and motion analysis, (3) explore new early vision algorithms made possible by continuous-parameter filtering, and (4) understand the connections between filter-design techniques and the SVD method.
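As an illustration of the deformable-kernel idea, the sketch below (a minimal numpy example, not the project's implementation) samples many rotated copies of an assumed Gabor-like template, takes the SVD of the stack, and approximates a kernel at an arbitrary orientation as a small linear combination of the resulting basis kernels; the template, number of sampled angles, and rank are all illustrative assumptions.

```python
# Illustrative sketch (not the project's implementation): approximate a
# continuously rotatable filter as a small linear combination of basis kernels
# obtained from the SVD of many sampled rotations of a template kernel.
import numpy as np

def template_kernel(size=21, sigma=3.0, freq=0.25, theta=0.0):
    """An assumed Gabor-like oriented template kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Sample the template at many orientations; each kernel becomes one row.
angles = np.linspace(0, np.pi, 64, endpoint=False)
stack = np.array([template_kernel(theta=a).ravel() for a in angles])

# SVD: the rows are well approximated by a few right singular vectors.
U, S, Vt = np.linalg.svd(stack, full_matrices=False)
rank = 6                                  # number of basis kernels kept (assumed)
basis = Vt[:rank]                         # flattened basis kernels
coeffs = U[:, :rank] * S[:rank]           # mixing coefficients per sampled angle

def approx_kernel(theta):
    """Kernel at an arbitrary angle, via nearest-sample coefficients (for brevity)."""
    i = np.argmin(np.abs(angles - (theta % np.pi)))
    return (coeffs[i] @ basis).reshape(21, 21)

err = np.linalg.norm(approx_kernel(0.3) - template_kernel(theta=0.3))
print(f"rank-{rank} reconstruction error at 0.3 rad: {err:.3f}")
```

The practical payoff of a low-rank basis is that an image needs to be convolved only once per basis kernel; responses at any orientation are then obtained by mixing those few filtered outputs.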

1993 — 2000
Perona, Pietro
U.S.-ESPRIT Collaboration: Geometry-Driven Diffusion in Vision @ California Institute of Technology
This is the first year of a three-year continuing award. It provides funds to support the collaboration of researchers from several U.S. institutions with their counterparts in Europe who are funded by the European ESPRIT program for a project on Geometry-Driven Diffusion in Vision. The U.S. research groups are led by Pietro Perona (Caltech), Jitendra Malik (University of California at Berkeley), David Mumford (Harvard University), Stephen Pizer and Ross Whitaker (University of North Carolina at Chapel Hill), and Sanjoy Mitter (MIT). A number of European investigators are to be involved, from Utrecht University in the Netherlands, Katholieke Universiteit Leuven in Belgium, University of Las Palmas in Spain, University of Paris IX Dauphine in France, the Royal Institute of Technology (KTH) and Linkoping University in Sweden, and the Swiss Federal Institute of Technology. Some important research results based on nonlinear diffusion processes have arisen for early vision tasks. This consortium is investigating the theory of this approach to image processing, some testbed implementations, possible applications such as medical image enhancement, and possible insights into the natural parallelism, cooperative/competitive mechanisms, and neural feedback mechanisms between layers in the brain.
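The abstract points to nonlinear diffusion as the shared technical core. Below is a minimal sketch of one such process, Perona-Malik anisotropic diffusion, shown purely as an illustration of geometry-driven smoothing; it is not the consortium's software, and the parameter values are assumptions.

```python
# Minimal sketch of one geometry-driven diffusion process, the Perona-Malik
# equation, which smooths within regions while preserving edges. This is an
# illustration of the idea only, not the consortium's software; kappa, dt and
# the boundary handling (periodic, via np.roll) are assumed choices.
import numpy as np

def perona_malik(image, n_iter=50, kappa=0.1, dt=0.2):
    u = image.astype(float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbors.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(d) = exp(-(d / kappa)^2): small at edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Toy example: a noisy step edge is denoised without blurring the step.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.1 * np.random.randn(64, 64)
print(perona_malik(img)[:, 30:34].mean(axis=0))  # contrast across the edge survives
```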

1993 — 1999
Perona, Pietro; Psaltis, Demetri (co-PI); Koch, Christof
Graduate Research Traineeship Program: Graduate Fellowships in Computation and Neural Systems @ California Institute of Technology
This project will support five graduate research traineeships over a seven year period in the Computation and Neural Systems (CNS) Option at the California Institute of Technology. The CNS option brings together biologists, engineers, computer scientists, and physicists with an interest in learning how computation is done by the nervous system, and applying that knowledge to the design of computers. Work in this area is expected to lead to significant advances in many engineering applications as well as neurobiology.

1994 — 2000
Perona, Pietro
NSF Young Investigator @ California Institute of Technology
Humans use their bodies to communicate with each other. Much of this communication is mediated by vision; however, vision is not currently used in human-to-machine interfaces. If we could imitate nature and obtain good human-to-machine visual communication, we could build convenient, portable and powerful interfaces. This technology would contribute immensely to industries such as entertainment, telecommunications, portable computing and security. Dr. Perona will study techniques for tracking and recognition aimed at building visual interfaces between humans and computers. His research will concentrate on three systems: the face, the hands, and the limbs, including head and body posture. In order to access the information conveyed by the limbs and the hands, it is first necessary to locate them in potentially cluttered images and then to reconstruct their configuration in space and time. The face has to be detected, its 3D orientation computed, its features located and its deformation measured. Dr. Perona's research will accordingly concentrate on scene segmentation from motion flow and 3D motion, recursive estimation of the 3D motion of kinematic chains with partially known kinematics from monocular and stereoscopic sequences of perspective images, detection of complex objects (hands, face) in cluttered scenes, and representation and estimation of nonrigid motions.

1994 — 2006
Perona, Pietro; Psaltis, Demetri (co-PI)
Engineering Research Center For Neuromorphic Systems Engineering @ California Institute of Technology
9402726 GOODMAN The goal of the Engineering Research Center in "Neuromorphic Systems Engineering" is to develop the technology infrastructure for endowing the machines of the next century with the senses of vision, touch, and olfaction which mimic or improve upon human sensory systems. The Center will raise artificial neural network technology to become an "enabling technology" for industry (comparable with the impact of the introduction of the microprocessor). Although the U.S. is the world leader in neural network research, a quantum leap in technology is needed for this research to manifest itself as innovative processes and products in U.S. industry. The Center will aim to facilitate this leap by focusing on sensory processing, in which the natural parallelism of artificial neural networks and neuromorphic VLSI and optical circuits can provide solutions to problems that are hard for conventional computing. These problems include vision, audition, tactition, and chemical sensing (olfaction). Coupling high-bandwidth arrays of sensors and actuators with the processing power and learning abilities of distributed neural networks will generate a quantum leap in human-machine interaction and machine-environment interaction.

The Center will take a multi-disciplinary approach through the tight coupling of sensors and intelligence required to achieve sensory processing. Algorithms and VLSI hardware must be developed together. Lessons must be learned from neurophysiology, anatomy, and psychophysics but then translated into rigorous engineering design and practice. In order to have massive impact on industry, these design technologies must be automated to the level that digital Application Specific Integrated Circuits (ASICs) are today. The educational mission of the Center will be assisted by Caltech's existing Computation and Neural Systems (CNS) Ph.D. program. Established in 1986, the goal of this multidisciplinary program is to study the structure and computational powers of both living and synthetic neuronal circuits and systems. The program currently has 34 doctoral students working in the laboratories of 20 Caltech faculty members, spread across the divisions of Engineering and Applied Science, Biology, Chemistry, Mathematics, and the Jet Propulsion Laboratory.

The Center will run an effective program for industrial collaboration and technology transfer, as well as an outreach program to academia, schools and local business. Industrial guidance will be obtained through the creation of an industrial advisory board. Ultimately, low-cost hardware solutions to these sensory processing tasks will open up new areas of application in industry and ensure U.S. competitiveness in such areas as automatic inspection, quality control, flexible manufacturing, telecommunications, process control, transportation, consumer electronics, autonomous machines, smart sensors, and robotics. This award begins the ERC with an initial cooperative agreement of five years.

1998 — 2001
Perona, Pietro
Human-Computer Interaction With Virtual Social Groups @ California Institute of Technology
This project studies human interaction with virtual human characters, forming a virtual social group. It uses an existing virtual society model at Caltech, experienced as a graphical display of virtual humans that interact socially. An interface, including recognition of the user's gestures and eye gaze using computer vision, allows a user to participate. The research uses simplified social situations derived from a survival world where a social group of people survives by maintaining its social organization, by explicit social interaction, and by cooperatively building dwellings, exploring to find wood and fruit, and hunting animals. The results will provide answers to questions such as whether a group experience occurs, whether a believable virtual social group can be created, and which social interactions lead to believability of characters. Potential commercial applications include the implementation of a usable virtual social group, from which may be developed animations for movies, interactive games, educational settings and business information settings. Future computer use will require extended continuous interaction and involvement by the user. Socially supportive virtual settings, allowing users to achieve their best, promise a new era of computer use. http://www.vision.caltech.edu/bond

1998 — 2001
Perona, Pietro; Psaltis, Demetri (co-PI)
ERC-CREST Partnership Towards Consumer Telepresence @ California Institute of Technology
9730980 PERONA This award provides funding for an Engineering Research Centers (ERC) Program/Centers for Research Excellence in Science and Technology (CREST) Program Partnership between the Center for Neuromorphic Systems Engineering at the California Institute of Technology and Tennessee State University CREST. This partnership will involve the development of improved visual telepresence systems through improved algorithms, control loops, and user interfaces, and the application of these techniques to teleoperated robots. The research collaboration between the Caltech ERC and the Tennessee State CREST will involve topics that draw on the expertise of faculty at both institutions as well as involve graduate students. This partnership, based on research compatible with the goals of the participating Centers, will build a long-term bridge between the two centers, linking their faculty and graduate/undergraduate students in research.

1999 — 2001
Burdick, Joel (co-PI); Perona, Pietro
Equipment Proposal: Early Reach Plans in Parietal Cortex: Toward a Cortical Prosthetic For Arm Movements @ California Institute of Technology
This award provides funds to purchase equipment and instrumentation that will form the basis for a new experimental facility for investigating the science of neural prosthetic arms at the Engineering Research Center for Neuromorphic Engineering at the California Institute of Technology. The equipment is being used in research to investigate how the cerebral cortex plans reaching arm movements. The goal is to understand the underlying neurobiology of the sensory-motor interface where plans first form. This understanding is the basis for cortically controlled prosthetic devices. The equipment will enable exploration of how parietal reach region neurons in the posterior parietal cortex plan real and virtual arm movements by recording electrical activity from many neurons simultaneously (using an implanted multi-electrode array) while monkeys reach to remembered visual targets. The electrical signals will then be used to drive a virtual arm in a computer model/simulation. The understanding of how the neurons work to move the arm will be the basis for control of a real prosthetic arm.
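The abstract does not specify how the recorded signals would drive the virtual arm. The sketch below is only an assumed baseline, a linear least-squares decoder mapping binned firing rates to arm velocity, demonstrated on synthetic data; it is not the project's method.

```python
# Illustrative sketch only: the abstract does not specify a decoding algorithm.
# A common baseline maps binned firing rates to intended arm velocity with a
# linear least-squares decoder; everything below (tuning model, noise level,
# bin count) is synthetic and assumed.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 40, 2000

velocity = rng.standard_normal((n_bins, 2))              # pretend 2-D reach velocities
tuning = rng.standard_normal((n_neurons, 2))             # random preferred directions
rates = np.maximum(velocity @ tuning.T                    # rectified, noisy firing rates
                   + 0.5 * rng.standard_normal((n_bins, n_neurons)), 0)

# Fit decoder weights W so that rates @ W approximates velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ W

corr = np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1]
print(f"decoded vs. true x-velocity correlation: {corr:.2f}")
```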

2000 — 2004
Laurent, Gilles (co-PI); Perona, Pietro
ITR: Learning and Recognition of Objects in Sensory Data @ California Institute of Technology
Humans can recognize objects and scenes using their senses. The ability to learn the appearance of a great number of objects, organize them into categories, and quickly recognize them later is an important skill for survival. Replicating this ability in machines would be extremely useful in a great number of scientific and industrial applications, such as automatic exploration of databases of medical images, diagnostics and quality control in industrial plants, and automatic classification of images and sounds on the web.
The aim of this study is to develop a theory of recognition that is applicable to any type of sensory data and in which no supervision is required for learning and categorization.
The approach is probabilistic: object categories are modeled by probability density functions on part appearance and object shape. Detection and recognition are formulated as statistical inference problems. Unsupervised learning of object categories is approached using maximum likelihood. In order to motivate and test the theory, the investigators will engage in three applications: automatic classification and retrieval of objects from image databases, of human actions from movies, and of neuronal signals associated with perceptual tasks.
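The constellation models described in the proposal are richer than anything that fits in a few lines, but the core idea of unsupervised, maximum-likelihood category learning followed by recognition as statistical inference can be sketched, under assumptions, with a Gaussian mixture fit by EM on made-up part descriptors.

```python
# Hedged illustration: unsupervised maximum-likelihood learning of categories,
# followed by recognition as posterior inference, using a Gaussian mixture on
# synthetic 8-D "part descriptors". Not the project's constellation model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Pretend descriptors of parts detected in images from three unknown categories.
features = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 8)) for c in (-2.0, 0.0, 2.0)])

# Maximum-likelihood fit (EM), with no category labels provided.
model = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
labels = model.fit_predict(features)
print("discovered cluster sizes:", np.bincount(labels))

# Recognition of a new observation = posterior inference under the learned model.
new_obs = rng.normal(loc=2.0, scale=0.5, size=(1, 8))
print("posterior over categories:", np.round(model.predict_proba(new_obs), 3))
```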

2000 — 2004
Perona, Pietro
Cortical Models For Neuromorphic Engineering @ California Institute of Technology
9908537 Psaltis
This award funds the first year of a collaborative research program between the Engineering Research Center on Neuromorphic Systems Engineering at the California Institute of Technology and the Institute of Neuroinformatics at the Swiss Federal Institute of Technology and the University of Zurich, Switzerland. The goals of the effort are: (1) to measure and model the computational properties of neuronal networks in the mammalian cortex, in particular the visual cortex of primates, (2) to build analog VLSI and optoelectronic hardware to perform useful sensory tasks and computations, and (3) to construct single-chip modules for landmark-based navigation. The effort involves support for one key researcher at each institution per goal area, and for travel by the personnel to collaborate and coordinate the work. In addition, the effort involves development of classes and teaching material in neuromorphic engineering, the development of new laboratory equipment needed for the research, and development of software and hardware platforms for the research.

2004 — 2008
Burdick, Joel; Murray, Richard (co-PI); Perona, Pietro
SST: Networks of Mobile Sensors in Human Environments @ California Institute of Technology
Abstract 0428075 Joel Burdick Caltech
This project will develop basic theory, algorithms, and experimental demonstrations of networks of autonomous mobile sensory platforms. Mobile sensor units enable a sensor network to transiently focus on events or locations that might be interesting or important while at the same time maintaining awareness of the overall environment. Particular focus is placed on tasks where mobile sensory-motor platforms operate in human environments and interact with human operators. The considerations for theory and algorithms include: (1) using mobility to improve network performance; (2) facilitating the interaction between sensor networks and humans; and (3) improving the ability of networks to detect learned categories or events. New unsupervised learning methods will form the basis for object and event detection. We will demonstrate our methods on the Caltech Multi-Vehicle Wireless Testbed (MVWT), an experimental testbed consisting of fan-driven and wheeled vehicles operating in a common environment endowed with a variety of sensors and communication schemes.
This is a project supported under the Sensors Initiative NSF 04-522.
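The abstract names no specific algorithm for using mobility to improve network performance. As one illustrative possibility (an assumption, not the project's method), the sketch below runs Lloyd-style coverage iterations in which each sensor moves to the centroid of the part of the environment it currently serves.

```python
# Assumed illustration: Lloyd-style coverage control on a discretized environment.
# Each sensor repeatedly moves to the centroid of the grid points closest to it,
# which reduces the average distance from any point to its nearest sensor.
import numpy as np

rng = np.random.default_rng(2)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 60), np.linspace(0, 1, 60)), -1).reshape(-1, 2)
sensors = rng.random((6, 2)) * 0.2               # start bunched in one corner

def coverage_cost(s):
    d = np.linalg.norm(grid[:, None, :] - s[None, :, :], axis=2)
    return d.min(axis=1).mean()                   # mean distance to the nearest sensor

print(f"initial coverage cost: {coverage_cost(sensors):.3f}")
for _ in range(30):
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    owner = d.argmin(axis=1)                      # discretized Voronoi assignment
    for i in range(len(sensors)):
        cell = grid[owner == i]
        if len(cell):
            sensors[i] = cell.mean(axis=0)        # move to the centroid of its cell
print(f"coverage cost after repositioning: {coverage_cost(sensors):.3f}")
```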

2005 — 2009
Perona, Pietro
3D Perception of Specular Surfaces @ California Institute of Technology
The recovery of the shape of reflective surfaces from images is investigated. Both the computational foundations and human visual perception are explored.
Key issues on the computational front are: (i) the geometrical relationship between surface shape and observations of a reflected scene on the surface; (ii) the nature and role of constraints that help in reconstructing shape from visual measurements. This second issue is investigated first under stringent assumptions (calibrated, known scene), which are then relaxed progressively (un-calibrated known scene, un-calibrated unknown scene). The relevance of additional visual measurements (multiple images from stereoscopic rigs, occluding boundaries, internal boundaries), as well as the relevance and use of statistical constraints (generic viewpoint assumption, isotropy, homogeneity), is also explored.
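The first key issue rests on the law of mirror reflection: the surface normal at a reflecting point bisects the directions from that point toward the camera and toward the reflected scene point. The sketch below illustrates just this constraint; it is not the project's reconstruction algorithm.

```python
# Sketch of the basic geometric constraint behind issue (i): the law of mirror
# reflection relates the local surface normal to the viewing direction and the
# direction of the reflected scene point. Illustration only.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def normal_from_reflection(view_dir, scene_dir):
    """view_dir: direction from the surface point toward the camera.
    scene_dir: direction from the surface point toward the reflected scene point."""
    return unit(unit(view_dir) + unit(scene_dir))

# Camera straight above the point, reflected scene point off to the side:
n = normal_from_reflection(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
print(n)   # a 45-degree normal, approx [0.707, 0.0, 0.707]
```

Each observed reflection therefore constrains a local normal, which is why knowledge of the scene, whether calibrated or merely statistical, translates into information about surface shape.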
Human perception of mirror surfaces is little explored and very poorly understood. The first area of investigation is qualitative shape perception in the presence of an increasing number of cues (image patch, surface boundaries, reflected scene, internal boundaries) and with different scene statistics both natural and synthetic (regular periodic patterns, isotropic textures, indoor scenes, outdoor scenes). The cues that lead the human visual system to classify surfaces as specular vs. textured/matte are explored next. A third issue is the relationship between the mechanisms underlying shape-from-texture and shape-from-specularities. The proposed research provides: i) methods to measure the shape of specular surfaces, a notoriously hard problem in computer vision; ii) fundamental understanding of the geometry and statistics underlying vision of reflective surfaces; iii) exploration of the value of prior knowledge in a Bayesian framework; iv) insight into an underexplored ability of the human visual system.
The broader impacts of this proposal include extending the applicability of 3D scanning system to specular surfaces, which are common in engineering, medicine and art conservation. For this reason methods that are general, practical and low-cost are of particular interest in this study.

2005 — 2008
Perona, Pietro
Collaborative Research: Learning Taxonomies of the Visual World @ California Institute of Technology
Learning visual object categories, and recognizing objects in images, is perhaps the most difficult and exciting problem in machine vision today. In light of the fast-growing data deluge in science, engineering, industry and society, recognition systems must be able to operate without human supervision. This poses new challenges: How can one automatically learn models of a large number of object classes from unlabelled images? How can one represent these object classes such that they can be searched efficiently? How can one leverage the learnt models to learn new object classes from very few examples?
It is proposed that these challenges may be met by inferring hierarchical representations of object classes from unlabelled image data. Object classes are represented as constellations of parts, where each part extracts shape and appearance information. Non-parametric Bayesian techniques may be employed to organize these object classes into tree-structured representations. The richness of this representation grows incrementally as more data is presented to the system. New similarity measures between object classes naturally derive from this representation, facilitating recognition.
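The proposal does not fix a particular nonparametric prior; as one common illustration (an assumption here, not the project's model), a Chinese restaurant process shows how the number of candidate classes can grow as more objects are observed.

```python
# Hedged illustration of the "grows with the data" behavior of nonparametric
# Bayesian priors: in a Chinese restaurant process, each new object either joins
# an existing cluster or opens a new one, so the number of clusters increases
# slowly with the number of objects. The concentration alpha is assumed.
import numpy as np

def crp_clusters(n_items, alpha=1.0, seed=3):
    rng = np.random.default_rng(seed)
    counts = []                                    # objects per existing cluster
    for _ in range(n_items):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)        # existing cluster or a new one
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    return counts

for n in (10, 100, 1000):
    print(f"{n:5d} objects -> {len(crp_clusters(n))} clusters")
```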
Outreach to the local community is established through a collaboration with California State University, Northridge, where students, often minorities who are the first in their families to obtain a university degree, will have the opportunity to engage in visual recognition problems proposed by and relevant to local companies.

2007 — 2012
Lewis, Nathan; Perona, Pietro
Exp-La: Development of Sensing Materials and Signal Processing Methods For An Electronic Nose @ California Institute of Technology
This is a project to develop sensor arrays for vapor detection using chemically sensitive resistors and luminescent polymers, together with biologically inspired algorithms to analyze and interpret the data. The development of low-power circuitry to decode odor patterns from the sensors is a significant component with important technological implications. The work can lead to a general-purpose, trainable sensor.
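The abstract does not describe the pattern-analysis algorithms. As an assumed, minimal illustration only, the sketch below represents a vapor by the normalized pattern of fractional resistance changes across the array and matches an unknown sample to the closest trained pattern.

```python
# Assumed illustration only (not the project's algorithm): nearest-pattern
# classification of odor fingerprints formed from fractional resistance changes.
import numpy as np

def fingerprint(delta_r_over_r):
    v = np.asarray(delta_r_over_r, dtype=float)
    return v / np.linalg.norm(v)                   # normalize out concentration effects

# Toy training responses of a 6-element resistor array to two vapors.
train = {
    "ethanol": fingerprint([0.8, 0.1, 0.4, 0.05, 0.3, 0.2]),
    "acetone": fingerprint([0.1, 0.7, 0.2, 0.5, 0.05, 0.3]),
}

def classify(sample):
    f = fingerprint(sample)
    return max(train, key=lambda name: float(f @ train[name]))   # cosine similarity

print(classify([0.75, 0.12, 0.38, 0.06, 0.28, 0.22]))            # -> ethanol
```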

2009 — 2012
Perona, Pietro
RI: Small: Collaborative Research: Infinite Bayesian Networks For Hierarchical Visual Categorization @ California Institute of Technology
Humans possess the ability to learn increasingly sophisticated representations of the world in which they live. In the visual domain, it is estimated that we are able to identify on the order of 30,000 object categories at multiple levels of granularity (e.g. toe-nail, toe, leg, human body, population). Moreover, humans continuously adapt their models of the world in response to data. Can we replicate this life-long-learning capacity in machines?
In this project, the PIs build hierarchical representations of data streams. The model complexity adapts to new structure in data by following a nonparametric Bayesian modeling paradigm. In particular, the depth and width of our hierarchical models grow over time. Deeper layers in this hierarchy represent more abstract concepts, such as "a beach scene" or "chair", while lower levels correspond to parts, such as a "patch of sand" or "body part". The formation of this hierarchy is guided by fast hierarchical bottom-up segmentation of the images.
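One standard ingredient behind "model complexity adapts to the data" in nonparametric Bayesian models is the stick-breaking construction of a Dirichlet process, sketched below purely as an illustration; the concentration values and truncation level are assumptions, and the project's hierarchical models are far richer.

```python
# Hedged illustration: stick-breaking weights of a Dirichlet process. Components
# are generated on demand, and a data-dependent number of them carry most of the
# probability mass, which is what lets model width grow with the data.
import numpy as np

def stick_breaking(alpha, n_components, seed=4):
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=n_components)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining                       # component weights

for alpha in (0.5, 5.0):
    w = stick_breaking(alpha, 50)
    effective = int((np.cumsum(np.sort(w)[::-1]) < 0.95).sum()) + 1
    print(f"alpha={alpha}: about {effective} components carry 95% of the mass")
```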
To process large amounts of information, the PIs distribute computation across many CPUs/GPUs. They develop novel fast inference techniques based on variational inference, memory-bounded online inference, parallel sampling, and efficient data structures.
The technology under development has a large number of potential applications, ranging from organizing digital libraries and the worldwide web and building visual object recognition systems to successfully employing autonomous robots and training a "virtual doctor" by processing worldwide information from hospitals about diseases, diagnoses and treatments.
Results are disseminated through scientific publications and publicly available software.

2010
Perona, Pietro
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
CRCNS: Automated Behavior Analysis For Model Genetic Organism @ California Institute of Technology
DESCRIPTION (provided by applicant): This project proposes to develop a new class of automated instruments for quantifying complex animal behavior quickly and efficiently, an effort with applications to many health issues related to human behavior. The effort will focus on the fruit fly, Drosophila melanogaster, because it currently represents the best opportunity for integrating state-of-the-art genetic techniques with a new generation of behavioral assays. Recent research has proven the fruit fly to be a powerful model system for studying many clinically relevant features of human behavior. This strategy makes use of the fact that the nerve cells of flies and humans share many common genetic features, which may be identified quite readily using tools available in fruit flies. For example, investigations that exploit the powerful genetic tools available in this organism have identified a series of candidate genes involved in alcohol and drug tolerance. Recent studies of human populations indicate similar genes may be involved in alcohol addiction. This strategy is not restricted to studies of drug and alcohol addiction, but has also been successfully exploited to study other features of human biology including aging, obesity, fear, aggression and sleep disorders. The potential and impact of these approaches is based on the ease of genetic analyses in flies and also on the ability to accurately identify behavioral defects in large numbers of animals.

The goal of this grant will be to thoroughly modernize the quantification of fly behavior so that this genetic model organism can be used more efficiently in the study of a variety of behaviors related to human health. The project team will design, test, and make available three distinct systems that collectively will permit high-throughput quantitative analysis of the individual and social behaviors of adult Drosophila. In designing these instruments, the project team will make use of their collective expertise in machine vision as well as years of practical experience building specialized instruments for quantifying fly behavior. The devices will be intelligent, in that they will be able to automatically identify the most useful measurements to be carried out for a given task, as well as suggest possibly novel models of behavior. They will be based on off-the-shelf video and computer technology so as to be inexpensive and thus easy to use for the average molecular biologist. Much effort will be made to ensure that the resulting technology will be of use to a broad international community of researchers. Like the robotic sequencers that revolutionized the study of genomics, these devices will help transform behavioral science into a modern discipline of Ethometrics.
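The proposed instruments are far more sophisticated than anything shown here, but the elementary machine-vision step of locating a fly in each video frame can be sketched as follows; the frame sizes, threshold, and single-blob simplification are all assumptions.

```python
# Illustrative sketch only: locate a fly per frame by background subtraction and
# report its centroid. Real trackers label each blob (connected components) and
# maintain per-animal identities across frames.
import numpy as np

def track_frame(frame, background, threshold=30):
    """frame, background: 2-D uint8 grayscale arrays; returns fly centroids (x, y)."""
    moving = np.abs(frame.astype(int) - background.astype(int)) > threshold
    ys, xs = np.nonzero(moving)
    if len(xs) == 0:
        return []
    # Single-blob simplification: one centroid for all changed pixels.
    return [(float(xs.mean()), float(ys.mean()))]

background = np.full((120, 160), 200, dtype=np.uint8)
frame = background.copy()
frame[60:66, 80:88] = 40                           # a dark, fly-sized blob
print(track_frame(frame, background))              # approx [(83.5, 62.5)]
```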

2012
Perona, Pietro
I-Corps: Combining Machine Vision and Crowdsourcing For Convenient and Accurate Image Annotation @ California Institute of Technology
Annotating a large body of images quickly, accurately and inexpensively would be a valuable capability in scientific, medical and other commercial applications. Machine vision is making progress in these arenas. However, accuracy is not yet sufficient for many applications. In recent years, a complementary solution has become available: crowdsourcing, that is, dynamically recruiting thousands of people to carry out an assigned task from their computer. The team's research suggests that it is possible to combine the complementary strengths of human annotators and machines into a hybrid system that is flexible, accurate, fast and inexpensive. To demonstrate effectiveness and potential commercial opportunity, the team will develop a prototype and a business model around this approach.
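One simple way to combine the two strengths, sketched below only as an assumption and not as the team's system, is to accept confident machine predictions and route low-confidence images to human annotators; the classifier, crowd interface, and threshold are hypothetical stand-ins.

```python
# Assumed sketch of hybrid machine/crowd annotation: accept confident machine
# labels, escalate the rest to human annotators. All functions are stand-ins.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Annotation:
    label: str
    source: str                        # "machine" or "crowd"

def annotate(images: List[str],
             classifier: Callable[[str], Tuple[str, float]],
             ask_crowd: Callable[[str], str],
             min_confidence: float = 0.9) -> List[Annotation]:
    results = []
    for img in images:
        label, conf = classifier(img)
        if conf >= min_confidence:
            results.append(Annotation(label, "machine"))
        else:
            results.append(Annotation(ask_crowd(img), "crowd"))
    return results

# Toy usage with stand-in functions for the classifier and the crowd.
fake_classifier = lambda img: ("cat", 0.95) if "cat" in img else ("unknown", 0.4)
fake_crowd = lambda img: "dog"
print(annotate(["cat_001.jpg", "blurry_002.jpg"], fake_classifier, fake_crowd))
```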
As imaging becomes more available and storage inexpensive, the amount of image data will continue to increase. This is true for the scientific, research, geospatial information systems and consumer markets. The proposed effort will address the need to scale annotation and analysis of this data while keeping the process as inexpensive and fast as possible with today's computational power. By combining computer vision and machine learning automations with humans (both experts and non-expert annotators), the system promises to be quickly configurable and trainable across virtually any image analysis challenge.

2016 — 2020
Perona, Pietro; Eberhardt, Frederick; Yue, Yisong (co-PI)
RI: Medium: CompCog: Automated Discovery of Macro-Variables From Raw Spatiotemporal Data @ California Institute of Technology
Observation and careful experimentation provide the basis for scientific inquiry, which in turn guides our understanding of the world and policy decisions. Today, scientific data is collected from a vast array of sensors: satellite images and radar, neuro-imaging, microscopes, body monitoring, socio-economic indicators, to name just a few. While models and theories were traditionally derived via careful handcrafting by domain experts, the new data deluge makes direct human analysis impossible. We need intelligent machines that can process vast amounts of sensory data into interpretable quantities that provide actionable information. This project will develop machines that will be able to learn on their own, purely from experience, produce and test hypotheses on causes and effects in complex dynamic scenes, and better collaborate with human scientists and analysts. For generality, we will develop and test our theory in two different domains. Amongst the immediate benefits of our project are methods for discovering the causal relationship between genes, brains and behavior.
Our objective is to develop theory and practical algorithms for automatically interpreting a dynamic scene containing interacting agents. This will involve automatically identifying the main spatial locations, the objects, the actors, their actions and goals, and their relations to one another. The output is a description of the events, and hypotheses on the actors' goals, cause-effect relationships and likely developments. The key technical questions that we will tackle are how to infer semantically meaningful "macro" variables (i.e. agents' roles and goals, actions, objects, special locations) directly from raw sensory data (mostly video), how to infer the causal relationships among such variables, and how to adaptively plan new experiments, including collecting feedback from human experts, to resolve ambiguities in the model. The intellectual merit of our project lies in developing an end-to-end, pixels-to-causes approach to the automatic analysis of dynamic scenes. To this end, we will integrate, build upon, and transcend the capabilities of extant "low-level" correlational machine learning and "high-level" causal inference approaches, combined with interactive learning approaches to sequential experimental design.
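Once macro-variables (say, two agents' trajectories) have been extracted from video, one elementary probe of directed influence is a Granger-style comparison: does the past of x improve prediction of y beyond y's own past? The project's causal-inference machinery is far broader than this; the synthetic data and one-step lag below are assumptions used only to make the idea concrete.

```python
# Hedged illustration of a Granger-style check between two macro-variable time
# series: compare prediction error of y from its own past vs. from its past plus
# the past of x. Synthetic data; not the project's method.
import numpy as np

rng = np.random.default_rng(5)
T = 2000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):                              # x drives y with a one-step lag
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, predictors):
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

past_y = np.column_stack([np.ones(T - 1), y[:-1]])
past_xy = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])
ratio = residual_var(y[1:], past_y) / residual_var(y[1:], past_xy)
print(f"prediction-error ratio (x -> y): {ratio:.1f}")   # well above 1 suggests influence
```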

2020 — 2021
Anderson, David J; Perona, Pietro
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Multimodal, Integrated Analysis of Neural Activity and Naturalistic Social Behavior in Freely Moving Mice @ California Institute of Technology
Project Summary/Abstract

This proposal responds to an NIMH notice NOT-MH-18-036 aimed at the development and study of novel, computationally defined behavioral assays, and at applying theory and mathematical modeling to better capture the richness of complex, naturalistic behaviors. Specifically, we aim to develop novel computational tools for analyzing social behaviors in freely moving mice, and relating those identified behaviors to neural circuit activity in brain regions that govern the expression of those behaviors. Social behavior is affected in many human psychiatric disorders, such as autism, schizophrenia, and depression. We propose an interdisciplinary, collaborative approach to fill two major gaps that present a barrier to studies of social behavior: 1) the lack of quantitative and high-resolution descriptions of naturalistic social behaviors in freely moving animals, and 2) the difficulty of relating neural activity recorded in deep subcortical regions that govern such behaviors, such as the hypothalamus and extended amygdala, to animals' actions or to models of behavioral control.

Our objective is to create a computational behavior analysis platform that integrates automated measurement of naturalistic social behavior with synchronous large-scale recording or imaging of neural activity, and to apply these to a novel assay to investigate social behavioral decision-making. The central objective of this proposal is to extend our Mouse Action Recognition System (MARS) to create a platform that allows facile training of supervised and unsupervised behavior classifiers, quantitative correlation with simultaneously acquired neural recording or imaging data, and which can be flexibly adapted to additional behavior assays. The rationale for this approach is that fine-grained quantification of social behavior, and its correlation with neural recordings, is necessary to form and test theories of behavioral control by subcortical brain regions. While automated tracking and "pose" estimation software such as DeepLabCut have made tracking of animals' body positions more feasible, the identification of social behaviors from pose data is a non-trivial problem, requiring a separate computational approach that takes into account the relative movements of multiple animals over time.

To achieve our objective, we will broaden the palette of social behaviors MARS can detect using machine learning and generative models (Aim 1), develop methods to relate those behaviors to neural activity (Aim 2), and extend MARS to additional assays to study the neural correlates of social decision-making (Aim 3). This contribution is significant because it will create a resource that will transform our ability to study micro- and meso-scale subcortical circuits controlling social behavior. The contribution is innovative because it combines expertise from circuit neuroscience and computer vision/machine learning to create new tools for understanding the link between neural activity and behavior, in a context that is relevant to understanding dysfunctions of neural circuits that underlie human psychiatric disorders.
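As a minimal sketch under stated assumptions: given tracked keypoints for two mice (as produced by a pose estimator), simple per-frame social features such as nose-to-nose distance and closing speed can feed a supervised behavior classifier. MARS itself uses a much richer feature set and trained models; the hand-set rule below merely stands in for such a classifier, and all trajectories are synthetic.

```python
# Assumed illustration: per-frame social features from keypoint trajectories,
# with a hand-set rule standing in for a trained behavior classifier.
import numpy as np

def social_features(nose_a, nose_b, fps=30.0):
    """nose_a, nose_b: (T, 2) arrays of nose coordinates for the two animals."""
    dist = np.linalg.norm(nose_a - nose_b, axis=1)
    closing_speed = np.concatenate([[0.0], -np.diff(dist) * fps])  # > 0 when approaching
    return np.column_stack([dist, closing_speed])

# Toy trajectories: animal A walks toward a stationary animal B.
T = 90
nose_b = np.tile([100.0, 100.0], (T, 1))
nose_a = np.column_stack([np.linspace(10, 95, T), np.full(T, 100.0)])
feats = social_features(nose_a, nose_b)

is_close_approach = (feats[:, 0] < 20) & (feats[:, 1] > 0)
print(f"frames labeled as close approach: {int(is_close_approach.sum())} of {T}")
```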