2013 — 2018 |
Serre, Thomas |
N/A |
CAREER: Computational Mechanisms of Rapid Visual Categorization: Models and Psychophysics
Primates can recognize objects embedded in complex natural visual scenes at a glance. Despite the ease with which we see, visual recognition -- one of the key issues addressed in computer vision -- is quite difficult for machines. Understanding which computations are performed by the visual cortex would give scientists a powerful tool to uncover key mechanisms of human perception and cognition as well as to create a new generation of 'seeing' machines.
The PI's central research goal is to identify the perceptual principles and model the neural mechanisms underlying rapid visual categorization. By forcing processing to be fast, rapid visual categorization paradigms help isolate the very first pass of visual information before more complex visual routines take place. Hence, understanding 'vision at a glance' is arguably a necessary first step before studying natural everyday vision where eye movements and attentional shifts are known to play a key role.
Specifically, this proposal will lead to the development of a computational neuroscience model of rapid visual recognition in the primate visual system, which is both consistent with physiological properties of cells in the visual cortex and able to predict behavioral responses (both correct and incorrect responses as well as reaction times) from human participants across a range of conditions. The proposed model will integrate recent developments in computational models of vision and decision making with large-scale machine learning techniques. New stimulus sets will be generated, which are optimally tailored for testing among alternative visual representations and computations against human psychophysics data. These experiments will, in turn, enable the refinement of computational models.
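To make the modeling approach concrete, the sketch below is a toy illustration under assumed components, not the proposed model itself: a linear "feature" stage supplies category evidence to a drift-diffusion decision stage, so that a single simulation yields both a categorization choice and a reaction time.

```python
# Toy illustration (assumed components, not the proposed model): a linear
# "feature" stage provides category evidence that drives a drift-diffusion
# decision stage, producing both a choice and a reaction time per trial.
import numpy as np

rng = np.random.default_rng(0)

def feature_evidence(image_vec, weights):
    """Feedforward stage: project an image vector onto a category axis."""
    return float(weights @ image_vec)

def drift_diffusion(drift, threshold=1.0, noise=1.0, dt=1e-3, max_t=2.0):
    """Accumulate noisy evidence to a bound; return (choice, reaction time in s)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

# One simulated trial with made-up weights and a random "image".
w = 0.1 * rng.standard_normal(256)
img = rng.standard_normal(256)
choice, rt = drift_diffusion(drift=feature_evidence(img, w))
print(f"choice={'target' if choice else 'distractor'}, RT={rt:.3f} s")
```

Fitting the drift, threshold, and noise parameters to observed accuracies and reaction times is one standard way such combined vision-and-decision models are compared against human psychophysics data.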
The computational models developed as part of this proposal will be integrated into courses and disseminated broadly via a web-based graphical interface. Overall, the interdisciplinary nature of the proposal will give students the opportunity to experience a research environment that crosses traditional boundaries between disciplines and departments. Increased undergraduate participation in computational neuroscience will help integrate this area into the mainstream computer science and neuroscience curricula.
2016 — 2017 |
Serre, Thomas |
N/A |
I-Corps: Development of a Machine Vision System for High-Throughput Computational Behavioral Analysis
The broader impact/commercial potential of this I-Corps project is the promise to revolutionize biomedical research via the development of machine vision algorithms for automating video analysis and behavioral monitoring. Many areas of the life sciences demand the manual annotation of large amounts of video data. However, the robust quantification of complex behaviors remains a major bottleneck, and a number of controversies in behavioral studies have arisen because of the inherent biases and challenges associated with the manual annotation of behavior. Many of these issues will be resolved with the use of objective, quantitative, computerized techniques. The goal of the project is to leverage machine learning and computer vision to analyze large volumes of data and discover novel visual features of behavior that are hidden from the naked eye.
This I-Corps project proposes the large-scale development, testing, and research application of algorithms and software for automating the monitoring and analysis of behavior. We have developed an initial high-throughput system for the automated monitoring and analysis of rodent behavior. The approach capitalizes on recent developments in deep learning, a branch of machine learning that enables neural networks composed of multiple processing stages to learn visual representations with multiple levels of abstraction. The current system accurately recognizes a wide range of normal and abnormal rodent behaviors at a level indistinguishable from that of humans when scoring typical behaviors of a singly housed mouse from video. The proposed activities will bring these algorithms closer to commercial deployment by addressing the fundamental problem of visual recognition in biological, cognitive, and psychological research.
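For illustration only, here is a minimal sketch of the kind of clip-level behavior classifier such a system could build on; the architecture, behavior labels, and input sizes are assumptions, not the deployed system.

```python
# A minimal sketch (assumptions, not the project's actual system): a small
# 3D-convolutional network that maps a short video clip to one of several
# home-cage behavior labels.
import torch
import torch.nn as nn

BEHAVIORS = ["eat", "drink", "groom", "rear", "walk", "rest"]  # illustrative labels

class ClipClassifier(nn.Module):
    def __init__(self, n_classes=len(BEHAVIORS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global pooling over time and space
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):                    # clip: (batch, 3, frames, H, W)
        x = self.features(clip).flatten(1)
        return self.classifier(x)               # per-behavior logits

# Example: score a random 16-frame clip.
model = ClipClassifier()
logits = model(torch.randn(1, 3, 16, 112, 112))
print(BEHAVIORS[int(logits.argmax(dim=1))])
```

In practice such a classifier would be trained on large sets of human-annotated clips, with its per-clip predictions smoothed over time before being compared against human scoring.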
2017 — 2018 |
Amso, Dima; Serre, Thomas |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Naturalistic Data Collection in the SmartPlayroom
PROJECT SUMMARY
The aims of this proposal are to fully develop and validate the SmartPlayroom as a powerful automated data collection and analysis tool for developmental research. The room looks like any playroom in a home or school but is designed to collect data naturalistically, in real time, and simultaneously on all aspects of children's behavior. These behaviors include movement kinematics, language, eye movements, and social interaction while a child performs naturalistic tasks, plays and explores without instruction, walks or crawls, and interacts with a caregiver. The space is equipped with mobile eye tracking, wireless physiological heart rate and galvanic skin response sensors, audio and video recording, and depth sensor technologies. Funding is requested to demonstrate the scientific advantage of naturalistic measurement using an example from visual attention research (Aim 1) and, in the process, to provide data to further develop flexible computer vision algorithms for automated behavioral analysis in 4- to 9-year-old children (Aim 2). By combining fine-grained sensor data with high-throughput automated computer vision and machine learning tools, we will be able to automate quantitative data collection and analysis in the SmartPlayroom for use in addressing myriad developmental questions. The SmartPlayroom approach completely overcomes the limitations of task-based experimentation in developmental research, offering quantitative precision in the collection of ecologically valid data. It has the power to strengthen both construct validity and measurement reliability in developmental research. The investigators are committed to making our data, computer vision algorithms, and discoveries freely available so that we might move the field forward quickly.
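As a simple illustration of the kind of preprocessing such multimodal recordings require, the hedged sketch below (stream names, sampling rates, and units are illustrative assumptions, not the SmartPlayroom pipeline) resamples independently clocked sensor streams onto one common analysis timeline.

```python
# Hypothetical sketch of the basic synchronization step: resample several
# independently clocked sensor streams onto a shared timeline so gaze,
# heart rate, and video-derived measures can be analyzed jointly.
import numpy as np

session_s = 10 * 60                                   # a 10-minute play session
common_t = np.arange(0.0, session_s, 1 / 30)          # 30 Hz analysis timeline

def resample(timestamps, values, new_t):
    """Linear interpolation of one sensor stream onto the common timeline."""
    return np.interp(new_t, timestamps, values)

rng = np.random.default_rng(0)
gaze_t = np.sort(rng.uniform(0, session_s, 60 * session_s))      # ~60 Hz eye tracker
heart_t = np.arange(0, session_s, 1.0)                           # 1 Hz heart-rate sensor

aligned = np.column_stack([
    resample(gaze_t, rng.uniform(0, 1920, gaze_t.size), common_t),            # gaze x (pixels)
    resample(heart_t, 80 + 5 * rng.standard_normal(heart_t.size), common_t),  # heart rate (bpm)
])
print(aligned.shape)    # (frames on the common timeline, sensor channels)
```

Once the streams share a timeline, gaze, physiology, and automated behavior codes can be analyzed jointly, frame by frame.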
2019 — 2024 |
Serre, Thomas |
N/A |
Collaborative Research: Origins of Southeast Asian Rainforests from Paleobotany and Machine Learning
Fossil leaves are the most abundant record of ancient plant life, and millions of specimens are contained in museum collections around the world, with more discoveries every year. Nevertheless, leaf fossils alone currently provide limited information about the evolution of regional and global plant communities because individual leaf characteristics from a single plant species can vary widely, and detailed, time-consuming examination of each leaf fossil might still not connect it to its true biological family. This project addresses the problem in two ways. First will be the development of the Virtual Paleobotany Assistant (VPA), an artificial intelligence tool that will use machine learning techniques to rapidly analyze leaf characteristics to assign individual fossils to plant families and orders. The VPA, together with more traditional methods of paleobotany, will then be used to interpret the origins of the incredibly diverse tropical rainforests that now exist in Southeast Asia. These plant communities evolved during times of major continental movements and have connections to the former supercontinent of Gondwana, the Indian subcontinent, and Eurasia. Ascertaining the evolutionary and biogeographic pathways that led to the assembly of these tropical forests will help in preserving this important natural resource as the regional human population burgeons. The VPA will be made freely available on the internet and mobile platforms, enabling paleobotanists around the world to make discoveries far beyond this project. The unique collaboration between paleontologists and machine-learning experts will create extremely fertile ground for interdisciplinary advances, while catalyzing new international partnerships and student opportunities.
The project addresses two of the most difficult challenges in paleobotany: fossil leaf identification and the fossil history of Southeast Asian (Malesian) rainforests. Decoding the biological affinities of leaf fossils is central to improving knowledge of plant evolution, biogeography, and paleoclimate. This project will use deep learning on image databases of extant and fossil leaves to develop the first application (the Virtual Paleobotany Assistant, VPA) for computer-assisted identification of leaf fossils to plant families and orders. The living floras of Southeast Asia are composed of a stunningly complex juxtaposition of plant lineages that diversified after arriving from disparate sources, including Gondwana (fossils to be studied in Patagonia and Australia), the Indian Plate (India and Pakistan), and Eurasia (South China, Indochina, Malay Archipelago). However, the diverse biogeographic components remain poorly understood due to limited paleobotanical data in many of the source areas. Many widely cited hypotheses are weakly corroborated by fossils; paleobotany and machine vision will be coordinated to reveal the identities of fossil plants, correlate them to the geologic time scale, and re-interpret Malesia's floristic history. The influx of new paleobotanical data will test fundamental hypotheses about the relative contributions of different source areas to Southeast Asian rainforest floras.
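For concreteness, a hedged sketch of the general approach is given below; the dataset path, backbone, labels, and hyperparameters are illustrative assumptions rather than the VPA's actual design.

```python
# Hedged sketch of a leaf-to-family classifier (assumed layout and settings,
# not the VPA pipeline): fine-tune a standard CNN backbone to map leaf images
# to plant families.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Assumed layout: leaf_images/<family_name>/<image>.jpg (hypothetical path)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = ImageFolder("leaf_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

backbone = models.resnet18()                      # load pretrained weights in practice
backbone.fc = nn.Linear(backbone.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

backbone.train()
for images, family_idx in loader:                 # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(backbone(images), family_idx)
    loss.backward()
    optimizer.step()
```

A network trained on extant leaves in this way could then be applied to fossil images, with its confidence scores guiding expert review rather than replacing it.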
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2022 |
Serre, Thomas |
N/A |
CRCNS US-France Research Proposal: Oscillatory Processes for Visual Reasoning in Deep Neural Networks
The development of deep convolutional networks (DCNs) has recently led to great successes in machine vision. Despite these successes, to date the most impressive results have been obtained for image categorization tasks, such as indicating whether an image contains a particular object. However, DCNs' ability to solve more complex visual reasoning problems, such as understanding the visual relations between objects, remains limited. Much work in computer vision is currently being devoted to extending DCNs, but these models are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to cortical circuits. The challenge is to identify which neuronal mechanisms are relevant and to find suitable abstractions to model them. One promising set of candidates is the neural oscillations that are found throughout the brain. This project seeks to identify the key oscillatory components and characterize the neural computations underlying humans' ability to solve visual reasoning tasks, and to use similar strategies in modern deep learning architectures.
This project will use existing computational models to develop tasks and stimuli for EEG studies designed to identify the key oscillatory components underlying human visual reasoning ability. The analysis of these EEG data will be guided by the development of a biophysically realistic computational neuroscience model. This will inform the development of hypotheses about the circuit mechanisms underlying the oscillatory clusters and relate these mechanisms to neural computations. Finally, the project will develop novel machine learning idealizations of these neural computations, which are trainable with current deep learning methods but still interpretable at the neural circuit level. In particular, the project will further develop an initial machine learning formulation of oscillations based on complex-valued neuronal units, extending the approach and demonstrating its ability to qualitatively capture key oscillatory processes underlying visual reasoning.
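As a toy illustration of what a complex-valued formulation of oscillations can look like (an assumption-laden sketch, not the project's model), the snippet below treats the phase of complex-valued units as an explicit oscillatory variable and reads out the population's phase synchrony.

```python
# Hypothetical sketch: a complex-valued recurrent population in which each
# unit's magnitude plays the role of firing amplitude and its phase plays
# the role of an oscillation.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 8, 50
omega = 0.3                                   # intrinsic rotation per step (the "oscillation")
W = 0.1 * (rng.standard_normal((n_units, n_units))
           + 1j * rng.standard_normal((n_units, n_units)))

z = np.zeros(n_units, dtype=complex)          # |z| = amplitude, angle(z) = phase
for _ in range(n_steps):
    u = W @ z + rng.standard_normal(n_units)  # recurrent drive + stand-in visual input
    z = np.exp(1j * omega) * np.tanh(np.abs(u)) * np.exp(1j * np.angle(u))

synchrony = np.abs(z.sum()) / (np.abs(z).sum() + 1e-9)   # 1 = perfectly phase-aligned units
print(f"population phase synchrony: {synchrony:.2f}")
```

Because the state is complex-valued, synchronization and desynchronization become explicit, differentiable quantities that a trainable network can in principle learn to exploit.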
A companion project is being funded by the French National Research Agency (ANR).
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 |
Fallon, Justin R.; Serre, Thomas |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Next Generation Machine Vision for Automated Behavioral Phenotyping of Knock-in ALS-FTD Mouse Models
Project Summary
Amyotrophic lateral sclerosis (ALS) and Frontotemporal Dementia (FTD) are devastating neurodegenerative disorders that lie on a genetic and mechanistic continuum. ALS is a disease of motor neurons that is almost uniformly lethal within only 3-5 years of diagnosis. FTD is a heterogeneous, rapidly progressing syndrome that is among the top three causes of presenile dementia. About 10% of ALS cases are caused by dominantly transmitted gene defects. SOD1 and FUS mutations cause aggressive motor neuron pathology, while TDP43 mutations cause ALS-FTD. Further, wild-type FUS and TDP43 are components of abnormal inclusions in many FTD cases, suggesting a mechanistic link between these disorders. Early phenotypes are of particular interest because they could lead to targeted interventions aimed at the root cause of the disorder that could stem the currently inexorable disease progression. Elucidating such early, potentially shared characteristics of these disorders should be greatly aided by: 1) knock-in animal models expressing familial ALS-FTD genes; and 2) sensitive, rigorous, and objective behavioral phenotyping methods to analyze and compare models generated in different laboratories. In published work, the co-PIs applied their first-generation, machine vision-based automated phenotyping method, ACBM "1.0" (automated continuous behavioral monitoring), to detect and quantify the earliest-observed phenotypes in Tdp43Q331K knock-in mice. This method entails continuous video recording for 5 days to generate >14 million frames/mouse. These videos are then scored by a trained computer vision system. In addition to its sensitivity, objectivity, and reproducibility, a major advantage of this method is the ability to acquire and archive video recordings and to analyze the data at sites remote from those of acquisition, including the cloud. We will use Google Cloud TPUs, supercomputers designed from the ground up to accelerate cutting-edge machine learning workloads, with a special focus on deep learning. We will analyze these data using Bayesian hierarchical spline models that describe the different mouse behaviors along the circadian rhythm. The current proposal has two main goals: 1) use deep learning to refine and apply a next-generation ACBM ("2.0") that will allow for more sensitive, expansive, and robust automated behavioral phenotyping of four novel knock-in models along with the well-characterized SOD1G93A transgenic mouse; and 2) establish and validate procedures to enable remote acquisition of video recording data with cloud-based analysis. Our vision is to establish sensitive, robust, objective, and open-source machine vision-based behavioral analysis tools that will be widely available to researchers in the field. Since all the computer-annotated video data is standardized in ACBM 2.0 and will be archived, we envision a searchable "behavioral database" that can be freely mined and analyzed. Such tools are critical to accelerate the development of novel and effective therapeutics for ALS-FTD.
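For illustration, the simplified sketch below (frame rate, behavior labels, and data layout are assumptions) shows the kind of aggregation that turns frame-by-frame labels from the vision system into circadian behavior profiles, the summaries to which hierarchical spline models would then be fit.

```python
# Simplified, hypothetical sketch of the downstream analysis step (the actual
# pipeline uses Bayesian hierarchical spline models): collapse per-frame
# behavior labels from 5 days of continuous video into hourly fractions
# across the circadian cycle.
import numpy as np

FPS = 30                                      # assumed camera frame rate
BEHAVIORS = ["eat", "drink", "groom", "hang", "rear", "rest", "walk"]  # illustrative

# Stand-in for 5 days of per-frame labels produced by the trained vision system.
n_frames = 5 * 24 * 3600 * FPS
labels = np.random.default_rng(0).integers(len(BEHAVIORS), size=n_frames)

hour_of_day = (np.arange(n_frames) // (3600 * FPS)) % 24
profile = np.zeros((24, len(BEHAVIORS)))      # fraction of each hour spent per behavior
for h in range(24):
    in_hour = labels[hour_of_day == h]
    profile[h] = np.bincount(in_hour, minlength=len(BEHAVIORS)) / in_hour.size

print(profile.shape)   # (24 circadian hours, behaviors), ready for spline fitting
```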
2021 |
Frank, Michael J.; Rasmussen, Steven A.; Serre, Thomas |
T32 Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas. |
Brown Postdoctoral Training Program in Computational Psychiatry
The goal of understanding psychiatric disorders and advancing psychiatric treatments requires basic knowledge not only of which environmental, genetic, and epigenetic factors underlie function and dysfunction, but also of how these factors alter the circuit-level computations that are the proximal neural events driving behavior. Research in this area holds the promise of linking core computations of neural circuits to complex human behavior, with the ultimate goal of developing comprehensive, multilevel, transdiagnostic models of neuropsychiatric disorders. Consequently, the emerging field of computational psychiatry is central to the NIMH mission. Despite its importance, there are very few opportunities to pursue training in this area. The proposed training program therefore seeks to take recent PhDs with strong backgrounds in fields such as neuroscience, engineering, applied math, and computer science and provide them with the tools to make important contributions to the nascent field of computational psychiatry. The proposed Training Program in Computational Psychiatry (TPCP) will take place at Brown University, where there is a critical mass of basic researchers on the main campus and clinical researchers in the Department of Psychiatry and Human Behavior to conduct such a training program. We propose enrolling six fellows (3 per year) in the TPCP, with the goal of training, more efficiently and effectively, nonclinical research fellows capable of collaborating with clinical researchers to advance knowledge of psychiatric disorders and treatments. The program embraces an apprenticeship model in which each fellow works with a primary research trainer in a computational field and a secondary research mentor in clinical psychiatry. The trainer works closely with the fellow and the secondary clinical psychiatry mentor, who conducts research in areas such as neuroimaging, neurostimulation, and digital phenotyping. These research areas are especially conducive to addressing important issues in computational psychiatry, whether model/theory-driven or data-driven. The proposed didactic program will include both core seminars and an individualized curriculum of fellow-selected courses in neuroscience, computer science, engineering, applied mathematics, or psychiatric disorders. All fellows will attend core seminars on grant writing, responsible conduct of research, and rigor and reproducibility. The short-term product is an NIH grant application on a computational psychiatry topic. The long-term goal is to produce a new cohort of academics who can conduct research in computational psychiatry and train the next generation of graduate students in this emerging field of inquiry.