2013 — 2016
Pasupathy, Anitha (co-PI); Bair, Wyeth
US-German Collaboration: Circuit Models of Form Processing in Primate V4 @ University of Washington
This collaborative study aims to advance the understanding of visual object recognition by combining electrophysiology, computational modeling, and psychophysics to probe the implications of newly discovered properties of neurons in visual cortical area V4, an important intermediate stage in the shape-processing pathway of the brain.
V4 neurons respond selectively to a variety of shape attributes, but recent studies demonstrate that they are also selective for the contrast polarity of stimuli and can be broadly classified into four categories based on their preference for the luminance contrast of shapes relative to a uniform background. Specifically, Equiluminance cells respond best to stimuli defined purely by chromatic contrast with no luminance contrast, while Bright, Dark and Contrast cells respond best to positive contrasts, negative contrasts, or either, respectively. Because these categories are based on simple stimuli, it remains unknown how these cells respond to more naturalistic stimuli, where boundaries are seldom defined by a fixed luminance contrast, and whether the different cell classes have distinct functional roles in encoding objects. Characterizing V4 neurons with a parameterized set of naturalistic stimuli, developed with rigorous psychophysical testing, will provide novel insights into the underlying circuitry and function and deepen understanding of V4 and the ventral stream.
Computational models of visual form processing in the brain have also been limited: they have not taken account of realistic physiological cell types known to exist from the retina to the visual cortex, they have been largely aimed at processing achromatic signals, and they have relied heavily on feedforward processing. This study will generate models that overcome these limitations and will be invaluable for gaining insights into the circuits and mechanisms underlying form processing. These models will be available in an open, online framework designed to set a standard for ease of use and transparency, to spur further collaboration between theoreticians and experimentalists, and to facilitate education.
Finally, there has been a longstanding debate in vision science, motivated by the psychophysical literature, over whether and how chromatic signals contribute to form processing. The traditional view has been that boundary detection and segmentation are based solely on luminance contrast; color then paints a surface within the confines of the identified boundary. Recent psychophysical and theoretical studies are at odds with this view and argue that color is important for segmentation and form processing in natural scenes, for example, fruit amidst leaves, where detection based on luminance contrast is very difficult. The experiments in this study will inject much-needed physiological data into this debate, and the models developed here will shed light on the functional organization of cortical pathways at multiple stages, revealing how different aspects of our natural visual input contribute to form perception.
This award is being co-funded by NSF's Office of the Director, International Science and Engineering. A companion project is being funded by the German Ministry of Education and Research (BMBF).
2015 — 2016
Bair, Wyeth |
CRCNS 2015 PI Meeting @ University of Washington
The PIs and Co-PIs of grants supported through the NSF-NIH-ANR-BMBF-BSF Collaborative Research in Computational Neuroscience (CRCNS) program meet annually. This eleventh meeting of CRCNS investigators brings together a broad spectrum of computational neuroscience researchers supported by the program, and includes poster presentations, talks, plenary lectures, and workshops. The meeting is scheduled for September 28-30, 2015 and is hosted by the University of Washington.
2016 — 2017
Bair, Wyeth Daniel; Pasupathy, Anitha (co-PI)
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.)
2-Photon Imaging in Visual Cortex of Awake Monkey @ University of Washington
DESCRIPTION (provided by applicant): Two-photon calcium imaging (2PCI) allows the simultaneous visualization and physiological characterization of hundreds of neurons within a small patch of cortex with single-neuron resolution. With the advent of genetically encoded Ca2+ indicators (GECIs), it is now possible to identify and track neurons for extended periods, up to a month or longer. This opens up new opportunities for the thorough characterization of neurons and for longitudinal studies that examine the neural bases of learning. While awake 2PCI is a well-established technique in smaller animals, its development is still in its infancy in the primate. Our goal is to successfully implement 2PCI in the awake behaving macaque and establish it as a powerful tool in the arsenal of the primate systems neuroscientist. This proposal addresses the two major challenges to successful two-photon imaging in the awake macaque. First, an imaging chamber with a clear, transparent window and a suitable interface to the 2P microscope objective must be implanted and maintained free of tissue growth for a prolonged period in the awake animal. Second, a protocol for the reliable expression of a GECI must be developed for the macaque. We will work independently on each of these challenges in Aims 1 and 2 and will combine the resulting technologies in Aim 3. In Aim 1, we will implant a custom-designed low-profile chamber, perform a craniotomy and durotomy in a bloodless surgery, and implant an artificial dura. We will then refine the technique to maintain the chamber over months by removing neomembrane regrowth in a delicate bloodless procedure. We will express GFP within the chamber and assess the quality of 2P imaging over the course of months. We will also implement hardware and software strategies for image stabilization and alignment, both to correct for motion artifacts within a session and to identify matched neurons across sessions.
In Aim 2, we will identify the appropriate AAV capsid serotypes, promoters, and injection procedures to express GCaMP6 by performing injections with a variety of parameters and assessing expression in postmortem tissue. We will also assess the stability of the GCaMP signal over time, its signal-to-noise ratio, its linearity, its toxicity to cells, and the toxicity of the laser to the cells in anesthetized animals. Finally, in Aim 3 we will conduct 2P experiments in the awake animal with the GECI expressed using the procedures refined in Aim 2. We will quantify the tuning of V1 neurons for basic physiological parameters (orientation, spatial frequency, etc.) across weeks and months to determine whether the signals are sufficiently stable to allow long-term studies of neurons. Our experiments will provide the first detailed evaluation of whether 2P imaging in the awake macaque can serve as a powerful tool for longitudinal studies and visualization of neurons, and our results will provide a recipe for implementation of this technique that can be easily transplanted to other labs. Ultimately, this technique can help us understand how networks of neurons underlie learning and complex behavior, and it can aid in devising strategies to alleviate brain disorders in which these capacities are impaired.
2017 — 2021
Bair, Wyeth Daniel; Huk, Alexander C; Kohn, Adam (co-PI)
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Cortical Computations Underlying Binocular Motion Integration @ University of Washington
PROJECT SUMMARY / ABSTRACT Neuroscience is highly specialized: even visual submodalities such as motion, depth, form and color processing are often studied in isolation. One disadvantage of this isolation is that results from each subfield are not brought together to constrain common underlying neural circuitry. Yet, to understand the cortical computations that support vision, it is important to unify our fragmentary models that capture isolated insights across visual submodalities so that all relevant experimental and theoretical efforts can benefit from the most powerful and robust models that can be achieved. This proposal aims to take the first concrete step in that direction by unifying models of direction selectivity, binocular disparity selectivity and 3D motion selectivity (also known as motion-in-depth) to reveal circuits and understand computations from V1 to area MT. Motion in 3D inherently bridges visual submodalities, necessitating the integration of motion and binocular processing, and we are motivated by two recent paradigm-breaking physiological studies that have shown that area MT has a robust representation of 3D motion. In Aim 1, we will create the first unified model and understanding of the relationship between pattern and 3D motion in MT. In Aim 2, we will construct the first unified model of motion and disparity processing in MT. In Aim 3, we will develop a large-scale biologically plausible model of these selectivities that represents realistic response distributions across an MT population. Having a population output that is complete enough to represent widely-used visual stimuli will amplify our ability to link to population read-out theories and to link to results from psychophysical studies of visual perception.
Key elements of our approach are (1) an iterative loop between modeling and electrophysiological experiments; (2) building a set of shared models, stimuli, data and analysis tools in a cloud-based system that unifies efforts across labs, creating opportunities for deep collaboration between labs that specialize in relevant submodalities, and encouraging all interested scientists to contribute and benefit; (3) using model-driven experiments to answer open, inter-related questions that involve motion and binocular processing, including motion opponency, spatial integration, binocular integration and the timely problem of how 3D motion is represented in area MT; (4) unifying insights from filter-based models and conceptual, i.e., non-image-computable, models to generate the first large-scale spiking hierarchical circuits that predict and explain how correlated signals and noise are transformed across multiple cortical stages to carry out essential visual computations; and (5) carrying out novel simultaneous recordings across visual areas. This research also has potential long-term benefits in medicine and technology. It will build fundamental knowledge about functional cortical circuitry that someday may be useful for interpreting dysfunctions of the cortex or for helping biomedical engineers construct devices to interface to the brain. Insights gained from the visual cortex may also help to advance computer vision technology.
2018 — 2021
Bair, Wyeth Daniel; Pasupathy, Anitha
T32 Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas.
Vision Training Grant @ University of Washington
ABSTRACT This is an application for continued funding for the Vision Training Grant (VTG) at the University of Washington (UW), which includes 27 preceptors in six Departments and four Interdepartmental Programs. Our goal is to train the next generation of independent vision scientists to communicate and access techniques broadly across vision sub-fields, and to appreciate the links between fundamental and clinical research with the aim of developing treatments for diseases of the visual system. Funds are requested to provide training for four pre- and two post-doctoral trainees in the effective communication of scientific principles to a broad audience and in grant-writing as a pathway to independence, and to provide exposure to a broad range of topics and techniques in vision research, ranging from individual proteins and molecules to systems-level neuroscience and cognition. Trainees are required to attend a weekly journal club where, each week, preceptors discuss an influential paper in their sub-field. Trainees will also participate in monthly lunches where a wide variety of topics, including ethics and alternative career options, will be discussed. Trainees will present their work twice each year: once as a talk at an annual symposium for VTG trainees and once as a poster at the "Gained in Translation" Vision symposium. They will also attend VTG seminars, participate in VTG lunches with the visiting speaker, prepare an independent grant for submission, and participate in training for the responsible conduct of research. Postdoctoral trainees are also required to participate in "Hit the Ground Running", a mentoring program for postdocs. Trainees will also receive mentorship related to the successful completion of research projects and options for alternative careers. Predoctoral trainees will be supported for two years and postdoctoral trainees for one year.
The Vision Training Grant is currently the only source of support for pre- and post-doctoral trainees who want to commit to biomedical research in Vision and Ophthalmology.