2007 — 2011 |
Pasupathy, Anitha |
P51 Activity Code Description: To support centers which include a multidisciplinary and multi-categorical core research program using primate animals and to maintain a large and varied primate colony which is available to affiliated, collaborative, and visiting investigators for basic and applied biomedical research and training. |
Neural Basis of Shape Representation and Recognition @ University of Washington
This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator.

The goal of research in my laboratory is to understand how visual stimuli are encoded in the neural activity patterns of the visual cortex and how this representation underlies object recognition. In one set of experiments we are investigating how partially occluded objects are encoded. In everyday situations objects are often partially occluded, but our visual system recognizes them robustly and efficiently despite the retinal image distortions that occlusion produces. To discover how this is achieved in the primate brain, we have conducted experiments in two fixating monkeys to study activity in area V4, comparing representations when an object is isolated versus when it adjoins a contextual stimulus that suggests partial occlusion. Our results indicate that the responses of V4 neurons more closely resemble the perceived stimulus, i.e., V4 neurons do not encode contours that are likely to arise in the retinal image as a result of partial occlusion. In a second experiment, we have trained an animal to perform a shape-matching task in the presence of occluders. We will start recording from this animal in a few weeks to study the activity of V4 neurons as it performs the task. Correlating neural activity with the animal's behavior on a single-trial basis will provide important insights into how V4 activity is used for recognizing objects. Finally, we have found a new class of cells in V4 that respond preferentially to colored stimuli at isoluminance; when the contrast between the stimulus and the background increases, these neurons are suppressed. We are currently investigating how these neurons contribute to color perception.
|
1 |
2009 — 2013 |
Pasupathy, Anitha |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Role of Area V4 in the Perception of Partially Occluded Objects @ University of Washington
Description (provided by applicant): The human visual system rapidly, accurately, and seemingly effortlessly recognizes objects that are partially occluded. The long-term goal of research in my laboratory is to determine how this is achieved by the primate visual system. Previous research has demonstrated that visual information reaching our eyes is processed along the multi-stage ventral "shape processing" pathway. We will investigate the contributions of area V4, an intermediate stage in this pathway, to the processing of partial occlusion. When the three-dimensional world casts a two-dimensional image on the retina, objects that are closer to the viewer partially or completely occlude objects that are farther away. This causes two types of distortions in the retinal image. First, partial occlusion produces "accidental" contour features due to the accidental juxtaposition of the bounding contours of the occluded and occluding objects. Second, parts of the occluded object are missing and may even be fragmented in the retinal image. To recognize the occluded object despite these distortions, the visual system needs to discount the accidental contour features and then sew together the fragmented parts by amodally completing the missing contours. Psychological and theoretical evidence suggests that analysis of image features at the intersecting junctions of the occluded and occluding contours (T-like junctions) in the early stages of visual processing underlies the processing of occlusion, but the neural mechanisms are unknown. Evidence from lesion and neurophysiological studies suggests that area V4 is likely to play an important role. A competing hypothesis proposes that occlusion is inferred as a result of robust recognition of objects from their fragmented parts at the highest stages of processing, such as inferotemporal cortex. The two hypotheses make distinct predictions about the patterns of responses in area V4.
We will conduct single-cell recordings of V4 neurons in awake primates performing fixation and behavioral tasks. In Aim 1, we will investigate whether V4 responses support differential processing of real and accidental contour features. In Aim 2, we will investigate whether amodal completion signals in area V4 appear before or after accurate recognition of the partially occluded object. Results from these experiments will determine which of the above hypotheses is supported in the primate brain. They will also identify V4 neural mechanisms that contribute to inference about partial occlusion. Object recognition is impaired in visual agnosia, a dysfunction of the occipitotemporal pathway. Results from the proposed experiments will constitute a major advance in our understanding of the brain computations that underlie object recognition and will bring us closer to devising strategies to alleviate and treat this brain disorder.

PUBLIC HEALTH RELEVANCE: Object recognition is a fundamental capacity of the human brain, essential for our interaction with others and for all complex behavior in general. This fundamental brain function is impaired in visual agnosia, a dysfunction of the occipitotemporal pathway. Results from the proposed experiments will constitute a major advance in our understanding of the brain computations that underlie object recognition and will bring us closer to devising strategies to alleviate and treat this brain disorder.
|
1 |
2013 — 2016 |
Pasupathy, Anitha; Bair, Wyeth |
N/A Activity Code Description: No activity code was retrieved; see the grant record for more information. |
Us-German Collaboration: Circuit Models of Form Processing in Primate V4 @ University of Washington
This collaborative study aims to advance the understanding of visual object recognition by combining electrophysiology, computational modeling, and psychophysics to probe the implications of newly discovered properties of neurons in visual cortical area V4, an important intermediate stage in the shape-processing pathway of the brain.
V4 neurons respond selectively to a variety of shape attributes, but recent studies demonstrate that they are also selective for the contrast polarity of stimuli and can be broadly classified into four categories based on their preference for the luminance contrast of shapes relative to a uniform background. Specifically, Equiluminance cells respond best to stimuli defined purely by a chromatic contrast with no luminance contrast while Bright, Dark and Contrast cells respond best to positive contrasts, negative contrasts or either, respectively. Because these categories are based on simple stimuli, it remains unknown how these cells respond to more naturalistic stimuli, where boundaries are seldom defined by a fixed luminance contrast, and whether the different cell classes have different functional roles for encoding objects. Characterizing V4 neurons with a parameterized set of naturalistic stimuli that are developed with rigorous psychophysical testing will provide novel insights into underlying circuitry and function and open new understanding about V4 and the ventral stream.
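The four-way classification described above can be sketched as a simple decision rule on a cell's mean responses to stimuli of each contrast polarity. The response measures and the 2x preference ratio below are illustrative assumptions, not the criteria used in the original studies.

```python
def classify_contrast_preference(r_bright, r_dark, r_equilum, ratio=2.0):
    """Assign a cell to one of the four contrast classes from its mean
    responses to shapes brighter than the background (r_bright), darker
    than the background (r_dark), and isoluminant chromatic shapes
    (r_equilum). The 2x preference ratio is a hypothetical criterion."""
    # Equiluminance cells: strongest response to purely chromatic contrast.
    if r_equilum >= ratio * max(r_bright, r_dark):
        return "Equiluminance"
    # Bright / Dark cells: strong preference for one contrast polarity.
    if r_bright >= ratio * r_dark:
        return "Bright"
    if r_dark >= ratio * r_bright:
        return "Dark"
    # Contrast cells: respond to either polarity.
    return "Contrast"
```

In practice such a rule would be applied to trial-averaged firing rates, with a statistical test rather than a fixed ratio deciding the boundary cases.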
Computational models of visual form processing in the brain have also been limited: they have not taken account of realistic physiological cell types known to exist from the retina to the visual cortex, they have been aimed largely at processing achromatic signals, and they have relied heavily on feedforward processing. This study will generate models that overcome these limitations and will be invaluable for gaining insight into the circuits and mechanisms underlying form processing. These models will be available in an open, online framework designed to set a standard for ease of use and transparency, to spur further collaboration between theoreticians and experimentalists, and to facilitate education.
Finally, there has been a longstanding debate in vision science, motivated from the psychophysical literature, that questions whether and how chromatic signals contribute to form processing. The traditional view has been that boundary detection and segmentation are solely based on luminance contrast. Color then paints a surface within the confines of the identified boundary. Recent psychophysical and theoretical studies are at odds with this view and argue that color is important for segmentation and form processing in natural scenes, for example, fruit amidst leaves, where detection based on luminance contrast is very difficult. The experiments in this study will inject much needed physiology data into this debate and the models developed here will shed light on the functional organization of cortical pathways at multiple stages, revealing how different aspects of our natural visual input contribute to form perception.
This award is being co-funded by NSF's Office of the Director, International Science and Engineering. A companion project is being funded by the German Ministry of Education and Research (BMBF).
|
1 |
2015 — 2018 |
Pasupathy, Anitha |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Role of Area V4 in the Perception and Recognition of Visual Objects @ University of Washington
DESCRIPTION (provided by applicant): The human visual system parses the information that reaches our eyes into a meaningful arrangement of regions and objects. This process, called image segmentation, is one of the most challenging computations accomplished by the primate brain. To discover its neural basis we will study neuronal processes in two brain areas in the macaque monkey: V4, a fundamental stage of form processing along the occipito-temporal pathway, and the prefrontal cortex (PFC), which is important for executive control. Dysfunction of either area impairs shape discrimination in displays that require the identification of segmented objects, strongly suggesting that both are important for image segmentation. Our experimental techniques will include single- and multi-electrode recordings, behavioral manipulations, perturbation methods, and computer models. In Aim 1, we will identify the neural signals that reflect segmentation in visual cortex. Using a variety of parametric stimuli with occlusion, clutter, and shadows (stimulus features known to challenge segmentation in natural vision), we will evaluate whether segmentation is achieved by grouping regions with similar surface properties, such as color, texture, and depth; by grouping contour segments that are likely to form the boundary of an object; or by some interplay between these two strategies. We will test the hypothesis that contour-grouping mechanisms are most effective under low clutter and close to the fovea. In Aim 2, we will investigate how feedback from PFC modulates shape responses in V4 and facilitates segmentation: we will test the longstanding hypothesis that object recognition in higher cortical stages precedes and facilitates segmentation in the mid-levels of visual form processing. We will simultaneously study populations of V4 and PFC neurons while animals engage in shape discrimination behavior.
We will use single-trial decoding methods and correlation analyses to relate the content and timing of neuronal responses in the two areas. To causally test the role of feedback from PFC, we will reversibly inactivate PFC by cooling and study V4 neurons. Our results will provide the first detailed, analytical models of V4 neuronal response dynamics in the presence of occlusion and clutter and advance our understanding of how complex visual scenes are processed in area V4. They will also reveal how V4 and PFC together mediate performance on a complex shape discrimination task, how executive function and midlevel vision may be coordinated during behavior and how feedback is used in cortical computation. Object recognition is impaired in visual agnosia, a dysfunction of the occipito-temporal pathway, and in dysfunctions of the PFC (e.g. schizophrenia). Results from these experiments will constitute a major advance in our understanding of the brain computations that underlie segmentation and object recognition and will bring us closer to devising strategies to alleviate and treat brain disorders in which these capacities are impaired.
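The single-trial decoding idea described above can be sketched minimally, with simulated Poisson spike counts standing in for real V4/PFC recordings; the population size, firing rates, and the nearest-centroid decoder are all illustrative choices, not the study's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts (trials x neurons) for two shape conditions.
# Population size and rates are illustrative, not values from the study.
n_trials, n_neurons = 100, 40
rates_a = rng.uniform(5.0, 20.0, n_neurons)
rates_b = rates_a + rng.normal(0.0, 3.0, n_neurons)  # per-neuron tuning shift
rates_b = np.clip(rates_b, 0.1, None)                # Poisson rates must be positive
counts = np.vstack([rng.poisson(rates_a, (n_trials, n_neurons)),
                    rng.poisson(rates_b, (n_trials, n_neurons))])
labels = np.repeat([0, 1], n_trials)

def loo_nearest_centroid(counts, labels):
    """Leave-one-out nearest-centroid decoding of the condition label
    from single-trial population responses."""
    n_correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False
        c0 = counts[train & (labels == 0)].mean(axis=0)
        c1 = counts[train & (labels == 1)].mean(axis=0)
        pred = int(np.linalg.norm(counts[i] - c1) < np.linalg.norm(counts[i] - c0))
        n_correct += int(pred == labels[i])
    return n_correct / len(labels)

accuracy = loo_nearest_centroid(counts, labels)
```

Trial-by-trial decoder output from two areas, aligned in time, is what allows the content and timing comparisons described in the text.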
|
1 |
2016 — 2017 |
Bair, Wyeth Daniel; Pasupathy, Anitha |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
2 Photon Imaging in Visual Cortex of Awake Monkey @ University of Washington
DESCRIPTION (provided by applicant): Two-photon calcium imaging (2PCI) allows the simultaneous visualization and physiological characterization of hundreds of neurons within a small patch of cortex at single-neuron resolution. With the advent of genetically encoded Ca2+ indicators (GECIs), it is now possible to identify and track neurons for extended periods, up to a month or longer. This opens up new opportunities for the thorough characterization of neurons and for longitudinal studies that examine the neural bases of learning. While awake 2PCI is a well-established technique in smaller animals, its development is still in its infancy in the primate. Our goal is to implement 2PCI in the awake behaving macaque and establish it as a powerful tool in the arsenal of the primate systems neuroscientist. This proposal addresses the two major challenges to successful two-photon imaging in the awake macaque. First, an imaging chamber with a transparent window and a suitable interface to the 2P microscope objective must be implanted and maintained free of tissue growth for a prolonged period in the awake animal. Second, a protocol for the reliable expression of a GECI must be developed for the macaque. We will work independently on each of these challenges in Aims 1 and 2 and will combine the resulting technologies in Aim 3. In Aim 1, we will implant a custom-designed low-profile chamber, perform a craniotomy and durotomy in a bloodless surgery, and implant an artificial dura. We will then refine the technique to maintain the chamber over months by removing neomembrane regrowth in a delicate bloodless procedure. We will express GFP within the chamber and assess the quality of 2P imaging over the course of months. We will also implement hardware and software strategies for image stabilization and alignment, both within sessions to correct for motion artifacts and across sessions to identify matched neurons.
In Aim 2, we will identify the appropriate AAV capsid serotypes, promoters, and injection procedures to express GCaMP6 by performing injections with a variety of parameters and assessing expression in postmortem tissue. We will also assess the stability of the GCaMP signal over time, its signal-to-noise ratio, its linearity, its toxicity to cells, and the toxicity of the laser to the cells in anesthetized animals. Finally, in Aim 3 we will conduct 2P experiments in the awake animal with the GECI expressed using the procedures refined in Aim 2. We will quantify the tuning of V1 neurons for basic physiological parameters (orientation, spatial frequency, etc.) across weeks and months to determine whether the signals are sufficiently stable to allow long-term studies of neurons. Our experiments will provide the first detailed evaluation of whether 2P imaging in the awake macaque can serve as a powerful tool for longitudinal studies and visualization of neurons, and our results will provide a recipe for implementing this technique that can be easily transplanted to other labs. Ultimately, this technique can help us understand how networks of neurons underlie learning and complex behavior, and it can aid in devising strategies to alleviate brain disorders in which these capacities are impaired.
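Quantifying orientation tuning stability across sessions, as proposed in Aim 3, amounts to estimating each cell's preferred orientation and selectivity from its responses at each orientation. A minimal sketch using standard circular-statistics estimators follows; the response values would come from the imaging pipeline (e.g., mean dF/F per orientation), which is assumed here.

```python
import numpy as np

def preferred_orientation(oris_deg, responses):
    """Response-weighted circular mean of orientation. Orientation is
    periodic over 180 deg, so angles are doubled before averaging."""
    theta = 2.0 * np.deg2rad(oris_deg)
    z = np.sum(responses * np.exp(1j * theta))
    return (np.rad2deg(np.angle(z)) / 2.0) % 180.0

def orientation_selectivity(oris_deg, responses):
    """1 - circular variance: 0 for an untuned cell, 1 for a cell that
    responds at only one orientation."""
    theta = 2.0 * np.deg2rad(oris_deg)
    return np.abs(np.sum(responses * np.exp(1j * theta))) / np.sum(responses)
```

A neuron tracked across sessions would be called stable if its preferred orientation estimated this way drifts little from week to week.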
|
1 |
2018 — 2020 |
Pasupathy, Anitha |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
CRCNS: Joint Coding of Shape and Texture in the Primate Brain @ University of Washington
PROJECT DESCRIPTION
Collaborating PIs and Consultant: United States PI: Anitha Pasupathy, Dept. of Biological Structure, University of Washington, Seattle, USA; Co-PI: Wyeth Bair, Dept. of Biological Structure, University of Washington, Seattle, USA; Japan PI: Isamu Motoyoshi, Dept. of Life Sciences, The University of Tokyo, Japan; Consultant: Hidehiko Komatsu, Tamagawa University, Japan.
Specific Aims: Our visual system endows us with a diverse set of abilities: to recognize and manipulate objects, avoid obstacles and danger during navigation, evaluate the quality of food, read text, interpret facial expressions, etc. This relies on the neuronal processing of information about form and material texture along the ventral pathway of the primate visual system (Ungerleider & Mishkin, 1982; Felleman & Van Essen, 1991). Studies over the past several decades have produced detailed models of how visual information is processed in V1, the earliest stage along this pathway (Hubel & Wiesel, 1959, 1968; Movshon et al., 1978a, b; Albrecht et al., 1980), but beyond V1 our understanding of visual processing and representation is limited. This is particularly true with regard to our understanding of how visual representations of form and texture jointly contribute to object perception and recognition. The broad goal of this proposal is twofold: to develop an experimentally driven, image-computable model of how naturalistic visual stimuli are processed in area V4, an important intermediate stage along the ventral visual pathway (Aim 1), and to discover how such a representation contributes to perception (Aim 2). Past studies have shown that V4 neurons are sensitive to both the form (Desimone and Schein, 1987; Kobatake and Tanaka, 1994; Gallant et al., 1993; Pasupathy and Connor, 2001; Nandy et al., 2013) and the surface texture of visual stimuli (Arcizet et al., 2008; Goda et al., 2014; Okazawa et al., 2015).
But because expertise is narrow and experimental time is limited, scientists tend to focus exclusively on the encoding of form or texture and not on their joint coding. For example, in the laboratories of the USA portion of this collaboration, we have until now focused on form processing, carrying out neurophysiological studies using 2D shapes with uniform surface properties to investigate how object boundaries are encoded (Oleskiw et al., 2014; Popovkina et al., 2016). We have modeled our data by comparing the representation of V4 neurons to that of the units in AlexNet (Pospisil et al., 2015), a prominent convolutional neural network (CNN) trained to recognize objects (Krizhevsky et al., 2012). At the same time, the Japanese contingent of this collaboration has investigated the encoding of surface texture and gloss in human perception without associated form encoding (Motoyoshi et al., 2007; Sharan et al., 2008; Motoyoshi, 2010; Motoyoshi & Matoba, 2012). Here we propose to bring our respective expertise in studying form and texture encoding to bear on the question of how naturalistic stimuli with both form and surface cues are encoded in area V4 and how these representations support human visual perception. Our specific aims are:
Aim 1. To build a unified image-computable model for neuronal responses to shapes and textures in area V4. V4 responses to 2D shapes with uniform luminance/chromatic characteristics can be explained by a hierarchical-Max (HMax) model of object recognition that emphasizes boundary features (Cadieu et al., 2007). Such responses can also be explained by units in artificial deep convolutional networks, in which boundary features are not explicitly emphasized (all features are learned from initially random weights). On the other hand, V4 responses to texture patches can be well explained by a higher-order image-statistics-based model (Okazawa et al., 2015).
Using shape data from the Pasupathy lab and texture data from the Komatsu lab (Japanese consultant), we will ask whether responses of V4 neurons to shapes and textures can be
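The model comparison described above, relating V4 responses to units in a CNN such as AlexNet, boils down to correlating a neuron's responses across a stimulus set with each model unit's responses to the same stimuli. A minimal sketch with placeholder response matrices follows; the real analysis would use recorded firing rates and activations extracted from the trained network, which are assumed here.

```python
import numpy as np

def best_matching_unit(neural, model_units):
    """neural: responses of one neuron to n stimuli, shape (n,).
    model_units: responses of m model units to the same n stimuli,
    shape (m, n). Returns (index, r) of the unit whose Pearson
    correlation with the neuron is highest."""
    # z-score neuron and every model unit across stimuli
    zn = (neural - neural.mean()) / neural.std()
    zm = (model_units - model_units.mean(axis=1, keepdims=True)) \
         / model_units.std(axis=1, keepdims=True)
    r = zm @ zn / len(neural)  # Pearson r for every unit at once
    best = int(np.argmax(r))
    return best, float(r[best])
```

Repeating this over all recorded neurons and all layers of the network gives the layer-by-layer correspondence typically reported in such comparisons.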
|
1 |
2018 — 2021 |
Bair, Wyeth Daniel Pasupathy, Anitha |
T32 Activity Code Description: To enable institutions to make National Research Service Awards to individuals selected by them for predoctoral and postdoctoral research training in specified shortage areas. |
Vision Training Grant @ University of Washington
ABSTRACT This is an application for continued funding of the Vision Training Grant (VTG) at the University of Washington (UW), which includes 27 preceptors across six departments and four interdepartmental programs. Our goal is to train the next generation of independent vision scientists to communicate and access techniques broadly across vision sub-fields, and to appreciate the links between fundamental and clinical research with the aim of developing treatments for diseases of the visual system. Funds are requested to provide training for four pre-doctoral and two post-doctoral trainees in the effective communication of scientific principles to a broad audience and in grant-writing as a pathway to independence, and to provide exposure to a broad range of topics and techniques in vision research, from individual proteins and molecules to systems-level neuroscience and cognition. Trainees are required to attend a weekly journal club where, each week, preceptors discuss an influential paper in their sub-field. Trainees will also participate in monthly lunches covering a wide variety of topics, including ethics and alternative career options. Trainees will present their work twice each year: once as a talk at an annual symposium for VTG trainees, and once as a poster at the "Gained in Translation" Vision symposium. They will also attend VTG seminars, participate in VTG lunches with the visiting speaker, prepare an independent grant for submission, and participate in training in the responsible conduct of research. Postdoctoral trainees are also required to participate in "Hit the Ground Running", a mentoring program for postdocs. Trainees will also receive mentorship related to the successful completion of research projects and options for alternative careers. Predoctoral trainees will be supported for two years and postdoctoral trainees for one year.
The Vision Training Grant is currently the only source of support for pre- and post-doctoral trainees who want to commit to biomedical research in Vision and Ophthalmology.
|
1 |
2019 — 2020 |
Pasupathy, Anitha |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
The Role of Area V4 in the Perception and Recognition of Visual Objects @ University of Washington
ABSTRACT In cluttered natural visual environments, object recognition capacity can be severely limited. This may reflect limitations associated with the visual cortical encoding of multiple nearby stimuli. Alternatively, poor object recognition in clutter, which is especially profound in patients with autism, may result from limited resolution of the attentional control and object decoding that rely on interactions between visual cortex and frontal cortex. We do not know what neuronal mechanisms limit object recognition in crowded scenes because neurophysiological studies typically present one or two stimuli at a time, and are thus free from the constraints imposed by clutter in natural scenes. In addition, studies seldom investigate the role of visual-frontal interactions in object recognition. We will use a combination of single-neuron studies in awake monkeys, behavioral manipulations, reversible inactivation, and computational modelling in two mid-level stages of visual cortex, V2 and V4, and the prefrontal cortex (PFC), to determine: (Aim 1) how V2 and V4 neurons encode visual stimuli in the presence of clutter, and how the encoding depends on eccentricity and on attentional engagement; (Aim 2) how PFC feedback influences encoding in V2 and V4, and how these brain regions together contribute to shape discrimination in clutter. We will consider three hypotheses. First, visual encoding may have limited resolution in clutter: when many objects are nearby, regardless of what those objects are, the visual system may fail to segment and encode individual objects. Second, processing in mid-level stages may be designed to encode only salient objects, i.e., objects that exhibit sufficient feature contrast relative to neighboring image regions. In this case, loss of information may pertain to objects in homogeneous image regions, reflecting a representational strategy that preferentially encodes objects that stand out.
Third, it is possible that all objects are segmented and encoded faithfully, even in clutter, but that capacity limits are dictated by the resolution of attention or other downstream processes that influence object decoding. Our studies will address a fundamental gap in the understanding of how multi-object displays, which dominate natural vision, are encoded in mid-level visual cortex. They will reveal how encoding strategies vary across eccentricity, which could be relevant for diseases like age-related macular degeneration, in which foveal representations are selectively compromised. Finally, our results will provide fundamental insights into how communication between V4 and PFC is critical for object recognition in clutter and how diminished communication between the two could influence behavior. This could be important for guiding translational work on autism spectrum disorder.
|
1 |
2021 |
Pasupathy, Anitha |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Spatiotemporal Representation in Ventral Visual Pathway @ University of Washington
In natural vision, objects change appearance over time as they translate, rotate, become occluded, or undergo complex transformations, e.g., during biological motion. In these dynamic environments, the visual cortex integrates information over multiple spatial and temporal scales to compute motion trajectories and represent the shape of objects. To understand how form and motion percepts are derived from such dynamic visual input, we will investigate how neuronal responses in area V4, an intermediate stage along the ventral visual pathway, are shaped by the spatiotemporal integration of time-varying visual stimuli. We will test the hypothesis that the spatiotemporal characteristics of V4 neurons are suited to tracking dynamic objects: specifically, that V4 motion signals arise from object tracking over longer spatiotemporal windows than in comparable dorsal-stream areas, and that V4 signals reflect form transformations at the level of the object rather than at the level of the local retinal image. We will leverage the percept of long-range apparent motion to probe the role of V4 neurons in motion perception (Aim 1). When a stimulus intermittently skips across the visual field with large spatial and temporal steps, it induces a strong illusory motion percept, but neurons in V1 and in area MT of the dorsal visual stream are strikingly insensitive to the direction of the perceived motion. Psychophysical studies have argued that long-range apparent motion relies not on the dorsal stream but on higher-order object-tracking processes with large spatiotemporal windows in the ventral visual stream. We will conduct the first neurophysiological investigations in the awake monkey to ascertain the role of V4 in the perception of long-range apparent motion.
Next (Aim 2), we will use dynamic stimuli that rotate in the fronto-parallel plane, and that translate and rotate in depth, to determine whether V4 neurons encode other common dynamic object transformations (beyond long-range translation), and whether the encoding is based on a sequence of static poses, as in the inferotemporal (IT) cortex, or on dynamic transformations. Finally, we will examine the encoding and perception of partially occluded dynamic objects (Aim 3). When an occluded object moves, different parts of the object are revealed over time, and integration across time and across multiple neuronal receptive fields is required to build a representation of the entire object. As animals discriminate moving occluded objects, we will study 50-100 neurons with high-density Neuropixels probes. We will use single-trial population decoding methods to determine how dynamic stimulus information is integrated across the V4 network to extract object shape and motion trajectory, and how V4 contributes to psychophysical behavior. We anticipate that our results will reveal an important role for V4 in the processing of dynamic stimuli that is complementary to those of MT and IT cortex and will establish the level of internal visual representation operating in V4. Our studies will provide a deeper understanding of the neuronal basis of global motion perception and the tracking of dynamic objects, processes that are impaired in aging populations, especially those with Alzheimer's disease.
|
1 |