2004 — 2010 |
Geisler, Wilson (co-PI) Bovik, Alan Cormack, Lawrence (co-PI) Seidemann, Eyal |
N/A | Activity Code Description: No activity code was retrieved. |
ITR: Foundations of Visual Search @ University of Texas at Austin
Project Abstract
This study is directed towards developing flexible, general-purpose Visual Search systems capable of searching for objects in real, cluttered environments. The research will include extensive psychophysical and physiological experiments on humans and primates, and will prototype artificial systems that mimic this behavior. The goals of the study can be conveniently divided into four Aims. Aim 1: Develop and prototype a revolutionary camera gaze control device dubbed Remote High-Speed Active Visual Environment, or RHAVEN. RHAVEN will allow telepresent control of the gaze of a remote camera using eye movements, as rapidly and naturally as if viewing the scene directly. Aim 2: Develop optimal statistical bounds on Visual Search by casting it as a Bayesian problem, yielding maximum a posteriori (MAP) solutions, first, for finding a target in a visual scene using the smallest number of fixations and, second, for selecting the next fixation given the current fixation. Aim 3: Construct models for Visual Search based on Natural Scene Statistics at the point of gaze. Visually important image structures can be inferred by analyzing the statistics of natural scenes sampled by eye movements and fixations. Aim 4: Conduct neurophysiological studies on awake, behaving primates during Visual Search tasks. Search performance will be measured and analyzed in awake, behaving monkeys while recording the responses of neural populations in the brain's frontal eye fields (FEF), which help control saccadic eye movements. Broader Impact: The results of this research should significantly impact numerous National Priorities: Searching Large Visual Databases, Robotic Navigation, Security Imaging, Biomedical Search, Visual Neuroscience, and many others. It is easy to envision scenarios that would benefit from a fundamental theory of Visual Search: for example, searching for suspect faces in airport security systems; examining internet streams for questionable material; semi-automatic search for lesions in mammograms; steering robotic vehicles around obstacles in hostile environs; and navigating huge visual data libraries.
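Aim 2's Bayesian framing lends itself to a compact illustration. The sketch below is a minimal, hypothetical MAP searcher on a one-dimensional grid: detectability falls off with distance from the current fixation, each fixation updates a posterior over candidate target locations, and the next fixation is placed greedily at the posterior peak. The grid size, visibility fall-off, noise model, and stopping criterion are illustrative assumptions, not parameters from the project.

```python
# Minimal sketch of a Bayesian (MAP) searcher on a 1-D grid of candidate
# target locations. All names and parameter values (grid size, visibility
# fall-off, noise model) are illustrative assumptions, not the project's code.
import numpy as np

rng = np.random.default_rng(0)

n_loc = 25                      # candidate target locations on a line
locs = np.arange(n_loc)
target = rng.integers(n_loc)    # true (hidden) target location

def dprime(fixation):
    """Visibility map: detectability falls off with distance from fixation."""
    return 3.0 * np.exp(-np.abs(locs - fixation) / 4.0)

log_post = np.full(n_loc, -np.log(n_loc))   # uniform log prior
fixation = n_loc // 2                        # start at the grid centre

for k in range(20):
    d = dprime(fixation)
    sigma = 1.0 / np.maximum(d, 1e-6)
    # Noisy evidence at every location: mean 1 at the target, 0 elsewhere.
    x = (locs == target).astype(float) + sigma * rng.standard_normal(n_loc)
    # The log likelihood ratio per location reduces to d'^2 * (x - 0.5).
    log_post += d**2 * (x - 0.5)
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()
    # Greedy MAP next-fixation rule: fixate the current posterior peak.
    fixation = int(np.argmax(post))
    if post[fixation] > 0.95:
        break

print(f"true target {target}, reported {fixation} after {k + 1} fixations")
```

A full ideal searcher would instead choose the fixation that maximizes the expected probability of localizing the target after the next fixation; the greedy MAP rule above is a deliberately simplified stand-in.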
2005 — 2021 |
Seidemann, Eyal J |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Linking Neural Population Activity and Visual Perception @ University of Texas, Austin
DESCRIPTION (provided by applicant): A central goal of sensory neuroscience is to understand the neural code in sensory cortical areas to the point where, by monitoring neural responses in a subject engaged in a perceptual task, one could read, in real time, the content of the neural representation and account for the subject's perceptual capabilities. The overarching goal of the proposed research is to understand the nature of the neural code in primate V1. Specifically, we will focus on the neural code in two important perceptual tasks, pattern discrimination and contour grouping. In Aim 1 we will test the hypothesis that, in addition to representing stimuli by the activity of a small subset of highly selective neurons, V1 representation relies on the large-scale pattern of spatial variations in neural population responses across the retinotopic map. Specifically, we recently discovered a novel topographic signal in V1, reflecting the spatial representation of the stimulus's luminance modulations (LM) in V1's retinotopic map. We also documented a similar retinotopic signal reflecting the global pattern of the stimulus's contrast modulations (CM). By monitoring neural responses at the retinotopic and columnar scales as monkeys perform a threshold orientation discrimination task, we will test the hypothesis that the retinotopic LM and CM signals contribute to visual perception. In Aim 2 we will examine the contribution of primate V1 to perceptual grouping - the process of grouping together disparate visual elements that belong to the same object. Specifically, we will test the hypothesis that mechanisms involving configuration-specific lateral interactions in V1 play a key role in this process, and initiate the grouping by linking together pairs of elements that are likely to belong to the same object. To test this hypothesis, we will monitor neural population responses in V1 of monkeys as they perform a challenging and naturalistic perceptual grouping task. In addition, these experiments will be used to determine the rules by which responses to local image elements combine to form a spatiotemporal pattern of population activity in primate V1. Overall, the proposed experiments are likely to provide important and unique insights into the neural mechanisms that mediate cortical sensory processing.
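As a rough illustration of what "reading the content of the neural representation" from a retinotopic map might look like computationally, the sketch below applies a simple linear template decoder to simulated single-trial response maps for two stimulus conditions. The map size, noise level, and Gaussian response templates are illustrative assumptions and are not meant to model the LM or CM signals themselves.

```python
# Toy illustration of reading out a population code from a spatial response
# map: a linear template decoder applied to simulated single-trial maps.
# Map size, noise level, and the two stimulus templates are assumptions only.
import numpy as np

rng = np.random.default_rng(1)
h, w, n_trials = 32, 32, 200

yy, xx = np.mgrid[0:h, 0:w]
template_a = np.exp(-((xx - 12)**2 + (yy - 16)**2) / 50.0)  # condition A map
template_b = np.exp(-((xx - 20)**2 + (yy - 16)**2) / 50.0)  # condition B map

def trials(template):
    """Single-trial response maps: mean template plus additive noise."""
    return template + 0.8 * rng.standard_normal((n_trials, h, w))

a, b = trials(template_a), trials(template_b)

# Template (linear) decoder: project each trial onto the mean difference map.
decoder = (a.mean(0) - b.mean(0)).ravel()
scores_a = a.reshape(n_trials, -1) @ decoder
scores_b = b.reshape(n_trials, -1) @ decoder
criterion = 0.5 * (scores_a.mean() + scores_b.mean())
accuracy = 0.5 * ((scores_a > criterion).mean() + (scores_b < criterion).mean())
print(f"decoder accuracy: {accuracy:.2f}")
```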
2010 — 2013 |
Seidemann, Eyal J |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Linking Neural Population Activity and Visual Perception @ University of Texas, Austin
DESCRIPTION (provided by applicant): The overall goal of the proposed research is to provide a quantitative understanding of the link between neural activity in the primate primary visual cortex (V1) and behavioral performance in visual detection and discrimination tasks. To achieve this goal, monkeys are trained to perform four demanding visual detection and discrimination tasks using small oriented visual stimuli that could appear in isolation or on top of a visual mask. While the monkey performs these tasks, we use voltage-sensitive dye optical imaging in conjunction with electrophysiology, to monitor neural population activity in V1. We then use computational techniques to study the relationships between the visual stimuli, the measured neural responses at multiple spatial scales, and the observed behavioral responses to these stimuli. Our first two aims focus on two fundamental causal relationships between these three variables. In Aim #1 our goal is to determine how visual information regarding the target and the mask is represented, or encoded, by populations of V1 neurons. We address three primary questions: (i) what is the quality of the signals that are provided to the rest of the visual system by V1 responses at multiple spatial scales, (ii) how is this information distributed in V1, and (iii) what is the optimal way to extract this information from V1? To form a decision regarding the target, neural circuits subsequent to V1 must 'read out', or decode, the neural signals provided by populations of V1 neurons. Our goal in Aim #2 is to determine which neurons in V1 contribute to the perceptual decision regarding the target, and how their signals might be pooled to form this decision. Finally, these two fundamental relationships - the encoding of visual information by V1 neurons, and the decoding of V1 responses by subsequent processing stages - may change, depending on the behavioral task. In Aim #3, we vary the task by modulating target uncertainty and target relevance. We then examine if and how top-down mechanisms change the representation of the target in V1 based on the demands of the task. Together, this research will significantly expand our understanding of the way in which information is represented and processed by populations of neurons in the primate cortex.
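The read-out question in Aim #2 (how V1 signals might be pooled to form a decision) can be illustrated with a standard linear-pooling calculation. The sketch below compares the detection sensitivity (d') of optimal linear pooling, which weights the population by the inverse noise covariance, against uniform pooling of the same correlated population. The population size, signal profile, and correlation structure are illustrative assumptions only, not measurements from the project.

```python
# Sketch comparing read-out (pooling) rules for a detection task: optimal
# linear pooling (weights C^{-1} mu) versus uniform pooling of a correlated
# neural population. Population size, signal profile, and the noise
# correlation structure are illustrative assumptions.
import numpy as np

n = 100                                   # neurons in the pool
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
cov = 0.2 * np.exp(-dist / 10.0)          # limited-range noise correlations
np.fill_diagonal(cov, 1.0)

mu = np.exp(-np.arange(n) / 25.0)         # mean target-evoked response per neuron

def dprime(weights):
    """Signal-to-noise ratio of a linear read-out w'r of the population."""
    return weights @ mu / np.sqrt(weights @ cov @ weights)

w_opt = np.linalg.solve(cov, mu)          # optimal (whitened) pooling weights
w_uniform = np.ones(n)                    # pool every neuron equally

print(f"optimal pooling d' = {dprime(w_opt):.2f}")
print(f"uniform pooling d' = {dprime(w_uniform):.2f}")
```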
2014 — 2021 |
Priebe, Nicholas J (co-PI) Seidemann, Eyal J Taillefumier, Thibaud O. (co-PI) |
R01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Cortical Mechanisms Mediating Visual Function and Behavior @ University of Texas, Austin
Project Summary
Intracellular recording from sensory cortex provides a window into the synaptic inputs that shape spiking responses of individual cortical neurons, but until recently, this powerful technique has been limited to anesthetized animals. By combining the unique expertise from our laboratories, we have developed a novel technique that allows us to conduct, on a routine basis, reliable whole-cell intracellular recording in primary visual cortex (V1) of awake, behaving macaque monkeys. We combine intracellular recording with an array of concomitant measurements that provide access to the state of the local network in which the neuron is embedded as well as to the internal state of the animal. Using these techniques, we have access to both subthreshold (membrane potentials, representing input) and suprathreshold (spikes, representing output) responses of individual cortical neurons, while also exploiting the precise control of visual stimulation and of the subject's behavioral state afforded by behaving primates. Our ability to perform intracellular recording in awake, behaving primates opens the door to addressing three fundamental questions with respect to the circuit-level mechanisms that mediate visual perception: (1) what are the nature, sources, and behavioral consequences of the large neural variability of sensory cortical neurons, (2) what is the contribution of internal state fluctuations to this variability, and (3) what circuit models can account for the observed neural variability during spontaneous and evoked responses? To address these questions, in Aim 1 we will study the quantitative relationship between sub- and suprathreshold activity during spontaneous and stimulus-evoked responses in V1 of fixating monkeys. This will allow us to test the generality of previous findings from anesthetized animals. In Aim 2, we will examine the relationship between the activity of single V1 neurons and perceptual decisions in monkeys that are engaged in a demanding visual detection task. Specifically, we will examine how sub- and suprathreshold responses are altered by changing the attentional and motivational states under which the stimulus is presented. Finally, in Aim 3 we will test a novel set of circuit models that can account for our observed results and guide future experiments.
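One standard descriptive model for relating sub- and suprathreshold activity, of the kind examined in Aim 1, is a threshold power-law that maps membrane potential to spike rate. The sketch below fits such a model to synthetic paired measurements; the data, parameter values, and use of scipy's curve_fit are illustrative assumptions, not the circuit models proposed in Aim 3.

```python
# Minimal sketch of one standard way to relate sub- and suprathreshold
# activity: fitting a threshold power-law, rate = k * max(Vm - Vth, 0)^p,
# to paired membrane-potential and spike-rate measurements. The synthetic
# "data" and parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def threshold_powerlaw(vm, k, vth, p):
    """Spike rate as a rectified power function of membrane potential."""
    return k * np.maximum(vm - vth, 0.0) ** p

vm = np.linspace(-70, -45, 200)                       # membrane potential, mV
rate = threshold_powerlaw(vm, 0.5, -60.0, 2.0)        # ground-truth rate, spikes/s
rate_obs = rate + 2.0 * rng.standard_normal(vm.size)  # add measurement noise

params, _ = curve_fit(threshold_powerlaw, vm, rate_obs, p0=(1.0, -58.0, 1.5))
k_hat, vth_hat, p_hat = params
print(f"fitted k={k_hat:.2f}, Vth={vth_hat:.1f} mV, exponent p={p_hat:.2f}")
```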
2016 — 2018 |
Geisler, Wilson S (co-PI) Seidemann, Eyal J Zemelman, Boris V (co-PI) |
U01 | Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
An Optical-Genetic Toolbox For Reading and Writing Neural Population Codes in Functional Maps @ University of Texas, Austin
The overarching goal of this proposal is to develop an optical-genetic toolbox for reading and writing neural population codes in functional maps of awake, higher mammals. Such tools could ultimately be used to restore perceptual capabilities in patients with damage to peripheral sensory pathways by direct stimulation of early sensory cortex. Advanced optical methods for reading and writing neural codes using genetically encoded reporters and actuators have become powerful tools for studying neural circuits in rodents. However, rodents are a suboptimal model for human perception because of their vastly different sensory representations and perceptual capabilities. For example, rodents' primary visual cortex (V1) lacks the functional columnar organization that is a hallmark of primate vision. In contrast to rodents, the macaque monkey's sensory representations and perceptual capabilities are highly similar to those of humans. Furthermore, the behaving macaque provides a unique opportunity to develop and test tools for reading and writing neural codes at the level of functional domains, such as the orientation columns in V1. However, multiple technical hurdles remain before the optical-genetic methods currently available in rodents can be readily applied in larger, non-transgenic mammals. Here we propose to take advantage of the unique expertise of our team members to develop optical techniques that utilize virally delivered transgenes for monitoring and manipulating neural population codes in behaving macaques. Specifically, we will address three technical goals. First, we will develop and test new genetic methods that will provide long-term expression of transgenes in primates with cell-type and activity-dependent specificity. Second, we will develop a two-photon microscope for behaving monkeys that will allow one to monitor these signals with cellular resolution and complement current imaging techniques that have larger coverage but coarser resolution. Finally, we will develop methods for writing neural population codes in functional maps by combining patterned light stimulation that targets specific functional domains with selective expression of actuators. We will validate and optimize these techniques by linking V1 responses (elicited by both visual and direct patterned optogenetic stimulation) to monkeys' behavior in visual discrimination tasks. The tools that we will develop will enable a deeper understanding of the neural code and a better characterization of the capabilities and limitations of methods for reading and writing neural population codes in functional maps in humans.
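To make the idea of "writing" into a functional map concrete, the sketch below derives a patterned-illumination mask from a synthetic orientation-preference map by selecting pixels whose preferred orientation lies within a tolerance of a target orientation. The way the map is synthesized (band-pass filtered complex noise), the target orientation, and the tolerance are all illustrative assumptions, not the proposal's methods.

```python
# Toy sketch of deriving a patterned-stimulation mask from a functional map:
# pixels whose preferred orientation lies near a target orientation are
# selected for illumination. The synthetic orientation map, tolerance, and
# filter scale are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(3)
size = 128

# Synthetic orientation-preference map from band-pass filtered complex noise.
fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
radius = np.hypot(fx, fy)
bandpass = np.exp(-((radius - 0.08) ** 2) / (2 * 0.02 ** 2))
noise = rng.standard_normal((size, size)) + 1j * rng.standard_normal((size, size))
z = np.fft.ifft2(np.fft.fft2(noise) * bandpass)
ori_map = np.mod(np.angle(z) / 2.0, np.pi)          # preferred orientation, 0..pi

# Illumination mask: target columns preferring ~45 deg (within +/- 15 deg).
target, tol = np.deg2rad(45.0), np.deg2rad(15.0)
diff = np.abs(np.mod(ori_map - target + np.pi / 2, np.pi) - np.pi / 2)
mask = diff < tol

print(f"fraction of pixels illuminated: {mask.mean():.2f}")
```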