1992 — 1993 |
Angelaki, Dora |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Role of Regular and Irregular Otolith Afferents in Otoli |
0.951 |
1994 |
Angelaki, Dora |
F32 Activity Code Description: To provide postdoctoral research training to individuals to broaden their scientific background and extend their potential for research in specified health-related areas. |
Regular and Irregular Otolith Afferents and Otoli |
0.951 |
1995 — 2000 |
Kuo, Arthur; Paloski, William; Angelaki, Dora; Horak, Fay |
N/A Activity Code Description: No activity code was retrieved. |
Bac: Development of a Biologically-Motivated Model For Adaptive, Multi-Input Multi-Output Control of Human Balance @ University of Michigan Ann Arbor
9511814 Kuo The goal of this research project is to understand how the human brain puts together thousands of pieces of information from sensors throughout the body, such as the eyes, the inner ear, and receptors in joints and muscles, to form a picture of where the limbs are and how they are moving. This information is used for planning and control of movement. It is a process that the brain carries out subconsciously; it is critically important for proper balance, to keep us from falling; it provides athletes with the ability to move fluidly and skillfully and allows seamen and astronauts to adjust movements to their new environments. The approach used in this study is to integrate techniques from two scientific fields. Physiological studies describe how signals are produced by the sensors and define the places in the brain where these signals are transmitted. Control engineering describes how signals of many types can be used to perform a useful task. In this project, physiologists and control engineers will work together to examine the differences in balance control in astronauts before and after spaceflight. The adaptations that take place during spaceflight make it possible to determine some of the factors that are and are not relevant to control of movement and, therefore, to explain some of the basic mechanisms underlying control of movement.
|
0.937 |
1995 — 1999 |
Angelaki, Dora |
R29 Activity Code Description: Undocumented code. |
Three Dimensional Organization of the Oculomotor System @ University of Mississippi Medical Center
The vestibulo-ocular system serves to maintain gaze stability during head movements. Trauma to the system, or disease states such as gaze palsies or ophthalmoplegia, can produce oculomotor deficits that are disorienting and sometimes difficult to resolve clinically. In order to provide more effective treatments for oculomotor and vestibular disturbances, a more thorough scientific understanding of the function of these brain systems must be acquired. Signals from the vestibular labyrinths converge with visual inputs in a region of the brainstem that comprises the vestibular nuclei. Most previous investigations attempting to elucidate the exact role of vestibular nuclei neurons in vestibulo-ocular computations have been limited to examining neural responses that are associated with only horizontal or vertical eye movements. Movements of the eyes and head, however, have three degrees of freedom. In order to understand the sensory-to-motor transformations that control ocular motility, an examination of the neural coding in motor and premotor structures that produces three-dimensional eye movements is necessary. The proposed study will, for the first time, examine the three-dimensional organization of the response properties of vestibular nuclei neurons during spontaneous and visually-evoked eye movements, as well as during head rotation and linear translation. Binocular three-dimensional eye movements will be monitored and correlated with the neuronal discharge. Electrical stimulation will be used to identify vestibular nuclei neurons in terms of their labyrinthine inputs as well as their outputs to abducens, trochlear or oculomotor nuclei. The proposed experiments have two main aims: first, to investigate the three-dimensional organization of eye position and eye angular velocity signals in oculomotor-related vestibular nuclei cells; second, to quantitatively evaluate the three-dimensional spatio-temporal organization of head motion signals in these neurons during angular rotation and linear translation. The relationship between the spatial organization of oculomotor and vestibular signals will also be delineated. Electrophysiological studies and theoretical modeling will both be utilized to achieve the specific aims of the proposed project. These results will provide a better understanding of the sensory-to-motor transformations that occur in the vestibulo-ocular system for the control of eye movements that direct vision in a three-dimensional world.
|
0.928 |
1999 — 2003 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
3D Organization and Neural Plasticity of Primate VOR
The vestibulo-ocular reflexes (VORs) are an essential component of the oculomotor system since they are fundamental for gaze stabilization during both angular (angular VORs) and linear (linear VORs) head movements. Damage to the vestibular system through disease or trauma can produce profound oculomotor deficits that are often difficult to resolve clinically. Vestibular disturbances are often accompanied by nystagmus, blurred vision, dizziness and spatial disorientation. Despite years of research regarding the neural mechanisms of angular VOR function and disease states, very little is currently known about the linear VORs. Moreover, almost no information exists regarding the symptomology of otolith system damage, and few, if any, practical diagnostic evaluations of otolith function are routinely performed in clinical practice today. The current proposal seeks to study the nature and function of the linear VORs in primates and to investigate the deficits produced by cerebellar lesions and peripheral vestibular damage. The primary goal of these studies is to test quantitative predictions of a functional framework regarding the bilateral organization of the linear VORs across a wide stimulus repertoire in awake rhesus monkeys trained to fixate targets at different distances and vertical/horizontal eccentricities. Precisely calibrated binocular three-dimensional eye movements will be recorded during both lateral and fore-aft linear motion consisting of either steady-state sinusoidal or transient stimulus profiles. Data obtained from normal animals will be quantitatively compared to the eye movements generated during functional ablation of the most irregularly firing vestibular afferents, as well as both acutely and chronically after unilateral labyrinthectomy, selective semicircular canal plugging and cerebellar lesions. The results will provide for the first time quantitative data and models regarding the bilateral neural organization of the linear VORs, as well as identify practical clinical tests of peripheral and central otolith-ocular function.
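Why fixation distance and eccentricity matter for the linear VOR can be seen from the standard stabilization geometry, summarized below in our own notation; this is a textbook-style sketch assuming lateral translation at velocity v while fixating an earth-fixed target at distance d, not a formula taken from the proposal.

```latex
% Ideal compensatory eye velocity during lateral translation while fixating a
% target straight ahead at distance d (illustrative; x(t) is the lateral head
% displacement and v = dx/dt the translation velocity).
\[
\tan\theta(t) = \frac{x(t)}{d}, \qquad
\dot{\theta}(t) = \frac{v\,d}{d^{2} + x(t)^{2}} \;\approx\; \frac{v}{d}
\quad \text{for } x(t) \ll d .
\]
```

In other words, a perfectly compensatory linear VOR must scale its slow-phase velocity with the inverse of viewing distance (and adjust it with target eccentricity), which is presumably why the monkeys are trained to fixate near, far, central and eccentric targets.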
|
1 |
1999 — 2003 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Mechanisms of Vestibular Adaptation
DESCRIPTION (from applicant's abstract) Changes in vestibular function through disease, trauma and aging occur frequently and are particularly pronounced with exposure to unusual motion or gravitational environments. Throughout the history of the manned space flight program, the introduction of the body into microgravity has produced vestibular-related disturbances that result in personal discomfort and a loss in crew performance. The fact that the symptoms subside within several days of microgravity exposure suggests that vestibular system responses can adaptively change to altered sensory conditions. These changes may be similar to the process of vestibular compensation which is observed following unilateral labyrinthine loss or alterations in visual-vestibular interactions. In order to better understand the nature of vestibular adaptation and its effects upon motor function, the processes underlying neural plasticity and adaptation to altered vestibular signals must be established. The proposed project will utilize systems and electrophysiological approaches to relate stimulus parameters to vestibular adaptation through quantification of the adaptive properties of the vestibulo-ocular reflex and, in particular, the otolith-ocular system in response to changes in the gravitoinertial accelerations (i.e., hypergravity) brought about through centrifugation. Three-dimensional eye movements will be recorded and quantified before and after short-term centrifugal motion. In addition, the underlying neural mechanisms that are responsible for the adaptive behavior will be determined by quantifying the gravity-sensitive properties of behaviorally and electrophysiologically identified vestibular nuclei neurons before and after centrifugation. The adaptive changes in otolith-ocular responses and the associated neural elements need to be understood in order to provide a functional framework regarding adaptive changes in otolith function not only in microgravity but also in vestibular compensation.
|
1 |
2004 — 2006 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Organization and Plasticity of the VOR
DESCRIPTION (provided by applicant): Vestibulo-ocular reflexes are essential components in the perception and control of spatial orientation. They are also important for the perception of the visual world, as well as visually-guided behavior, since we view the world from a constantly shifting platform and certain visual mechanisms function optimally only if the images on the retina are relatively stable. As we go about our everyday activities, visual and vestibular mechanisms help to stabilize our gaze on objects of interest, by generating eye movements that offset our head movements. The traditional approach emphasized mechanisms that deal with rotational disturbances. More recently, however, it has become clear that a separate class of vestibulo-ocular reflexes exists (Translational VORs or TVORs) that represent a phylogenetically recent acquisition in the evolutionary tree and which appear to have evolved in parallel with foveal vision, vergence eye movements and stereopsis. This proposal is a competitive renewal to test specific hypotheses about how sensory information is centrally processed in order to create motor commands for the TVOR. We propose novel behavioral experiments aiming at understanding specific functional hypotheses about the TVOR, as well as neurophysiological studies to understand the yet unknown neural processing. Electrophysiological data will be closely accompanied by theoretical modeling in order to delineate the vestibulo-ocular computations that underlie the organization of the complex oculomotor responses during translation. Although motivated by fundamental basic science issues, the results of this effort will provide a basis for understanding clinical deficits related to otolith system pathology and for the development of clinical tests of otolith and vestibulocerebellar function.
|
1 |
2004 — 2021 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Mechanisms of Vestibular Function
DESCRIPTION (provided by applicant): To navigate and act effectively through a complex three-dimensional (3D) environment, we must accurately estimate our own motion and orientation relative to nearby objects. Although multi-modal in nature, both the perception of self-motion and self-orientation, as well as the precise monitoring of changes in our head or gaze relative to objects of interest, require contributions from the vestibular system, which provides information about the angular and linear acceleration of the head in space. The long-term goal of these studies is to understand the thalamo-cortical processing of vestibular information, pertinent to the elucidation of the neural correlates for motion perception, spatial orientation and control of movement. As a first step in delineating these unexplored neural correlates of higher vestibular processing, we propose here to characterize pre-cortical neural pathways, focusing first on the neural processing in the two main thalamic projections from the vestibular nuclei (VN) and prepositus hypoglossi (PH) to the ventral posterior nuclei (VPN) and the intralaminar nuclei (ILN) of the thalamus. Experiments proposed here are motivated by a central hypothesis that these two vestibulo-thalamic pathways participate in two distinct functions: the VPN pathway represents the conduit of vestibular signals involved in self-motion perception, whereas the ILN pathway provides cortical eye fields with the necessary extraretinal signals (including an efference copy of gaze changes) required to update retinal information for nonretinotopic saccades. To address the validity of these hypotheses, we propose a multi-faceted approach using multiple techniques, including single unit recording, antidromic identification of physiologically-characterized neurons, dual tracer injections, as well as reversible inactivation while animals perform behavioral tasks. The proposed experiments will test for the first time a direct link between vestibular neural activities and perception and will bridge the gap between traditional vestibular system analysis and modern, functionally-relevant, stochastic correlation analysis techniques relating neural activities with the animal's behavioral choices.
|
1 |
2004 — 2011 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Studies of Three-Dimensional Ocular Kinematics
DESCRIPTION (provided by applicant): The long-term goal of these studies is to understand how the brain and the extraocular muscles generate eye movements in three dimensions (3D). Using a combined approach, consisting of both single unit recordings and electrical microstimulation, we have provided strong neurophysiological evidence in support of the 'active pulley hypothesis', which proposes that motoneuron commands are two-dimensional (i.e., horizontal/vertical) and that extraocular muscle pulleys are responsible for providing an eye position-dependent muscle pulling direction, thus implementing the 'half-angle rule' of 3D eye rotations during visually-guided eye movements. In this application, we propose a systematic series of experiments in macaques that focus on the following questions: How do eye movements that violate Listing's law and the half-angle rule (e.g., the rotational vestibulo-ocular reflex, RVOR) work with an eye plant that mechanically implements the half-angle rule? Specifically, how do the eye position-dependent muscle pulling directions become modified during the RVOR and other conditions that deviate from Listing's law and the half-angle rule? We propose to use electrical microstimulation of the abducens nerve and single unit recordings from extraocular motoneurons as tools to understand the particular configuration of the extraocular muscle pulleys during these movements. Specifically, we propose the following aims/experiments: (1) We will use electrical microstimulation of the abducens nerve to characterize extraocular muscle configuration at tilted head/body orientations (static ocular counter-rolling). (2) We will use electrical microstimulation of the abducens nerve to investigate extraocular muscle configuration during convergence. (3) We will test the hypothesis that a coordinated rotation of the pulleys in the same direction as eyeball torsion is responsible for the deviations from the half-angle rule during the roll RVOR by electrically stimulating the abducens nerve during steady-state sinusoidal and transient roll head rotations. (4) As in aim 3, we will investigate the extraocular muscle pulley configuration during combined rotations and translations by evaluating the eye position dependence of the stimulation-evoked eye velocity during combinations of translational VOR (TVOR) and yaw RVOR, conditions that result in diverse eye position dependencies. (5) Finally, once the organization of the oculomotor plant during the VOR is established, we will next characterize and compare neural activities of superior/inferior oblique and superior/inferior rectus motoneurons as a function of gaze during the roll RVOR and combined RVOR/TVOR. Under these conditions, the eye position dependence of eye velocity deviates significantly from the half-angle rule (the default arrangement of the oculomotor plant). We expect to find a correlate of such RVOR/TVOR-driven torsion in the firing rates of cyclovertical motoneurons. Results from these experiments will substantially expand our current understanding of pulley and extraocular muscle function during eye movements that violate Listing's law. PUBLIC HEALTH RELEVANCE: Strabismus, the misalignment of the two eyes, is a disorder of unknown etiology, caused by problems related to either the central innervation sent to the eye muscles and/or the eye muscles and orbital tissues themselves.
Improving our understanding of the central and peripheral aspects of eye movement control is an important and necessary step for an effective clinical diagnosis and treatment of strabismus patients. Results from the proposed experiments are also fundamental in resolving on-going controversies and providing a comprehensive understanding of oculomotor function in health and disease.
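For context, the 'half-angle rule' invoked above follows from standard rotational kinematics; the summary below uses our own quaternion notation and is not reproduced from the proposal.

```latex
% Angular velocity of the eye obtained from the eye-position quaternion q(t)
% (standard kinematics; the angular velocity is the vector part of the product).
\[
\boldsymbol{\omega}(t) \;=\; 2\,\dot{q}(t)\,q^{-1}(t).
\]
% If eye positions obey Listing's law (q has no torsional component), then for
% gaze eccentric by an angle \theta the axis of \boldsymbol{\omega} must tilt out
% of Listing's plane by \theta/2: the half-angle rule.  Movements such as the
% rotational VOR require torsional eye velocity and therefore depart from it.
```

The stimulation experiments described above ask how the mechanical arrangement of the pulleys accommodates, or is reconfigured during, exactly those departures.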
|
1 |
2005 |
Angelaki, Dora |
S10 Activity Code Description: To make available to institutions with a high concentration of NIH extramural research awards, research instruments which will be used on a shared basis. |
Human Motion System: Neuroscience, Stroke |
1 |
2005 |
Angelaki, Dora |
S10 Activity Code Description: To make available to institutions with a high concentration of NIH extramural research awards, research instruments which will be used on a shared basis. |
Human Motion System: Neuroscience |
1 |
2005 |
Angelaki, Dora |
S10 Activity Code Description: To make available to institutions with a high concentration of NIH extramural research awards, research instruments which will be used on a shared basis. |
Human Motion System
DESCRIPTION (provided by applicant): We are requesting funds to acquire a complete virtual reality motion system that is used to study spatial orientation, motion perception, as well as the planning and execution of eye, head, hand and arm movements in human subjects. This unique system consists of a motion platform with six degrees of freedom, a head-mounted visual display, an eye tracker to measure eye movements and an Optotrak system to measure head, arm and hand movements. The motion platform delivers both linear and angular motions along any direction in space. The head-mounted visual display can provide optic flow or simulated motion through more naturalistic visual environments that are either congruent or incongruent with the motion of the platform. The human motion system will be used by each of the Users to study the effects of motion environments (established by congruent visual/vestibular stimuli or cue-conflicting situations) on spatial perception, motor control and coordination. It will be used for addressing basic science questions in both normal healthy human volunteers as well as in patients with spatial orientation and/or motor deficits. The human motion system will be housed within the newly constructed Central Institute for the Deaf building at Washington University School of Medicine. Deficits in spatial orientation and motor coordination in motion environments remain poorly understood, yet these abilities are vital for daily function. In humans with subjective vertigo or balance difficulties, vestibular and spatial orientation deficits are often subtle and can only be revealed by innovative and quantitative spatial orientation and motor tasks. Millions of Americans, particularly older adults, suffer from balance-related deficits that can lead to injury or even death from falls. The requested system will allow for state-of-the-art basic science and clinical experimentation, as well as ensure further growth in our research endeavors and an expansion of the collaborative efforts that are already strong among the motor control and balance communities.
|
1 |
2006 — 2013 |
Buckner, Randy; Petersen, Steven (co-PI); Thoroughman, Kurt (co-PI); Angelaki, Dora; Deangelis, Gregory (co-PI) |
N/A Activity Code Description: No activity code was retrieved. |
IGERT: Cognitive, Computational, and Systems Neuroscience
This Integrative Graduate Education and Research Training (IGERT) award supports the establishment of an interdisciplinary graduate training program in Cognitive, Computational, and Systems Neuroscience at Washington University in Saint Louis. Understanding how the brain works under normal circumstances and how it fails are among the most important problems in science. The purpose of this program is to train a new generation of systems-level neuroscientists who will combine experimental and computational approaches from the fields of psychology, neurobiology, and engineering to study brain function in unique ways. Students will participate in a five-course core curriculum that provides a broad base of knowledge in each of the core disciplines, and culminates in a pair of highly integrative and interactive courses that emphasize critical thinking and analysis skills, as well as practical skills for developing interdisciplinary research projects. This program also includes workshops aimed at developing the personal and professional skills that students need to become successful independent investigators and educators, as well as outreach programs aimed at communicating the goals and promise of integrative neuroscience to the general public. This training program will be tightly coupled to a new research focus involving neuro-imaging in nonhuman primates. By building upon existing strengths at Washington University, this research and training initiative will provide critical new insights into how the non-invasive measurements of brain function that are available in humans (e.g. from functional MRI) are related to the underlying activity patterns in neuronal circuits of the brain. IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. The program is intended to catalyze a cultural change in graduate education by establishing innovative new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries.
|
1 |
2006 — 2010 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Vestibular Influences On Spatial Constancy and Movement Planning
DESCRIPTION (provided by applicant): The spatial orientation functions of the vestibular system, although clinically relevant, have received little attention compared to reflexes. In the proposed experiments, we intend to characterize the vestibular system's involvement in maintaining spatial constancy for movement planning as humans and primates are passively rotated or translated in three-dimensional (3D) space. This process, referred to as 'visuospatial updating', utilizes extra-retinal signals related to the intervening movement in order to change the end-goal of sensorimotor transformations for eye, hand or limb movements. Although vestibular signals have an important role in this process, spatial constancy has been mostly studied with saccadic ocular deviations, whereas little is currently known about the role of other sources of extra-retinal information. Thus, the long-term goal of these studies is to characterize the functional properties and neural basis for subcortical extra-retinal signals on spatial constancy, with a particular focus on vestibularly-driven mechanisms. Here we propose a series of behavioral and neurophysiological aims to characterize the vestibular system's involvement in visuospatial updating during memory-guided eye movements. Specifically, we will quantify the properties of these processes, as they relate to the numerous computational issues encountered when three-dimensional (3D) vestibular information must be combined with a two-dimensional (2D) retinotopic goal for a saccade. To investigate the neural basis of these interactions, we also propose to characterize visuospatial updating and vestibular memory-contingent saccades after reversible inactivation of each of the three cortical eye fields: the lateral intraparietal area (LIP), the frontal eye fields (FEF) and the supplementary eye fields (SEF). Finally, we will also start characterizing the neural correlates of this function by testing whether visual receptive fields of LIP neurons shift during an imposed rotational/translational movement, as has previously been shown for saccadic eye movements. The proposed studies aim at filling an important gap in knowledge and are fundamental in establishing a causal role for the vestibular system in spatial and sensorimotor functions that, although largely uncharacterized, are important for understanding and treating cognitive deficits of spatial perception.
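The geometric core of the updating problem described above can be illustrated in a few lines of code: a remembered target direction must be counter-rotated by the intervening head movement before it can serve as a retinotopic saccade goal. This is a minimal sketch under simplifying assumptions (pure rotation about a fixed axis, no translation or eye-in-head offset); the function names are ours, not the proposal's.

```python
import numpy as np

def rotation_matrix(axis, angle_deg):
    """Rotation about a unit axis by angle_deg (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    a = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def update_remembered_target(target_dir, head_rotation):
    """Express a space-fixed target direction in the head frame after the head rotates."""
    return head_rotation.T @ target_dir        # inverse rotation = transpose

# Axes: +x forward, +y left, +z up.  A target flashed 10 deg to the right,
# followed by a 20 deg leftward (yaw) head turn, should be remapped to 30 deg right.
target = rotation_matrix([0, 0, 1], -10.0) @ np.array([1.0, 0.0, 0.0])
head_turn = rotation_matrix([0, 0, 1], 20.0)
updated = update_remembered_target(target, head_turn)
print(np.degrees(np.arctan2(updated[1], updated[0])))   # approx -30, i.e. 30 deg right
```

For translations, the same bookkeeping additionally requires an estimate of target distance, which is one reason the combination of 3D motion signals with a 2D retinotopic goal is computationally nontrivial.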
|
1 |
2007 — 2015 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Basis of Self-Motion Perception
DESCRIPTION (provided by applicant): To navigate effectively through a complex three-dimensional environment, we must accurately estimate our own motion relative to the objects around us. Perception of self-motion is a multi-modal process, involving integration of visual, vestibular, and proprioceptive/somatosensory cues. For example, patterns of image motion across the retina ('optic flow') can be strong cues to self-motion, as evidenced by the fact that optic flow alone can elicit the illusion of self-motion (vection). We propose here to systematically explore a potential contribution of area 2v in the perception of translational and/or rotational self-motion and to directly compare these contributions to those of the ventral intraparietal area (VIP) and the dorsal subdivision of the medial superior temporal (MSTd) areas. Area 2v is one of the so-called 'vestibular' cortical areas with both vestibular and optic flow responsiveness, located in the most anterior and lateral tip of the intraparietal sulcus. Simultaneous behavioral and electrophysiological experiments will be carried out using rhesus monkeys in a state-of-the-art virtual reality apparatus. These studies will combine behavioral, cognitive, and neural analyses to address an important, yet underdeveloped, area of sensory and cognitive neuroscience: the mechanisms of multi-sensory integration underlying self-motion perception. We will attack this difficult problem using a combination of approaches. Specific aims 1 and 2 will characterize the 3D tuning properties of neurons during optic flow stimulation alone (Visual condition), motion in darkness (Vestibular condition) and combined presentation of the two cues (Combined condition). In addition, we will also characterize the reference frames of Visual or Vestibular signals. In aim 3, we will test for direct links between area 2v neuronal activity and self-motion perception using a fine motion direction discrimination task. Together, these studies will provide a vital test of the hypothesis that area 2v is part of the neural substrate for self-motion perception. The proposed studies aim at filling an apparent gap in knowledge, important for understanding and treating cognitive deficits of spatial perception.
|
1 |
2007 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
3D Organization and Neural Plasticity of Primate Vestibulo-Ocular Reflexes
DESCRIPTION (provided by applicant): Vestibulo-ocular reflexes are essential components in the perception and control of spatial orientation. They are also important for the perception of the visual world, as well as visually-guided behavior, since we view the world from a constantly shifting platform and certain visual mechanisms function optimally only if the images on the retina are relatively stable. As we go about our everyday activities, visual and vestibular mechanisms help to stabilize our gaze on objects of interest, by generating eye movements that offset our head movements. The traditional approach emphasized mechanisms that deal with rotational disturbances. More recently, however, it has become clear that a separate class of vestibulo-ocular reflexes exists (Translational VORs or TVORs) that represent a phylogenetically recent acquisition in the evolutionary tree and which appear to have evolved in parallel with foveal vision, vergence eye movements and stereopsis. This proposal is a competitive renewal to test specific hypotheses about how sensory information is centrally processed in order to create motor commands for the TVOR. We propose novel behavioral experiments aiming at understanding specific functional hypotheses about the TVOR, as well as neurophysiological studies to understand the yet unknown neural processing. Electrophysiological data will be closely accompanied by theoretical modeling in order to delineate the vestibulo-ocular computations that underlie the organization of the complex oculomotor responses during translation. Although motivated by fundamental basic science issues, the results of this effort will provide a basis for understanding clinical deficits related to otolith system pathology and for the development of clinical tests of otolith and vestibulocerebellar function.
|
1 |
2008 — 2012 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Multisensory Integration in Extrastriate Visual Cortex @ Baylor College of Medicine
DESCRIPTION (provided by applicant): Integration of multiple sensory inputs is required for robust perception and behavioral performance. Recent psychophysical studies indicate that humans combine cues according to a statistically optimal weighting scheme derived from Bayesian probability theory. When we make perceptual judgments that rely on two separate cues, we are able to take into account the reliability of each cue, even when this reliability varies randomly from trial to trial. Bayesian theory also predicts an improvement in behavioral performance when two sensory cues are present, as compared with only one cue. One particularly vital task that involves multisensory integration is the estimation of self-motion, or heading. Information from both visual ('optic flow') and vestibular cues can be useful for heading perception, yet little is known about the principles and neural substrates for cue integration. Using a sophisticated virtual reality system, we have developed a novel behavioral paradigm for studying cue integration in rhesus monkeys. Here we propose to test specific hypotheses regarding a role of the dorsal medial superior temporal area (MSTd) and ventral intraparietal area (VIP) in visual/vestibular cue integration. In aim 1, we will record from MSTd and VIP neurons while the monkey performs a heading discrimination task based on vestibular cues alone, visual cues alone and combined presentation of both cues. We hypothesize that MSTd/VIP neurons with 'congruent' visual/vestibular responses constitute a neural substrate for Bayesian cue integration, including cue re-weighting when visual reliability changes. In aim 2, we will probe for causal links between MSTd/VIP neurons and multi-sensory cue integration for heading perception by manipulating neural activity using microstimulation and reversible chemical inactivation techniques. In aim 3, we explore the functional roles of MSTd/VIP neurons having 'opposite', as compared to 'congruent', visual/vestibular responses. We will test specific hypotheses, the most prominent of which is a potential role of neurons with opposite visual/vestibular preferences in disambiguating the components of visual motion that are due to object motion from those due to self-motion. Combined, these aims will provide a fundamental breakthrough in our understanding of multisensory cortex, as well as, more generally, how the brain computes using probabilities. The general principles that we uncover regarding sensory integration should have wide application to many issues in systems-level neuroscience. Understanding the neural basis of multisensory integration and self-motion perception would also promote new strategies for treating spatial disorientation deficits common to many brain dysfunctions, including Alzheimer's disease. One of these deficits is an impaired ability to judge heading from optic flow, and this impairment is correlated with patients' difficulty in navigating through their surroundings. Better localization of these functions in a primate animal model would help target new Alzheimer's therapies to the appropriate brain regions. PUBLIC HEALTH RELEVANCE: Understanding the neural basis of multisensory integration and self-motion perception would also promote new strategies for treating spatial disorientation deficits common to many brain dysfunctions, including Alzheimer's disease.
One of these deficits is an impaired ability to judge heading from optic flow, and this impairment is correlated with patients' difficulty in navigating through their surroundings. Better localization of these functions in a primate animal model would help target new Alzheimer's therapies to the appropriate brain regions.
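The "statistically optimal weighting scheme" referenced above is usually formalized as inverse-variance (reliability-weighted) cue combination. The sketch below illustrates that computation under the common assumption of independent Gaussian noise on each cue; the numbers are illustrative and are not data or parameters from this project.

```python
import numpy as np

def combine(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Reliability-weighted (inverse-variance) combination of two heading cues."""
    w_vis = sigma_vis**-2 / (sigma_vis**-2 + sigma_vest**-2)   # visual weight
    mu_comb = w_vis * mu_vis + (1 - w_vis) * mu_vest
    sigma_comb = np.sqrt(1.0 / (sigma_vis**-2 + sigma_vest**-2))
    return mu_comb, sigma_comb

# Example: visual heading 2 deg (sigma 1 deg), vestibular heading 0 deg (sigma 2 deg).
mu, sigma = combine(2.0, 1.0, 0.0, 2.0)
print(mu, sigma)   # ~1.6 deg, ~0.89 deg: biased toward the more reliable cue,
                   # and more precise than either single cue alone
```

The predicted drop in the combined-cue threshold relative to the best single cue is the behavioral signature the heading discrimination task is designed to detect.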
|
1 |
2009 — 2015 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Organization and Function of the Vestibulo-Cerebellum @ Baylor College of Medicine
DESCRIPTION (provided by applicant): The long-term goal of these studies is to understand the cerebellar processing of vestibular information and visual/vestibular interactions. The proposed studies are motivated by recent findings that the cerebellar nodulus/uvula (vermis lobules X and IX, NU), the vestibular (VN) and deep cerebellar (CN) nuclei appear to form an interconnected network that implements the multisensory convergence necessary to resolve the gravito-inertial acceleration (GIA) ambiguity and distinguish gravity from translational accelerations. Theory has proposed that the GIA ambiguity is resolved by generating an internal estimate of gravity by appropriately 'combining' otolith, semicircular canal and visual sensory information, as well as prior knowledge about the statistics of linear accelerations experienced most commonly in everyday life (conceptualized as a Bayesian prior). Here we plan to probe the neural implementation of such an internal model in macaque NU Purkinje cells (aims 1 & 2), as well as start a functional dissection of how these signals are generated within the VN-CN-NU network by recording from NU-projecting and NU-target neurons in the VN/CN (aim 3). We will use stimuli that create the illusion of translation, without actually delivering any translation stimulus, to test the following hypotheses: (1) NU Purkinje cell activity represents the output of long-postulated internal models for gravity and translational acceleration; (2) Neural correlates of gravity and translational acceleration signals are both found in distinct populations of NU Purkinje cells; (3) Population activity of these two groups of NU Purkinje cells is complementary, such that their net sum equals the GIA, as predicted by theory; (4) Vision can be used, instead of canal-driven signals, to compute a more reliable estimate of gravity during steady-state rotation, thus preventing the generation of erroneous tilt and translation signals; and (5) Translation-selective and gravity-selective response properties are only found in orthodromically-activated NU-target neurons, but not antidromically-activated NU-projecting neurons, all of which are either canal-only (i.e., have no response to translation) or GIA-coding cells. Together, these experiments constitute fundamental studies necessary to establish the NU-VN-CN circuitry as key areas in inertial multisensory processing for self-motion perception and spatial orientation, critical for allocentric orientation and inertial navigation. A major innovation in the current studies is the use of un-natural stimuli that are known to induce tilt and translation illusions to challenge the system and unmask the underlying computations. These experiments will provide novel quantitative evidence for the neural correlates of long-postulated theoretical concepts, like internal models and Bayesian priors. Thus, this work has a significant broader impact for unveiling the intricate mysteries of the functional roles of the cerebellum and its circuitry.
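The ambiguity this network is proposed to resolve, and the internal-model solution sketched in the abstract, are commonly written as follows. This is a textbook-style summary in our own notation (f: specific force sensed by the otoliths, g: gravity, a: translational acceleration, ω: angular velocity from the canals or vision), not a set of equations taken from the proposal.

```latex
\begin{align*}
\mathbf{f} &= \mathbf{g} - \mathbf{a}
  && \text{otolith afferents encode only the net gravito-inertial acceleration,} \\
\frac{d\hat{\mathbf{g}}}{dt} &= -\,\boldsymbol{\omega} \times \hat{\mathbf{g}}
  && \text{rotation signals keep the internal gravity estimate updated,} \\
\hat{\mathbf{a}} &= \hat{\mathbf{g}} - \mathbf{f}
  && \text{and translational acceleration is inferred by subtraction.}
\end{align*}
```

When the rotation estimate is wrong, for example during steady-state rotation as canal signals decay, the gravity estimate drifts and the subtraction yields the illusory tilt and translation signals that the proposed stimuli exploit.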
|
1 |
2011 — 2012 |
Angelaki, Dora |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Vestibular Influences On Arm Movements @ Baylor College of Medicine
DESCRIPTION (provided by applicant): We want to pursue a novel line of research regarding vestibular influences upon arm reaching movements. The proposed experiments have been motivated by recent findings in human subjects showing that path curvature is greater, acceleration duration is shorter and peak acceleration is increased for upward versus downward arm movements. Muscle EMG activity and computational modeling have suggested that these kinematic asymmetries are generated because the brain uses gravity as a major force for acceleration during downward movements and for deceleration during upward movements. Here, we propose a basic set of experiments to examine whether vestibular (otolith) signals are functionally important for these properties. Thus, the aims of this application are two-fold. First, to ascertain whether macaques exhibit similar kinematic and EMG asymmetries during vertical arm movements as those observed in humans. Second, to examine whether the observed up/down asymmetries in arm movement kinematics and EMG depend on a functional vestibular labyrinth. We hypothesize (1) that macaques, like humans, show kinematic and EMG asymmetries during execution of up and down (as compared to horizontal) arm movements and (2) that an intact vestibular labyrinth is functionally important for detecting gravity and allowing for cost-effective planning of vertical arm movements. To achieve these goals, we will measure arm movement kinematics (Optotrak), as well as EMG activity from agonists and antagonists, as monkeys execute center-out and center-in arm movements both in normal animals and after bilateral labyrinthectomy. To further examine whether there is any long-term adaptation, we will test kinematics and EMG weekly for the first two months after labyrinthectomy. These experiments will be the first to directly investigate in macaques a potential role of gravity-related otolith signals on the planning and execution of arm movements. If successful, we will establish a solid behavioral context for a vestibular influence upon arm movements. Future neurophysiology studies can then explore how gravity-related vestibular signals are used for arm movement planning and execution.
|
1 |
2012 — 2016 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Vestibular Influences On Spatial Orientation and Visuospatial Constancy @ Baylor College of Medicine
DESCRIPTION (provided by applicant): The pattern of image motion across the retina (optic flow) during self-motion provides a powerful cue to the heading direction of the observer. However, optic flow processing poses a number of computationally difficult problems, the most notable example being segregation of object motion from self-motion. Vestibular signals about the linear components of head movement can greatly simplify the extraction of behaviorally relevant information from optic flow. Using quantitative human psychophysics, here we test the hypothesis that vestibular information can be used by the brain to solve the flow parsing problem and distinguish object motion from self-motion. In addition, we explore whether visual and vestibular signals maintain internal calibration so as to ensure that estimates of the same stimulus by different sensors agree with one another. By exposing subjects to spatially conflicting optic flow and vestibular heading information, we test two models of internal consistency: a visual dominance model, which expects that, like the vestibulo-ocular reflex, vestibular perception always changes to become consistent with vision, and a reliability-based calibration model, which depends on cue reliability and assures minimum-variance sensory estimates over time. According to the latter, vestibular perception should adapt towards the visually-specified heading when visual reliability is higher than the vestibular reliability, and visual perception should adapt towards the vestibular-specified heading when visual reliability is lower than the vestibular reliability. We will also examine the effects of external feedback and how subjects weigh internal consistency versus external accuracy. A 6-degree-of-freedom motion platform with attached large field-of-view stereo projection system and two-alternative-forced-choice methodology will be used for all experiments. Findings are important for understanding basic perceptual visual/vestibular interactions using quantitative methodology and opening new directions in the fields of basic and clinical spatial orientation psychophysics. PUBLIC HEALTH RELEVANCE: Understanding multisensory integration and self-motion perception would promote new strategies for treating spatial disorientation deficits common to many brain dysfunctions, including Alzheimer's disease. One of these deficits is an impaired ability to judge heading from optic flow, and this impairment is correlated with patients' difficulty in navigating through their surroundings. Better localization of these functions would help target new Alzheimer's therapies. In addition, neurological correlates of otolith disorders are still a mystery, posing a major hurdle in defining effective therapeutic strategies. Understanding the properties of otolith-mediated self-motion perception is important for understanding and treating basic postural and spatial orientation deficits. Our experiments aim at filling a very notable gap in knowledge, important for understanding and ultimately treating basic and cognitive deficits of spatial perception.
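The reliability-based calibration model described above can be made concrete with a toy simulation: under a fixed cross-modal conflict, each cue drifts toward the other at a rate set by its own relative unreliability, so the less reliable estimate does most of the adapting. The specific update rule and all parameter values below are illustrative assumptions, not the model that will be fit to the data.

```python
import numpy as np

def calibrate(bias_vis, bias_vest, sigma_vis, sigma_vest,
              conflict_deg=10.0, rate=0.05, n_trials=200):
    """Simulate drift of visual and vestibular heading biases under a fixed conflict."""
    w_vis = sigma_vis**-2 / (sigma_vis**-2 + sigma_vest**-2)   # visual reliability weight
    for _ in range(n_trials):
        # signed discrepancy between the two single-cue heading estimates
        delta = (bias_vis + conflict_deg) - bias_vest
        bias_vis  -= rate * (1 - w_vis) * delta   # the reliable cue moves little
        bias_vest += rate * w_vis * delta         # the unreliable cue moves more
    return bias_vis, bias_vest

# With poor vestibular reliability, vestibular perception adapts toward vision:
print(calibrate(0.0, 0.0, sigma_vis=1.0, sigma_vest=3.0))   # approx (-1, +9) deg
```

A visual dominance model, by contrast, would shift the vestibular estimate regardless of which cue is currently more reliable, which is the contrast the conflict experiments are designed to expose.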
|
1 |
2013 — 2021 |
Angelaki, Dora |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Correlates of 3D Visual Orientation
ABSTRACT We can see things in three dimensions (3D) because the visual system reconstructs the 3D configuration of objects from their two-dimensional images projected onto the retina. Previous studies have described loss of 3D binocular vision and constructional apraxia after parietal lesions, although the neurophysiology of this effect remains poorly understood. Furthermore, many studies have characterized the neural basis of binocular disparity processing, although few have dealt with the representation of 3D object orientation and how this is maintained invariant in the world. In the proposed studies we will examine neural selectivity for 3D planar orientation in the caudal intraparietal area (CIP), visual posterior sylvian (VPS) and area V3A of macaque monkeys. We employ a multi-faceted approach, combining neural recordings, behavior, population decoding and chemical inactivation. We will directly test the role of each of these areas in slant discrimination by first recording and then manipulating neural activity while macaques perform a fine slant discrimination task. Neural firing rates will be analyzed using signal detection theory, and we will quantify neuronal variability by measuring noise correlations and choice probabilities. We will also probe for causal links between neurons in these areas and slant orientation perception by employing reversible inactivation. Furthermore, it is known that gravity plays a critical role in shaping our visual experience of the world, influencing both sensory perception and motor planning, but surprisingly little is known about where and how the brain may use a neural estimate of gravity to transform visual signals from a retinal to an allocentric representation. The proposed experiments will test the hypothesis that the transformation occurs progressively, beginning with an egocentric representation in V3A (CIP's main visual input) and culminating in a primarily gravity-centered representation: V3A (egocentric) → CIP → VPS (mostly gravity-centered). Finally, we will test earth-vertical slant orientation perception in animals after bilateral labyrinthectomy to monitor deficits in visual orientation perception, both acutely and after recovery from the peripheral vestibular lesion. We hypothesize acute deficits, but also a functional recovery as the role of proprioceptive signals increases. This research is important for understanding multisensory visual-vestibular influences on 3D vision in a 3D world.
|
1 |
2013 — 2018 |
Angelaki, Dora; Raphael, Robert; O'Malley, Marcia (co-PI); Aazhang, Behnaam (co-PI); Kemere, Caleb (co-PI) |
N/A Activity Code Description: No activity code was retrieved. |
IGERT: Neuroengineering From Cells to Systems @ William Marsh Rice University
This Integrative Graduate Education and Research Traineeship (IGERT) award provides Ph.D. students at Rice University with innovative training in neuroengineering, spanning the disciplines of neuroscience, electrical engineering, mechanical engineering, and bioengineering. In collaboration with Baylor College of Medicine and seven other universities and participating organizations, Rice University trainees are developing the tools to understand, interface with, model, and manipulate the nervous system.
Intellectual Merit: Due to improved technologies that enable neuroscientists to interact with brain cells, and due to the increasing types of neuroscientific data collected through electrical and optical methods, the neuroengineers who create and work with these complex data sets require highly specialized training. This program trains students in three specific areas: (1) cellular systems neuroengineering, which studies the nervous system's signaling processes at the molecular and cellular levels; (2) engineering multi-neuron circuits, which involves collecting and analyzing data from groups of brain cells and devising methods to induce them to produce new functional responses; and (3) translational neuroengineering, which develops systematic approaches to improve clinical devices such as prosthetics and deep brain stimulators. Trainees in this program are learning to be technologically innovative; to be aware of social, cultural, and ethical aspects of neuroengineering; to communicate their work effectively to a wide variety of audiences; and to understand the pathways to commercialize their discoveries.
Broader Impacts: As this program trains neuroengineers to develop advanced solutions to functional and structural problems in the brain, a new problem-based learning curriculum will result and will be shared with the public through open education resources. By cultivating relationships with biomedical device companies, international researchers, and collaborators within two minority-serving institutions in Texas, students in this program are expanding the applications of their training and increasing participation in their research.
IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and leadership skills needed for the career demands of the future. The program is intended to establish new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries, and to engage students in understanding the processes by which research is translated into innovations that benefit society.
|
0.933 |
2014 — 2016 |
Pitkow, Xaq (co-PI); Angelaki, Dora |
N/A Activity Code Description: No activity code was retrieved. |
BRAIN EAGER: Flashes of Insight: Revealing Dynamic Mental Models During Rodent Virtual Reality Foraging @ Baylor College of Medicine
A primary goal of neuroscience is to understand how the brain works, not in artificial lab tasks, but when using its full capabilities to thrive in the rigors of the natural environment. Neuroscience has made enormous progress by examining how the brain performs simplified tasks, but these tasks do not expose the richly adaptive dynamics that the brain must use in a changing world. Therefore, the current neuroscientific understanding of the brain is missing fundamental ingredients. The current project begins to fill this gap, providing a new paradigm for the conduct of behavioral neuroscience and offering an unprecedented opportunity to observe the neural computations that solve a complex natural task. Team members will record activity of many neurons in multiple areas of a mouse brain while the mouse is foraging in a virtual reality environment, and develop mathematical models to make sense of the complex data. This research will thereby provide a unique training opportunity for undergraduate and graduate students in both computational and experimental neuroscience. The project results will be widely disseminated by sharing data, computational models, and analysis techniques with the neuroscience community through public data repositories, so conclusions can be replicated and extended. This research will thereby advance society's goals of understanding the biology of healthy and disordered brains, with the ultimate hope of repairing neurological problems.
Experimenters will train mice to forage in a virtual reality environment, while recording activity from many neurons in four brain areas involved in vision and navigation: visual cortex, entorhinal cortex, posterior parietal cortex and hippocampus. State-of-the-art analysis techniques will be used to describe the mouse's behavior, and to discover neural representations of the internal models that express the animal's beliefs about things that cannot be observed directly in sense data. Finally, the project will uncover how neural representations of critical task variables are communicated and transformed across brain areas, guided by the hidden variable dynamics of the behavioral model. Together, these experiments, theory, and analysis will provide an unprecedented, system-wide understanding of neural computation, ranging from the scale of individual neurons up to a multi-region system. A key quality of the approach is the pervasive influence of theory, both in structuring experiments and dictating analyses. Since the great strength of the human brain is its ability to comprehend the hidden structure in the world, this approach takes an essential step toward unraveling the mysteries of cognition.
|
1 |
2015 — 2017 |
Angelaki, Dora; Dragoi, Valentin (co-PI); Pitkow, Zachary Samuel; Schrater, Paul R |
U01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Dynamic Network Computations For Foraging in An Uncertain Environment @ Baylor College of Medicine
DESCRIPTION (provided by applicant): The brain evolved complex recurrent networks to enable flexible behavior in a dynamic and uncertain world, but its computational strategies and underlying mechanisms remain poorly understood. We propose to uncover the network basis of neural computations in foraging, an ethologically relevant behavioral task that involves sensory integration, spatial navigation, memory, and complex decision-making. We will use large-scale electrical recordings from six relevant interconnected areas (visual cortical area V4, Area 7A, Entorhinal Cortex, Hippocampus, Parahippocampal gyrus, and Prefrontal Cortex) of freely behaving macaques. To track the neural network computations used in these ethologically relevant, natural tasks, we will exploit recent advances in both statistical data analysis and theories of neural computation. First, to characterize behavior, we will model relationships between task-relevant sensory, motor, and internal variables using graphical modeling. Animal behavior will be modeled in the framework of Partially Observable Markov Decision Processes (POMDPs), and these models will provide predictions about which variables the animals use and how they interact. Second, once we have modeled the behaviorally relevant variables, we will use modern data analysis techniques to identify these variables from the patterns of neuronal responses, extracting the low-dimensional, task-relevant signals from the high-dimensional population activity. The time series of these low-dimensional neural representations will be used to analyze the transformation and flow of signals between different brain areas, using such measures as directed information. Finally, we will compare these neural analyses to predictions from the normative models of the foraging task. We hypothesize that neural representations of sensory and internal variables will exhibit the same causal and temporal relationships manifested in the behavioral model. By combining, for the first time, normative modeling, selective dimensionality reduction of neural population signals, and quantification of directed information flow, we will be able to identify the transformations within and between key brain areas that enact neural computations on complex natural tasks. The team project aims to produce a transformative view of distributed neural population coding, unifying ethologically crucial computations across multiple neural systems.
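To make the POMDP framing concrete, the sketch below works through a toy two-state foraging problem (patch "rich" vs. "depleted"): the forager maintains a Bayesian belief over the hidden patch state and leaves when the belief drops below a threshold. All probabilities and the leave/stay policy are illustrative assumptions, not the project's fitted behavioral model.

```python
import numpy as np

P_STAY_RICH = 0.95                               # a rich patch stays rich with this probability
P_REWARD = {"rich": 0.6, "depleted": 0.05}       # observation model: P(reward | state)

def belief_update(b_rich, rewarded):
    """Bayesian update of the belief that the patch is rich, after one sample."""
    b_pred = b_rich * P_STAY_RICH                # predict: the patch may have depleted
    like_rich = P_REWARD["rich"] if rewarded else 1 - P_REWARD["rich"]
    like_depl = P_REWARD["depleted"] if rewarded else 1 - P_REWARD["depleted"]
    return b_pred * like_rich / (b_pred * like_rich + (1 - b_pred) * like_depl)

def forage(threshold=0.3, seed=0):
    """Stay and sample until the belief drops below a leave threshold."""
    rng, b, state, t = np.random.default_rng(seed), 0.9, "rich", 0
    while b > threshold:
        t += 1
        if state == "rich" and rng.random() > P_STAY_RICH:
            state = "depleted"                   # hidden state can change at any time
        rewarded = rng.random() < P_REWARD[state]
        b = belief_update(b, rewarded)
    return t                                     # number of samples before leaving

print(forage())
```

In the proposed analyses, hidden variables of this kind (beliefs, expected values) are the quantities whose neural representations would be sought in the population recordings.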
|
1 |
2015 — 2016 |
Angelaki, Dora; Pitkow, Zachary Samuel |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Cortical Feedback to the Vestibular Brainstem @ Baylor College of Medicine
DESCRIPTION (provided by applicant): The goal of this R21 application is to understand the nature of feedback from cortex to early sensory areas. Specifically, our goal is the theoretical development and testing of computational models of how feedback from the vestibular cortex to the vestibular brainstem affects the perception of translation (heading). We propose to understand how inactivation of the parieto-insular vestibular cortex (PIVC), which increases perceptual heading thresholds, alters the response properties of neurons in the vestibular nuclei. The best known correlational link between sensory neural responses and perceptual choice is known as the 'choice probability'. Relevant to the origin of choice probabilities is the amount of 'shared noise' among neurons, which is measured as interneuronal correlation of spike rates ('noise correlations'). First, using mechanistic models, we will generalize previously identified relationships between neural tuning, noise correlations, and choice probabilities to network architectures that incorporate recurrent connectivity. Second, we will compare various normative models that specify the function and form of neural feedback for our tasks, and derive from them predictions for the same experimental quantities. We will then test the predictions of these theories on data (neural sensitivities, choice probabilities and noise correlations) recorded from vestibular nuclei neurons before and after PIVC inactivation. Results from these studies are critical for understanding how sensory signals contribute to perception. Showing that perceptual, choice-driven signals are fed back onto the early sensory areas that themselves provide the main contributions to the thalamo-cortical network will provide valuable new insights about 'active sensing' and how perception arises from the coordinated activity of multiple interconnected loops and networks.
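For readers unfamiliar with the metric, a choice probability is simply the area under the ROC curve separating a neuron's firing-rate distributions sorted by the animal's choice at a single stimulus level. The sketch below shows that computation on synthetic spike counts; the numbers are illustrative only.

```python
import numpy as np

def choice_probability(rates_pref_choice, rates_null_choice):
    """Area under the ROC curve for the two choice-conditioned rate distributions."""
    rates_pref_choice = np.asarray(rates_pref_choice, dtype=float)
    rates_null_choice = np.asarray(rates_null_choice, dtype=float)
    # P(rate on a preferred-choice trial exceeds rate on a null-choice trial),
    # counting ties as 0.5 -- equivalent to the Mann-Whitney U statistic.
    diff = rates_pref_choice[:, None] - rates_null_choice[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(1)
pref = rng.poisson(22, 100)   # trials on which the animal chose the neuron's preferred heading
null = rng.poisson(20, 100)   # trials with the opposite choice
print(choice_probability(pref, null))   # values above 0.5 indicate choice-related modulation
```

How such above-chance values arise from feedback versus shared feedforward noise is exactly the question the proposed models and the PIVC-inactivation data are meant to disentangle.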
|
1 |
2016 — 2020 |
Angelaki, Dora |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Plasticity During Visual/Vestibular Conflict
DESCRIPTION (provided by applicant): Sensory conflict is common in everyday life, yet the neural mechanisms of sensory conflict and plasticity are not well understood. If sensory conflict is not ameliorated or reduced, it can harm human health, for example through motion sickness. Perceptual plasticity can ameliorate motion sickness by reducing sensory conflict. Here we hypothesize that multisensory plasticity enables our senses to dynamically adapt to each other and to the external environment. Recent work using human/monkey psychophysics has distinguished two multisensory plasticity mechanisms, unsupervised and supervised. How and where multisensory plasticity takes place in the brain is unknown, and this proposal explores specific hypotheses about its neural basis. In Aim 1, we will search for neural correlates of adult multisensory plasticity in single cortical neuron activity and population responses from the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP). In Aim 2, we will search for direct links between neuronal activity and multisensory plasticity using reversible chemical inactivation. The aims outlined here test the hypothesis that unsupervised plasticity occurs in the relatively low-level multisensory cortical area MSTd, whereas supervised plasticity occurs in the higher-level multisensory area VIP, an area thought to lie closer to where the perceptual decisions that guide behavior are formed. Results from Aim 1 on neuronal tuning curve shifts, choice probabilities, and noise correlations will be used to compute population thresholds as well as to simulate specific cortical lesions. These model simulations will be directly compared with the inactivation experiments in Aim 2, thus providing novel information about neural decoding. Results from these experiments are critical for understanding the neural basis of multisensory plasticity, a fundamental operation our brain performs throughout our lives. Our combined use of psychophysics, single-cell and population responses, causal manipulations while monitoring perception, and computational modeling represents a state-of-the-art approach and ensures substantial success.
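The step from single-neuron tuning and noise correlations to a population threshold, and to simulated lesions, is typically done with linear Fisher information. The sketch below shows that calculation on an assumed toy tuning model with limited-range correlations; the tuning parameters, correlation structure, and "lesion" are illustrative, not the project's fitted values.

```python
# Minimal sketch (assumed toy tuning model): population discrimination threshold from
# tuning-curve slopes and a correlated noise covariance via linear Fisher information,
# before and after removing a random subset of neurons to mimic a simulated lesion.
import numpy as np

rng = np.random.default_rng(2)
n = 100
pref = np.linspace(-np.pi, np.pi, n, endpoint=False)     # preferred headings (rad)
heading, gain, width = 0.0, 30.0, 1.0

# von Mises-like tuning f = gain * exp((cos(pref - heading) - 1)/width) and its derivative
f = gain * np.exp((np.cos(pref - heading) - 1.0) / width)
f_prime = f * np.sin(pref - heading) / width

# Limited-range noise correlations: stronger for neurons with similar preferences (circular distance)
d = np.abs(np.angle(np.exp(1j * (pref[:, None] - pref[None, :]))))
c = 0.2 * np.exp(-d / 1.0)
np.fill_diagonal(c, 1.0)
cov = c * np.sqrt(f[:, None] * f[None, :])               # Poisson-like variances on the diagonal

def threshold_deg(fp, cov):
    info = fp @ np.linalg.solve(cov, fp)                 # linear Fisher information
    return np.degrees(1.0 / np.sqrt(info))               # threshold for d' = 1, in degrees

keep = rng.random(n) > 0.3                               # "inactivate" ~30% of the population
print("full population:", threshold_deg(f_prime, cov),
      " simulated lesion:", threshold_deg(f_prime[keep], cov[np.ix_(keep, keep)]))
```

Comparing the two printed thresholds is the model-based counterpart of the inactivation experiments in Aim 2.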
|
1 |
2016 — 2017 |
Angelaki, Dora |
R21Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Inertial and Multisensory Influences On Entorhinal Grid Cells @ Baylor College of Medicine
PROJECT SUMMARY Animals navigate based on self-motion cues and external references, which serve to correct errors accumulated in path integration. Cells in the medial entorhinal cortex (MEC) encode direction, speed, and position, and their output could represent the neural basis for path integration. The vestibular system provides information about angular and linear motion, and lesion studies have suggested that it may be important for path-integration-based navigation. Whether and how it contributes remains unknown. In recent years, virtual reality (VR) has become a widely used tool for exploring the navigation circuit. Remarkably, two-dimensional (2D) grid properties are compromised in head-fixed rodents experiencing VR. The hypothesis to be tested here is that this occurs because inertial (vestibular) signals are in conflict with locomotion and visual cues, in contrast with real-world navigation, where congruent multisensory cues from vestibular, visual, and other modalities all converge to give rise to the sense of motion through space. In VR, only the visual and proprioceptive cues are present, while vestibular cues signal no motion. To test the hypothesis that inertial motion cues are necessary for space coding in the MEC, we have built a one-of-a-kind VR apparatus (the mouse is head-fixed and locomotes on an air-cushioned Styrofoam ball) that is mounted on top of a motion platform, which can provide inertial accelerations. The visual stimulus creating the VR environment as well as the movement of the platform are both controlled by the locomotion of the mouse and the resulting ball rotation. We will monitor grid cell activity during navigation while rotation and/or translation multisensory cues are independently and systematically manipulated. Our findings could revolutionize our understanding of the nature and properties of the spatial code in the MEC by bringing together diverse expertise and two largely segregated communities (navigation/MEC/hippocampus and vestibular/multisensory).
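Whether 2D grid properties survive each cue manipulation is usually quantified with a "gridness" score: the six-fold rotational symmetry of a cell's spatial autocorrelogram. The sketch below computes a simplified version (no central-peak masking) on a synthetic hexagonal rate map; the map and parameters are illustrative, not recorded data.

```python
# Minimal sketch (synthetic rate map): a simplified gridness score, i.e. the six-fold
# rotational symmetry of the spatial autocorrelogram of a firing-rate map.
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import rotate

# Synthetic hexagonal rate map: sum of three plane waves at 60-degree offsets.
x, y = np.meshgrid(np.linspace(0, 1, 60), np.linspace(0, 1, 60))
k = 2 * np.pi * 6
rate = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in np.radians([0, 60, 120]))
rate -= rate.min()

auto = correlate2d(rate - rate.mean(), rate - rate.mean(), mode='same')
auto /= auto.max()

def gridness(ac):
    """Min correlation at 60/120 deg minus max correlation at 30/90/150 deg."""
    def corr_at(angle):
        return np.corrcoef(ac.ravel(), rotate(ac, angle, reshape=False).ravel())[0, 1]
    return min(corr_at(60), corr_at(120)) - max(corr_at(30), corr_at(90), corr_at(150))

print("gridness score:", gridness(auto))  # clearly positive for a grid-like map
```

Tracking this score as vestibular, visual, and locomotor cues are added or removed is one concrete way to test whether inertial signals are necessary for the MEC spatial code.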
|
1 |
2019 — 2021 |
Angelaki, Dora |
R01Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Using Gravity to Perceive, Move and Orient
PROJECT SUMMARY Accurate perception of the body's orientation in the world is central to everyday life, and its failure contributes to balance problems, which are common and serious, especially in older adults. Computational theory proposes that this ability requires the brain to learn and store an internal representation of gravity that can be used for perception and action, and research has shown that this model is based on visual, somatosensory, and vestibular signals. The proposed experiments investigate the contribution of the vestibular system to the generation of this internal model and test whether the same neural mechanisms provide the gravitational information used for perception and action. In Aim 1, macaques will be trained to visually discriminate the earth-vertical orientation while they are in random head/body tilt positions and then behaviorally tested after bilateral removal of the vestibular organs. If this manipulation abolishes the ability to perform the task at tilted positions, that will indicate that vestibular information is critical for the internal model of gravity. This aim will also examine how visual orientation discrimination adapts to vestibular receptor loss by recruiting extra-vestibular sensory cues, such as somatosensation, and whether active training is necessary for this compensation. Aim 2 will examine how gravity-dependent vestibular information affects motor function by measuring arm kinematics and muscle activity from agonist and antagonist muscles before and after bilateral removal of vestibular inputs. This aim will also measure adaptation after vestibular injury and the role of training in this compensation. Aim 3 will distinguish whether these gravity effects on perception and action are mediated by a shared thalamocortical pathway by probing for deficits in the tasks of Aims 1 and 2 during reversible inactivation of the anterior thalamus, where gravity-tuned cells have been reported. Taken together, these experiments are important for understanding the multisensory influences of gravity on perception and action, as well as the underlying neural circuits, and for revealing how motor learning can aid recovery from vestibular dysfunction.
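One standard formalization of an "internal model of gravity" tracks the gravity vector in head coordinates by integrating canal-derived angular velocity, dg/dt = -ω × g, and then attributes the remainder of the otolith signal to translation. The sketch below simulates that update rule with invented noise levels and a toy head trajectory; it is a generic textbook-style formulation, not the grant's model, and otolith sign conventions vary.

```python
# Minimal sketch (toy trajectory and noise): tracking gravity in head coordinates by
# integrating dg/dt = -omega x g from a noisy canal-like angular-velocity signal.
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 0.01, 500
g_true = np.array([0.0, 0.0, 9.81])    # gravity in head coordinates, head initially upright
g_hat = g_true.copy()                  # internal estimate of gravity
omega = np.array([0.0, 0.5, 0.0])      # constant pitch velocity, rad/s (toy head motion)

for _ in range(steps):
    g_true = g_true - dt * np.cross(omega, g_true)        # gravity rotates in the head frame
    omega_sensed = omega + rng.normal(0.0, 0.02, 3)       # noisy canal signal
    g_hat = g_hat - dt * np.cross(omega_sensed, g_hat)    # internal-model update
    # given g_hat, the otolith gravito-inertial signal can be decomposed to recover translation

print("tilt-estimate error after 5 s (m/s^2):", np.linalg.norm(g_true - g_hat))
```

Removing the vestibular input corresponds to removing omega_sensed from this update, which is why the behavioral tests at tilted positions are so diagnostic.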
|
0.954 |
2020 — 2021 |
Angelaki, Dora |
U19Activity Code Description: To support a research program of multiple projects directed toward a specific major objective, basic theme or program goal, requiring a broadly based, multidisciplinary and often long-term approach. A cooperative agreement research program generally involves the organized efforts of large groups, members of which are conducting research projects designed to elucidate the various aspects of a specific objective. Substantial Federal programmatic staff involvement is intended to assist investigators during performance of the research activities, as defined in the terms and conditions of award. The investigators have primary authorities and responsibilities to define research objectives and approaches, and to plan, conduct, analyze, and publish results, interpretations and conclusions of their studies. Each research project is usually under the leadership of an established investigator in an area representing his/her special interest and competencies. Each project supported through this mechanism should contribute to or be directly related to the common theme of the total research effort. The award can provide support for certain basic shared resources, including clinical components, which facilitate the total research effort. These scientifically meritorious projects should demonstrate an essential element of unity and interdependence. |
Project C: Neural Basis of Causal Inference in Continuous Navigation @ University of Rochester
Project Summary When sensory inputs are ambiguous, the brain builds an internal model to infer which events in the world caused this pattern of sensory activity. This process, called causal inference, provides a unifying framework for understanding how neural signals that represent beliefs about the structure of the world interact with incoming sensory signals to drive perception-action loops. This proposal focuses on perceptual interactions among object motion, object depth, and an animal's self-motion through the world, as a particular moving pattern of neural activity on the retina can be generated by many combinations of object motion in the world and self-motion. The overall hypothesis is that parietal and prefrontal neurons infer whether an object moves in the world, and that these signals flow through feedback projections to update task-relevant representations in extrastriate visual cortex. The goal of this project is to study causal inference in dynamic tasks, in which an animal's internal model of the world changes continuously. In a virtual reality navigation task in monkeys and mice, these experiments will explore brain computation and multi-area interactions in the naturalistic setting of continuous action and active sensing, as well as dynamic on-line inference about latent, task-relevant variables related to the internal model. This project will develop a causal inference version of a dynamic navigation task already in use in the Angelaki laboratory and then use population recordings and causal neural manipulations to test and refine the dynamic model developed by the theory team in Project A. The continuous-time latent variables of this model will be fitted to monkey and mouse behavioral data to reveal each animal's beliefs about the state of the world and interacting task-relevant variables, and to generate novel hypotheses about the neural dynamics. Using multi-electrode recordings and chemical and optogenetic manipulations, this project will test these hypotheses in four mutually interconnected monkey brain areas involved in visual perception, navigation, memory, and decision-making: parietal area 7a, prefrontal area 8aV, and extrastriate visual cortical areas MSTd (dorsal medial superior temporal) and MT (middle temporal). Finally, neural activity will be mapped throughout the mouse brain, with an emphasis on subcortical structures, using parallel recordings with Neuropixels probes for hypothesis-free identification of other areas that are modulated by this dynamic task, which will also serve to generalize the findings across species. Based on these findings, additional macaque brain regions will be targeted for recording and manipulation experiments as needed. Collectively, these experiments will rigorously test the computational framework of dynamic causal inference across species and brain areas. When compared with the complementary findings from trial-based tasks in Project B, successful completion of these experiments is expected to uncover general principles of the function of causal inference processes and top-down feedback connections during naturalistic and dynamically fluid behavior.
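The core inference described here, deciding whether retinal motion is caused by an object moving in the world or by one's own self-motion, can be written down as a standard Bayesian causal-inference model. The sketch below shows that computation with illustrative sensory noise levels and prior, not the continuous-time model that the project would fit to behavior.

```python
# Minimal sketch (illustrative parameters): Bayesian causal inference on whether an object
# moves in the world, given a retinal object-velocity measurement and a self-motion estimate.
import numpy as np
from scipy.stats import norm

def p_object_moving(retinal_v, self_v, sigma_r=1.0, sigma_s=1.0,
                    sigma_world=5.0, prior_moving=0.3):
    """Posterior probability that the object is moving in the world."""
    # Stationary object: retinal motion should cancel self-motion (up to sensory noise).
    like_stationary = norm.pdf(retinal_v + self_v, 0.0,
                               np.sqrt(sigma_r**2 + sigma_s**2))
    # Moving object: world velocity drawn from a broad prior, marginalized out.
    like_moving = norm.pdf(retinal_v + self_v, 0.0,
                           np.sqrt(sigma_r**2 + sigma_s**2 + sigma_world**2))
    return like_moving * prior_moving / (
        like_moving * prior_moving + like_stationary * (1 - prior_moving))

print(p_object_moving(retinal_v=0.6, self_v=-0.5))   # nearly explained by self-motion: low posterior
print(p_object_moving(retinal_v=4.0, self_v=-0.5))   # large discrepancy: object likely moving
```

In the dynamic task, this posterior would be updated continuously as the animal moves, which is what makes the navigation setting a strong test of causal-inference computations.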
|
0.942 |
2020 — 2021 |
Angelaki, Dora |
P30Activity Code Description: To support shared resources and facilities for categorical research by a number of investigators from different disciplines who provide a multidisciplinary approach to a joint research effort or from the same discipline who focus on a common research problem. The core grant is integrated with the center's component projects or program projects, though funded independently from them. This support, by providing more accessible resources, is expected to assure a greater productivity than from the separate projects and program projects. |
Core Vision Grant - Design & Fabrication
Abstract The Design and Fabrication Module supports vision research at New York University by providing an expert in CAD/CAM design and fabrication to assist vision researchers in designing custom equipment and managing fabrication at NYU downtown facilities, NYU Tandon, and various non-NYU fabrication facilities. This expert ensures effective design in collaboration with vision researchers, timely fabrication, and efficient repair of specialized electronic, mechanical, and electro-mechanical devices that are vital for research. The Design and Fabrication Module provides moderate or extensive support for 10 members of the Vision Core, including 7 NEI-funded investigators who hold qualifying R01 grants.
|
0.954 |