1986 — 1988 |
Ballard, Dana H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Computational Models of Sensory-Motor Cortex @ University of Rochester
sensorimotor system; mathematical model; sensory cortex; motor cortex;
|
0.958 |
1991 — 2001 |
Ballard, Dana H. |
P41 Activity Code Description: Undocumented code. R24 Activity Code Description: Undocumented code. |
Resource For the Study of Neural Models of Behavior @ University of Rochester
Over the past decade neuroscience studies with awake behaving animals have shown that neuronal activity can change dramatically with behavioral context. Recent anthropomorphic models at the University of Rochester and elsewhere suggest that many representational problems that are complex when formulated in the absence of environmental and behavioral context become simpler in systems that directly exploit this context. These studies have shown that some complex behaviors can be reduced to a collection of loosely coordinated primitive behaviors, vastly reducing the need for complex internal representations. These and many other observations suggest that the natural variables for representing information in the nervous system may be in terms of the animal's behaviors. If this is the case, significant progress in neuroscience will require experimental apparatus that can directly measure important aspects of behavioral state. This proposal is to develop state-of-the-art instrumentation for the monitoring and simulation of behavior in conjunction with relevant neural parameters. This instrumentation would form a national resource that would be situated in a complex of three interrelated laboratories at the University of Rochester. The proposed experimental program is an integrated interdisciplinary effort that includes behavioral, computational and neuroscience experiments. The specific aims of the resource would be as follows. A) The resource would greatly extend our capability to monitor behavior in natural situations. The design of the laboratory is directed primarily towards the system integration of four different kinds of recent technical innovations into a coherent setting: (1) equipment to measure eye movements in freely moving head situations, (2) devices for measuring kinematic state such as arm and hand movements, (3) devices for producing whole body accelerations, and (4) anthropomorphic devices to simulate experiments and develop experimental protocols. 
B) The resource would provide a setting for the conduct of interdisciplinary experiments that are tightly coupled, in that the results of an experiment in one domain can be used to design experiments in other domains. C) The resource would provide a vehicle for the development and dissemination of new hardware and software specifically designed for the control of these environments.
|
0.958 |
1996 — 1997 |
Ballard, Dana H. |
P41 Activity Code Description: Undocumented code. |
Telluride Workshop On Neuromorphic Engineering @ University of Rochester
skeletal system; mental disorders; cognition; nervous system; computers; bioengineering/biomedical engineering; biomedical resource; informatics; psychology; behavioral/social science research tag;
|
0.958 |
1998 |
Ballard, Dana H. |
P41 Activity Code Description: Undocumented code. |
Virtual Force Stimulation: Motor Control @ University of Rochester
Models of human motor control can be studied with a unique instrument that provides bi-digit force stimulation, creating the sensation of the presence of virtual objects.
|
0.958 |
1999 — 2001 |
Ballard, Dana H. |
P41 Activity Code Description: Undocumented code. |
Neural Signaling @ University of Rochester
Models of human motor control can be studied with a unique instrument that provides bi-digit force stimulation, creating the sensation of the presence of virtual objects.
|
0.958 |
1999 — 2001 |
Ballard, Dana H. |
P41 Activity Code Description: Undocumented code. |
Virtual Force Stimulation @ University of Rochester
Our major focus is the development of a driving simulator using a virtual display, with eye and head position monitoring. We chose driving as a test paradigm to investigate perceptual and cognitive function in a natural environment. Driving provides a good environment to study short-term cognitive information processing since crucial unprocessed information leads to an obvious behavioral outcome in driver errors. It also allows investigation of natural behavior in a situation that is still constrained enough to draw theoretically rigorous inferences. The relation of attention deficits to driver errors has been well established. Such deficits are far more correlated with accident records than visual deficits. One of the problems with classical paradigms is arranging for 'unattended' stimuli to influence performance without covert attention shifts. This problem is almost always approached by using very brief presentations, which leaves open the question of how attention is deployed under natural conditions. In our driving simulator we can observe an extended behavioral sequence while maintaining tight experimental control. Covert attention shifts can be controlled by manipulating the demands of the primary driving task. Recent Progress: The car was mounted on a six degree-of-freedom motion platform which provides vestibular input.
|
0.958 |
2001 — 2003 |
Ballard, Dana H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Single Spike Models of Predictive Coding @ University of Rochester
DESCRIPTION (provided by the applicant): Stimulus encoding by sensory neurons is often viewed as feature detection by template matching. In these models, each neuron responds to its preferred input pattern with its highest firing rate. This perspective has several disadvantages: 1) Stimulus specificity: each neuron optimally encodes only one stimulus. 2) Response ambiguity: various non-optimal stimuli evoke identical responses. 3) Behavioral significance: task-linked and irrelevant stimuli are represented the same. We have recently proposed a link between spatio-temporal structure and population encoding that has the prospect of overcoming the difficulties of the feature-matching/rate-coding approach. We hypothesized that stimulus attributes are represented by the firing patterns of distributed networks of cortical neurons. Such networks, termed predictive coding networks, can make extensive use of feedback to learn their receptive fields from the statistics of input stimuli. We propose to test this hypothesis in studies of neurons in macaque dorsal extrastriate visual cortex. Visual motion processing in areas MT and MST provides an ideal setting for testing models of neural coding, as the relevant stimuli are complex, time varying, and are used in naturalistic behaviors. We will first develop a predictive coding model of MST responses to local motion stimuli and full-field optic flow. We will then test that model by determining whether the predicted effects of MST feedback on MT neuronal responses are consistent with the predictive coding model. Next we will measure the responses of MST cells when visual stimuli are combined with self-movement and pursuit targets. Finally, we will engage the monkey in stimulus-linked behavioral tasks to determine whether population-distributed synchrony might identify stimulus and task effects in MT and MST responses.
MT-MST responses are well-described in an extensive literature that has failed to explain the receptive field mechanisms or higher-order visual motion responses in MST. Our collaborative development of a detailed model of MT-MST spike trains will directly test our model and elucidate the cortical mechanisms of motion perception.
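The predictive-coding scheme described here (feedback carries predictions, feedforward carries the residual error, and receptive fields are learned from input statistics) can be sketched as a single-layer toy model. This is only an illustration of the hypothesis in the spirit of Rao-Ballard predictive coding, not the proposed MT/MST model; the function name and learning rates are hypothetical.

```python
def predictive_coding_step(I, r, U, lr_r=0.1, lr_U=0.01):
    """One relaxation/learning step of a toy predictive coding network.

    I : input vector (length n)
    r : internal representation (length m)
    U : n x m generative weights; the network's prediction of I is U @ r
    """
    n, m = len(U), len(r)
    pred = [sum(U[i][j] * r[j] for j in range(m)) for i in range(n)]
    err = [I[i] - pred[i] for i in range(n)]  # residual error, sent feedforward
    # feedback correction: move the representation to reduce the squared error
    r = [r[j] + lr_r * sum(U[i][j] * err[i] for i in range(n)) for j in range(m)]
    # slow Hebbian-style learning of receptive fields from the residual error
    U = [[U[i][j] + lr_U * err[i] * r[j] for j in range(m)] for i in range(n)]
    return r, U, sum(e * e for e in err)
```

Iterating the step on a fixed input drives the squared prediction error toward zero; stacking such layers, each predicting the activity of the layer below, gives the hierarchical version.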
|
0.958 |
2002 — 2006 |
Ballard, Dana H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
A Resource For the Study of Neural Models of Behavior @ University of Rochester
DESCRIPTION (provided by the applicant): The focus of the Research Resource at the University of Rochester is to relate the neural codes of the brain to observed behavior. The key feature of neural codes is their slow timescale. The coding of behaviors has to take place within a temporal range of 10-100 milliseconds. It is now believed that computational theory must play the central role in descriptions at this level, but testing its explanatory power requires extensive experimentation. For this reason the Resource is specifically aimed at studying the brain's behaviors at these timescales. This is done using high-performance computers to simulate real-world environments in a way that allows them to be manipulated as a function of behavior. The centerpiece of the constituent laboratories is innovative instrumentation for generating visual and kinetic stimuli in conjunction with the monitoring of behavioral state. Particular emphasis is given to the visual, kinematic, proprioceptive, and haptic states used in sensori-motor coordination. The Resource includes human and animal experiments as well as extensive computer simulations. The Resource equipment is located in three laboratories: (1) A virtual reality laboratory that allows the simultaneous recording of unrestricted head, eye and hand movements while human subjects are engaged in visually-guided tasks using both real and virtual displays. (2) A visual and vestibular stimulation facility for human subjects that uses virtual displays coupled with a sled rotator device capable of delivering precise angular and linear accelerations in any combination. In addition an animal research laboratory for use with awake behaving monkeys has capabilities similar to those described above. (3) An anthropomorphic simulation laboratory that uses robotic hardware and graphics simulation software to simulate sensori-motor coordination. 
This facility couples an existing high-speed binocular camera control system with an anthropomorphic four-fingered hand. The current proposal would allow the continued development of state-of-the-art equipment for psychophysical and neurophysiological measurements as well as improve capabilities with virtual reality simulation. In addition, the proposal would provide necessary resources for expanding the collaboration and service capabilities.
|
1 |
2009 — 2011 |
Ballard, Dana H. |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Neural Models of Behavior @ University of Texas, Austin
DESCRIPTION (provided by applicant): Extensive research has provided a comprehensive understanding of the neural mechanisms of gaze deployment, but there is still a fundamental lack of understanding of the cognitive mechanisms that choose one possible fixation over another. Attempts to characterize these choices in terms of image properties have been a beginning, but at this point we can only go further by introducing cognitive factors. The proposed research uses driving in virtual reality as a controlled environment within which we can ask when and where fixations are made and what influences these choices. Driving is a complex but circumscribed skill with which most subjects have extensive experience. Our studies and those of others have demonstrated that other complex behaviors such as tea making and sandwich making can be readily seen as consisting of compositions of more elemental behaviors. Our modularized theory of behavior allows us to propose very specific testable hypotheses as to the deployment of gaze that are aimed at elucidating its essential link with cognition. Unique capabilities of our facility allow us to measure eye, head and hand movements, as well as acceleration, braking and steering movements, within the confines of a very realistic driving simulator that uses a state-of-the-art Sensics wide-field-of-view head-mounted binocular display (HMD). The simulator is mounted on a hydraulic platform that delivers realistic acceleration stimuli, and the driver is immersed in a very complex cityscape driving venue rendered in real time on the HMD. Our theory is that the rules for the deployment of gaze are learned by reinforcement and based on reward-based optimality criteria. This theory is to be tested using human driving experiments as well as a human avatar driver that has realistic gaze movements with fixations. The avatar performs complicated tasks by decomposing them into essential modules.
Each of the modules can achieve its goal by repeatedly recognizing crucial visual features in the scene and carrying out the relevant action. Our preliminary studies have successfully modeled human data from walking and making a sandwich and have suggested several hypotheses as to the conduct of human visually guided behaviors that we propose to develop and test using the more demanding virtual automobile driving environment. The proposed research will have three interrelated foci directed at three central questions in task-directed visual processing. 1. When is gaze deployed? Our theory suggests that gaze is deployed in the aid of the behavior that needs it the most. 2. Is the disposition of gaze reward-based? Since gaze is not easily shared among concurrent behaviors, there has to be some way of allocating it. This project will test a new analytical formulation that describes gaze competition in a multi-task situation. 3. How is visual alerting handled? How do humans recognize important interruptions from the visual environment? We will test the hypothesis that a behavior for recognizing a new situation tries to compete successfully with the current behaviors by promising greater rewards. PUBLIC HEALTH RELEVANCE: When we perform common everyday tasks, such as driving, making coffee or making a sandwich, we depend heavily on the ability to use our eyes. Our eyes direct our actions by looking at the items we use in the task and also helping coordinate our arm and other body movements. We have a good idea how nerve cells make the eyes move from place to place, but we do not understand how our brain chooses one particular place over another. People have initially reasoned that image objects, such as a stop sign or a fire hydrant, are the main thing that commands our gaze, but we think it's likely to depend on what people are thinking about from moment to moment.
Our proposed research uses driving in virtual reality as a controlled environment where we can see where eye fixations are made and what influences these choices. Driving is a common skill with which most subjects have extensive experience and make similar eye fixations, so it's a good venue for our studies. Unique capabilities of our research facility allow us to measure eye, head and hand movements, as well as acceleration, braking and steering movements, within the confines of a very realistic driving simulator that uses a head-mounted binocular display (HMD). The simulator is mounted on a hydraulic platform that provides a sense of acceleration, and the driver is immersed in a very complex cityscape that looks very much like reality. The combination of new instrumentation and analytical techniques proposed here should produce a detailed model of cognition that will help us understand disease-related cognitive problems in people and will spur the use of eye gaze in clinical diagnostic tools. Diseases like Schizophrenia, Huntington's, Tourette's, Alzheimer's and ADHD can all be diagnosed through characteristically abnormal eye fixations. The even larger hope is that, by knowing how the eyes are used in these instances, we can get a general idea of how the brain functions.
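One way to make the reward-based gaze competition concrete: each behavioral module keeps an action-value estimate, and gaze goes to the module that stands to lose the most reward by acting on stale, unfixated state. The sketch below is an illustration of that competition principle with hypothetical names and toy numbers, not the project's actual model.

```python
def expected_loss(q_true, q_stale):
    """Reward a module sacrifices by choosing its action from a stale
    (unfixated) state estimate instead of an up-to-date one."""
    chosen = max(q_stale, key=q_stale.get)   # action taken under uncertainty
    return max(q_true.values()) - q_true[chosen]

def deploy_gaze(modules):
    """Fixate the behavior that needs vision the most: the one whose
    stale estimate would cost it the most reward."""
    return max(modules, key=lambda name: expected_loss(*modules[name]))

# Toy driving example: (q_true, q_stale) per behavior.
modules = {
    "avoid_car": ({"brake": 1.0, "coast": 0.2}, {"brake": 0.2, "coast": 0.5}),
    "keep_lane": ({"steer": 0.5, "hold": 0.4}, {"steer": 0.45, "hold": 0.4}),
}
```

Here `deploy_gaze(modules)` selects "avoid_car": acting on its stale estimate would pick "coast" and forfeit 0.8 reward, while "keep_lane" loses nothing.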
|
1 |
2009 — 2012 |
Ballard, Dana |
N/A Activity Code Description: No activity code was retrieved. |
Cps: Small: a Real-Time Cognitive Operating System @ University of Texas At Austin
The objective of this research is to develop a real-time operating system for a virtual humanoid avatar that will model human behaviors such as visual tracking and other sensori-motor tasks in natural environments. This approach has become possible to test because of the development of theoretical tools in inverse reinforcement learning (IRL) that allow the acquisition of reward functions from detailed measurements of human behavior, together with technical developments in virtual environments and behavioral monitoring that allow such measurements to be obtained.
The central idea is that complex behaviors can be decomposed into sub-tasks that can be considered more or less independently. An embodied agent learns a policy for actions required by each sub-task, given the state information from sensori-motor measurements, in order to maximize total reward. The reward functions implied by human data can be computed and compared to those of an avatar model using the newly-developed IRL technique, constituting an exacting test of the system.
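The modular decomposition can be sketched as a set of per-sub-task value functions whose contributions sum when the embodied agent picks an action. This is only an illustration of the scheme, with hypothetical Q-functions, not the project's IRL machinery.

```python
def composite_action(state, modules, actions):
    """Greedy action for several RL modules sharing one body.

    modules: sub-task Q-functions q(state, action); the agent maximizes
    the summed value, so every sub-task has a stake in each action.
    """
    return max(actions, key=lambda a: sum(q(state, a) for q in modules))
```

For instance, with one module rewarding target tracking and another rewarding obstacle avoidance, the chosen action is the one with the highest combined value across both sub-tasks.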
The broadest impact of the project would provide a formal template for further investigations of human mental function. Modular RL models of human behavior would allow realistic humanoid avatars to be used in training for emergency situations, conversation, computer games, and classroom tutoring. Monitoring behavior in patients with diseases that exhibit unusual eye movements (e.g., Tourette's, Schizophrenia, ADHD) and unusual body movement patterns (e.g., Parkinson's) should lead to new diagnostic methods. In addition, the regular use of the laboratory in undergraduate courses and outreach programs promotes diversity.
|
0.915 |
2014 — 2016 |
Hayhoe, Mary (co-PI); Ballard, Dana; Acikmese, Behcet |
N/A Activity Code Description: No activity code was retrieved. |
Cps: Synergy: Collaborative Research: Autonomy Protocols: From Human Behavioral Modeling to Correct-by-Construction, Scalable Control @ University of Texas At Austin
Computer systems are increasingly coming to be relied upon to augment or replace human operators in controlling mechanical devices in contexts such as transportation systems, chemical plants, and medical devices, where safety and correctness are critical. A central problem is how to verify that such partially automated or fully autonomous cyber-physical systems (CPS) are worthy of our trust. One promising approach involves synthesis of the computer implementation codes from formal specifications, by software tools. This project contributes to this "correct-by-construction" approach, by developing scalable, automated methods for the synthesis of control protocols with provable correctness guarantees, based on insights from models of human behavior. It targets: (i) the gap between the capabilities of today's hardly autonomous, unmanned systems and the levels of capability at which they can make an impact on our use of monetary, labor, and time resources; and (ii) the lack of computational, automated, scalable tools suitable for the specification, synthesis and verification of such autonomous systems.
The research is based on study of modular reinforcement learning-based models of human behavior derived through experiments designed to elicit information on how humans control complex interactive systems in dynamic environments, including automobile driving. Architectural insights and stochastic models from this study are incorporated with a specification language based on linear temporal logic, to guide the synthesis of adaptive autonomous controllers. Motion planning and other dynamic decision-making are performed by algorithms based on computational engines that represent the underlying physics, with provision for run-time adaptation to account for changing operational and environmental conditions. Tools implementing this methodology are validated through experimentation in a virtual testing facility in the context of autonomous driving in urban environments and multi-vehicle autonomous navigation of micro-air vehicles in dynamic environments. Education and outreach activities include involvement of undergraduate and graduate students in the research, integration of the research into courses, demonstrations for K-12 students, and recruitment of research participants from under-represented demographic groups. Data, code, and teaching materials developed by the project are disseminated publicly on the Web.
|
0.915 |