Andrew Barto

University of Massachusetts, Amherst, Amherst, MA 
Reinforcement Learning

Andrew Barto is Professor of Computer Science at the University of Massachusetts Amherst. He received his B.S. with distinction in mathematics from the University of Michigan in 1970 and his Ph.D. in computer science in 1975, also from the University of Michigan. He joined the Computer Science Department of the University of Massachusetts Amherst in 1977 as a Postdoctoral Research Associate, became an Associate Professor in 1982, and has been a Full Professor since 1991. He is Co-Director of the Autonomous Learning Laboratory and a core faculty member of the Neuroscience and Behavior Program at the University of Massachusetts.

His research centers on learning in natural and artificial systems. He has studied machine learning algorithms since 1977, contributing to the development of the computational theory and practice of reinforcement learning. His current research focuses on models of motor learning and on reinforcement learning methods for real-time planning and control, with particular interest in autonomous mental development through intrinsically motivated reinforcement learning.

He currently serves as an associate editor of Neural Computation and as a member of the editorial boards of the Journal of Machine Learning Research, Adaptive Behavior, and Theoretical Computer Science-C: Natural Computing. Professor Barto is a Fellow of the American Association for the Advancement of Science, a Fellow and Senior Member of the IEEE, and a member of the American Association for Artificial Intelligence and the Society for Neuroscience. He received the 2004 IEEE Neural Networks Society Pioneer Award for contributions to the field of reinforcement learning. He has published over one hundred papers and chapters in journals, books, and conference and workshop proceedings. He is co-author, with Richard Sutton, of the book "Reinforcement Learning: An Introduction" (MIT Press, 1998) and co-editor, with Jennie Si, Warren Powell, and Donald Wunsch II, of the "Handbook of Learning and Approximate Dynamic Programming" (Wiley-IEEE Press, 2004).


Related publications



Niekum S, Osentoski S, Konidaris G, et al. (2015) Learning grounded finite-state representations from unstructured demonstrations International Journal of Robotics Research. 34: 131-157
Niekum S, Osentoski S, Atkeson CG, et al. (2015) Online Bayesian changepoint detection for articulated motion models Proceedings - IEEE International Conference on Robotics and Automation. 2015: 1468-1475
Botvinick M, Weinstein A, Solway A, et al. (2015) Reinforcement learning, efficient coding, and the statistics of natural tasks Current Opinion in Behavioral Sciences. 5: 71-77
Baldassarre G, Stafford T, Mirolli M, et al. (2014) Intrinsic motivations and open-ended development in animals, humans, and robots: an overview. Frontiers in Psychology. 5: 985
Solway A, Diuk C, Córdova N, et al. (2014) Optimal behavioral hierarchy. PLoS Computational Biology. 10: e1003779
Barto AG. (2014) Commentary on utility and bounds. Topics in Cognitive Science. 6: 338-41
Da Silva BC, Baldassarre G, Konidaris G, et al. (2014) Learning parameterized motor skills on a humanoid robot Proceedings - IEEE International Conference on Robotics and Automation. 5239-5244
Barto AG, Konidaris G, Vigorito C. (2014) Behavioral hierarchy: Exploration and representation Computational and Robotic Models of the Hierarchical Organization of Behavior. 13-46
Da Silva BC, Konidaris G, Barto A. (2014) Active learning of parameterized skills 31st International Conference on Machine Learning, ICML 2014. 5: 3736-3745
Levy YZ, Levy DJ, Barto AG, et al. (2013) A computational hypothesis for allostasis: delineation of substance dependence, conventional therapies, and alternative treatments. Frontiers in Psychiatry. 4: 167