1987 — 1991 |
Touretzky, David (co-PI); Fahlman, Scott |
Studies of Learning and Representation in Distributed Connectionist Networks @ Carnegie-Mellon University
Massively parallel connectionist systems with distributed representations of knowledge present two fundamental problems: developing faster and more effective learning procedures for connectionist networks, and developing techniques by which such networks can handle complex symbolic knowledge structures in addition to the lower-level sensory knowledge being studied by other groups. The learning work focuses primarily on variations of the existing Boltzmann and back-propagation procedures. The proposed effort includes "variable plasticity" techniques, in which not all of the weights have the same ability to change during learning. The learning procedures developed will be evaluated by application to problems in speech understanding, low-level image processing, and control of a manipulator. Preliminary experiments in these areas are described in the main proposal. The work on symbolic representations will focus on language understanding and commonsense reasoning, specifically: increasing the subtlety and richness of distributed symbolic representations, combining multiple sources of syntactic and semantic constraint via parallel relaxation, investigating problems in matching and complex inference, and using learning to adjust the behavior of an adaptive symbol processor. The proposed work is cross-disciplinary in nature, applying techniques from mathematics, computer science, and the new field of connectionism to problems in artificial intelligence and cognitive psychology.
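The "variable plasticity" idea can be illustrated with a minimal gradient-descent sketch (an illustrative reconstruction under simple assumptions, not the proposal's actual procedure): each weight carries its own plasticity factor that scales how far that weight moves on each update, so some weights stay nearly frozen while others remain free to change.

```python
import numpy as np

def variable_plasticity_step(weights, gradients, plasticity, base_lr=0.1):
    """One gradient-descent update where each weight has its own
    plasticity factor (0 = frozen, 1 = fully free to change)."""
    return weights - base_lr * plasticity * gradients

# Example: three weights, the middle one nearly frozen.
w = np.array([0.5, -0.2, 0.8])
g = np.array([1.0, 1.0, 1.0])
p = np.array([1.0, 0.05, 1.0])   # per-weight plasticity (hypothetical values)
w_new = variable_plasticity_step(w, g, p)
```

With identical gradients, the low-plasticity middle weight barely moves while the others take a full step.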
|
1992 — 1999 |
Fahlman, Scott; Taylor, D. Lansing |
High Performance Imaging in Biological Research @ Carnegie-Mellon University
The Grand Challenge Application Groups competition provides one mechanism for the support of multidisciplinary teams of scientists and engineers to meet the goals of the High Performance Computing and Communications (HPCC) Initiative in Fiscal Year 1992. The ideal proposal provided not only excellence in science (a focused problem with potential for substantial impact in a critical area of science and engineering) but also significant interactions between scientific and computational activities, usually involving mathematical, computer, or computational scientists, that would have impact in high-performance computational activity beyond the specific scientific or engineering problem area(s) or discipline being studied. This is a project to research and develop an Automated Interactive Microscope (AIM). The AIM will combine the latest technologies in light microscopy and reagent chemistry with advanced techniques for computerized image processing, image analysis, and display, implemented on high-performance parallel computers. This combination will produce an automated, high-speed, interactive tool that will make possible new kinds of basic biological research on living cells and tissues. While one milestone of the research will be to show the proof of concept of AIM, the ongoing thrust will be continued development as new technologies arise and the involvement of the biological community.
|
1993 — 1996 |
Fahlman, Scott |
Learning Algorithms and Architectures For Artificial Neural Networks @ Carnegie-Mellon University
This project is concerned with learning algorithms and architectures for artificial neural networks. The overall goal is to improve the learning speed, scalability, generalization power, robustness, and ease of use of these learning algorithms, and to extend them to cover new kinds of learning tasks. This work builds upon the PI's earlier work in this area, which has produced the Quickprop, Cascade-Correlation, and Recurrent Cascade-Correlation algorithms. Cascade-Correlation builds its own network topology in the course of learning and is much faster than standard back-propagation. The current project aims to extend these algorithms to cover a number of new situations: "online" learning from a non-repeating stream of training examples, recognition of unclocked, time-continuous signals, and a new form of unsupervised learning.
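Quickprop's core idea is to treat the error surface seen by each weight as a parabola and jump toward that parabola's minimum using the current and previous gradients. A minimal per-weight sketch (simplified; the published algorithm adds further safeguards, and the default values here are illustrative):

```python
def quickprop_step(grad, prev_grad, prev_step, lr=0.1, max_growth=1.75):
    """One simplified Quickprop weight update.

    grad, prev_grad : dE/dw at the current and previous step
    prev_step       : the previous weight change
    """
    if prev_step == 0.0:
        # First step (or after a restart): plain gradient descent.
        return -lr * grad
    # Jump toward the minimum of the parabola fitted through the
    # two most recent gradient measurements.
    step = grad / (prev_grad - grad) * prev_step
    # Cap the step so it never grows too fast relative to the last one.
    limit = max_growth * abs(prev_step)
    if abs(step) > limit:
        step = limit if step > 0 else -limit
    return step
```

For a quadratic error E(w) = w² with the weight moved from 1.0 (gradient 2.0) to 0.5 (gradient 1.0) by a previous step of -0.5, the parabolic jump is 1.0 / (2.0 - 1.0) × (-0.5) = -0.5, which lands exactly at the minimum w = 0 in one step.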
|