1992 — 1994 |
Hirsh, Haym |
Scaling Up Version Spaces @ Rutgers University New Brunswick
This grant addresses a problem central to building intelligent systems, namely how to extract knowledge from data. This problem, known as "inductive concept learning," has received much attention in the machine learning community, and provides an alternative to the labor-intensive and time-consuming knowledge-acquisition bottleneck by forgoing interaction with an expert altogether and instead acquiring knowledge from case libraries. Although version spaces are one of the best-known conceptual tools for concept learning, they suffer from three limitations that restrict their use as a practical tool for learning: computational intractability, noise intolerance, and representational inadequacy. This work proposes a three-layered approach to overcome these limitations so that version spaces can be applied to practical problems while maintaining the attractive properties that make them a useful conceptual and analytical tool for concept learning.
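The boundary-set idea behind version spaces can be illustrated with a small candidate-elimination sketch. This is a generic textbook rendering, not the grant's scaled-up method; the toy weather attributes and data are invented for the example.

```python
ANY = "?"  # wildcard: the attribute may take any value

def covers(h, x):
    """True if hypothesis h (a tuple of values / ANY) matches instance x."""
    return all(hv == ANY or hv == xv for hv, xv in zip(h, x))

def min_generalize(h, x):
    """Minimally generalize h so that it covers positive example x."""
    return tuple(hv if hv == xv else ANY for hv, xv in zip(h, x))

def min_specialize(g, domains, x):
    """All minimal specializations of g that exclude negative example x."""
    specs = []
    for i, gv in enumerate(g):
        if gv == ANY:
            for v in domains[i]:
                if v != x[i]:
                    specs.append(g[:i] + (v,) + g[i + 1:])
    return specs

def candidate_elimination(examples, domains):
    """Maintain the S (most specific) and G (most general) version-space boundaries."""
    S = None                              # no positive example seen yet
    G = [tuple(ANY for _ in domains)]
    for x, positive in examples:
        if positive:
            S = x if S is None else min_generalize(S, x)
            G = [g for g in G if covers(g, x)]
        else:
            new_G = []
            for g in G:
                if not covers(g, x):
                    new_G.append(g)
                else:
                    # specialize just enough to exclude x while staying above S
                    new_G.extend(s for s in min_specialize(g, domains, x)
                                 if S is None or covers(s, S))
            G = new_G
    return S, G

# Toy run: learn "sunny days" from two positives and one negative.
domains = [("Sunny", "Rainy"), ("Warm", "Cold")]
examples = [(("Sunny", "Warm"), True),
            (("Rainy", "Cold"), False),
            (("Sunny", "Cold"), True)]
S, G = candidate_elimination(examples, domains)
# Both boundaries converge to ("Sunny", "?")
```

Each example shrinks the space of hypotheses consistent with the data; the exponential size of the G boundary in the worst case is one facet of the intractability the grant targets.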
|
1995 — 1999 |
Imielinski, Tomasz; Hirsh, Haym |
A Query-Based Approach to Database Mining: Database Tools For Rule Discovery @ Rutgers University New Brunswick
Many government and commercial organizations possess extremely large databases, with sizes often measured in terabytes, containing such information as consumer data, astronomical observations, biological sequences, etc. The extraction of information from such large databases has become known as database mining and is an area where machine learning techniques must meet the performance requirements of very large database systems. This research focuses on one particular database-mining task, the problem of rule discovery. Rule discovery is viewed as an interactive process with a human in the loop: an iterative process in which the user is trying to discover not only interesting results but also interesting questions to ask. The approach is based on the key idea that rules can themselves be viewed as objects. Under this view the space of possible rules supported by a database can itself be treated as a database, and the rule-discovery process can be approached as a process of querying the rule base implicitly defined by each database. The human-in-the-loop user of the discovery system would interact with the system via ad hoc rule-base queries, designing the desired query interactively as various results are returned during a rule-discovery session. The proposed implementation of the data-mining system is tested on data from the health-care field, obtained through an ongoing collaboration with a major provider of managed health care.
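The "rules as queryable objects" idea can be illustrated with a minimal sketch: enumerate candidate association rules together with their statistics, then filter them with ad hoc predicates as if querying a rule base. All function names and the toy transaction data are invented for illustration; this is not the project's actual query language or implementation.

```python
from itertools import combinations

def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def mine_rules(transactions, max_antecedent=2):
    """Enumerate single-consequent rules A -> c as first-class objects."""
    items = set().union(*transactions)
    rules = []
    for k in range(1, max_antecedent + 1):
        for ante in combinations(sorted(items), k):
            s_a = support(set(ante), transactions)
            if s_a == 0:
                continue
            for c in items - set(ante):
                s_ac = support(set(ante) | {c}, transactions)
                rules.append({"antecedent": ante, "consequent": c,
                              "support": s_ac, "confidence": s_ac / s_a})
    return rules

def query(rules, predicate):
    """Ad hoc query over the rule base: just filter rule objects."""
    return [r for r in rules if predicate(r)]

transactions = [{"bread", "butter"}, {"bread", "butter", "milk"},
                {"bread"}, {"milk"}]
rules = mine_rules(transactions)
# One query in an interactive session: high-confidence, well-supported rules.
strong = query(rules, lambda r: r["confidence"] >= 0.9 and r["support"] >= 0.5)
# strong contains only the rule ("butter",) -> "bread"
```

In the query-based view, successive calls like `query` above would be refined interactively as results come back, with the database system responsible for evaluating them efficiently at terabyte scale.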
|
2000 — 2004 |
Hirsh, Haym |
Intelligent Web Prefetching to Reduce Client Latencies @ Rutgers University New Brunswick
Delays on the World-Wide Web are a well-known problem for users. Caching web objects closer to clients is a technique shown to improve performance because requested objects that are in the cache can be presented to the client without traversing sometimes-slow network connections to a possibly overloaded web server. Prefetching soon-to-be-requested objects that are not in the cache, in advance of their need, can improve performance even further. Such objects can often be predicted from a range of information sources, such as client and server histories as well as the contents of the objects currently and previously retrieved by the client. In this work, the researcher proposes to develop intelligent prefetching algorithms that use machine learning techniques to build models that make predictions based on past experience, and to test their implementation in a proxy cache with the goal of reducing user-perceived latency. Moreover, adding prefetching to a cache raises subtle problems for evaluation methods, so the researcher proposes a new methodology for proxy-cache evaluation that can also handle prefetching systems. The high-level goals of this research are to propose and implement both prefetching algorithms and a sufficient evaluation methodology for such algorithms. This evaluation methodology will then be used to measure progress toward the ultimate objective of reducing user latencies.
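History-based prediction of soon-to-be-requested objects can be sketched with a simple first-order Markov model over a client's request sequence. This is an illustrative stand-in with invented class and method names, not the proposal's actual learning algorithm; real prefetchers would also weigh bandwidth cost against the probability of use.

```python
from collections import Counter, defaultdict

class MarkovPrefetcher:
    """First-order Markov model over request history:
    transitions[a][b] counts how often object b immediately followed a."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, request_sequence):
        """Update transition counts from one client's ordered request log."""
        for prev, nxt in zip(request_sequence, request_sequence[1:]):
            self.transitions[prev][nxt] += 1

    def prefetch_candidates(self, current, k=2, min_prob=0.3):
        """Return up to k likely next objects worth prefetching,
        keeping only those whose estimated probability clears min_prob."""
        counts = self.transitions[current]
        total = sum(counts.values())
        if total == 0:
            return []
        return [url for url, c in counts.most_common(k) if c / total >= min_prob]

p = MarkovPrefetcher()
p.observe(["/", "/news", "/", "/news", "/", "/sports"])
# After "/" the model suggests prefetching "/news" and "/sports"
```

The `min_prob` threshold reflects the evaluation tension the abstract raises: prefetching low-probability objects wastes bandwidth and can evict useful cache entries, which is why prefetching systems need a different evaluation methodology than plain caches.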
|
2013 — 2018 |
Poggio, Tomaso; Wilson, Matthew (co-PI); Kreiman, Gabriel (co-PI); Mahadevan, Lakshminarayana; Hirsh, Haym |
A Center For Brains, Minds and Machines: the Science and the Technology of Intelligence @ Massachusetts Institute of Technology
Today's AI technologies, such as Watson, Siri and MobilEye, are impressive yet still confined to a single domain or task. Imagine how truly intelligent systems --- systems that actually understand their world --- could change our world. The work of scientists and engineers could be amplified to help solve the world's most pressing technical problems. Education, healthcare and manufacturing could be transformed. Mental health could be understood on a deeper level, leading in turn to more effective treatments of brain disorders. These accomplishments will take decades. The proposed Center for Brains, Minds, and Machines (CBMM) will enable the kind of research needed to ultimately achieve such ambitious goals. The vision of the Center is of a world where intelligence, and how it emerges from brain activity, is truly understood. A successful research plan for realizing this vision requires four main areas of inquiry and integrated work across all four guided by a unifying theoretical foundation. First, understanding intelligence requires discovering how it develops from the interplay of learning and innate structure. Second, understanding the physical machinery of intelligence requires analyzing brains across multiple levels of analysis, from neural circuits to large-scale brain architecture. Third, intelligence goes beyond the narrow expertise of chess or Jeopardy-playing computers, bridging several domains including vision, planning, action, social interactions, and language. Finally, intelligence emerges from the interactions among individuals: it is the product of social interactions. Therefore, the research of the Center engages four major research thrusts (Reverse Engineering the Infant Mind, Neuronal Circuits Underlying Intelligence, Integrating Intelligence, and Social Intelligence) with interlocking teams and working groups, and a common theoretical, mathematical, and computational platform (Enabling Theory).
The intellectual merit of the Center is its focus on elucidating the mechanisms and architecture of intelligence in the most intelligent system known: the human brain. Success in this project will ultimately enable us to understand ourselves better, to produce smarter machines, and perhaps even to make ourselves smarter. The Center's potential legacy of a deep understanding of intelligence, and the ability to engineer it, is tantalizing and timeless. It includes the creation of a community of researchers by programs such as an intensive summer school, technical workshops and online courses that will train the next generation of scientists and engineers in an emerging new field -- the Science and Engineering of Intelligence. This new field will catalyze continuing progress in and cross-fertilization between computer science, math and statistics, robotics, neuroscience, and cognitive science. Sitting between science and engineering, it will attract growing interest from the best students at all levels. The broader impact of the Center program could be to revolutionize K-12, and also 0-K, and 12-life with a deeper understanding of the process of learning. The ability to build more human-like intelligence in machines will transform our productivity, enabling robots to care for the aged, drive our cars, and help with small-business manufacturing. The Center team is composed of over 23 investigators, many having already made significant accomplishments in multiple research areas relevant to the science and the technology of intelligence. The Center team has a mix of junior and senior researchers, bringing expertise in Computer Science, Neuroscience, Cognitive Science and Mathematics. The institutional partners include eleven institutions (MIT, Harvard, Cornell, Rockefeller, UCLA, Stanford, The Allen Institute, Wellesley, Howard, Hunter and the University of Puerto Rico), three of which have significant underrepresented student populations.
The academic institutions are complemented by the Center's industrial partners (Microsoft, IBM, Google, DeepMind, Orcam, MobilEye, Willow Garage, RethinkRobotics, Boston Dynamics) and by world-renowned researchers at international institutions (Max Planck Institute, The Weizmann Institute, Italian Institute of Technology, The Hebrew University).
|