1999 — 2005
Feldman, Jacob
CAREER: The Logic of Grouping and Perceptual Organization @ Rutgers University New Brunswick
This project will investigate one of the most basic aspects of human perceptual intelligence: the ability to organize the visual world. Perceptual organization---the process whereby individual bits of the visual image are aggregated into coherent, meaningful wholes---is a fundamental process that is known to influence many other aspects of visual processing. Yet no mathematically well-defined theory of it exists. The key stumbling block is the pervasive but slippery notion of "goodness of form," which has resisted attempts at rigorous definition. The approach taken in this proposal is a modern, mathematically motivated version of the idea that for any visual image, human observers see the simplest interpretation possible. In most previous renditions of this idea, the term "simplest" is defined only vaguely or subjectively. The lack of concrete definitions or algorithms in turn makes it impossible to determine empirically whether the interpretation that human subjects see is, in fact, the simplest. Minimal Model (MM) theory builds on ideas from modern computational logic, with which the terms "interpretation" and "simple" can be given extremely precise definitions. Under these definitions, it turns out that human judgments---for example, the way line drawings are grouped and organized---correspond closely to the formally minimal interpretation in a well-defined logical language. This minimal interpretation is in a sense the least "coincidental" interpretation possible of a given scene; that is, the one that best explains the image. An efficient algorithm exists for rapidly computing the minimal interpretation. Moreover, predictions derived from this theory have already been used to answer some long-standing empirical questions about human perceptual grouping.
The experiments to be conducted in this project investigate many of the most difficult and important problems in perceptual organization: the perception of occluded figures, the detection of figures amid complex and cluttered scenes, the interpretation of three-dimensional structure, and the representation and categorization of shape. For each of these visual tasks, MM theory makes definite and concrete predictions about what people will see under various conditions, and about the limits of the visual system's ability to recover the true structure of the visual world. Educational activities in connection with this project include the creation of new courses at the undergraduate and graduate levels. The proposed undergraduate course (Topics in Cognitive Research) and graduate course (Mathematical Methods in Cognitive Science) take an interdisciplinary approach to research in which behavioral and computational studies are combined.
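The minimality principle at the heart of the abstract above can be illustrated with a toy sketch (the candidate labels and coincidence counts here are hypothetical, not the project's actual algorithm or representation): among candidate interpretations of a scene, a simplicity-based observer selects the one that posits the fewest coincidental regularities.

```python
# Toy illustration of a minimality principle: prefer the interpretation
# that assumes the fewest accidental (coincidental) regularities.

def minimal_interpretation(interpretations):
    """Return the interpretation with the fewest coincidental relations.

    `interpretations` maps a label to the number of unexplained
    (coincidental) regularities that interpretation must assume.
    """
    return min(interpretations, key=interpretations.get)

# Hypothetical example: two readings of the same line drawing.
candidates = {
    "two overlapping rectangles": 1,    # one accidental alignment assumed
    "eight unrelated line segments": 7, # many accidental alignments assumed
}
print(minimal_interpretation(candidates))  # -> two overlapping rectangles
```

The numbers stand in for whatever formal coincidence count a real theory would assign; the point is only that the preferred percept minimizes that count.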
2004 — 2008
Feldman, Jacob
EITM: Minimization of Complexity in Human Concept Learning @ Rutgers University New Brunswick
This project investigates how human learners form generalizations from examples, focusing on how conceptual complexity influences learning. When inducing concepts, as when a learner forms an abstraction of the concept "chair" after viewing only a few individual chairs, human learners have a bias towards simplicity; i.e., they tend to induce the simplest generalizations consistent with the examples. The exact meaning of the term "simple," however, is notoriously difficult to capture in a rigorous theory. This project draws on recent progress in quantifying conceptual simplicity and complexity in ways that are both mathematically sound and psychologically accurate. The project seeks to generalize this progress to apply to a wider range of human conceptual types than has previously been possible, including "fuzzy" probabilistic concepts and concepts defined over continuous features. The project involves both mathematical modeling and extensive experiments on human subjects learning a wide variety of concepts. By extending our understanding of complexity-minimization in human learning, the project aims to build a more complete account of the mechanisms underlying human learning.
This project has many potential scientific benefits, including a greater understanding of human learning and the possibility of more effective automated learning mechanisms. More broadly, the project could help quantify what makes some concepts inherently easier for humans to learn than others, which could have direct applications to educational practice and to the understanding and treatment of learning disorders.
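The simplicity bias described in this abstract can be sketched in miniature (the features, objects, and candidate concepts below are invented for illustration, not the project's stimuli or model): among hypotheses consistent with the observed examples, a simplicity-biased learner induces the one with the lowest complexity, here measured as the number of Boolean literals in its definition.

```python
# Toy simplicity-biased concept learner: among hypotheses consistent with
# the positive and negative examples, pick the least complex one.

# Each object is a dict of binary features (hypothetical "chair" examples).
positives = [
    {"has_back": 1, "has_legs": 1, "is_soft": 0},
    {"has_back": 1, "has_legs": 1, "is_soft": 1},
]
negatives = [
    {"has_back": 0, "has_legs": 1, "is_soft": 0},
]

# Candidate concepts: (description, complexity in literals, predicate).
hypotheses = [
    ("has_back", 1, lambda x: x["has_back"] == 1),
    ("has_back AND has_legs", 2,
     lambda x: x["has_back"] and x["has_legs"]),
    ("has_back AND has_legs AND NOT is_soft", 3,
     lambda x: x["has_back"] and x["has_legs"] and not x["is_soft"]),
]

def simplest_consistent(hypotheses, positives, negatives):
    """Return the least complex hypothesis consistent with all examples."""
    consistent = [
        (desc, cost) for desc, cost, f in hypotheses
        if all(f(p) for p in positives) and not any(f(n) for n in negatives)
    ]
    return min(consistent, key=lambda h: h[1])[0]

print(simplest_consistent(hypotheses, positives, negatives))  # -> has_back
```

The third hypothesis is ruled out by the soft positive example, and of the two remaining consistent hypotheses the learner keeps the one-literal concept.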
2006 — 2015
Kowler, Eileen; Shiffrar, Margaret; Metaxas, Dimitris; Feldman, Jacob; Stone, Matthew (co-PI); Pai, Dinesh (co-PI)
IGERT: Interdisciplinary Training in Perceptual Science @ Rutgers University New Brunswick
This Integrative Graduate Education and Research Training (IGERT) award supports a new graduate training program at Rutgers University in perceptual science. The past decade of growth in perceptual technologies (automated recognition systems; usable virtual environments) has created the need for a new generation of realistic, comprehensive and innovative perceptual models, applicable to humans and implemented in machines. This IGERT will train students to develop and apply such models by integrating formal and experimental approaches to human and machine perception, bridging the gaps in language, perspective and knowledge that divide technically and behaviorally oriented disciplines. Training is organized around a new core curriculum in perceptual science that begins with foundational coursework in human perception and computer science, including bootstrapping courses to fill in gaps in undergraduate backgrounds. A cornerstone is a new one-year laboratory course, Integrative Methods in Perceptual Science, in which students learn to integrate human and computer perception by working on realistic projects in small teams with faculty mentors in a specialized multi-faceted teaching laboratory. Students will carry out integrative doctoral research, co-advised by faculty in human and computer perception, in one of 6 cross-cutting areas: animate vision, multi-modal cues for perceiving and grasping 3D objects, scanning and searching, visual-auditory integration, visual language, and visual communication. Broader impacts include development of novel perceptual devices and technologies usable in home, educational, clinical or industrial settings. IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. 
The program is intended to catalyze a cultural change in graduate education by establishing innovative new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries.
2013 — 2017
Feldman, Jacob; Elgammal, Ahmed
RI: Small: Collaborative Research: Detecting Abnormalities in Images @ Rutgers University New Brunswick
Computer interpretation of images has taken huge strides in recent years, but even the most modern algorithms can't come close to matching human capabilities on simple visual tasks. For example, in a brief glance at an image, people reflexively classify the objects in it in terms of the categories they belong to--people, animals, tools, and other significant classes. This allows us to understand the objects' meaning in the image, for example understanding that a scene with many pieces of food might be a dinner table. Because even modern computer vision systems can't make such a classification, they can't automatically detect when an object in a scene doesn't belong, that is, when it is abnormal relative to the categories present in the scene. Detecting such "oddball" or atypical objects is essential to understanding visual scenes, because objects that don't belong are often the ones that play the most important role and require immediate action (like a cat on the dinner table). Studies of human subjects have shown that humans are indeed especially adept at detecting atypical items, which often draw our visual attention even before we become consciously aware of them.
This project aims to develop algorithmic techniques that endow computer vision systems with the same ability. By adapting modern vision techniques to mimic the way human observers classify visual atypicality, researchers will develop computer systems that can examine an image and automatically detect abnormal objects, as well as identify the nature and quantify the degree of the abnormality. The project involves a collaboration among researchers at multiple universities and across multiple scientific specialties, including both computer vision and human vision. The result will be a new and useful class of computer vision techniques that can be applied to visual image understanding in many contexts.
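One simple way to frame the abnormality detection described above is probabilistic (the scene type, object categories, co-occurrence probabilities, and threshold below are hypothetical toy numbers, not the project's vision pipeline): an object is atypical when its category has low probability given the scene the other objects imply.

```python
# Toy atypicality detector: flag objects whose category is improbable
# for the scene, scoring abnormality as one minus that probability.

# Hypothetical co-occurrence model: P(object category | scene type).
scene_model = {
    "dinner table": {"plate": 0.4, "fork": 0.3, "glass": 0.25, "cat": 0.01},
}

def atypical_objects(scene, objects, threshold=0.05):
    """Return (object, abnormality score) pairs for improbable objects.

    An object is flagged when its probability under the scene model
    falls below `threshold`; unseen categories get probability 0.
    """
    probs = scene_model[scene]
    return [
        (obj, 1.0 - probs.get(obj, 0.0))
        for obj in objects
        if probs.get(obj, 0.0) < threshold
    ]

print(atypical_objects("dinner table", ["plate", "fork", "cat"]))
# flags "cat" as the atypical object, with a score near 1
```

A real system would estimate the co-occurrence model from data and recognize the object categories itself; the sketch only shows the scoring step.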
2020 — 2025
Andrews, Clinton (co-PI); Feldman, Jacob; Dana, Kristin; Yi, Jingang (co-PI); Bekris, Kostas (co-PI)
NRT-FW-HTF: Socially Cognizant Robotics for a Technology Enhanced Society (SOCRATES) @ Rutgers University New Brunswick
The popular vision of ubiquitous robot assistants that improve the quality of life remains mostly a vision. A key challenge of the program is Robotics for Everyday Augmented Living (REAL): semi-automated systems that focus on tasks and work within daily life. To make this vision a reality, important considerations include safety, adaptability to human desires, and nuanced societal impacts, such as dignity, consent, privacy, and fairness. Traditional social sciences often study the effects of technology on individuals and society only after it is deployed. Given the potential impact of robotics, as well as the potential for unintended negative social consequences, technology should adapt to humans rather than the other way around. Current robotics training does not equip researchers with the interdisciplinary tools necessary to address this challenge. This National Science Foundation Research Traineeship (NRT) award to Rutgers University will train a new type of professional, the socially cognizant roboticist, with the skills (in technology, social science, and public policy) needed to bridge this gap. This training program aims to instill an awareness of human involvement into every phase of the design of new technology, so that these technologies can provide positive human value wherever they are introduced. The training program anticipates training over 35 graduate students (MS and PhD), including 17 NRT-funded trainees, by integrating technology domains (robotics, machine learning, and computer vision) with social and behavioral sciences (psychology, cognitive science, and urban policy planning).
The program will integrate the training of technologists, who are able to develop robots that can coordinate with people, and social scientists, who can translate studies regarding the social effects of robotics into actionable lessons. Robotics is defined broadly here to include intelligent systems encompassing smart buildings and embedded infrastructure. Program participants will be trained in 1) technology: building and controlling robots, and collecting and learning from large datasets; 2) cognitive science: designing socially cognizant systems; and 3) policy: assessing unintended consequences and planning for positive societal impact. The program lays the groundwork for this training via a new curriculum for a robotics specialization that combines existing technology and social science courses, as well as new interdisciplinary courses. The program will emphasize experiential learning, through the Rutgers Robotics Live Lab and interdisciplinary research projects from the partnering graduate programs, as well as internship opportunities through an Industry Consortium. Trainees will engage in fundamental research to understand and model the social dimensions of robot deployments and advance the long-term goal of dignified living and working in a technologically enhanced society. An important program objective is the recruitment and retention of diverse trainees through a multi-faceted strategy, including a student-led robotics club that focuses on a novice-to-expert strategy, an annual robotics workshop, and a Faculty Talk-it-up Robotics Series for underrepresented populations.
The NSF Research Traineeship (NRT) Program is designed to encourage the development and implementation of bold, new potentially transformative models for STEM graduate education training. The program is dedicated to effective training of STEM graduate students in high priority interdisciplinary or convergent research areas through comprehensive traineeship models that are innovative, evidence-based, and aligned with changing workforce and research needs.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2023
Stromswold, Karin (co-PI); Feldman, Jacob; Kapadia, Mubbasir; Schwartz, Matthew
EAGER: SAI: Cognitive Models of Human Social Wayfinding for the Redesign of Public Spaces @ Rutgers University New Brunswick
Strengthening American Infrastructure (SAI) is an NSF Program seeking to stimulate human-centered fundamental and potentially transformative research that strengthens America’s infrastructure. Effective infrastructure provides a strong foundation for socioeconomic vitality and broad quality of life improvement. Strong, reliable, and effective infrastructure spurs private-sector innovation, grows the economy, creates jobs, makes public-sector service provision more efficient, strengthens communities, promotes equal opportunity, protects the natural environment, enhances national security, and fuels American leadership. To achieve these goals requires expertise from across the science and engineering disciplines. SAI focuses on how knowledge of human reasoning and decision making, governance, and social and cultural processes enables the building and maintenance of effective infrastructure that improves lives and society and builds on advances in technology and engineering.
In 2020, many public spaces were hastily redesigned to optimize pedestrian flow in order to minimize the spread of COVID-19. Unfortunately, conventional methods for simulating how people move through public spaces do not take into account social factors that affect how people actually navigate in the presence of other people (social wayfinding). For example, these methods do not incorporate how people adjust to avoid others’ personal space, navigate around slower-moving people, or follow instructions from other people. Even worse, existing simulations usually assume everybody has identical abilities, which is rarely true in real populations. The goal of this project is to develop a system for simulating the flow of people through public spaces, including social aspects of human navigation, and incorporating people with a variety of abilities and disabilities. These more realistic simulations will be used to develop novel metrics and protocols for evaluating public spaces, which more thoroughly reflect the rich social behavior of real people.
This project develops a new framework for modeling the flow of people through public spaces, called the Social Wayfinding-Inspired InFrasTructure (SWIIFT) design framework. The framework has three interlocking parts: human subjects experiments on human wayfinding, computational simulations of the flow of people through public spaces, and evaluation metrics for assessing design and re-design of real public spaces. In a series of experiments, human subjects will be immersed via Virtual Reality headsets into simulated spaces. These spaces will contain different numbers of simulated people, including people with variations in mobility (using wheelchairs, canes or walkers; pushing strollers; carrying heavy bags), sensory ability (e.g., visual impairments, hearing impairments), knowledge, and attention. Human subjects will receive different cues about which way to go, including visible pathways, signage, and verbal instructions. Data about the choices they make as they navigate through the virtual spaces will be incorporated into simulations, allowing us to develop realistic models of how people move through spaces under natural conditions. Finally, this framework will use these simulation models to evaluate potential modifications to real spaces, allowing potentially expensive changes to be accurately evaluated before they are carried out. The ultimate goal of this work is to enable public spaces to be made more efficient and more accessible for everyone, regardless of ability.
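The kind of pedestrian simulation this abstract describes can be sketched with a generic social-force-style update (assumed here for exposition; the SWIIFT framework's actual models are not public code): each simulated pedestrian steps toward a goal while being repelled from other pedestrians' personal space.

```python
# Toy social-force-style pedestrian update: goal attraction plus
# short-range repulsion from other pedestrians' personal space.

import math

def step(positions, goals, speed=0.5, personal_space=1.0, repulsion=0.3):
    """Advance each pedestrian one time step; return new (x, y) positions."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        gx, gy = goals[i]
        # Unit-velocity component toward this pedestrian's goal.
        dx, dy = gx - x, gy - y
        dist = math.hypot(dx, dy) or 1.0  # avoid divide-by-zero at the goal
        vx, vy = speed * dx / dist, speed * dy / dist
        # Repulsion from every other pedestrian inside personal space.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            rx, ry = x - ox, y - oy
            d = math.hypot(rx, ry)
            if 0 < d < personal_space:
                vx += repulsion * rx / d
                vy += repulsion * ry / d
        new_positions.append((x + vx, y + vy))
    return new_positions

# Two pedestrians approaching head-on slow down as they enter each
# other's personal space, rather than walking straight through.
pos = step([(0.0, 0.0), (0.9, 0.0)], goals=[(5.0, 0.0), (-5.0, 0.0)])
```

Extending such a model toward the project's goals would mean giving agents heterogeneous speeds, personal-space radii, and sensory abilities, and fitting those parameters to the VR wayfinding data the abstract describes.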
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.