2002 — 2006
Taylor, Camillo (co-PI); Lee, Daniel; Sarkar, Saswati (co-PI); Kumar, R. Vijay
ITR: Antidote: Adaptive Network of Robots For Threat and Intrusion Detection and Emergency Response @ University of Pennsylvania
Award Abstract #0205336
This research pursues the development of a network of camera-equipped robots and their optimal, dynamic positioning for monitoring a cluttered scene under varying conditions of visibility. The robots must discover their neighbors, localize themselves with respect to those neighbors, and integrate their own information with that available from the other members of the team on the fly. Further, this information must allow a remotely located human operator to immerse herself in the environment. The project addresses the development of enabling algorithms and technology tailored to applications such as fire-fighting.
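In its simplest 2D form, the neighbor-relative localization described above amounts to composing rigid transforms. The following is a minimal hypothetical sketch (not code from the project; all poses and numbers are invented): a robot that measures its pose in a neighbor's frame recovers a global pose by composing that measurement with the neighbor's global pose.

```python
import math

def compose(pose_ab, pose_bc):
    """Compose the pose of frame B in frame A with the pose of frame C in
    frame B, giving the pose of frame C in frame A. Poses are (x, y, theta)."""
    xa, ya, ta = pose_ab
    xb, yb, tb = pose_bc
    return (
        xa + math.cos(ta) * xb - math.sin(ta) * yb,
        ya + math.sin(ta) * xb + math.cos(ta) * yb,
        ta + tb,
    )

# Hypothetical numbers: the neighbor knows its global pose, and we measure
# our own pose in the neighbor's frame (e.g., from its camera view of us).
neighbor_global = (2.0, 1.0, math.pi / 2)  # at (2, 1), facing the +y axis
me_in_neighbor = (1.0, 0.0, 0.0)           # one meter straight ahead of it

me_global = compose(neighbor_global, me_in_neighbor)
print(me_global)  # approximately (2.0, 2.0, pi/2)
```

Chaining such compositions along neighbor-to-neighbor measurements is what lets information propagate through the team without a shared external reference.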
2004 — 2006
Rubin, Harvey (co-PI); Lee, Daniel; Kumar, R. Vijay
BIC: Cellular Inspired Garbage Collection: Ubiquitin System and Memory Management @ University of Pennsylvania
Garbage collection is a crucial component of many modern computer programming languages. Garbage collection frees programmers from the burden of explicit memory management, and is necessary for the independence of functional data objects in object-oriented programming paradigms. Unfortunately, traditional garbage collection algorithms are not ideally suited for concurrent operation in real-time embedded systems. On the other hand, cellular systems must also be able to identify and reclaim proteins that are no longer being used or are damaged or incomplete. The selective destruction and recycling of proteins is accomplished by the ubiquitin system in the cell. Proteins to be selectively destroyed are first identified and tagged with ubiquitin through a series of enzymatic reactions. These proteins are then degraded in the proteasome, and their constituent amino acids are recycled for the construction of new proteins. We propose using hybrid systems to model the enzymatic pathways in the ubiquitin system to gain insight into their computational properties. Our modeling and simulation efforts will describe how the ubiquitin conjugation ligases and the proteasome interact to selectively recognize the appropriate protein substrates. Our results on the ubiquitin system will help motivate new garbage recognition algorithms for memory management in computer systems. These algorithms will be based upon the hierarchical composition of classifiers that recognize characteristic features of the objects in memory. The educational goals of this proposal are closely tied to our research agenda. We hope to introduce students as well as other researchers to the importance of viewing biomolecular systems as computational systems.
The ubiquitin system will serve as a teaching illustration for students in seminars and future workshops, as well as for the public community through our collaboration with the Franklin Institute, the university's AMP summer program, and our annual Robotics for Girls workshop. Other educational activities will serve to inform and open a dialogue among researchers in the engineering, biological sciences, and medical communities.
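The analogy between ubiquitin tagging and garbage collection can be made concrete with a minimal mark-and-sweep sketch (a standard textbook algorithm shown only as an illustration; it does not implement the proposal's classifier-based approach). Objects reachable from the roots are marked as live; anything left unmarked is, like a ubiquitin-tagged protein, reclaimed and its resources recycled.

```python
class Obj:
    """A heap object holding references to other objects."""
    def __init__(self, name):
        self.name = name
        self.refs = []      # outgoing references
        self.marked = False

def mark(obj):
    """Mark phase: recursively flag everything reachable from a root as live."""
    if obj.marked:
        return
    obj.marked = True
    for child in obj.refs:
        mark(child)

def collect(heap, roots):
    """Sweep phase: reclaim every object that was never marked, much as the
    cell degrades proteins tagged for destruction and recycles the parts."""
    for obj in heap:
        obj.marked = False
    for root in roots:
        mark(root)
    live = [o for o in heap if o.marked]
    reclaimed = [o.name for o in heap if not o.marked]
    return live, reclaimed

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)                  # a -> b is reachable; c is garbage
live, reclaimed = collect([a, b, c], roots=[a])
print(reclaimed)  # ['c']
```

A classifier-based recognizer, as the abstract proposes, would replace the reachability test with learned feature detectors over objects in memory.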
2004 — 2006 |
Lee, Daniel Shi, Jianbo [⬀] Taylor, Camillo (co-PI) [⬀] Daniilidis, Kostas (co-PI) [⬀] Kumar, R. Vijay |
RR: MACNet: Mobile Ad-Hoc Camera Networks @ University of Pennsylvania
This project develops an experimental testbed for studying control and sensing in mobile networks of cameras and microphones: MACNet, a system of cameras moving in three dimensions that together form a Mobile Ad-Hoc Camera Network. The testbed provides the experimental infrastructure for the following interdisciplinary projects: Monitoring, Evaluation, and Assessment for Geriatric Health Care; Assistive Technology for People with Disabilities; Digital Anthropology; and Visual Servoing for Coordinating Aerial and Ground Robots. In the first project, MACNet cameras will be used to track patients and salient dynamic features. In the second, MACNet will simulate intelligent homes and museums, with active cameras providing feedback to smart wheelchairs and information about target areas. In the third, MACNet will allow video datasets to be acquired and analyzed for three-dimensional reconstruction and archiving. In the last project, MACNet will provide dynamic camera platforms that simulate aerial vehicles to track, localize, and coordinate with existing ground-based autonomous vehicles, with applications to surveillance and monitoring for homeland security. Finally, MACNet will be used as a testbed for research on distributed active vision, a theme that will bring together all of the researchers.
Broader Impact: The results will be directly applicable to a large class of problems in which communication, control, and sensing are coupled, with applications in smart homes, communities for assistive living, and surveillance and monitoring for homeland security. This cross-fertilization will help train students with broader perspectives and potentially new approaches to problem solving. The lab will be used by students, and the institutions will leverage existing outreach programs in which both faculty and students participate, such as Robotics for Girls and PRIME.
2004 — 2010
Lee, Daniel
CAREER: Biologically Inspired Learning Algorithms For Artificial Sensorimotor Systems @ University of Pennsylvania
This project will investigate biologically inspired methodologies and algorithms for several important problems in machine learning and AI: unsupervised feature learning, multi-modal integration of sensory data, and learning for real-time robotic systems with embedded sensors and actuators. The project will pursue theoretical investigations of the nonlinear properties of neurons and synapses and their role in the processing of perceptual (visual, auditory, olfactory) stimuli, as well as the application of such properties to the design of novel algorithms and architectures for processing multi-modal perceptual stimuli. This research will integrate advances in the understanding of biological systems with the construction of more advanced artificial sensorimotor systems. The project combines this research with an educational agenda that includes teaching and the participation of students from the broader community in activities related to its overall goals.
2009 — 2011
Lee, Daniel; Shi, Jianbo (co-PI); Daniilidis, Kostas (co-PI); Likhachev, Maxim; Kuchenbecker, Katherine
II-EN: Mobile Manipulation @ University of Pennsylvania
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This project provides infrastructure allowing the University of Pennsylvania to extend its existing work into the area of mobile manipulation.
The GRASP Lab is an internationally recognized robotics research group at the University of Pennsylvania. It hosts 14 faculty members and approximately 60 Ph.D. students from the primary departments of Computer and Information Science (CIS), Electrical and Systems Engineering (ESE), and Mechanical Engineering and Applied Mechanics (MEAM).
GRASP recently launched a multidisciplinary Masters program in Robotics and involves these students, as well as many Penn undergrads, in its research endeavors. The research conducted by the members of the GRASP laboratory includes such areas as vision, planning, control, multi-agent systems, locomotion, haptics, medical robotics, machine learning and modular robotics. This proposal requests funding for instruments that would enable us to broaden our research to include the important new area of mobile manipulation.
An increasing amount of research in robotics is being devoted to mobile manipulation because it holds great promise in assisting the elderly and the disabled at home, in helping workers with labor-intensive tasks in factories, and in lessening the exposure of firemen, policemen, and bomb squad members to dangers and hazards. As concluded by the NSF/NASA-sponsored workshop on Autonomous Mobile Manipulation (AMM) in 2005, the technical challenges most critically requiring the attention of researchers are:
- dexterous manipulation and physical interaction
- multi-sensor perception in unstructured environments
- control and safety near human beings
- technologies for human-robot interaction
- architectures that support fully integrated AMM systems
The researchers at GRASP fully support these recommendations and want to help lead the way in addressing them. Though GRASP conducts a great deal of research on similar challenges in related areas, this group has not worked on the unique and potentially transformative topic of mobile manipulation. The primary inhibitor to pursuing these challenges is that research in mobile manipulation critically depends on the availability of adequate experimental platforms. This group is requesting funding that would allow them to acquire such equipment.
In particular, this group is requesting funding for (a) one human-scale mobile manipulator consisting of a Segway base, Barrett 7-DOF arm, and 3-fingered BarrettHand equipped with tactile sensors and a visual sensing suite; and (b) four small Aldebaran Robotics NAO humanoid robots capable of locomotion and manipulation.
These instruments will allow the GRASP community to perform research in such areas as autonomous mobile manipulation, teleoperated mobile manipulation, navigation among people, bipedal locomotion, and coordination of multiple mobile manipulation platforms. Advances in this area are beneficial to society in a variety of ways. For example, mobile manipulation is one of the most critical areas of research in household robotics, which aims at helping people (especially the elderly and the disabled) with their chores. Mobile manipulators will also see great use in office and industrial settings, where they can flexibly automate a huge variety of manual tasks.
2010 — 2017
Brainard, David (co-PI); Lee, Daniel; Taylor, Camillo (co-PI); Daniilidis, Kostas; Muzzio, Isabel (co-PI)
IGERT: Complex Scene Perception @ University of Pennsylvania
This Integrative Graduate Education and Research Training (IGERT) award to the University of Pennsylvania supports the development of a new training paradigm for perception scientists and engineers, designed to provide them with a unique grasp of the computational and psychophysical underpinnings of perception. It will create a new model of the well-rounded perceptual scientist, equipped with firm computational and experimental analytic skills. The existence of such a cadre of U.S. researchers will contribute to the country's global competitiveness in the growing machine perception and robotics industry.
Research and training activities are organized around five thematic areas related to complex scene perception: (1) spatial perception and navigation; (2) perception of material and terrain properties; (3) neural responses to natural signals, saliency, and attention; (4) object recognition in context and visual memory; and (5) agile perception. Interdisciplinary research will enable new insights into the astounding performance of human and animal perception, as well as the design of new algorithms that will make robots perceive and act in complex scenes.
IGERT trainees will commit, in advance of acceptance, to a five-year graduate training program comprising the following components: (1) core disciplinary training; (2) one year of cross-disciplinary training in a chosen second discipline; (3) participation in two foundational and one integrative IGERT course; (4) attendance at an interdisciplinary IGERT seminar; (5) co-advising throughout the five graduate years by an interdisciplinary faculty team; and (6) completion of the Ph.D. dissertation.
IGERT is an NSF-wide program intended to meet the challenges of educating U.S. Ph.D. scientists and engineers with the interdisciplinary background, deep knowledge in a chosen discipline, and the technical, professional, and personal skills needed for the career demands of the future. The program is intended to catalyze a cultural change in graduate education by establishing innovative new models for graduate education and training in a fertile environment for collaborative research that transcends traditional disciplinary boundaries.
2013 — 2018
Kumar, R. Vijay; Lee, Daniel; Kuchenbecker, Katherine
NRI: Large: Collaborative Research: Human-Robot Coordinated Manipulation and Transportation of Large Objects @ University of Pennsylvania
Motivated by the complementary abilities of humans and humanoids, the objective of this proposal is to develop the science and technology necessary for realizing human-robot cooperative object manipulation and transportation. The key concepts that this research seeks to promote are adaptability to human activity under minimal communication, and robustness to variability and uncertainty in the environment, achieved through a layered representation and deliberate processing of the available information. Moreover, this project aims to make maximum use of a minimal set of sensors to plan and control the actions of the robot, while ensuring safe and efficient cooperative transportation. The embodiment of this research is a humanoid co-worker that bears most of the load when helping a person carry an object, without requiring excessive communication or prior training on the part of the human.
By introducing concrete methods for human-robot physical collaboration in semi-structured environments, this project enables a unique synergy between robots and humans that has the potential to increase productivity, and reduce accidents and injuries. In doing so, it also promotes the advancement of new practical applications of robots in construction, manufacturing, logistics, and home services. By developing open-source, portable algorithms for humanoid robots and mobile manipulators, this effort results in cost and time savings for researchers, developers, educators, and end-users in robotics. Finally, through an aggressive educational and community outreach plan, and by actively engaging K-12 students in an exciting RoboTech Fellows program, this project seeks to increase diversity and attract underrepresented groups to STEM.
2016 — 2017
Lee, Daniel; Yim, Mark; Kumar, R. Vijay; Daniilidis, Kostas (co-PI)
NSF National Robotics Initiative (NRI) 2016 PI Meeting @ University of Pennsylvania
The objective of this award is to organize the annual Principal Investigators (PI) meeting for the National Robotics Initiative (NRI), which was launched in 2011. The PI meeting brings together the community of researchers, companies, and program officers actively engaged in the NRI to provide cross-project coordination in terms of common intellectual challenges, methods for education and training, best practices for the transition of results, and a centralized and lasting repository illustrating the research ideas explored and milestones achieved by NRI projects. The meeting will take place over two days in late fall 2016 in the vicinity of Washington, DC. The format will include short presentations by all attending PIs, a poster session, keynote speeches, and panel discussions. Invitations will extend to all PIs with active NRI grants, program managers of robotics-related programs, and members of the press.
2016 — 2018
Schmidt, Marc F. (co-PI); Bassett, Danielle (co-PI); Lee, Daniel; Shi, Jianbo (co-PI); Daniilidis, Kostas
MRI: Development of an Observatory for Quantitative Analysis of Collective Behavior in Animals @ University of Pennsylvania
This project, developing a new instrument to enable accurate quantitative analysis of the movement of animals and their vocal expressions in real-world scenes, aims to facilitate innovative research in animal behavior and neuroscience in complex, realistic environments. While much progress has been made investigating the brain mechanisms of behavior, these studies have been limited primarily to individual subjects in relatively simple settings. For many social species, including humans, understanding neurobiological processes within more complex environments is critical because their brains have evolved to perceive and evaluate signals within a social context. Today's advances in video capture hardware and storage, and in computer vision and network science algorithms, make such studies in animals possible. Past work has relied on subjective and time-consuming observations from video streams, which suffer from imprecision, low dimensionality, and the limitations of the expert analyst's sensory discriminability. This instrument will not only automate the process of detecting behaviors but also provide an exact numeric characterization in time and space for each individual in the social group. While not explicitly part of the instrument, the quantitative description provided by the system will make it possible to correlate social context with neural measurements, a task that can only be accomplished when sufficient spatiotemporal precision has been achieved.
The instrument enables research in the behavioral and neural sciences and the development of novel algorithms in computer vision and network theory. In the behavioral sciences, the instrumentation allows the generation of network models of social behavior in small groups of animals or humans, which can be used to ask questions ranging from how the dynamics of the networks influence sexual selection, reproductive success, and even health messaging, to how vocal decision making in individuals gives rise to social dominance hierarchies. In the neural sciences, the precise spatiotemporal information the system provides can be used to evaluate the neural bases of sensory processing and behavioral decisions under precisely defined social contexts. Sensory responses to a given vocal stimulus, for example, can be evaluated in terms of the context in which the animal heard the stimulus and both its and the sender's prior behavioral history in the group. In computer vision, we propose novel approaches for the calibration of multiple cameras "in the wild", the combination of appearance and geometry for the extraction of exact 3D pose and body parts from video, the learning of attentional focus among animals in a group, and the estimation of sound sources and the classification of vocalizations. New approaches will be used for the hierarchical discovery of behaviors in graphs, the incorporation of interactions beyond the pairwise level with simplicial complexes, and a novel theory of graph dynamics for the temporal evolution of social behavior. Because the instrumentation benefits behavioral and neural scientists, the code and algorithms developed will be open-source so that the scientific community can extend them for their applications. The proposed work also impacts computer vision and network science, because the fundamental algorithms designed should advance the state of the art.
For performance evaluation of other computer vision algorithms, established datasets will be employed.
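As a toy illustration of the network models of social behavior mentioned above, pairwise contests in a small group can be encoded as a weighted directed graph, and a crude dominance ordering read off from net wins (all names and counts below are hypothetical; the project's actual models are far richer):

```python
from collections import defaultdict

# Hypothetical observed contests between individuals: (winner, loser)
interactions = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("A", "B"), ("C", "D"), ("B", "D"),
]

# Weighted directed interaction graph: edge u -> v counts wins of u over v
graph = defaultdict(lambda: defaultdict(int))
individuals = set()
for winner, loser in interactions:
    graph[winner][loser] += 1
    individuals.update((winner, loser))

def net_wins(ind):
    """Crude dominance score: total wins minus total losses."""
    won = sum(graph.get(ind, {}).values())
    lost = sum(out.get(ind, 0) for out in graph.values())
    return won - lost

hierarchy = sorted(individuals, key=net_wins, reverse=True)
print(hierarchy)  # ['A', 'B', 'C', 'D'], most to least dominant
```

Richer analyses of the kind the abstract describes would replace the net-win score with temporal graph dynamics and higher-order (beyond pairwise) interaction terms.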