2009–2013
Akinci, Burcu (co-PI); Huber, Daniel
Automating the Creation of as-Built Building Information Models @ Carnegie-Mellon University
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Building information models (BIMs), which represent the three-dimensional (3D) geometry and high-level semantics of a facility, are increasingly used in the Architecture, Engineering, Construction, and Facility Management (AEC/FM) industry. Most BIM work focuses on representing the as-designed conditions of a facility, but the actual as-built or as-used conditions can differ significantly from the design due to changes during construction or renovations. Currently, the use of as-built BIMs is limited because they are difficult and time-consuming to create and because existing BIM standards do not fully support the representation of as-built conditions. This research will address these barriers by developing algorithms to automate the creation of as-built models from point cloud data collected with laser scanners and by developing new representations that support the needs of BIM stakeholders. The modeling objective will focus on three aspects of the points-to-BIM transformation process: geometric modeling, in which raw points are segmented into geometric components, such as planar regions, and modeled parametrically (e.g., plane parameters and boundaries); semantic labeling, in which modeled components are assigned meaningful labels, such as "wall" or "ceiling"; and occlusion inference, in which surfaces that were not observed are estimated from the geometry of visible surfaces.
The representation objective will focus on two aspects of the problem of representing as-built BIMs. Levels of detail address the difficulty of handling the large 3D point sets inherent in as-built models; representations will be formalized that support multiple levels of detail, enabling efficient high-level analysis while still supporting detailed analysis down to the level of raw data points. Metadata representation targets the development of descriptions of how information is derived from raw data and how the raw data is collected; approaches will be formalized to represent secondary data, such as deviations from idealized models, missing data due to occlusion, and sensor configuration and placement. Taken together, these objectives comprise an end-to-end approach to streamline the points-to-BIM conversion process and are likely to transform the way BIM and 3D imaging technologies are used. The approaches will be evaluated using laser scan data from different types of scanners, collected in case studies conducted by our group and by our collaborators.
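As an illustration of the geometric modeling step described above, the following is a minimal sketch of RANSAC-style plane segmentation over a laser scan point cloud. The function names, thresholds, and greedy extraction strategy are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3+ points: returns (unit normal n, offset d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def ransac_plane_segment(cloud, dist_thresh=0.02, iters=500, min_inliers=1000, rng=None):
    """Greedily extract planar segments (candidate walls, floors, ceilings) from an (N, 3) point cloud."""
    rng = rng or np.random.default_rng(0)
    remaining = cloud.copy()
    segments = []
    while len(remaining) >= min_inliers:
        best_inliers = None
        for _ in range(iters):
            # Hypothesize a plane from three random points and count nearby points.
            sample = remaining[rng.choice(len(remaining), size=3, replace=False)]
            normal, d = fit_plane(sample)
            inliers = np.abs(remaining @ normal + d) < dist_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers.sum() < min_inliers:
            break
        # Refit the plane to all inliers and store the parametric model plus its supporting points.
        normal, d = fit_plane(remaining[best_inliers])
        segments.append({"normal": normal, "offset": d, "points": remaining[best_inliers]})
        remaining = remaining[~best_inliers]
    return segments
```

A subsequent semantic labeling stage could then assign each recovered plane a label such as "wall," "floor," or "ceiling" based on its orientation and extent, and an occlusion inference stage could reason about gaps in the supporting points.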
This research is expected to transform the way that BIMs are created and utilized. The algorithms and representation strategies developed under this research are intended to drastically simplify the process of creating as-built BIMs and will create new opportunities for analyzing and utilizing BIMs during construction and facility management. The reverse engineering aspects of this research will also advance the general area of 3D scene interpretation, with impact in diverse domains, including robotics (e.g., creating building models for indoor mobile robots), building safety (e.g., automatic mapping of buildings for first responders), and construction site monitoring. This research will be incorporated into existing Carnegie Mellon courses as well as a new project course on as-built BIMs, to be co-taught by the PIs. The visual nature of the project lends itself to inclusion in the K-12 and minority outreach programs in which the team participates. We plan to make the products of this research, including data sets and software, available via the Internet, which is beneficial since 3D data sets of this kind are not generally available and are difficult and costly to create.
2012–2017
Huber, Daniel; Rybski, Paul
NRI-Small: The Intelligent Workcell - Enabling Robots and People to Work Together Safely in Manufacturing Environments @ Carnegie-Mellon University
The research objective of this award is to investigate methods to enable people and industrial robots to work safely within the same workspace. Current robotic manufacturing practice requires the physical separation of people and robots, which ensures safety, but is inefficient in terms of time and resources, and limits the tasks suitable for robotic manufacturing. This research will develop an "Intelligent Workcell," which augments the traditional robotic workcell with perception systems that observe workers within the workspace. Methods to explicitly track workers and estimate their body pose will enable dynamically adaptive safety zones surrounding the robot, thereby preventing the robot from injuring workers. Algorithms will be developed to recognize the activities that workers are performing. These algorithms will learn a task-independent vocabulary of fundamental action components, which will form the building blocks for a hierarchical activity recognition framework. Finally, mechanisms for providing feedback to workers about the robot's intended actions will be studied.
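To make the idea of a dynamically adaptive safety zone concrete, here is a minimal sketch of a policy that scales the robot's required clearance with its current speed, using worker positions reported by the workcell's perception system. The class name, thresholds, and command vocabulary are hypothetical and are not drawn from the project itself.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SafetyPolicy:
    stop_radius: float = 0.5   # meters: always stop inside this clearance
    slow_radius: float = 1.5   # meters: reduce speed inside this clearance
    speed_scale: float = 0.6   # seconds: extra margin proportional to robot speed

    def command(self, robot_pos, robot_speed, worker_positions):
        """Return 'stop', 'slow', or 'full' based on the closest tracked worker.

        The safety zone grows with the robot's current speed, so a fast-moving
        arm keeps a larger clearance than a slow or stationary one.
        """
        if len(worker_positions) == 0:
            return "full"
        dists = np.linalg.norm(np.asarray(worker_positions) - robot_pos, axis=1)
        margin = self.speed_scale * robot_speed
        if dists.min() < self.stop_radius + margin:
            return "stop"
        if dists.min() < self.slow_radius + margin:
            return "slow"
        return "full"

# Example: two tracked workers, robot end-effector moving at 1.0 m/s.
policy = SafetyPolicy()
print(policy.command(robot_pos=np.array([0.0, 0.0, 0.8]),
                     robot_speed=1.0,
                     worker_positions=[[2.5, 0.0, 1.0], [0.9, 0.3, 1.1]]))  # -> 'stop'
```

In a full system, a check of this kind would be applied per robot link against each worker's estimated body pose rather than to single points, and the activity recognition results would inform how conservatively the margins are set.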
This research is expected to provide new capabilities in robotic workcell safety and monitoring, allowing people and industrial robots to work safely and effectively in the same environment. Such capabilities would improve the efficiency of existing robotic workcells, since the robot would not be required to stop whenever a person enters the workspace (as is current practice). Furthermore, new manufacturing processes that involve robots and people working together on a single task would be enabled. Students at the graduate and undergraduate level will benefit from using the prototype Intelligent Workcell in project courses, and grade-school students will participate in short courses and workshops designed to ignite interest in STEM activities related to industrial robotics and computer vision.
2015–2016
Huber, Daniel; Hebert, Martial
2015 National Robotics Initiative PI Meeting @ Carnegie-Mellon University
This project will organize and execute the annual principal investigator (PI) meeting for the National Robotics Initiative (NRI). The meeting will bring together all of the PIs engaged in the NRI for two days to discuss research, educational initiatives, and methods for transitioning results.
The workshop includes oral presentations by the participating program managers and PIs, a poster session presenting all sponsored projects, tutorials, and several keynote speeches. An important aspect of the workshop is program-wide networking and the exploration of mechanisms to optimize the results of the overall initiative with respect to research, training, and societal impact.