1999 — 2003
Li, Kai; Singh, Jaswinder (co-PI); Funkhouser, Thomas
Next Generation Software: Adaptive, Performance-Portable Software For Next-Generation and Immersive Applications
EIA-9975011 Princeton University Kai Li
A new generation of applications is becoming very important for high-performance computing, including collaborative design, interactive walkthroughs and large data visualization, and telepresence. They require tremendous resources including CPU, memory, storage, and audio/visual devices, and they have substantially different characteristics, performance goals, and system interactions from those of traditional scientific applications. For example, they have extremely irregular and unpredictable data access needs and workload distributions, they interact more dynamically and with many more types of input/output sensors and devices, they involve dynamic user interaction and steering, and their goal is to deliver the best possible quality at a fixed output refresh rate rather than a solution of fixed quality in the minimum possible time. As computer architectures become more complex, it becomes increasingly difficult to develop such applications to achieve the desired performance. Three properties are critical: (i) high performance for rich interactive behavior, (ii) adaptability and isolation in all layers (i.e., the complexity and unpredictability demand that each layer of application or system software must adapt to the layers above and below it, through performance modeling and through runtime feedback and adaptation, and should try to shield the neighboring layers from each other's complexity), and (iii) performance portability across component upgrades and across the different major types of platforms that may be used in such environments. Our goal is to develop the software building blocks, runtime systems, and design methodologies to assist such application development.
2001 — 2007
Funkhouser, Thomas
CAREER: Simulation of Lighting and Acoustics in Interactive Virtual Environments
The long-term goal of my integrated research and educational plan is to develop interactive virtual environment systems supporting realistic aural/visual simulation of large 3D models containing multiple interacting users.
Research Plan
The main objective of the proposed research project is to develop efficient algorithms for simulating the propagation of light and sound waves through a virtual environment by scattering at surfaces of a 3D model. This research problem is central to providing a realistic experience in an immersive virtual environment, as its solution enables global illumination and sound spatialization. It is also significant for several other applications, including motion planning, inverse modeling, scene capture, heat transfer, radio power prediction, fire propagation, and traffic analysis, which are all important in fields beyond immersive virtual environments.
For simulations of both lighting and acoustics, the fundamental problem is to compute a solution to an integral equation expressing the wave field at every point in terms of the wave field on surrounding surfaces. The main difficulty is that the wave field has discontinuities due to occlusions, caustics, and specular highlights, which induce large variations over small portions of the integration domain (i.e. surfaces or directions). Previous integration methods based on radiosity and Monte Carlo path tracing are generally not practical for typical virtual environments.
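For the lighting case, the integral equation referred to here is the classical rendering equation, which expresses the outgoing radiance at a surface point as emitted radiance plus incoming radiance scattered by the surface's reflectance function:

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
  \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\; d\omega_i
```

Here L_o, L_e, and L_i are the outgoing, emitted, and incoming radiance, f_r is the reflectance function (BRDF) at surface point x, n is the surface normal, and the integral ranges over the hemisphere of incoming directions. The discontinuities described above arise because L_i changes abruptly as the incoming direction crosses an occluder's silhouette. The acoustic analogue has the same recursive structure but must additionally track phase and propagation delay.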
I plan to investigate a hybrid beam tracing and path tracing approach. The general strategy is to trace beams that partition the space of rays into topologically distinct bundles corresponding to different sequences of scattering events at surfaces of the 3D scene (reverberation sequences), and then use them to guide sampling in an interactive path tracing algorithm. The motivation for this approach is that attributes of a relatively small number of beams traced during the first phase can provide useful information about the wave field that can be used to guide and accelerate evaluation of samples during the second phase. This approach enables efficient methods for: 1) enumeration of reverberation sequences, 2) decomposition of integration domain, 3) conservative approximation, 4) spatial coherence in ray intersections, 5) sampling of reverberation sequences, 6) progressive refinement, and 7) off-line precomputation. The challenge is to develop methods that trace beams through 3D models quickly and reap the benefits of the traced beams in useful applications.
My research plan is to investigate hybrid beam tracing and path tracing approaches to solve classical problems for virtual environments. The new research contributions will be made at four levels: theory, algorithms, applications, and experiments. First, I plan to investigate new theory for modeling wave propagation as a discrete set of reverberation paths incorporating multiple scattering effects including diffractions. Second, I plan to develop new algorithms that efficiently find significant reverberation paths with general types of scattering in general 3D models. Third, I plan to investigate new applications where the proposed approach for computing reverberation paths can be used to solve classical problems. Finally, I plan to perform experiments to evaluate the results of computed simulations both quantitatively and qualitatively in comparison to measured wave fields. The overall outcome of this research will be a computational framework and a suite of methods for computing general reverberation paths in general 3D models and evaluating them in interactive applications.
Throughout this project, I plan to investigate the synergies between sound and light and to apply the lessons learned from one wave phenomenon to the other. Based on historical precedent, I believe that it will be possible to develop better simulations of virtual environments by studying both sound and light together.
Educational Plan
The objectives of my educational plan are to teach students and to develop new methods for education. In particular, one special goal of mine is to enable the use of interactive virtual environment systems in the educational process. I plan to investigate this novel medium for education by designing new interdisciplinary courses that will allow students and teachers from widely varying backgrounds to learn about and experiment with virtual environment systems. I also plan to develop new educational materials (textbook, course notes, software tools, and 3D models), to mentor students (graduate, undergraduate, and K-12), and to develop outreach programs for disadvantaged people (mentoring underprivileged students and deploying handicap-assisting applications).
By both developing interactive virtual environment systems for research, and investigating their use for teaching, my research and educational activities are uniquely integrated.
2001 — 2005
Chazelle, Bernard (co-PI); Dobkin, David (co-PI); Finkelstein, Adam (co-PI); Funkhouser, Thomas
ITR/IM: 3D Shape-Based Retrieval and Its Applications
This research will investigate methods for automatic retrieval and analysis of 3D models. It will develop computational representations of 3D shape for which indices can be built, similarity queries can be answered efficiently, and interesting features can be computed robustly. Next, it will build user interfaces which permit untrained users to specify shape-based queries. This will include queries specified with text, 3D models, 2D sketching, and high-level methods based on constraints and rules. It will combine elements of computer graphics, computer vision, and computational geometry.
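One concrete instance of such a computational shape representation, developed in this line of work, is the D2 shape distribution: a histogram of distances between random pairs of surface samples, which is invariant to rotation and translation and supports efficient similarity queries by simple histogram comparison. The sketch below is illustrative only (function names and parameters are not the project's code) and assumes models are given as point samples:

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=32, d_max=2.0, seed=0):
    """D2 shape distribution: normalized histogram of distances between
    random pairs of surface samples (rotation/translation invariant)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, d_max), density=True)
    return hist

def descriptor_distance(h1, h2):
    """L1 distance between two D2 histograms; smaller means more similar."""
    return float(np.abs(h1 - h2).sum())

# Toy similarity query: samples of a unit sphere should match another
# sphere sampling better than a thin rod of the same diameter.
rng = np.random.default_rng(42)
sphere_a = rng.normal(size=(2000, 3))
sphere_a /= np.linalg.norm(sphere_a, axis=1, keepdims=True)
sphere_b = rng.normal(size=(2000, 3))
sphere_b /= np.linalg.norm(sphere_b, axis=1, keepdims=True)
rod = np.column_stack([rng.uniform(-1, 1, 2000),
                       rng.uniform(-0.05, 0.05, 2000),
                       rng.uniform(-0.05, 0.05, 2000)])

h_a, h_b, h_rod = (d2_descriptor(p) for p in (sphere_a, sphere_b, rod))
assert descriptor_distance(h_a, h_b) < descriptor_distance(h_a, h_rod)
```

Because the descriptor is a fixed-length vector, a database of models can be indexed with standard nearest-neighbor structures, which is what makes efficient similarity queries over large repositories feasible.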
Applications of shape-based query methods will include Internet search engines, computer-aided design, molecular biology, medicine, and security. In each application the researchers will work with domain experts to understand the critical elements of the 3D databases and the challenging shape queries for which new methods are required. For example, working with molecular biologists will help develop query tools for the Protein Data Bank to find macromolecules matching a given shape. These methods will aid classification of proteins for which only low-resolution electron density maps are available, and aid searches for proteins matching a specific binding site.
2004 — 2008
Li, Kai; Funkhouser, Thomas; Rusinkiewicz, Szymon (co-PI); Troyanskaya, Olga (co-PI)
NGS: Software Tools for New-Generation, Display-Centric Applications
The goal of this research project is to develop new software tools and applications for scalable display systems. Its primary focus is on methods that coordinate multiple displays, multiple users, and multiple applications to enable true display-centric computing. For coordinating multiple displays, the project will use dynamic feedback to build adaptive, layered, multi-resolution display systems and will study how to achieve integrated, continuous calibration capable of delivering high-quality information display. For coordinating multiple users, the project will develop software tools that manage information display intelligently and securely for seamless exchange of visual information. For coordinating multiple applications, the project will study how to design an adaptive infrastructure that enables multiple applications to share a scalable display efficiently.
2006 — 2010
Funkhouser, Thomas; Singh, Mona (co-PI)
SEI: New Shape Analysis Methods for Structural Bioinformatics
A complete understanding of any biological system or disease necessitates a detailed analysis of how its proteins interact with other molecules. Most methods for predicting and understanding protein function have focused on determining evolutionary relationships in amino acid sequences. However, the molecular function of a protein is also determined by its 3D structure (i.e., how atoms interact within its active sites), and thus a great deal of attention has recently been devoted to solving the 3D structures of proteins with the hope that computer algorithms can infer functional relationships between them. 3D atomic coordinates are available for tens of thousands of proteins, and the number has been increasing exponentially over the last several years. The goal of this project is to develop novel computer algorithms for analyzing protein structures, detecting similarities between them, visualizing how they interact with other molecules, and automatically providing functional classifications for them. For example, given a novel protein structure, new geometric algorithms will be used to determine the locations and shapes of its active sites. Next, the model of the structural and chemical properties of those sites will be used to search large databases for similar sites. Finally, the best matches will be aligned so that functional annotations can be transferred from the active site of one protein to another. These algorithms will not only be useful for molecular biology, but will also drive research on a broader class of computational methods for detecting features in noisy 3D data, matching shapes of complex 3D structures, and searching large repositories of 3D data. Beyond the research, the project will have impact through its interdisciplinary collaborations, educational and outreach programs, and public dissemination of information.
The project is a collaborative effort across diverse disciplines, helping it to promote cross-pollination of ideas between fields and to provide new educational opportunities for students to learn in an interdisciplinary environment. Everything developed as part of this proposal will be made freely available to the public through talks, workshops, web pages, course notes, software libraries, bibliographies, and data sets.
2007 — 2010
Funkhouser, Thomas
Symmetry Analysis of 3D Shapes and Its Applications in Computer Graphics
Symmetry is a fundamental property of 3D shape. Understanding an object's symmetries provides insight into its overall shape, its decomposition into parts, its balance of mass, and its possibilities for motion. Meanwhile, symmetry is ubiquitous in our world. Almost all man-made objects exhibit some perfect symmetry, and many organic structures are nearly symmetric and/or composed of nearly symmetric parts (e.g., the bodies of animals). For decades, however, computer graphics researchers have ignored symmetries when designing geometric processing algorithms, instead focusing upon local shape features and/or differential surface properties when manipulating surfaces. As a result, for example, surface reconstruction algorithms produce asymmetric meshes for symmetric objects, mesh compression techniques fail to take advantage of approximate symmetries, and mesh completion algorithms fill holes by extrapolating surface properties near their boundaries rather than by copying shape features from symmetric parts. Clearly, methods for analyzing and exploiting the symmetries of an object could greatly improve these and other surface processing applications.
The research is accomplishing the following: (1) investigating the theory of symmetry for 3D shapes, (2) developing analysis algorithms for characterizing symmetries of 3D shapes, and (3) demonstrating the utility of symmetry analysis in several computer graphics applications. Expected research results include new multiresolution descriptions of the approximate symmetries of an object and new algorithms that use these descriptions for remeshing, compression, completion, reverse engineering, and editing of 3D meshes. The immediate impact on computer graphics is not only new algorithms for shape analysis, but perhaps also a new way of thinking: understanding and preserving global properties of shape (e.g., symmetry) when editing and processing surfaces, rather than focusing only on local geometric properties. The broader impacts will be felt in other fields that benefit from improved symmetry analysis and in integrated educational activities.
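As a concrete illustration of what "characterizing symmetries" means computationally, the sketch below (illustrative only; not the project's actual algorithm) scores how nearly a point sampling of a surface is symmetric under reflection across a given plane through the origin. A symmetry descriptor in the spirit described above would evaluate such a measure over many candidate planes and resolutions:

```python
import numpy as np

def reflection_asymmetry(points, plane_normal):
    """Asymmetry score under reflection across a plane through the origin:
    mean distance from each reflected point to its nearest original point.
    Zero for a perfectly symmetric sampling; grows as symmetry is broken."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    reflected = points - 2.0 * (points @ n)[:, None] * n
    # Brute-force nearest neighbors; fine for small point sets.
    d2 = ((reflected[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return float(np.sqrt(d2.min(axis=1)).mean())

# A mirror-symmetric point pair vs. the same pair perturbed on one side.
symmetric = np.array([[1.0, 0.5, 0.0], [-1.0, 0.5, 0.0]])
broken = np.array([[1.0, 0.5, 0.0], [-1.0, 0.9, 0.4]])

assert reflection_asymmetry(symmetric, [1, 0, 0]) < 1e-9
assert reflection_asymmetry(broken, [1, 0, 0]) > 0.1
```

An application like mesh completion could then copy surface detail from one side of a detected near-symmetry to fill holes on the other, rather than extrapolating only from hole boundaries.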
2008 — 2013
Funkhouser, Thomas; Freedman, Michael
Collaborative Research: NeTS-ANET: A Network Architecture for Federated Virtual/Physical Worlds
This research asks how one might design a network architecture to support three-dimensional virtual worlds as a dominant application platform. The architecture is based on three key design principles. First, rather than being centralized or peer-to-peer, the architecture is based on federation: cooperative but not necessarily collaborative interaction between multiple parties. This enables providers to enforce local administrative and security policies, yet requires new support for discovery, messaging, and migration between and within domains. Second, application communication is grounded in three-dimensional coordinate spaces: objects can only communicate after being introduced through proximity. This geometric addressing decouples applications from their physical locations on hosts, and introduces interesting security protections from unwanted communication. Third, by using this communication model, the architecture can directly interface with and connect the physical world, leading to new possibilities for virtual interactions.
Much as the Internet was designed with a layered communication model, this research designs a new layered approach for virtual worlds: from a high-level object layer providing a rich programming environment for immersive virtual worlds, to the narrow waist of geometry-based communication, and down to the underlying service layer that implements computation, storage, and communication mechanisms. With backgrounds across networking, systems, and graphics, the investigators have been developing a highly extensible and personalizable virtual world system, Meru. This new project will develop the network architecture necessary to enable seamless interaction and interoperation between many different Meru-based virtual worlds.
Integrating virtual worlds is already a pressing issue and concern among providers. Research towards a unifying networked system architecture would improve these efforts and could lay the groundwork for a next-generation programming platform for the Internet. It would bridge the current divide between today's logical, host-centric networks and the emerging sensor networks of tomorrow. By incorporating existing efforts towards building an open, scalable virtual world system, the research will have impact in all of the areas where virtual worlds are already bringing change. Fundamentally, virtual worlds, even more so than the Internet, are a platform for inter-personal communication, affecting education, public services and planning, commerce, and social networks.
2009 — 2012
Finkelstein, Adam (co-PI); Fellbaum, Christiane (co-PI); Funkhouser, Thomas; Blei, David (co-PI)
Interactive Discovery and Semantic Labeling of Patterns in Spatial Data
Finding and labeling semantic patterns in large, spatial data sets is one of the most important problems facing computer scientists today. Massive spatial data sets are being acquired in almost every scientific discipline, such as medicine, geology, biology, astrophysics, and others. Finding meaningful patterns in those data is often the bottleneck to scientific discovery. The proposed research is to develop a transformative machine learning methodology, where the process of discovering semantic patterns in large spatial data sets is interactive and semi-autonomous. With the proposed tools and algorithms, the user is provided with an interactive system that shows the most likely segmentations and labelings given the information provided so far, but allows the user to provide additional information as he/she sees fit. The user might adjust a segmentation, provide a label, or specify an expected pattern. The system will adapt in real time to each of these inputs, thus adjusting its predictions throughout the data.
The broad impact of the proposed plan will be enhanced through an integrated educational and outreach plan. Besides the published research results, the field will benefit from free distribution of research and education resources, including web pages, bibliographies, software, and data sets (including augmentations to WordNet). Further broad impacts include focused workshops and courses on shape analysis, machine learning, and visualization at both the university and professional levels. Finally, diversity enhancement programs will promote opportunities for disadvantaged groups in research.
2013 — 2017
Funkhouser, Thomas
BIGDATA: Small: DA: Semantic Modeling of Cities from Scanned Data
Detailed three-dimensional models of urban environments provide critical information for many applications, including emergency response preparation, security analysis, urban planning, and augmented-reality maps. For example, if 3D models of complete cities were publicly available with detailed labels for all semantic objects (e.g., buildings, fire hydrants, fire escapes, doors, windows, trees, etc.), then fire fighters, police forces, and other emergency response teams could use them to make plans for rescue operations, taking into account possible access points, lines of sight, and risks to the neighborhood. Or, if the 3D model contained labeled representations of stop lights, traffic signs, parking spaces, store locations, mailboxes, and ATMs, then augmented reality displays could help people navigate their daily lives.
The research goal of this project is to develop algorithms to build detailed, labeled 3D models from currently available data. Several companies (e.g., Google, Nokia, Microsoft, etc.) are currently collecting photographic imagery and LIDAR data with scanners mounted on cars driving up and down streets of cities throughout the world. This data contains a vast amount of information about our world, but in a very primitive form: pixels and points. The PI is developing algorithms to analyze this raw data to build semantically labeled 3D models: 1) new methods for discovering correspondence relationships between heterogeneous data types, focusing on LIDAR, images, and 3D polygonal models found in online repositories, 2) new ways to infer surface geometry, segmentations, and labels simultaneously based on a model learned from examples, 3) new interactive systems to allow users to visualize and guide the algorithms as they operate by incorporating user input into incrementally updated solutions, and 4) data management algorithms for multiresolution storage, compression, and retrieval of massive scanned 3D data sets.
The broader goals of the project include educational programs, industrial collaboration, free distribution of software and data sets, and outreach activities. Besides the published research results, the PI will disseminate 3D models of major cities that can be used directly in applications developed by other people. He will also distribute code, benchmark data sets, and statistical models that could benefit researchers in a variety of disciplines. This work is integrated with educational programs, including interdisciplinary workshops and courses at the graduate, undergraduate, and professional levels, and diversity enhancement programs that promote opportunities for disadvantaged groups.
2015 — 2018
Funkhouser, Thomas; Xiao, Jianxiong (co-PI)
VEC: Small: Collaborative Research: Scene Understanding from RGB-D Images
This project exploits the benefits of RGB-D (color and depth) image collections, whose extra depth information can significantly advance the state of the art in visual scene understanding and make computer vision techniques usable in practical applications. Recent advances in affordable depth sensors have made depth acquisition significantly easier for ordinary users. These depth cameras are becoming very common in digital devices and help enable automatic scene understanding. The research team is developing technologies to take advantage of depth information. Besides the published research results, the research team plans to distribute source code and benchmark data sets that could benefit researchers in a variety of disciplines. This project is integrated with educational programs, such as interdisciplinary workshops and courses at the graduate, undergraduate, and professional levels, and diversity enhancement programs that promote opportunities for disadvantaged groups. The research team is collaborating closely with an industrial partner (Intel), involving interns and transferring technology to real products. The project is also applying the developed algorithms to assistive technology for the blind and visually impaired.
This research develops algorithms required to perform real-time segmentation, labeling, and recognition of RGB-D images, videos, and 3D scans of indoor environments. Specifically, the PIs develop methods to: (1) acquire large labeled RGB-D datasets for training and evaluation, (2) study algorithms to recognize objects and estimate detailed 3D knowledge about the scene, (3) exploit the object-to-object contextual relationships in 3D, and (4) demonstrate applications to benefit the general public, including household robotics and assistive technologies for the blind.
2017 — 2018
Funkhouser, Thomas
Collaborative Research: CI-P: ShapeNet: An Information-Rich 3D Model Repository for Graphics, Vision and Robotics Research
The goal of this project is to plan the development of a richly annotated repository of 3D models called ShapeNet that currently exists only in a preliminary form. ShapeNet will include 3-4 million 3D models of everyday objects in 4-5 thousand categories, in a variety of representations. Models in the ShapeNet repository will be annotated with multiple annotation types: geometric (parts, symmetries), semantic (keywords for the shape and its parts), physical (weight, size), and functional (affordances, scene context). The availability of ShapeNet data, capturing the 3D geometry of a significant fraction of object categories in the world, together with associated detailed meta-data and semantic information, will catalyze major developments in graphics, vision, and robotics by providing adequate data against which new proposed techniques and methodologies for shape or scene analysis and synthesis can be vetted, and with which machine learning algorithms can be trained. ShapeNet can be considered an encyclopedia that facilitates the creation of intelligent systems and agents capable of operating autonomously in the world, because they can have deep knowledge of that world.
While most of the ShapeNet models will be initially found on the Web, the annotations will be obtained through an active learning combination of modest human input (including crowd-sourcing), extensive algorithmic transport, and human verification. During the planning period the effort will focus on mathematical representations of the semantic knowledge associated with 3D models, as well as on a design framework for key algorithms allowing knowledge transport from one model to another. Further challenges to be addressed include the quantification of data quality issues and the specification of all the multimodal (3D, image, language) UIs and APIs needed for users to be able to exploit and search this wealth of data, or to contribute additional models and annotations to it.