1998 — 2001
Lin, Ming
POWRE: Efficient Geometric Algorithms For Computer Simulated Environments @ University of North Carolina At Chapel Hill
EIA-9806027; Lin, Ming C.; University of North Carolina at Chapel Hill
This proposal aims to establish the research activities of the PI, who is restarting her academic career after an interruption for family reasons. The proposed activities explore a relatively new research area for the PI that naturally complements her prior accomplishments. The emphasis is on the development of efficient and accurate algorithms and software systems and on the demonstration of their applications. The proposed research problems include: (i) collision detection between non-convex objects and non-linear models using higher-order bounding volumes and dynamic data structures; (ii) multi-resolution free-form deformation for modeling elastic bodies; and (iii) applications to virtual prototyping and physically-based modeling.
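To give a flavor of the culling idea behind such collision-detection algorithms, the following is a minimal sketch using axis-aligned bounding boxes. The `AABB` class and the brute-force pair enumeration are illustrative assumptions only; the proposal itself concerns higher-order bounding volumes and dynamic data structures, not this simple scheme.

```python
# Minimal sketch of bounding-volume culling, the idea underlying
# hierarchical collision detection: cheap box-overlap tests prune
# away most pairs before any exact (expensive) intersection test.

class AABB:
    """Axis-aligned bounding box given by min/max corner tuples."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def overlaps(self, other):
        # Two boxes intersect iff their intervals overlap on every axis.
        return all(a <= d and c <= b
                   for a, b, c, d in zip(self.lo, self.hi, other.lo, other.hi))

def cull_pairs(boxes_a, boxes_b):
    """Return index pairs whose boxes overlap; only these candidates
    need a primitive-level intersection test afterwards."""
    return [(i, j)
            for i, a in enumerate(boxes_a)
            for j, b in enumerate(boxes_b)
            if a.overlaps(b)]

a = [AABB((0, 0, 0), (1, 1, 1)), AABB((5, 5, 5), (6, 6, 6))]
b = [AABB((0.5, 0.5, 0.5), (2, 2, 2))]
print(cull_pairs(a, b))  # → [(0, 0)]: only the first box can touch
```

In a real system the pairwise loop would be replaced by a traversal of two bounding-volume hierarchies, which is what makes the approach scale to complex models.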
1999 — 2004
Lin, Ming
Robot Algorithms For Haptic Interaction @ University of North Carolina At Chapel Hill
Intelligent systems and simulated environments require intuitive interfaces for man-machine interaction. These may include visual, auditory, and haptic interfaces. Compared to the presentation of visual and auditory information, methods for haptic display are not as well developed. To exploit the possibility of haptic interaction for performing assembly or disassembly tasks in an electronic prototyping environment, it is imperative to develop the necessary real-time algorithms and software systems, in addition to the force/torque-feedback devices. The PI proposes to design robot algorithms for haptic interaction. Specifically, these include interactive contact determination for non-linear models and deformable bodies, and fast penetration depth computation between general three-dimensional geometric models. The PI will develop prototype software systems and integrate them with force-feedback devices for haptic interaction in virtual prototyping environments. Besides force display, the resulting algorithms and systems will also be useful for robot motion planning and dynamic simulation. The research efforts will be complemented by the development of a teaching curriculum in geometric computing for robotics and virtual prototyping, to support human resource development. In addition to the intellectual pursuit of the research goals, the PI will also interact with potential users of the proposed research to facilitate technology transition.
1999 — 2004
Lin, Ming
Interactive Haptic Simulation For Engineering Design @ University of North Carolina At Chapel Hill
This grant provides funding to investigate interactive physically-based geometric algorithms for real-time haptic simulation, namely interactive force display with efficient contact determination. The specific research issues to be investigated include: (1) design of new algorithms for real-time collision detection between complex geometric models; (2) computation of penetration depth at interactive rates for calculating contact or restoring forces; (3) improved computational efficiency of contact determination algorithms for flexible models undergoing deformation; (4) integration of all the algorithmic advances gained from the proposed research and a proof-of-concept demonstration using haptic simulation; (5) verification of their utility in other design and engineering applications; and (6) rapid dissemination of the research results by releasing public domain software packages based on the prototype software implementations and by transferring the technology to commercial vendors. The goal of this research is to improve the performance of underlying contact determination algorithms for force display by at least an order of magnitude, in order to sustain interactive haptic simulations and to enable natural manipulation of 3D CAD models. Successful completion of the proposed research can have considerable impact in promoting haptic simulation as a new interaction paradigm for three-dimensional engineering design. Other than haptic simulation for engineering design, these algorithms will also be useful for tolerance analysis and maintainability studies in virtual prototyping and design automation.
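The penetration-depth computation named in item (2) above can be illustrated for the simplest possible pair of objects, two spheres, together with a penalty-style restoring force. This is a hedged sketch: real haptic systems must handle general 3D models at kilohertz rates, and the `stiffness` constant below is an arbitrary illustrative value, not a parameter from the project.

```python
# Sketch: penetration depth and a Hooke-style penalty contact force
# for two spheres.  The force on object 2 is proportional to the
# penetration depth and acts along the contact normal.
import math

def sphere_penetration(c1, r1, c2, r2):
    """How deep sphere 2 is embedded in sphere 1 (0.0 if separated)."""
    d = math.dist(c1, c2)
    return max(0.0, (r1 + r2) - d)

def penalty_force(c1, r1, c2, r2, stiffness=500.0):
    """Restoring force on object 2 along the unit contact normal.
    `stiffness` is an illustrative constant (N/m)."""
    d = math.dist(c1, c2)
    depth = max(0.0, (r1 + r2) - d)
    if depth == 0.0 or d == 0.0:
        return (0.0, 0.0, 0.0)
    n = tuple((b - a) / d for a, b in zip(c1, c2))  # unit contact normal
    return tuple(stiffness * depth * nk for nk in n)

# Two unit spheres whose centers are 1.5 apart overlap by 0.5.
print(sphere_penetration((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # → 0.5
```

For general non-convex models this computation is far harder, which is why the abstract singles it out as a research problem.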
2001 — 2006
Lin, Ming; Manocha, Dinesh (co-PI)
Visualization: High Fidelity Virtual Touch: Algorithms, Applications and Evaluation @ University of North Carolina At Chapel Hill
Force feedback devices, or haptic interfaces, have the potential to increase the quality of human-computer interaction by adding the sense of touch. However, there are still few practical force feedback applications, due in large part to the stringent computational requirements of haptic rendering. To maintain a high-fidelity system, haptic update rates must be as high as 1000 Hz, rather than the 30 Hz updates typical of graphical displays. This is especially challenging for 6-degree-of-freedom (DOF) haptic devices, which are used to display forces and torques for arbitrary pairs of objects. This requires accurate contact determination and the computation of contact forces and torques at all collision points in less than a millisecond.
This project focuses on three aspects of high-fidelity haptic display, or "virtual touch". The first goal is to develop new geometric and physically-based algorithms that can improve the state of the art by more than an order of magnitude, beyond the expected improvements in processor speed and computing power over that time. These will be based on hybrid spatial data structures, simplification hierarchies, multi-resolution representations, bounded-error approximations, and massively parallel rasterization hardware. The second goal is to pursue applications that can benefit significantly from the use of high-fidelity 6-DOF haptic displays, including virtual prototyping of nano-structures, haptic visualization of biological interaction between molecules, maintenance analysis, and interactive modeling and painting. The third goal is the evaluation of 6-DOF haptic rendering systems as a tool for human-computer interaction. This will be done in collaboration with Boeing, Sandia Labs, and SensAble Technologies. If successful, the proposed research will provide enabling algorithms and a prototype software system for designing a high-fidelity virtual touch system.
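The 1000 Hz update rate cited above leaves roughly a 1 ms budget per haptic frame. The sketch below shows what such a fixed-rate servo loop looks like in outline; `query_force` is a placeholder of our own, not an API from the project, and a production loop would run on a real-time thread rather than relying on `time.sleep`.

```python
# Illustrative fixed-rate servo loop showing the ~1 ms/update budget
# for 1 kHz haptic rendering (vs. ~33 ms per frame at 30 Hz graphics).
import time

HAPTIC_HZ = 1000
DT = 1.0 / HAPTIC_HZ          # 1 ms budget per update

def query_force(t):
    """Placeholder for the contact-determination and force computation
    that must finish within the 1 ms budget."""
    return 0.0

def run_servo_loop(updates=5):
    next_tick = time.perf_counter()
    for i in range(updates):
        force = query_force(i * DT)
        # ... send `force` to the haptic device here ...
        next_tick += DT
        sleep = next_tick - time.perf_counter()
        if sleep > 0:
            time.sleep(sleep)  # if negative, this update blew its budget
    return updates * DT        # total servo time covered (5 ms here)

print(run_servo_loop())
```

The point of the sketch is the asymmetry: the entire contact computation for a pair of complex models must fit inside `DT`, which is what drives the algorithmic goals listed in the abstract.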
2004 — 2011
Lin, Ming De
P41
Development Small Animal Digital Subtraction Angiography
bioimaging/biomedical imaging; imaging/visualization/scanning; technology/technique development
2004 — 2010
Lin, Ming
Physically-Inspired Modeling For Haptic Rendering @ University of North Carolina At Chapel Hill
Ming C. Lin, Department of Computer Science, University of North Carolina at Chapel Hill
The sense of touch is one of the most important sensory channels and is used for object identification, data manipulation and concept exploration. Therefore, force feedback via haptic (touch-enabled) devices offers many possibilities for enhanced human-computer interaction. Extending the frontier of visual computing, this project proposes to develop physically-inspired modeling and simulation techniques for high-fidelity haptic display that will augment visual display.
The proposed research will be driven by two target applications in science and education: nanomanipulation and haptic painting. Both applications will also be used to enhance science and art education at middle and high schools.
INTELLECTUAL MERIT: This research is expected to lay the scientific foundation for an emerging paradigm of physically-based haptic interaction with virtual environments. It includes new algorithmic insights, efficient computational methodology, and system integration for two challenging applications. The underlying representations, algorithms and software systems for fast contact computation, interactive modeling of flexible objects, multi-level optimization, use of programmable graphics hardware, simulation acceleration techniques, and VR device augmentation will also offer fundamental advances for virtual environments, physically-based modeling, and scientific visualization.
BROADER IMPACT: By extending the frontier of high-fidelity haptic rendering, the proposed research can develop a significant augmentation to existing graphical display and scientific visualization. Furthermore, each proposed application has the potential of making a considerable impact on its own.
2004 — 2008
Lin, Ming; Manocha, Dinesh
GOALI: Multiresolution Algorithms For Virtual Prototyping of Massive CAD Models @ University of North Carolina At Chapel Hill
The research is expected to lay the scientific foundation for human-centric, simulation-based engineering design through a novel "multiresolution" framework that incorporates virtual prototyping. Virtual prototyping is often used to reduce design time, lower production cost, and improve the level of innovation in developing mechanical parts of varying scale: from nanometer-sized objects such as nanoscale robots, to large man-made computer-aided designs of complex systems such as airplanes, power plants, and submarines composed of millions of parts. This project focuses on advancing the fundamental understanding of the design process through the creation of novel algorithms and systems based on the "multiresolution" framework, which describes geometry, spatial arrangements, numerics, and physical simulation across different scales. Key issues to be addressed include the realization of visualization, modeling, and simulation techniques, new level-of-detail representations, and novel multiresolution algorithms for interactive display, proximity query, and physics-based simulation and manipulation of massive CAD models. To ensure the relevance of the framework and its algorithms to engineering design, tests and validation will be conducted through the virtual prototyping of highly complex or massive CAD models provided by the industrial collaborators and the GOALI partner.
With the increasing complexity of engineering design, this approach is expected to offer robust and efficient solutions to large problems by adequately modeling across scales, capturing the mutual interaction among multiple entities in mechanical, physical, or biological systems. The system is envisioned not only to reduce the time and costs associated with the design and review process of complex mechanical systems, but also to generate effective animation sequences for electronic maintenance training. Outreach to middle and high school students, built on visualization tools that let students virtually manipulate structures within a complex system such as an airplane, is expected to raise interest in engineering design education. The collaboration with Boeing will provide both the ability to validate the framework and a way to capture the interest of K-12 students through the introduction of real engineering design problems.
2004 — 2005
Lin, Ming De
P41
Optimized Radiographic Spectra For Digital Subtraction Radiography in the Mouse
2005 — 2006
Lin, Ming; Shapiro, Vadim (co-PI); Gupta, Satyandra; Regli, William; Piasecki, Michael (co-PI)
CI-Team: Exploiting Cyber-Infrastructure For Creation and Use of Multi-Disciplinary Engineering Models
This CI-Team demonstration will create a comprehensive, multi-disciplinary, engineering model to support in-silico prototyping of snake-inspired robotic systems. Snake-inspired robots have many potential applications, including those in medicine, civil engineering, search and rescue, and homeland security. This model will be created during a coordinated set of multi-disciplinary classes developed by the PIs and concurrently taught across the partner institutions and accessible via distance learning systems. The CI-Team plans an ambitious use of cyber-collaboration and education technologies to prototype a set of core courses in a curriculum for "Engineering Informatics" that spans our institutions and unites computer and information sciences with traditional engineering domains.
The team is a highly inter-disciplinary group from four universities consisting of computer scientists and engineers with the complementary expertise needed to create both the shared model and the educational deliverables. The scientific challenge to the team is to produce an engineering model that integrates semantic descriptions of robotic components, behavioral and simulation software, software for snake robot control and navigation, as well as the tools needed to perform analysis, component surrogation and mission assessment. The educational challenge is to develop course materials that are both multi-disciplinary and scientifically rich. The goal is to educate students so they can span and integrate disciplines: semantics, engineering modeling, and computational tools. This project will deeply connect different sub-fields of engineering and computer science, enabling the new inter-disciplinary trained engineers to rapidly create new snake-inspired robot designs.
BROADER IMPACTS: This project contributes to the transformation of engineering into an "informatics" discipline and broadens the interface between computer science and engineering. The CI-Team aims to establish the content of an "Engineering Informatics" curriculum around the snake robot domain and use it as a basis for integrating fundamental concepts from engineering and computer science. Further, the technical results contained in the engineering model will support two areas of major national need. First, it advances the state of the art in snake-inspired robotic systems and produces a repository for use by educators and researchers. Second, through active collaboration with partners at NIST and DOE, the team will transition concepts into ongoing standards efforts sponsored by ISO and the W3C. Finally, the development of the Engineering Informatics discipline will contribute to a cyberinfrastructure-savvy workforce and by integrating the results of this demonstration project with on-going efforts for broadened participation will also create a diverse cyberinfrastructure workforce and user community.
2006
Lin, Ming; Manocha, Dinesh
Conference Support For Edge Computing Workshop @ University of North Carolina At Chapel Hill
We request support for a workshop on Edge Computing using New Commodity Architectures. The workshop will be held at UNC Chapel Hill in May 2006. It will bring together leading researchers and developers from computer architecture, computer graphics, compilers, database and data streaming, high performance computing and GPGPU. We also expect significant participation from industry and federal agencies. We request travel support for invited speakers and graduate students.
2007 — 2009
Lin, Ming; Manocha, Dinesh (co-PI)
CI-Team Implementation Project: Collaborative Research: Cyber-Infrastructure For Engineering Informatics Education @ University of North Carolina At Chapel Hill
The objective of this project is the creation of a comprehensive, multi-disciplinary approach to engineering informatics education. The team will use the domain of biologically-inspired robotic systems as a means of engaging engineering and computer science students in the creation of physically realized systems. These systems have been shown to have important applications in medicine, civil engineering, search and rescue, and homeland security. This project will also develop and deploy the novel cyber-infrastructure and software tools needed to advance the state-of-the-art in bio-inspired robotic systems and biologically-inspired robotics education. A repository of educational materials, designs and models will be made available over the Internet and provided for use by educators and researchers around the country. In this way, this project aims to create mechanisms for education and training of multi-disciplinary engineers who are versed in the cyber-infrastructure tools and understand how they can use them to transform and harness collective human problem solving capabilities.
The project contributes to the transformation of engineering into an "informatics" discipline and tightens the interaction between computer science and engineering. Ultimately, engineering informatics will become an instrumental part of undergraduate and graduate curricula in engineering and computer science. In addition, the bio-inspired robotics domain is expected to be a source of exciting and attractive materials and demonstrations. These materials and demonstrations will be used in outreach and secondary education activities to expose students to engineering and computer science concepts and increase the participation of under-represented groups in these professions. The team plans to leverage numerous ongoing outreach and training activities at the respective institutions to maximize the impact of the project.
2009 — 2014
Lin, Ming; Manocha, Dinesh (co-PI); Bishop, Gary (co-PI)
HCC-Small: Interactive Auditory Displays @ University of North Carolina At Chapel Hill
PI: Ming C. Lin; Co-PIs: Gary Bishop and Dinesh Manocha; Department of Computer Science, University of North Carolina at Chapel Hill
An auditory display utilizes sound to communicate information to a user and offers an alternative means of visualization. By harnessing the sense of hearing, audio rendering can further enhance a user's experience in a multimodal virtual world. Acoustic realism has many areas of applicability including virtual reality, computer gaming, training systems, desktop interfaces, education, and scientific visualization.
We are conducting an ambitious research program to develop interactive auditory displays. Our goal is to develop new algorithms for physics-based sound synthesis and sound propagation for interactive applications including computer gaming, training systems, and enabling technologies. The approach involves the fusion of both geometry (for high frequencies) and physics (for low frequencies) to model sound propagation and the development of techniques for acoustic levels of detail. To this end we are developing efficient numerical algorithms based on domain decomposition and exploiting modern architecture features to further accelerate the overall performance. We are also evaluating the performance of our algorithms on different applications. In addition to acoustic simulation, our research is generating a fundamental scientific foundation and interactive performance methods for solving wave/sound propagation problems in highly complex domains that span many scientific and engineering disciplines.
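The numerical side of the sound-propagation work described above can be illustrated, in heavily simplified form, by a finite-difference time-domain (FDTD) step for the 1D scalar wave equation. This is a hedged sketch of the general technique only; the project targets far more sophisticated 3D solvers with domain decomposition and hardware acceleration.

```python
# Minimal 1D FDTD (leapfrog) update for the scalar wave equation
# u_tt = c^2 u_xx, with fixed (zero) boundaries.

def fdtd_step(u_prev, u_curr, courant):
    """One time step; `courant` = (c*dt/dx)**2 and must be <= 1
    for numerical stability (the CFL condition)."""
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + courant * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next

# A single displaced sample splits symmetrically into its two
# neighbors after one step at the stability limit (courant = 1).
u0 = [0.0] * 7
u1 = [0.0] * 7
u1[3] = 1.0
print(fdtd_step(u0, u1, 1.0))
```

Each grid cell's update touches only its immediate neighbors, which is exactly the locality that makes such solvers amenable to the parallel, architecture-aware acceleration the abstract describes.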
2009 — 2013
Mitran, Sorin (co-PI); Lin, Ming; Manocha, Dinesh; Fowler, Robert
PetaFlops Acoustic Simulation @ University of North Carolina At Chapel Hill
PI: Dinesh Manocha, Department of Computer Science, University of North Carolina. This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This project proposes to develop a scalable, petaflop computational infrastructure for acoustic simulation. The main focus will be on developing massively parallel algorithms that can exploit the computational capabilities of many-core accelerators, such as graphics processing units (GPUs), to achieve the desired performance. The main research components include: (1) development of highly accurate and low-dispersion numerical methods for solving the acoustic wave equation; (2) massively parallel algorithms for efficient numeric and geometric acoustic propagation; (3) software libraries to run millions of threads on GPU clusters to achieve petaflop acoustic performance based on sound field decomposition; and (4) acoustic analysis of complex CAD models. The broader impacts of this effort are interdisciplinary and span academia and industry, including new application software libraries for many-core accelerators.
2010 — 2014
Lin, Ming; Manocha, Dinesh; Kasik, David
GOALI: Digital Layout and Assembly of Large CAD Structures @ University of North Carolina At Chapel Hill
The research objective of this GOALI award between UNC Chapel Hill and Boeing is to develop novel robot algorithms to generate digital layouts and assemblies of large CAD structures, including the parts and their motion. The underlying research theme is to create novel planning and motion simulation algorithms that can accommodate the underlying physical and geometric complexity of the large CAD models frequently used in product lifecycle management (PLM) applications. The research will build on recent developments in algorithmic robotics, computational geometry, dynamic simulation, and parallel computing to create efficient methods to perform computations on large CAD structures. This will include novel geometric, planning and simulation algorithms for space utilization, accessibility problems, assembly and disassembly of objects, ergonomics analysis, and other applications. This research is expected to lay the scientific foundation for developing digital environments of large CAD structures that can create a communication "loop back" between design and manufacturing engineers. Furthermore, it will lead to a new set of planning and simulation algorithms that can exploit the computational capabilities of multi-core CPUs and many-core GPUs for fast computations.
If successful, this research would result in a new set of algorithms for virtual prototyping, dynamic simulation, robotics, and geometric computing. The proposed work could enable rapid digital prototyping of complex mechanical structures, lower the high costs of physical mock-ups, and minimize time lost to poor design decisions. Furthermore, it could dramatically reduce the rework that may otherwise be necessary during manufacturing. The PIs plan to release new software libraries over the WWW and to expose these ideas to a broader audience by organizing workshops and tutorials. The plausible animations generated by the simulation and visualization tools could have broad appeal to K-12 students and can attract students from other areas to exploit the potential of cyber-infrastructure.
2012 — 2014
Lin, Ming
EAGER: Interactive Reconstruction and Visualization of Metropolitan-Scale Traffic @ University of North Carolina At Chapel Hill
Traffic congestion is a global challenge. Besides the obvious energy and environmental impacts, traffic congestion imposes tangible costs on society. It is unlikely that traditional physically-centered mitigation strategies by themselves will be successful or sustainable in the current economic and environmental climate. Numerous strategies have been proposed to construct Intelligent Transportation Systems (ITS) by incorporating sensing, information, and communication technologies in transportation infrastructure and vehicles. In this EAGER proposal, we present an early-concept exploration of an innovative and transformative approach for ITS. We envision that this exploratory research could advance the next generation of ITSs by introducing tightly integrated real-time traffic simulation, estimation, and visualization for traffic management. We are developing novel hybrid methods for real-time flow estimation, traffic reconstruction and visualization, as well as designing GPU and many-core algorithms to accelerate the overall performance.
If successful, this research could enable adaptive route planning for vehicle guidance and navigational aid to alleviate traffic congestion through an algorithmic lens. The proposed unified framework also has the potential to provide computational advances for diverse applications, including regulating traffic, improved urban planning, transportation system design, virtual tourism, education, entertainment, surveillance, and emergency response. The set of pedagogical and outreach activities complement and extend the research impact through integrated education-research programs and effective dissemination of research results.
2013 — 2017
Lin, Ming; Fuchs, Henry (co-PI); Alterovitz, Ron (co-PI); Frahm, Jan-Michael (co-PI); Manocha, Dinesh
II-New: A Robot Testbed For Real-Time Motion Strategies and Autonomous Personal Assistants @ University of North Carolina At Chapel Hill
This infrastructure proposal supports the acquisition of a personal robot and a high-end multi-GPU workstation to develop a new robot testbed for designing and evaluating the next generation of parallel robot algorithms and open-source software systems on modern commodity computing platforms. The robot platform will be based on a Meka M1 robot, a state-of-the-art robot with compliant, dexterous arms plus camera and range sensors. This robot will be used to develop a new set of motion strategy and planning algorithms and to evaluate the capabilities of the robots for two driving applications: (1) assisting older adults and people with disabilities with activities of daily living and (2) active tele-presence, giving people in a remote environment the ability to physically interact with people in the robot's environment.
This robot testbed could lead to novel technologies for the development of personal robots for assistance with tasks of daily living and active telepresence. These new capabilities could significantly improve the quality of life for the elderly, people with disabilities, and other individuals. The development of real-time motion strategies could also benefit other areas, such as surgical simulation, CAD/CAM, virtual prototyping, and virtual reality environments. The software libraries developed under this project would be made widely available through public-domain release for research and educational activities across science, engineering, and medical domains. The research team will also integrate research with education, reach out to under-represented groups via programs such as the IBM-sponsored Girls' summer camps, actively involve undergraduates in the proposed research, and organize workshops on many-core computing for real-time motion strategies. Finally, the team expects the new robotics curriculum enabled by the proposed equipment acquisition to help increase enrollment in Computer Science.
2013 — 2019
Lin, Ming
CGV: Small: Interactive Sound Rendering For Large-Scale Virtual Environments @ University of North Carolina At Chapel Hill
Auditory experience is an integral part of our daily life. Our perception of sound affects how we interpret and respond to various events around us. Overall, interactive modeling and simulation of sound effects and auditory events can significantly enhance numerous scientific and engineering applications, and also support more intuitive human-computer interaction for desktop and mobile applications. It also offers an alternative means to visualize datasets with complex characteristics (multi-dimensional, abstract, conceptual, spatial-temporal, etc.). Yet despite the fact that hearing is one of our dominant senses, sound rendering has not received as much attention as visual rendering to better serve as an effective communication channel for human-computer systems, and interactive audio rendering still poses major computational challenges. In this project, the PI focuses on rendering of aural effects, with attention to a greater correlation between sound and visual rendering, to communicate information (events, spatial extent, physical setting, emotion, ambience, etc.) to a user in a virtual world and to thereby increase the user's sense of presence and spaciousness while improving his/her ability to locate sound sources. The PI's goal is to make radical advances in interactive sound rendering and application-specific auditory interaction techniques in order to achieve high-fidelity auditory interfaces for large-scale virtual reality. In particular, she will address the computational bottlenecks in example-guided, physics-based sound synthesis, develop new hybrid algorithms for creating realistic acoustic effects in complex, dynamic 3D virtual environments, demonstrate the techniques on acoustic walkthroughs for a variety of applications, and evaluate the resulting auditory systems and their impact on target applications.
The work will build upon the PI's prior accomplishments to make several major scientific advances that will significantly extend the state of the art in auditory displays and human-centric computing. Project outcomes will include new hybrid acoustic algorithms for realistic sound effects, novel example-guided physics-based sound synthesis, innovative applications of auditory displays, and better understanding of human auditory perception.
Broader Impacts: Applications of interactive sound rendering enabled by this project will span a wide variety of domains, including assistive technology for the visually impaired, multimodal human-centric interfaces, immersive teleconferencing, rapid prototyping of acoustic spaces for urban planning, structural design, and noise control. Project outcomes, including scientific advances and software systems, will be disseminated through websites, publications, workshops, community outreach, and other professional contacts. In addition to acoustic simulation, this research will ultimately offer fundamental scientific foundations for solving wave/sound propagation problems in complex domains for seismology, geophysics, meteorology, engineering design, urban planning, etc.
2015 — 2019
Lin, Ming
EAGER/Cybermanufacturing: Modular System Design For Cybermanufacturing of Customized Apparel @ University of Maryland College Park
The apparel industry is of critical importance to the US and worldwide economy in terms of investment, revenue, trade, and employment. Despite economic uncertainties and fluctuations, the global apparel industry continues to grow at a healthy pace and is expected to reach approximately $3.2 trillion in 2015, with an estimated annual growth rate in excess of 4 percent. The U.S. apparel market is the largest in the world, comprising about 28 percent of the global market. While customized apparel is highly desirable, customized clothing currently comprises only 2 to 3 percent of the apparel market, due to high manufacturing costs, including significant human labor. This EArly-concept Grant for Exploratory Research (EAGER) project plans to design an end-to-end system architecture and a proof-of-concept prototype application software for affordable individualized clothing and personalized manufacturing. The resulting approach can facilitate high-quality yet low-cost individualized apparel, enable innovative product design and/or manufacturing, and potentially transform the fashion industry. The resulting scientific and technological advances have the potential to improve the standing of the US apparel manufacturing industry in the world market. Complementing these research goals, the team will introduce a new graduate course on cyber-manufacturing, release example software systems, and organize workshops related to cybermanufacturing. The multi-disciplinary curriculum development and cyber-engineering research training, featured through departmental outreach and University science fairs, can also help attract underrepresented groups, especially women and minority students, to computer science and engineering.
This early-concept exploration focuses on providing the novel computational framework and software system architecture required to solve the challenging problem of cyber-manufacturing customized apparel, offering an alternative, clean, green, and resource-efficient approach. It will feature a seamlessly integrated system that provides real-time, customized design of apparel with human-in-the-loop prototyping, virtual testing, and intelligent manufacturing. The prototype system will consist of new, simple-to-use 3D body measurement systems on portable devices, real-time pattern selection, interactive cloud-based design optimization, reliable visual inspection, predictable virtual try-on, and adept fabrication with different fabric materials. The overall project contributes to building a scientific foundation, system framework, and computational principles for integrating different constitutive models, linking the computational processes of low-cost sensing, simulation analysis, and design optimization to the agile, rapid cybermanufacturing of a customized product, thereby providing cross-cutting scientific advances.
|
0.946 |
2019 — 2022 |
Klatzky, Roberta; Lin, Ming |
N/A |
CHS: Small: Audio-Visual Reconstruction For Immersive Virtualized Reality @ University of Maryland College Park
Maintaining the sense of presence is a major challenge in immersive virtual environments. An important aspect of immersion is the feeling of cohesiveness between different senses, including the visual and auditory; for example, an object that looks like wood should also sound like wood. Sound synthesis can improve a user's sensory cohesion when interacting with objects, but it requires accurate real-world material parameters. Much prior work in computer vision has focused on acquiring the geometric shape and visual characteristics of objects; the resulting point clouds and images can assist in more accurate recovery of audio parameters for sound synthesis, along with acoustic scattering and absorption properties for sound rendering and propagation. A hypothesis of this research is that, conversely, auditory metrics can also assist in determining an object's geometry, including holes and occlusions, in a manner analogous to sonar detection (although 3D geometry reconstruction is far more challenging than object detection). The audio-visual reconstruction enabled by this project will have broad impact across many domains, including assistive technology for persons who are visually impaired, multimodal human-centric interfaces, immersive teleconferencing, rapid prototyping of acoustic spaces for urban planning, structural design, and noise control, to name just a few. Project outcomes, including scientific advances and software systems, will be disseminated through websites, publications, workshops, community outreach, and other professional events.
This project explores a novel paradigm of audio-visual reconstruction of real-world scenes, where audio cues are used to guide the classification of objects and materials for 3D geometry reconstruction. At the same time, the visual information can be used to initialize and accelerate the identification of acoustic material parameters. Some of the major research challenges that will be addressed include audio-guided 3D model reconstruction, design of audio-visual neural networks for material and object identification, learning-based acoustic material classification of a large physical or virtual space, and optimization-based acoustic material refinement using geometric and wave-based methods. Perceptually-grounded evaluation and validation of the new methods and applications will be performed.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
|
0.946 |