2001 — 2008
Hart, John (co-PI); Ebert, David; Rheingans, Penny (co-PI); Marcum, David (co-PI); Gaither, Kelly
ITR/AP+IM: Procedural Representation and Visualization Enabling Personalized Computational Fluid Dynamics
Computer power has increased dramatically over the past decade, allowing computational fluid dynamics (CFD) researchers to simulate many types of complex flow more accurately. These simulations have enabled great leaps forward in the design and safety of ships, airplanes, automobiles, and other vehicles. However, this new power has also yielded terabytes of data, and CFD researchers now face a very difficult task in trying to find, extract, and analyze important flow features (e.g., time-varying vortices, shock waves) buried within these monstrous datasets. In contrast to the explosive growth in computer power, visualization tools for very large datasets have evolved only modestly and offer little help with these tasks. In particular, since detailed visualization of such large datasets is impractical, CFD researchers must work at a very cumbersome, low level to dice their datasets into workable pieces.
CFD researchers desperately need new techniques that simplify and automate the iterative process of finding the appropriate portion of their dataset. They need a system that will allow the user to articulate appropriate types of features of interest, provide a compact representation of those features, and effectively visualize the feature information locally. The system will have to overcome the challenges of loading a sufficient portion of the dataset over a network connection into a desktop machine, mapping the entire dataset to a visual representation, and rendering the results at interactive rates.
This project will attack these CFD visualization problems by developing techniques for creating and using a procedural abstraction for a dataset. The major research objectives are to:
1. Detect features (e.g., shocks) in complex flows using topological operators.
2. Characterize the data relative to these features using a procedural representation consisting of implicit models and free-form deformations.
3. Adapt the procedural representation to the appropriate level of detail using multi-resolution techniques.
4. Encapsulate domain-specific knowledge as metadata to explore these extremely large datasets.
5. Visualize the data directly from the procedural representation.
6. Verify the accuracy of the procedural representation by tracking approximation error.
7. Apply these techniques to the large-scale computational flow simulation problems currently studied at Stanford and Mississippi State University.
The resulting system will allow CFD researchers to work more effectively by interactively exploring their data to pinpoint the features of interest. Moreover, the results of this project will provide solutions not only for CFD researchers, but also for a wide variety of other visualization challenges and applications. The project's main goal is to develop techniques that allow visualization exploration, feature detection, extraction, and analysis at a higher, more effective level through the use of procedural data abstraction and representation.
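The core idea behind objectives 2 and 6, storing only the parameters of an implicit model and reconstructing the field on demand while tracking the approximation error, can be sketched in a few lines. The toy Gaussian field and ellipsoid parameters below are invented for illustration; they are not the project's actual feature models.

```python
import numpy as np

# Toy scalar field: a Gaussian "feature" sampled on a 16^3 grid.
axes = [np.linspace(0.0, 1.0, 16)] * 3
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)   # (16, 16, 16, 3)
center, radii = np.array([0.5, 0.5, 0.5]), np.array([0.30, 0.20, 0.20])
r2 = np.sum(((grid - center) / radii) ** 2, axis=-1)
data = np.exp(-r2)                                # stand-in for a large dataset

# Procedural abstraction: keep only (center, radii); reconstruct on demand.
def reconstruct(points, center, radii):
    r2 = np.sum(((points - center) / radii) ** 2, axis=-1)
    return np.exp(-r2)

approx = reconstruct(grid, center, radii)
error = float(np.max(np.abs(data - approx)))      # tracked approximation error
```

Here the toy field is exactly representable, so the tracked error is zero; for real CFD data the implicit-model parameters would be fitted to detected features and the residual error would be non-zero.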
2001 — 2005
Maciejewski, Anthony; Hirleman, Edwin (co-PI); Tan, Hong; Ebert, David; Pizlo, Zygmunt (co-PI)
Haptic Texture Perception and Rendering For Personal Robotics
The proposed work focuses on human-robot interaction, namely on the robot physically sensing the human hand. Specifically, the PIs will study the microstructure (texture) of the contact surfaces between a robot and a human hand, to infer the perceptual dimensionality of haptic texture sensing (perceptual model), and establish the mapping of relevant spaces. Methods for producing intuitive and efficient synthetic textures will be investigated. Rendering algorithms will be developed for synthesizing textures with desired perceptual qualities. The work is expected to contribute to various areas of haptic perception, texture studies, and multimodal rendering of information.
2002 — 2005
Hoffmann, Christoph; Sameh, Ahmed (co-PI); Ebert, David; Grama, Ananth; Bottum, James
MRI: Acquisition of Equipment for Purdue Envision Center for Data Perceptualization
EIA-0216131; Christoph Hoffmann; James Bottum; David S. Ebert; Ananth Grama; Ahmed H. Sameh; Purdue University
This proposal, developing techniques to effectively utilize the information capacity available to human comprehension, aims at acquiring the visualization, sensing, and haptics infrastructure required for a new center. The center, connected to ultra-high-speed network hubs, cable broadcast and high-performance computing facilities, and on-campus technology incubators, will support applications' needs with a spectrum of graphics, visualization, and systems research infrastructure, as well as provide development, technology transfer, education, and outreach. The infrastructure includes 3D and next-generation ultra-high-resolution displays, and sensing and haptic devices. The project addresses the challenges posed by the devices with respect to perceptualization techniques, infrastructure for supporting data and processing rates for effective use, and integration of applications into the environment. Educational and outreach efforts include a comprehensive graphics curriculum built around the facility, minority recruitment and retention efforts, national visualization fora, a series of symposia and workshops, development of online educational material, public-domain software, and use of the Access Grid as a vehicle for dissemination and collaboration. Moreover, the I-Light high-speed optical fiber infrastructure (10 Gb/sec) will be connected with the facility.
2002 — 2007
Ebert, David
Visualization: Advanced Weather Data Visualization
The atmospheric science community requires visualization of observed, measured, and simulated data for accurate analysis of the atmosphere and improved weather prediction. Unlike many scientific communities, weather observers and atmospheric scientists rely heavily on important visual cues in the atmosphere to determine the potential severity of many storms. However, current state-of-the-art weather visualization systems, such as Vis5D, VisAD, and D3D, lack important visual information that is crucial for atmospheric scientists to fully understand the development and evolution of weather systems. Recognizing the importance of these visual cues, this project will significantly enhance the visualization of weather data through the development of innovative software techniques that will provide more accurate and effective visual representations of weather data. Simple visualization practices, such as depth cueing, isosurface texturing, volume shading, shadows, and correct natural color effects (such as sunlight), are absent in current weather data visualization software. While advanced computer graphics applications (e.g., movie production) have effectively used these techniques for some time, they have yet to be applied in a robust way to weather data. In this project, we will not only fill this gap to create improved, visually accurate weather data visualization, but also increase the quantity and clarity of the information conveyed by the resulting visualizations. Using mature numerical weather prediction software, the Advanced Regional Prediction System (ARPS), to generate numerically simulated severe weather events, new software techniques will be developed to enhance the visualization of this data and begin a new era in weather data visualization.
Beyond the current capabilities of standard isosurfaces, scalar volume renderings, and two-dimensional images lie important rendering capabilities for weather visualization, such as shaded volumes, shadows, light transport, and simulated natural cloud modeling. In this project, we will develop, enhance, and apply these techniques to atmospheric data in ways that have not yet been attempted. The primary goal of our research is to produce visually accurate images of weather model data that will provide more accurate information than current methods and use the same cognitive model and analysis process that forecasters already use, allowing them to increase their effectiveness. We will additionally develop techniques to effectively incorporate non-visual data and allow the selective visualization of visual and non-visual weather data to enable better understanding of the relationships between these variables and quantities. Our goal is to develop these improved techniques while also allowing interactive exploration of the observed, measured, and model data. Through the use of programmable graphics hardware with three-dimensional texture mapping, we will implement techniques for interactive, visually accurate weather visualization with low-albedo illumination, physics-based atmospheric scattering and attenuation, and volumetric shadowing. We will also implement slower high-albedo illumination models at coarser resolutions to give approximate multiple-scattering effects and utilize this scattering information in the illumination calculation per pixel fragment through three-dimensional texture-mapping hardware.
We will use perceptually motivated mapping of non-visual weather quantities (e.g., temperature, dewpoint, wind, atmospheric pressure, vorticity) to glyphs, particles, and isosurfaces to provide more information in an easily understandable manner, extending our previous work in perceptually motivated glyph rendering, fast isosurface rendering, and volume illustration. Given the capabilities of current graphics hardware, we will not be able to produce truly visually accurate images and animations of time-varying atmospheric data for at least the first half of the project, although we expect to be able to produce good approximations at interactive rates. We also plan to incorporate simple key-frame recording tools into the visualization system for off-line generation of atmospheric visualizations. The weather models produced contain multiple variables at each spatial location. By employing scientifically based combinations of these variables, it is possible to localize specific features contained in these models. We will extend our preliminary work in the development of multi-dimensional transfer function methods for multivariate data to effectively convey information from this complex model data. This improved interactive weather visualization system will increase the effectiveness of atmospheric analysis, improve severe storm forecasting, and enhance the formulation, parameterizations, and physics of numerical weather prediction models. Additionally, it will improve the training of weather observers and atmospheric science students (both undergraduate and graduate), and provide understandable animations to help in basic weather education at the K-12 level. The ultimate goal of this research is to produce a visually accurate, interactive rendering of a numerical severe thunderstorm simulation, thereby enhancing the ability of both the scientist and the general user to discover and explore atmospheric processes in an unprecedented way.
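A multi-dimensional transfer function of the kind described above can be sketched as a 2D lookup table indexed by scalar value and gradient magnitude. The table contents below (opacity rising with gradient magnitude so boundaries stand out, color ramping with value) are illustrative choices, not the project's calibrated functions:

```python
import numpy as np

def apply_transfer_function(value, grad_mag, tf):
    """Look up RGBA from a 2D transfer-function table indexed by
    (scalar value, gradient magnitude), both normalized to [0, 1]."""
    n_v, n_g, _ = tf.shape
    i = np.clip((value * (n_v - 1)).astype(int), 0, n_v - 1)
    j = np.clip((grad_mag * (n_g - 1)).astype(int), 0, n_g - 1)
    return tf[i, j]

# Illustrative 64x64 table: opacity (A) rises with gradient magnitude so that
# boundaries (e.g., cloud edges) stand out; color ramps with the scalar value.
n = 64
v, g = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
tf = np.stack([v, 0.5 * np.ones_like(v), 1.0 - v, g], axis=-1)  # R, G, B, A

rgba = apply_transfer_function(np.array([0.0, 1.0]), np.array([0.0, 1.0]), tf)
```

On graphics hardware the same table would live in a texture, with the lookup performed per pixel fragment as the paragraph describes.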
2003 — 2008
Ebert, David; Tan, Hong (co-PI)
Quantifying and Increasing Information Transmission With Data Perceptualization
This project will develop enabling perceptualization tools to quantify and dependably increase the communication of information through perceptual channels using an information theory framework. New perceptually tuned volume visualization and haptic rendering techniques for perceptually effective communication of both scalar and vector data will be developed. The appropriateness and utility of advanced volume rendering and shading techniques for efficient information transmission, and the most effective combination of visual and haptic modalities based on data variable characteristics, will be determined. User studies will allow measurements to qualify and quantify information transmission for each of these components. Close work with researchers in severe storm prediction and cytoskeleton modeling will be used to verify the utility of the work and produce more effective tools for biology and atmospheric science researchers.
This fundamental advancement in perceptualization techniques will have a dramatic impact on many scientific fields, including astrophysics, biology, computational fluid dynamics, medicine, meteorology, nanotechnology, and seismology. The improvement in data perceptualization can also be directly applied to applications in information visualization, such as data mining, digital libraries, corporate management, financial data analysis, network intrusion detection, and homeland security.
This research will also have a broader impact on the education of the general public, undergraduate engineering students, and K-12 students through the development of interactive learning modules and demonstrations that allow them to see and feel thunderstorms, tornadoes, cellular cytoskeletons, and other scientific data.
2005 — 2010
Lasher-Trapp, Sonia (co-PI); Ebert, David
Collaborative Research: An Advanced Interactive Multifield, Multisource Atmospheric Visual Analysis Environment
This project will develop a novel system to investigate and analyze many important aspects of cumulus cloud dynamics, cloud evolution, and precipitation formation to an extent that has previously been impossible. Clouds and precipitation affect our daily lives, personal safety, commercial decisions, and our future sustainability on Earth. Clouds and precipitation are important at all scales: local, state, national, and global. For example, clouds influence the daily maximum and minimum temperatures over our homes, and they modulate the global temperature by affecting the amount of incoming solar radiation and outgoing longwave radiation. As the inhabitants of Earth become increasingly concerned about global warming and climate change on global and regional levels, it is necessary to understand the roles of clouds and precipitation in the Earth system in order to predict the future state of our planet.
However, understanding and predicting atmospheric phenomena are very difficult tasks that require measuring and modeling properties on a wide variety of scales (cloud, storm, mesoscale, global), fusing computational model data with measured data, and simultaneously handling hundreds of scalar and vector fields that vary over time. Current tools for atmospheric visualization are not capable of integrating these various data sources, of communicating the complex three-dimensional, time-varying information necessary to accurately understand and predict atmospheric events, or of integrating visual representations into the scientific analysis and discovery process.
This project will provide a fundamental advance in visualization and interaction techniques to solve these multiscale, multifield, data fusion, analysis, time-critical decision making, and interaction problems. These new multiscale, multifield, atmospheric visualization tools will: incorporate novel, effective, photorealistic and illustrative multifield visualization techniques; fuse observational and model data; improve the understanding of cloud dynamics, cloud evolution and precipitation formation; create effective multiscale visual representations; be rapidly deployed for research, training, and education; and produce an environment for actionable, comprehensive and efficient visual analysis.
Both computer science and atmospheric science research challenges addressed in this project will benefit other fields by:
1. Improving understanding of cumulus entrainment and warm rain formation, leading to better parameterizations in weather forecasting models and possibly global climate models.
2. Improving training of students and atmospheric scientists to perform their science in three-dimensional environments.
3. Unifying access to co-registered model and measured data across multiple scales, greatly improving the understanding of the atmosphere, and advancing atmospheric models and weather prediction.
4. Creating a fused, comparative visual analysis environment to reduce the ambiguity inherent in the use of a variety of data sources by calibrating multiple measured and simulation data sources.
5. Creating a physically plausible, parameterized database of canonical cloud models for use in atmospheric science research, rendering research, illumination simulation and validation (e.g., headlamp visibility in various weather conditions), and the visual effects industry.
6. Developing a new architecture and visualization tools for large-scale, multiscale, multifield data integration, fusion, analysis, and experimentation for use by the larger atmospheric science community.
7. Developing modules for educating high school and undergraduate students about the principles of cloud and precipitation formation.
The techniques to be developed will significantly change the state-of-the-art of visualization and large-scale data analysis, and have a dramatic impact on many fields using multifield, multiscale data, including computational fluid dynamics, biology, medicine, astrophysics, and nanoscale-microscale integration. Advanced information communication through advanced visual analysis tools will increase the rate of scientific discovery by improving the effectiveness of scientists and forecasters.
2009 — 2014
Cason, Timothy (co-PI); Ebert, David; Hummels, David (co-PI); Samak, Anya
TLS - Applied Visual Analytics For Economic Decision-Making
Scientists have discovered that individuals are often unable to make optimal decisions when problems are complex, due to limitations on cognitive abilities. This interdisciplinary project employs visual analytics as a transformational analytical tool in economics. The investigators use visual analytics to improve decision making and identify key motivations in knowledge creation in various economic problems. The project's suite of tools allows users to interactively explore datasets and decision spaces, as well as compare alternate hypotheses and develop new hypotheses. Further, keystrokes and information pertinent to understanding decision-making and knowledge generation are recorded, allowing the investigators to make predictions about the decision-making process on a broad scale and providing guidance for theoretical models of decision-making. This is the first thorough investigation of the value of visual analytics for economic decision-making.
Intellectual Merit: This project brings together a team of scientists from economics, electrical and computer engineering, and cognitive science, fields that are rarely linked. The fundamental objective of this three-year project is to improve individual and group economic decision making through the introduction of visual analytics as a necessary tool for dealing with complex information sets. The project's second objective is to quantify the effectiveness of visual analytics for decision making. Visual analytics has emerged as an important approach to data analysis in many fields, such as medicine, business, and the physical sciences, and the investigators are the first to quantify its value for decision-making using rigorous experimental methods. The final objective is to develop a unique suite of visual analytics tools to help economists and policy-makers analyze large datasets.
Broader Impact: The use of visual analytics for economic decision-making is extremely beneficial to policy makers. Use of these tools should have an immediate and positive impact on the capacity to analyze complex economic datasets. These tools can also be used in many fields with problems in analytical reasoning. The visual analytics tools resulting from the project will be made available online for classroom use, which will have a broad impact on education. Visual analytics tools are unique in that they are simple and captivating enough for K-12 students, while also being helpful to students at the undergraduate and graduate levels. The project will use over 650 undergraduate student subjects drawn from a large and diversified student population and will provide these students with important exposure to modern research methods. The VSEEL laboratory at Purdue has an excellent record of involving members of underrepresented minority groups at both the undergraduate and graduate levels, and this project is expected to continue this tradition. Over 40 percent of these student subjects will be women and about one-third will be underrepresented minorities. Based on past experience, we expect that at least one-half of the Ph.D. student researchers will be women and/or minorities.
2011 — 2013
Ebert, David; Mcdonald, David (co-PI)
FODAVA II - The Science of Interaction Workshop
Abstract - Ebert (FODAVA II Workshop)
Society, as well as the science and engineering communities, is experiencing unprecedented growth in our capability to generate and access data. Turning this data deluge into usable, actionable information is a challenging, necessary task that is crucial to effective decision making, scientific discovery, and engineering advances. This growing need led to the development of the field of visual analytics. Key findings over the past years have illustrated that collaboration and interaction are key components that complete the integrated computational-human decision-making loop. This occurs at many levels, from individual manipulation of data representation, to interactive cognitive discovery combined with automated analysis, to coordinative and collective interactive analysis among groups of individuals. Therefore, a research agenda for the "science of interaction" is needed that will support ubiquitous and collaborative analysis and discovery utilizing new, transparent interaction tools.
This workshop will help define the research topics within this Science of Interaction for data and visual analytics. The workshop will gather leading researchers from the variety of disciplines that underpin this new topic. Focal topics will be (1) ubiquitous, embodied interaction; (2) capturing user intent to guide the analytical process; (3) knowledge-based interfaces based on visual cognition and machine reasoning; (4) effective collaboration and collaboration tools; (5) principles of design, perception, and usability; and (6) composability and integration of tools. The main outcome of the workshop will be the workshop report defining a research roadmap for the Science of Interaction for Visual and Data Analytics, as well as a summary paper to be submitted to IEEE Computer Graphics and Applications' Visualization Viewpoints or IEEE Computer magazine.
2015 — 2016
Butzke, Christian; Owens, Phillip; Crawford, Melba (co-PI); Ebert, David; Peroulis, Dimitrios (co-PI)
FEW: Technology and Information Fusion Needs to Address the Food, Energy, Water Systems (FEWS) Nexus Challenges
This proposal supports a multidisciplinary workshop bringing together academic researchers, corporate technology providers, and agricultural producers to define research challenges and a research roadmap addressing the following major FEWS challenges:
1) Developing novel targeted remote-sensing and in-situ sensing technology that can be practically fielded and used in food and water system management.
2) Developing novel integrated hydrology, soil, microclimate, and plant/agricultural production models that interact accurately and across traditional scales for understanding local, regional, and national impacts.
3) Turning this developing and pending FEWS data deluge into usable, actionable information for agricultural producers, local and regional decision makers, and citizens.
The workshop addresses the emerging issues in the food/water/energy system throughout the diverse geography of the United States and over various crops and environmental conditions, in order to better understand, model, and ultimately devise solutions for the changes to the FEWS system. The solution must be multifaceted and multidisciplinary in order to incorporate sensing, hydrology, visual analytics, and the potential for increased climate change. The workshop will generate a report and other artifacts that will lead to research into solving these challenges and have an impact on scientific fields including sensing technology, hydrology, soil science, climate, data fusion, analysis, visualization, and data-driven decision making, as well as on agricultural production, local and regional economies, sustainability, and planning.
2019 — 2020
Ramani, Karthik; Ebert, David; Peppler, Kylie; Zhang, Song; Redick, Thomas (co-PI)
Convergence Accelerator Phase I (RAISE): Skill-Learn: Affordable Augmented Reality Platform For Scaling Up Manufacturing Workforce, Skilling, A
The NSF Convergence Accelerator supports team-based, multidisciplinary efforts that address challenges of national importance and show potential for deliverables in the near future.
The broader impact/potential benefit of this Convergence Accelerator Phase I project will be immediately applicable to the manufacturing sector, which has a multiplier effect on the economy and jobs. Automation is splitting the American labor force into two worlds: a relatively small number of highly educated professionals earning good wages, and less-educated workers who are left with businesses that pay low wages. Although we have had technology breakthroughs, the overall productivity growth is slow, partly due to a workforce that lacks critical new competencies, such as procedural instruction learning, digital fluency, and other essential skills. The current and future workforce needs to be geared up for a culture of constant change. Companies have been relying on the age-old one-on-one Worker Apprenticeship model to train their new workforce. However, the recent need for larger-scale and rapid training has created a bottleneck in terms of time and cost, especially for small and medium enterprises (SMEs). This research team, comprising mechanical and electrical engineers, psychologists, computer scientists, and education researchers, will work toward accomplishing a goal of creating a scalable, low-cost solution for (re)skilling the workforce. In order to address this (re)skilling challenge, we propose to emulate the worker apprenticeship and develop a new low-cost and flexible way for SMEs themselves to author in Augmented Reality (AR). By overlaying digital content over the physical world, we will address issues of worker training that SMEs continue to face. We plan to break down the problem of worker (re)skilling into three categories: Authoring: knowledge transfer; Training-Learning: system scaling for consumption; and Feedback: providing active feedback to users.
AR has been shown to be a reliable mode of instructional training, improving speed and reliability, minimizing errors, and reducing the cognitive load of the user, especially for spatially situated instructions. However, SMEs are often unaware of AR, reluctant to adopt it, and lack access to it because of the high overhead costs, unknown returns, and expertise required to design, create, and maintain AR content. We propose to develop technologies that enable experts in the SMEs to create content on their own and provide a smooth transfer of knowledge. The ease of creation of AR, and the elimination of dependency on programming experts, will translate to a reduction in AR development time and cost.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2020 — 2021
Ebert, David; Lu, Yung-Hsiang; Barbarash, David; Zakharov, Wei
Collaborative: RAPID: Leveraging New Data Sources to Analyze the Risk of COVID-19 in Crowded Locations
The goal of this project is to create a software infrastructure that will help scientists investigate the risk of the spread of COVID-19 and analyze future epidemics in crowded locations using real-time public webcam videos and location-based services (LBS) data. It is motivated by the observation that COVID-19 clusters often arise at sites involving high densities of people. Current strategies suggest coarse-scale interventions to prevent this, such as cancellation of activities, which incur substantial economic and social costs. More detailed, fine-scaled analysis of the movement and interaction patterns of people at crowded locations can suggest interventions, such as changes to crowd management procedures and the design of built environments, that maintain social distance without being as disruptive to human activities and the economy. The field of pedestrian dynamics provides mathematical models that can generate such detailed insight. However, these models need data on human behavior, which varies significantly with context and culture. This project will leverage novel data streams, such as public webcams and location-based services, to inform the pedestrian dynamics model. Relevant data, models, and software will be made available to benefit other researchers working in this domain, subject to privacy restrictions. The project team will also perform outreach to decision makers so that the scientific insights yield actionable policies contributing to public health. The net result will be critical scientific insight that can generate a transformative impact on the response to the COVID-19 pandemic, including a possible second wave, so that it protects public health while minimizing adverse effects from the interventions.
We will accomplish the above work through the following methods and innovations. LBS data can identify crowded locations at a scale of tens of meters and help screen for potential risk by analyzing the long-range movement of individuals there. Worldwide video streams can yield finer-grained details of social closeness and other behavioral patterns desirable for accurate modeling. On the other hand, the videos may not be available for potentially high-risk locations, nor can they directly answer "what-if" questions. Videos from contexts similar to the one being modeled will be used to calibrate pedestrian dynamics model parameters, such as walking speeds. Then the trajectories of individual pedestrians will be simulated in the target locations to estimate social closeness. An infection transmission model will be applied to these trajectories to yield estimates of infection spread. This will result in a novel methodology to include diverse real-time data into pedestrian dynamics models so that they can quickly and accurately capture human movement patterns in new and evolving situations. The cyberinfrastructure will automatically discover real-time video streams on the Internet and analyze them to determine the pedestrian density, movements, and social distances. The pedestrian dynamics model will be reformulated from the current force-based definition to one that uses pedestrian density and individual speed, both of which can be measured effectively through video analysis. The revised model will be used to produce scientific insight to inform policies, such as steps to mitigate localized outbreaks of COVID-19 and for the systematic reopening, potential re-closing, and permanent changes to economic and social activities.
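The density-and-speed formulation mentioned above is commonly captured by a "fundamental diagram" relating local pedestrian density to mean walking speed. A minimal sketch follows, using Weidmann-style parameter values from the pedestrian dynamics literature; these are illustrative figures, not the calibrated values the project would derive from video data:

```python
import math

def walking_speed(density, v_free=1.34, gamma=1.913, rho_max=5.4):
    """Mean walking speed (m/s) as a function of local pedestrian density
    (persons/m^2), after Weidmann's fundamental diagram. Parameter values
    are standard literature figures, not project-calibrated ones."""
    if density <= 0:
        return v_free                      # free-flow speed in an empty space
    if density >= rho_max:
        return 0.0                         # jam density: no movement
    return v_free * (1.0 - math.exp(-gamma * (1.0 / density - 1.0 / rho_max)))
```

A simulator built on such a relation would estimate local density from the video-derived trajectories, update each simulated pedestrian's speed accordingly, and feed the resulting proximity and contact times into the infection-transmission model.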
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.