2007 — 2011
Hamann, Bernd; Weber, Gunther; Pascucci, Valerio
Topology-Based Methods For Analysis and Visualization of Noisy Data @ University of California-Davis
Principal Investigator: Bernd Hamann, University of California, Davis
Abstract
The size of the scientific data sets generated by evolving supercomputers, large sensor networks, and high-resolution imaging devices is increasing at an exponential rate. This project addresses the need for more effective data analysis methods. It develops technologies for the analysis and representation of very large scientific data sets, emphasizing concepts that capture qualitative characteristics. Given the limitations of purely visualization-based approaches applied directly to "raw" scientific data, this project aims to devise new concepts for visualizing very large and complex data sets. The methods being developed first extract meaningful qualitative information from a given data set, which is then used to present the higher-level information content of the data set in a significantly more compact form, thus stressing relevant qualitative behavior.
The project builds on concepts from classical topology and geometry, which have contributed substantially to the development of the relatively new fields of computational topology and computational geometry. These two fields hold great potential for substantially advancing visualization technology for understanding extremely large, complicated data sets. This project adapts (and generalizes) computational topology and computational geometry algorithms that are well established for smooth mathematical functions to real-world, finite-sample data sets, i.e., functions sampled at a finite number of points (possibly connected by a mesh). Real-world data sets are noisy, which further complicates the application of topological methods originally developed for smooth functions. This project investigates the generalization of techniques based on Morse and Morse-Smale theory (studying critical-point behavior and drawing qualitative conclusions about functions) to discretized scalar fields that change over time and also contain noise.
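To make the discrete setting concrete, here is a minimal sketch (the grid layout, neighbor ring, and synthetic noisy data are illustrative assumptions, not the project's code) of how critical points of a sampled scalar field can be classified by counting sign changes of the differences f(neighbor) - f(v) around each vertex:

```python
import numpy as np

def classify_critical_points(f):
    """Classify interior vertices of a 2-D scalar grid as regular points,
    minima, maxima, or saddles by examining f(neighbor) - f(v) around the
    8-neighbor ring, a standard discrete analogue of smooth Morse theory."""
    # Offsets of the 8 neighbors, ordered cyclically around the center vertex.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    labels = {}
    rows, cols = f.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            diffs = [f[i + di, j + dj] - f[i, j] for di, dj in ring]
            # Count sign alternations while walking once around the ring.
            changes = sum(1 for a, b in zip(diffs, diffs[1:] + diffs[:1])
                          if (a > 0) != (b > 0))
            if all(d > 0 for d in diffs):
                labels[(i, j)] = "minimum"   # center lies below all neighbors
            elif all(d < 0 for d in diffs):
                labels[(i, j)] = "maximum"
            elif changes >= 4:
                labels[(i, j)] = "saddle"    # 4+ alternations around the ring
            else:
                labels[(i, j)] = "regular"
    return labels

# Noisy samples of a smooth function: the true critical points survive, but
# noise introduces many spurious, low-persistence extrema around them.
x, y = np.meshgrid(np.linspace(0, np.pi, 64), np.linspace(0, np.pi, 64))
f = np.sin(2 * x) * np.sin(2 * y) + 0.05 * np.random.default_rng(0).normal(size=x.shape)
print(sum(1 for v in classify_critical_points(f).values() if v != "regular"))
```

On noisy samples this test reports many spurious low-persistence extrema alongside the real ones, which is precisely why noise-aware generalizations of Morse-Smale analysis are needed.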
2009 — 2013
Clyne, John; Ebert, David; Gaither, Kelly; Pascucci, Valerio
Enabling Transformational Science and Engineering Through Integrated Collaborative Visualization and Data Analysis For the National User Community @ University of Texas At Austin
This proposal will be awarded using funds made available by the American Recovery and Reinvestment Act of 2009 (Public Law 111-5), and meets the requirements established in Section 2 of the White House Memorandum entitled, Ensuring Responsible Spending of Recovery Act Funds, dated March 20, 2009. I also affirm, as the cognizant Program Officer, that the proposal does not support projects described in Section 1604 of Division A of the Recovery Act.
Visualization is one of the most important and most commonly used methods of analyzing and interpreting digital assets. For many types of computational research, it is the only viable means of extracting information and developing understanding from data. However, non-visual data analysis techniques (statistical analysis, data mining, data reduction, etc.) also play integral roles in many areas of knowledge discovery. This award will, for the first time, provide a comprehensive suite of large-scale visualization and data analysis (VDA) services to the open science community. By leveraging existing tools and techniques, integrating state-of-the-art research products, and providing exceptional user support, we will deploy a national, general-purpose visualization and data analysis discovery environment. The deliverables include:
1. a remote visualization resource of extreme capability (Longhorn): 256 nodes (2048 processor cores, 24 Tflops peak) with 512 GPUs, 12 terabytes of aggregate memory, and 200 terabytes of local system storage, tightly integrated with Ranger (the TeraGrid's inaugural Track2 system), and potentially future TACC HPC systems, to handle digital assets at the largest scale;
2. a comprehensive collection of open-source and commercial end-user VDA software tools;
3. expert visualization support, including advanced interactive user support and training, from a team comprising many of the leading visualization researchers in the US; and
4. a framework for rapidly integrating new visualization technologies from leading research teams (including our own) to increase user capabilities throughout the project.
2010 — 2011
Bargteil, Adam; Pascucci, Valerio
Eager (G&V): Exploring Morse Theoretic Tools For Automatic Mesh Generation and Simulation On Surfaces
Abstract
The simulation of realistic physical phenomena, such as fluid interactions and deformable bodies, has become an indispensable part of both computational physics and computer animation. Such simulations produce stunning visual effects for the entertainment industry, but they also lead to new discoveries in diverse fields such as astrophysics, energy production, and climate science. However, before these simulations can run on a computer, the mathematical representation of the domain must be discretized in a way that minimizes computational error (i.e., that yields accurate physical results). The increasing resolution of modern simulations makes this an ever more important issue, especially for simulations requiring periodic remeshing or a fully automated approach.
Current practice in coupling discretization and computation has significant weaknesses. Computational tools often demand specific element shapes, e.g., hexahedra, over-constraining the discretization. On the other hand, mesh quality is generally measured by geometric quantities that bear only a limited connection to overall simulation performance. This research is demonstrating a new approach: theoretical mathematics is used to develop, for the first time, a discretization scheme that explicitly depends on the structure of the scalar fields generated by the simulation. The key insight is that a topological structure, the Morse-Smale (MS) complex, acts as a natural quadrilateral decomposition of a domain based on a given scalar field, called a background function. The background function serves as a mechanism for encoding key information from a simulation. The MS complex then acts as a coarse mesh that coincides geometrically with the input domain while aligning itself with simulation properties. Finally, through optimization and subdivision, fine-grained meshes are generated that adapt locally to the resolution needed. This produces a discretization that follows the target simulation more accurately while using fewer elements.
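As an illustration of how the geometry of the MS complex follows a background function, the sketch below (a simplified grid-based tracer; the background function and seed point are illustrative assumptions, not the project's algorithm) walks steepest ascending and descending paths from a seed vertex; chains of such paths between saddles and extrema bound the quadrilateral cells that serve as the coarse mesh:

```python
import numpy as np

def trace(f, start, direction):
    """Walk from `start` to the steepest neighbor (ascending if direction=+1,
    descending if -1) until a local extremum is reached; the traced path
    approximates one separatrix of the Morse-Smale complex."""
    path = [start]
    i, j = start
    while True:
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < f.shape[0] and 0 <= j + dj < f.shape[1]]
        best = max(nbrs, key=lambda p: direction * f[p])
        if direction * f[best] <= direction * f[i, j]:
            return path  # reached a local maximum (ascending) or minimum (descending)
        i, j = best
        path.append(best)

# A smooth background function standing in for a simulation quantity (assumed).
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 48), np.linspace(0, 2 * np.pi, 48))
f = np.cos(x) * np.cos(y)
up = trace(f, (10, 24), +1)    # path toward a maximum
down = trace(f, (10, 24), -1)  # path toward a minimum
print(len(up), len(down))
```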
2010 — 2016
Pascucci, Valerio; Sutherland, James
Scalable Algorithms For Multiscale Modeling and Analysis of Turbulent Combustion
Award 0904631 (Pascucci)
Accurate simulation of turbulent combustion is a major open problem requiring petascale computing to resolve highly nonlinear coupling of physical processes over a wide range of length and time scales. The PIs are developing new modeling and algorithmic approaches to effectively harness High Performance Computing (HPC) for combustion simulation at the petascale. Their approach combines three techniques: automatic algorithm parallelization, multidimensional data analysis for model reduction, and multi-scale modeling with topological analysis to connect models at different scales. The algorithm parallelization is based on an algorithmic analysis that detects dependencies among computing stages, using graph theory to detect and exploit parallelism more effectively than current algorithms. This approach is independent from, and complementary to, MPI distributed parallelism, and it enables the finer-grained parallelism necessary to exploit the multi-core resources available on each computing node.

The PIs also plan a powerful new approach to modeling multiphysics flows, such as turbulent combustion, that leverages direct numerical simulation (DNS) and one-dimensional turbulence (ODT) to provide surrogate 'truth sets'. High-dimensional DNS data sets, containing terabytes of data, can be analyzed to extract the lower-dimensional manifolds known to exist within them. Techniques such as principal component analysis can identify the optimal basis for representing these manifolds in the high-dimensional data. Once a basis has been identified and extracted from the data sets generated by ODT, transport equations for the variables forming the basis may be derived and solved in a large-eddy simulation (LES). The LES can then be used to generate new ODT simulations, which feed back into the LES, creating a dynamic modeling approach that uses down-scale, highly resolved statistical information to construct models for use at larger scales. This modeling approach is a prime candidate for early testing on petascale systems: the researchers have already demonstrated the ability to scale DNS and LES to terascale computing systems, and the availability of petascale computing will directly enable these modeling approaches.

The algorithmic and modeling advances will be applied to oxyfuel combustion of natural gas. Oxyfuel combustion is one technique to facilitate carbon capture and sequestration, mitigating carbon dioxide emissions from power plants burning fossil fuels. While the application here is to natural gas systems, the techniques and algorithms developed will apply directly to other systems, including coal and transportation fuels such as diesel and gasoline. This project will provide unique educational experiences for students, including summer internships at national laboratories. Incorporating the lessons learned in this project into regular classes will help educate the future workforce. Additionally, the research will strengthen collaborations between university researchers and national laboratory staff involved in simulation and model development, who will also participate in mentoring students.
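The model-reduction step can be sketched as follows (the data here are a synthetic stand-in for DNS/ODT state samples, and the 99% variance cutoff is an illustrative assumption, not the project's design):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for DNS state-space samples: 5000 observations of 20
# thermochemical variables that actually lie near a 2-D manifold plus noise.
rng = np.random.default_rng(1)
z = rng.uniform(size=(5000, 2))              # latent manifold coordinates
states = z @ rng.normal(size=(2, 20)) + 0.01 * rng.normal(size=(5000, 20))

pca = PCA().fit(states)
# Keep enough principal components to explain 99% of the variance; these
# form the reduced basis whose transport equations would be derived and
# solved in the large-eddy simulation (LES).
n_kept = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.99)) + 1
basis = pca.components_[:n_kept]
reduced = (states - pca.mean_) @ basis.T     # project onto the manifold basis
print(f"kept {n_kept} of {states.shape[1]} dimensions")
```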
2013 — 2017
Bargteil, Adam; Pascucci, Valerio
Cgv: Large: Collaborative Research: Coupling Simulation and Mesh Generation Using Computational Topology
Many simulation algorithms depend on an underlying spatial discretization: a mesh that decomposes the domain into a finite set of elements that can be analyzed by a computer. The quality of the simulation is, in part, determined by the quality of the mesh. In the past, however, mesh generation and simulation were treated as separate processes. Better results can be achieved by tightly integrating simulation with mesh generation, and recent advances in computational topology provide the key to doing so. Computational topology allows for the analysis of the structure of data (in this case, the simulation variables). By understanding the structure of the simulation, mesh generation algorithms can adapt to it, producing meshes that are closely linked to the actual simulation.
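Persistence, a core computational-topology measure, is one way such structure is quantified: it separates significant features of a simulation variable from noise, which could then inform which features a mesh should resolve. Below is a minimal 1-D sketch (union-find based; the signal and the 0.5 significance threshold are illustrative assumptions, not the project's method):

```python
import numpy as np

def persistence_pairs_1d(f):
    """Sweep the sublevel sets of a 1-D function, pairing each local minimum
    (a component birth) with the value at which its component merges into an
    older one (its death), via union-find. Low-persistence pairs are noise."""
    order = np.argsort(f)
    parent, birth = {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    pairs = []
    for i in order:                          # add samples from lowest to highest
        parent[i], birth[i] = i, f[i]
        roots = {find(n) for n in (i - 1, i + 1) if n in parent}
        if len(roots) == 2:                  # i merges two components ...
            young = max(roots, key=lambda r: birth[r])
            pairs.append((birth[young], f[i]))  # ... the younger one dies here
        for r in roots:
            parent[r] = i                    # i becomes the merged root
        if roots:
            birth[i] = min(birth[i], *(birth[r] for r in roots))
    return pairs                             # (birth, death); the global minimum never dies

rng = np.random.default_rng(2)
noisy = np.sin(np.linspace(0, 6 * np.pi, 400)) + 0.1 * rng.normal(size=400)
pairs = persistence_pairs_1d(noisy)
big = [p for p in pairs if p[1] - p[0] > 0.5]   # features that survive the noise
print(f"{len(pairs)} pairs total, {len(big)} significant")
```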
Because simulation is a powerful tool for discovery throughout computational science, this research has the potential for broad impact across all fields of science, enabling new scientific breakthroughs with significant societal benefit. This research will also produce open-source software, short courses, and workshops around the topic of coupling simulation and meshing. The interdisciplinary nature of the project will lead to a rich educational and research environment for graduate and undergraduate students. The project web site provides access to research results, software, and educational materials (http://sealab.cs.utah.edu/SimulationMeshingTopology).
2016 — 2018
Pascucci, Valerio; Angelucci, Alessandra (co-PI)
Computational Infrastructure For Brain Research: Eager: a Scalable Solution For Processing High Resolution Brain Connectomics Data
Obtaining a "connectome", or map of the wiring of the brain, is crucial to understanding brain structure and function, and has been set as a long-term goal of several international government-funded initiatives due to the potential benefits for improving health, treating brain diseases, and understanding development. As technologies for sample preparation and microscopy advance, it is becoming feasible to image large sections of brain tissue. However, the vast quantities of data produced with these techniques are far outpacing the ability of neuroscientists to analyze them. This project will address the data analysis challenge by developing new computational software tools that facilitate the use of advanced computing for connectomics studies, in alignment with NSF's mission to promote the progress of science and advance national health, prosperity, and welfare.
Understanding the microarchitecture and neuronal morphologies that comprise neural circuitry in the brain is crucial to understanding brain function. This EAGER project aims to build the computational and data infrastructure that is necessary to manage and process large microscopy imaging data sets for connectomics studies, bringing High Performance Computing (HPC) resources into the neuroscience workflow. The project will employ a data model that enables scientists to visualize, interact with, and process data of any size that is stored in any remote location, from USB drives to high-performance parallel file systems. The software infrastructure will furthermore enable automatic mapping of analysis procedures designed by neuroscientists to remote HPC systems. The system will leverage state-of-the-art tools and practices developed in the HPC community, and aims to result in greatly accelerated studies of connectivity in the brain at scale.
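A minimal sketch of such a data model (the class name, API, and brick size are illustrative assumptions, not the project's software): a large volume is stored as fixed-size bricks, and reading any region fetches only the bricks it touches, regardless of where they are stored:

```python
import itertools
import numpy as np

class ChunkedVolume:
    """A large 3-D volume exposed through fixed-size bricks fetched on demand,
    so a viewer or analysis task never needs the full dataset in memory. The
    `fetch` callable can read from a local file, a NAS, or an HTTP endpoint."""

    def __init__(self, shape, brick, fetch):
        self.shape, self.brick, self.fetch = shape, brick, fetch
        self.cache = {}  # brick index -> ndarray, filled lazily

    def read(self, lo, hi):
        """Assemble the subvolume [lo, hi) from only the bricks it overlaps."""
        out = np.empty([h - l for l, h in zip(lo, hi)], dtype=np.float32)
        b = self.brick
        ranges = [range(l // b, (h - 1) // b + 1) for l, h in zip(lo, hi)]
        for key in itertools.product(*ranges):
            if key not in self.cache:            # fetch each brick at most once
                self.cache[key] = self.fetch(key)
            org = [k * b for k in key]           # brick origin in volume coords
            src = [slice(max(l - o, 0), min(h - o, b))
                   for o, l, h in zip(org, lo, hi)]
            dst = [slice(max(o - l, 0), max(o - l, 0) + s.stop - s.start)
                   for o, l, s in zip(org, lo, src)]
            out[tuple(dst)] = self.cache[key][tuple(src)]
        return out

# Demo: bricks synthesized on the fly stand in for remote reads.
vol = ChunkedVolume((1024, 1024, 1024), 64,
                    fetch=lambda key: np.full((64, 64, 64), sum(key), np.float32))
print(vol.read((100, 100, 100), (140, 130, 120)).shape)  # (40, 30, 20)
```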
This Early-concept Grants for Exploratory Research (EAGER) award by the CISE Division of Advanced Cyberinfrastructure is jointly supported by the CISE Division of Information and Intelligent Systems, with funds associated with the NSF Understanding the Brain, BRAIN Initiative activities, and for developing national research infrastructure for neuroscience. This project also aligns with NSF objectives under the National Strategic Computing Initiative.
2016 — 2017
Pascucci, Valerio
Pfi:Air - Tt: Cost Effective Solutions For Storage and Access of Massive Imagery
This PFI: AIR Technology Translation project focuses on the potential to revolutionize how microscopy and medical devices are used and the science questions they can answer. When image data size is no longer a restricting factor, new domains of study become possible relating the micro scale to the macro scale, such as understanding the neural connectomics of the visual cortex. By removing the barriers of time, effort, and expertise involved in using large imagery, VisStore will enable scientists to scale their existing workflows. Such capability would open new investigations into fundamental biological processes, the origin and progression of diseases, and ultimately the drugs and procedures for curing them. Although initially tailored to life-sciences applications, VisStore can be integrated into microscopy for new devices and emerging disciplines, such as precision medicine, materials science, and semiconductors. VisStore and its hierarchical streaming infrastructure also have the potential to ease the transition from current workflows to fully online cloud-based ones, and to become the de facto standard for large volumetric images.

This Accelerating Innovation Technology Translation project will support R&D to build a prototype of VisStore, a plug-and-play device for easily storing, archiving, accessing, distributing, and processing massive volumetric images coming from microscopy or medical devices. It translates research discovery toward commercial applications in the microscopy market, which continues to grow, topping $4.1 billion in 2014 with an anticipated compound annual growth rate of 7.1%, while the cyberinfrastructure needed to reliably store, easily access, and efficiently process such data has not kept pace. This has led to a discrepancy between the quality of data that could be produced and what actually is used, as scientists unnecessarily restrict image sizes to match computational capabilities. Brute-force solutions for scaling to massive images are expensive, difficult to maintain, and require expertise usually out of reach for smaller institutions.

VisStore is a combined software/hardware/cloud solution that enables ease of use for image data of any size. No more complicated than a USB drive, VisStore allows users to easily access, process, and distribute giga- and terapixel 2D and 3D images within a workgroup, a company, or even globally distributed environments. The technology behind VisStore advances the state of the art for handling massive image volumes. Modern software tools often stop scaling when data size exceeds main memory, and this has been a limiting factor for microscopy imagery. The hierarchical streaming software infrastructure implemented in VisStore essentially extends the memory hierarchy of a workstation to both external network-attached storage (NAS) and cloud-based storage. With each component acting as a cache, VisStore achieves performance through on-demand access to a file layout that minimizes the amount of data transferred between levels, enabling efficient scaling to images of any size. The award will support development of: (i) automated ingestion and conversion of images coming from microscopy or medical devices; (ii) a simple user interface and tools to manage local and remote storage of data; and (iii) a tool to select and export data for integration with existing workflows.
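The memory-hierarchy extension can be sketched as a tiered cache (tier names and the promotion policy are illustrative assumptions, not VisStore's actual implementation):

```python
class TieredStore:
    """Each tier is checked in order of increasing latency, and a block found
    in a slow tier is copied into every faster tier on its way back, so
    repeated accesses stay local."""

    def __init__(self, tiers):
        # tiers: list of (name, dict) from fastest (RAM) to slowest (cloud).
        self.tiers = tiers

    def get(self, block_id):
        for depth, (name, store) in enumerate(self.tiers):
            if block_id in store:
                data = store[block_id]
                for _, faster in self.tiers[:depth]:   # promote toward RAM
                    faster[block_id] = data
                return data, name
        raise KeyError(block_id)

ram, nas, cloud = {}, {}, {"tile-42": b"...image brick bytes..."}
store = TieredStore([("ram", ram), ("nas", nas), ("cloud", cloud)])
print(store.get("tile-42")[1])  # "cloud": first access walks the hierarchy
print(store.get("tile-42")[1])  # "ram": the promoted copy serves the second access
```

Promoting each block toward the fastest tier as it is read is what lets an interactive viewer revisit a region without paying the slow-tier round trip twice.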
The project engages the Moran Eye Center, the Associated Regional and University Pathologists, Inc. (ARUP) laboratories, and the Oregon Health & Science University to develop and test a prototype for acquiring giga- and teravoxel images and to assess its commercial value, translating this technology from research discovery toward commercial reality. In particular, the graduate and undergraduate students supported by the project will be directly involved in these entrepreneurial activities. They will cooperate directly with the early adopters of the technology at the collaborating institutions, gaining hands-on experience in how to identify and resolve user pain points and ultimately translate raw technology into a product with commercial value.
2018 — 2020
Deelman, Ewa; Nabrzyski, Jaroslaw; Mandal, Anirban (co-PI); Ricci, Robert; Pascucci, Valerio
Pilot Study For a Cyberinfrastructure Center of Excellence @ University of Southern California
NSF's major multi-user research facilities (large facilities) are sophisticated research instruments and platforms - such as large telescopes, interferometers, and distributed sensor arrays - that serve diverse scientific disciplines, from astronomy and physics to geoscience and biological science. Large facilities are increasingly dependent on advanced cyberinfrastructure (CI) - computing, data and software systems, networking, and associated human capital - to enable broad delivery and analysis of facility-generated data. As a result of these cyberinfrastructure tools, scientists and the public gain new insights into fundamental questions about the structure and history of the universe, the world we live in today, and how our plants and animals may change in the coming decades. The goal of this pilot project is to develop a model for a Cyberinfrastructure Center of Excellence (CI CoE) that facilitates community building and sharing and applies knowledge of best practices and innovative solutions for facility CI.
The pilot project will explore how such a center would facilitate CI improvements for existing facilities and for the design of new facilities that exploit advanced CI architecture designs and leverage established tools and solutions. The pilot project will also catalyze a key function of an eventual CI CoE - to provide a forum for the exchange of experience and knowledge among CI experts. The project will gather best practices for large facilities, with the aim of enhancing individual facility CI efforts in the broader CI context. The discussion forum and planning effort for a future CI CoE will also address training and workforce development by expanding the pool of skilled facility CI experts and forging career paths for CI professionals. The result of this work will be a strategic plan for a CI CoE that will be evaluated and refined through community interactions: workshops and direct engagement with the facilities and the broader CI community. This project is being supported by the Office of Advanced Cyberinfrastructure in the Directorate for Computer and Information Science and Engineering and the Division of Emerging Frontiers in the Directorate for Biological Sciences.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2021
Pascucci, Valerio
Eager: the Next Generation of Smart Cyberinfrastructure: Efficiency and Productivity Through Artificial Intelligence
Efficient cyberinfrastructure (advanced computing, data, software, and networking infrastructure) is a critical component of the support that NSF provides for new discoveries in science and engineering. Cyberinfrastructure is complex and traditionally requires years of human hand-tuning to achieve maximal performance for scientific users. We propose to introduce Artificial Intelligence (AI) as a way to automatically and quickly optimize the performance, and broaden the use, of recent NSF-supported advanced computing resources. Through this pilot effort, our ultimate aim is to enable and accelerate scientific advances in fields as diverse as biology, chemistry, oceanography, materials science, climate modeling, and cosmology.
As the research cyberinfrastructure grows rapidly in scale and complexity, it is essential to integrate new technologies based on Machine Learning (ML) and AI to ensure that the investments in new hardware and software components result in proportional improvements in performance and capability. This project will undertake a transformative research activity targeting: (1) scaling ML algorithms to make them easily available to the scientific community; and (2) improving cyberinfrastructure efficiency through AI-based predictive models. This technical work will be complemented and informed by a community engagement effort to jointly catalog the state of the art and identify future challenges and opportunities in enabling a new smart cyberinfrastructure.
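As one concrete form such a predictive model could take (the features, data, and model choice are synthetic assumptions, not the project's design), the sketch below trains a regressor to predict job runtime from submission-time features, the kind of estimate a scheduler could use to pack resources more efficiently:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative stand-in for accounting logs: features of past jobs and the
# runtimes they produced.
rng = np.random.default_rng(3)
n = 4000
nodes = rng.integers(1, 65, n)          # nodes requested
input_gb = rng.uniform(0.1, 500, n)     # input data size in GB
app = rng.integers(0, 8, n)             # application family id
runtime = 30 + 2.0 * input_gb / nodes + 15 * app + rng.normal(0, 5, n)

X = np.column_stack([nodes, input_gb, app])
X_tr, X_te, y_tr, y_te = train_test_split(X, runtime, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"R^2 on held-out jobs: {model.score(X_te, y_te):.3f}")
```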
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2026
Deelman, Ewa; Pascucci, Valerio; Mandal, Anirban (co-PI); Nabrzyski, Jaroslaw; Murillo, Angela
Ci Coe: Ci Compass: An Nsf Cyberinfrastructure (Ci) Center of Excellence For Navigating the Major Facilities Data Lifecycle @ University of Southern California
Innovative and robust Cyberinfrastructure (CI) is critical to the science missions of the NSF Major Facilities (MFs), which are at the forefront of science and engineering innovations, enabling pathbreaking discoveries across a broad spectrum of scientific areas. The MFs serve scientists, researchers and the public at large by capturing, curating, and serving data from a variety of scientific instruments (from telescopes to sensors). The amount of data collected and disseminated by the MFs is continuously growing in complexity and size and new software solutions are being developed at an increasing pace. MFs do not always have all the expertise, human resources, or budget to take advantage of the new capabilities or to solve every technological issue themselves. The proposed NSF Cyberinfrastructure Center of Excellence, CI Compass, brings together experts from multiple disciplines, with a common passion for scientific CI, into a problem-solving team that curates the best of what the community knows; shares expertise and experiences; connects communities in response to emerging challenges; and builds on and innovates within the emerging technology landscape. By supporting MFs to enhance and evolve the underlying CI, the proposed CI Compass will amplify the largest of NSF’s science investments, and have a transformative, broad societal impact on a multitude of MF science and engineering areas and the community of scientists, engineers, and educators MFs serve. CI Compass will also impact the broader NSF CI ecosystem through dissemination of CI Compass outcomes, which can be adapted and adopted by other large-scale CI projects and thus empower them to more efficiently serve their user communities.
The goal of the proposed CI Compass is to enhance the CI underlying the MF data lifecycle (DLC) that represents the transformation of raw data captured by state-of-the-art scientific MF instruments into interoperable and integration-ready data products that can be visualized, disseminated, and converted into insights and knowledge. CI Compass will engage with MFs and contribute knowledge and expertise to the MF DLC CI by offering a collection of services that includes evaluating CI plans, helping design new architectures and solutions, developing proofs of concept, and assessing applicability and performance of existing CI solutions. CI Compass will also enable knowledge-sharing across MFs and the CI community, by brokering connections between MF CI professionals, facilitating topical working groups, and organizing community meetings. CI Compass will also disseminate the best practices and lessons learned via online channels, publications, and community events.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021
Pascucci, Valerio
U01 Activity Code: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
Data Science Core @ University of Massachusetts Amherst
SUMMARY/ABSTRACT: Data Science Core

With the introduction of the Big Data paradigm, scientific investigation has become increasingly dependent on the ability to collect, manage, and process large amounts of data. Unfortunately, the scientific benefits of abundant data sources have often been dwarfed by the inadequacy of existing data science tools, as scientists spend excessive time managing and processing data instead of focusing on the science. Worse, researchers face insurmountable challenges in associating the results published in journal papers with the data and workflows needed to replicate the science. The Data Science Core of the Berghia Brain Project (BBP) plans to provide its large, distributed team of scientists the cyberinfrastructure needed to accomplish their aims without diverting time and energy from science activities. In particular, each of the five Research Projects will require, to varying degrees, easy, intuitive access to a combination of massive storage, imaging devices, cloud resources, and high-performance computing (HPC) platforms to execute scientific workflows in a reliable and repeatable manner. Internal collaborations and external dissemination of data and results will challenge existing solutions, necessitating a cyberinfrastructure tailored to the Berghia Brain Project. The strategy for developing this cyberinfrastructure relies on three complementary capabilities: (i) create a data management infrastructure that connects all the institutions in a federated data store; (ii) develop a scalable computing infrastructure that allows the massive amounts of data acquired by each project to be processed in a timely manner; and (iii) develop services for interoperability and provenance tracking that enable sharing of software tools, science workflows, and versioned data, allowing the BBP findings to be published to the best standards of reproducible science.
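A minimal sketch of the provenance-tracking idea in (iii), assuming a simple JSON record format (the function, record layout, and file names are illustrative, not the BBP's actual services): every input and output file is hashed around each workflow step, so published findings can be tied to the exact data and steps that produced them:

```python
import hashlib
import json
from pathlib import Path

def run_with_provenance(step_name, func, inputs, record="provenance.json"):
    """Run one workflow step and append a provenance entry: SHA-256 digests
    of every input before the step and every output after it."""
    digest = lambda p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
    entry = {"step": step_name, "inputs": {p: digest(p) for p in inputs}}
    outputs = func(inputs)                       # the actual science step
    entry["outputs"] = {p: digest(p) for p in outputs}
    log = json.loads(Path(record).read_text()) if Path(record).exists() else []
    Path(record).write_text(json.dumps(log + [entry], indent=2))
    return outputs

# Usage: a trivial step that copies its input, recorded end to end.
Path("raw.dat").write_bytes(b"example microscopy tile")

def copy_step(paths):
    Path("processed.dat").write_bytes(Path(paths[0]).read_bytes())
    return ["processed.dat"]

run_with_provenance("copy-tile", copy_step, ["raw.dat"])
```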
2021 — 2024
Wuerthwein, Frank; Szalay, Alexander; Allison, John; Taufer, Michela (co-PI); Pascucci, Valerio
Oac: Piloting the National Science Data Fabric: a Platform Agnostic Testbed For Democratizing Data Delivery
Ongoing investments from NSF and other agencies in shared experimental and computing facilities increase data generation by orders of magnitude; this presents a challenge for universal, easy, and fast access to data by users and may limit the scientific impact of such facilities. This pilot seeks to demonstrate a trans-disciplinary National Science Data Fabric (NSDF) integrating access to and use of shared storage, networking, computing, and educational resources and, in doing so, will help democratize data-driven sciences through the development of a cyberinfrastructure (CI) platform designed for equitable access. The pilot connects an open network of researchers gathered around earth science, astronomy, biology, chemistry, physics, and materials science to deploy a testbed for individual and shared scientific use. Supporting the IceCube neutrino observatory and the XENONnT dark matter detector will advance the understanding of the evolution of galaxies and the nature of dark matter and dark energy. Supporting the Materials Commons enables the fast-paced design of new materials in critical fields such as energy, security, the environment, and healthcare. Active involvement of Historically Black Colleges and Universities, the Minority Serving Cyberinfrastructure Consortium, and Hispanic-Serving Institutions assures true democratization of data-driven science and unleashes the intellectual potential of a genuinely diverse scientific community, presenting the best potential for US innovation.
The National Science Data Fabric (NSDF) pilot builds a testbed experimenting with critical technology needed to democratize data-driven sciences by constructing a CI platform designed for equitable access. In particular, NSDF experiments with key technologies that empower user communities to develop their solutions and support domain-specific requirements while avoiding duplication of technology. A programmable Content Delivery Network (CDN) will be a central component that interoperates with different appliances and storage solutions ranging from leadership-class computing facilities, campus-wide computing resources, commercial cloud, and research labs of individual investigators. With this strategy, NSDF connects storage, compute, and networking components with a software stack that empowers end-users with scalable tools that are easy to use, integrate and scale. Community-driven education and outreach will guarantee equitable access to all resources and engage an open network of universities, including minority-serving institutions in a federated data fabric configurable for individual and shared scientific use. By offering a shared, modular, containerized data delivery environment, operating at the best economies of scale, the NSDF pilot will demonstrate a key technology to fill the “missing middle” in the national computational infrastructure and will help address the “missing millions” challenge of American talent in STEM.
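One way to sketch content delivery in such a fabric (the mirror URLs, naming scheme, and API below are assumptions for illustration, not NSDF's design): if each object is named by the hash of its contents, any endpoint can serve it and the client verifies integrity locally:

```python
import hashlib
import urllib.request

def fetch_from_fabric(sha256, mirrors):
    """Try replicas in order and verify the payload against its own hash, so
    any endpoint (campus cache, cloud bucket, or lab server) can serve it
    without being individually trusted."""
    for base in mirrors:
        try:
            with urllib.request.urlopen(f"{base}/{sha256}", timeout=5) as r:
                data = r.read()
        except OSError:
            continue                      # endpoint down: fall through to the next
        if hashlib.sha256(data).hexdigest() == sha256:
            return data                   # integrity proven, origin irrelevant
    raise IOError("no replica could serve the object")
```

Under this assumption, replicas can be added anywhere in the federation without changing how clients name or verify the data they receive.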
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.