1985 |
Marcus, Daniel C |
F06
Electrophysiology of the Utricle |
0.915 |
2008 |
Marcus, Daniel Scott |
U54 Activity Code Description: To support any part of the full range of research and development from very basic to clinical; may involve ancillary supportive activities such as protracted patient care necessary to the primary research or R&D effort. The spectrum of activities comprises a multidisciplinary attack on a specific disease entity or biomedical problem area. These differ from program project in that they are usually developed in response to an announcement of the programmatic needs of an Institute or Division and subsequently receive continuous attention from its staff. Centers may also serve as regional or national resources for special research purposes, with funding component staff helping to identify appropriate priority needs.
Washington University Consortium @ Brigham and Women's Hospital
0.913 |
2008 — 2014 |
Marcus, Daniel Scott |
U01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
U19 Activity Code Description: To support a research program of multiple projects directed toward a specific major objective, basic theme or program goal, requiring a broadly based, multidisciplinary and often long-term approach. A cooperative agreement research program generally involves the organized efforts of large groups, members of which are conducting research projects designed to elucidate the various aspects of a specific objective. Substantial Federal programmatic staff involvement is intended to assist investigators during performance of the research activities, as defined in the terms and conditions of award. The investigators have primary authorities and responsibilities to define research objectives and approaches, and to plan, conduct, analyze, and publish results, interpretations and conclusions of their studies. Each research project is usually under the leadership of an established investigator in an area representing his/her special interest and competencies. Each project supported through this mechanism should contribute to or be directly related to the common theme of the total research effort. The award can provide support for certain basic shared resources, including clinical components, which facilitate the total research effort. These scientifically meritorious projects should demonstrate an essential element of unity and interdependence.
Dominantly Inherited Alzheimer's Network (DIAN) Informatics Core
Autosomal dominant Alzheimer's Disease (ADAD) represents a small fraction (<1%) of all Alzheimer's Disease cases, but it presents a unique window into the disease. Because individuals possessing known ADAD-causing mutations are destined to develop the disease at an early and relatively predictable age, they can be studied from a presymptomatic stage and the progression of the disease can be observed. The Dominantly Inherited Alzheimer's Network (DIAN) will, for the first time, study ADAD in a systematic and comprehensive manner, acquiring biochemical, neuroimaging, cognitive, and clinical measures from 240 individuals from families with known ADAD mutations. The Informatics Core will be responsible for managing all of the data acquired within DIAN. A centralized database - the DIAN Central Archive (DCA) - will be deployed to store the data and make it available to investigators in a user-friendly manner. Data will be acquired at seven performance sites and uploaded into the DCA. Once uploaded, the data will reside in quarantine until it has passed several rounds of quality control checks to identify missing fields, outliers, and other discrepancies. Imaging data will be distributed to dedicated quality control sites for systematic postprocessing and inspection. Once released from quarantine, the data will be made available to DIAN investigators via a secure, user-friendly web interface. The interface will allow users to locate and join data across all of the measures obtained in the study. We will also make available public releases of prepared, anonymized data sets. Participant privacy and overall system security will be addressed with the utmost attention in all aspects of the Core's infrastructure. The Informatics Core will maintain close interactions with each of the other cores. Regular reports will be prepared with the Biostatistics Core and presented to the Administration Core and Steering Committee.
Under the direction of the Administration and Clinical Cores, we will create a website for distributing information about DIAN to the research community and the general public. Data entry and sample tracking web pages will be implemented to support the Genetics, Neuropathology, and Biomarkers cores. Support for the Imaging Core will include image upload/download tools, image file de-identification procedures, exchange with quality control sites, and implementation of automated processing and analysis workflows.
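The quarantine workflow described above (holding uploads until they pass checks for missing fields and outliers) can be sketched in a few lines of Python. The field names, the MMSE-based outlier rule, and the 3-sigma threshold below are illustrative assumptions, not the DIAN Core's actual QC rules.

```python
from statistics import mean, stdev

# Illustrative quarantine check. Field names and the 3-sigma outlier
# threshold are assumptions for this sketch, not DIAN's actual QC profile.
REQUIRED_FIELDS = ["subject_id", "visit", "mmse_score"]

def qc_issues(record, reference_values, z_threshold=3.0):
    """Return the list of QC problems that keep a record in quarantine."""
    issues = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    # Flag numeric outliers against previously released values.
    score = record.get("mmse_score")
    if isinstance(score, (int, float)) and len(reference_values) >= 2:
        mu, sigma = mean(reference_values), stdev(reference_values)
        if sigma > 0 and abs(score - mu) / sigma > z_threshold:
            issues.append(f"outlier: mmse_score={score}")
    return issues

def release_from_quarantine(records, reference_values):
    """Split uploads into released records and quarantined (record, issues) pairs."""
    released, quarantined = [], []
    for rec in records:
        problems = qc_issues(rec, reference_values)
        if problems:
            quarantined.append((rec, problems))
        else:
            released.append(rec)
    return released, quarantined
```

In practice such checks would run server-side on each upload, with quarantined records routed to curators for review rather than rejected outright.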
|
1.009 |
2009 — 2013 |
Fouke, Sarah Jost Marcus, Daniel Scott |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
BIRN Resources Facilitate the Personalization of Malignant Brain Tumor
DESCRIPTION (provided by applicant): Glioblastoma (GBM) is the most common primary malignant neoplasm of the adult brain. Even after multimodal therapy, treatment outcomes remain poor, with a median survival of approximately one year. A central challenge facing investigators is how to resolve the heterogeneity inherent in GBM pathology and how to identify individual genetic or molecular markers that indicate how treatment can be individualized to improve outcomes and patient care. With advances in imaging and the potential for genetic sequence analysis, clinicians and researchers have increasingly focused on specific clinical, imaging, and genetic biomarkers that would allow the personalization of brain tumor treatment, in an attempt to overcome the limitations we have faced in extending patient survival from this devastating disease. Specific methodologies have been developed to allow genetic microarray analysis of patients' tumor tissue, and this type of research is ongoing at one of our participating institutions, Swedish Medical Center. In addition, centers such as Washington University School of Medicine in St. Louis, Missouri have extensive experience pursuing advanced imaging biomarkers and their applications to clinical neuro-oncology research. Importantly, although clinicians and researchers have come to recognize that in-vivo imaging technologies may have as much if not more relevance than genetic biomarkers in the personalization of brain tumor treatment, clinical trials attempting to validate these biomarkers and correlate them with particular outcomes have been limited by a lack of technology infrastructure for multi-site image acquisition, processing, data analysis, subsequent correlation with clinical and genetic data, and ultimately sharing of anonymized data with other researchers from a central archiving site.
This proposal seeks to use BIRN infrastructure to integrate neuroimaging, genetic microarray, and clinical data with a focus on integrating imaging biomarkers into prospective clinical research in patients with malignant brain tumors. In this project, a consortium of neuro-oncology research centers will be federated to obtain a unified set of clinical, genetic, and imaging data. In the initial phase, 100 patients with malignant brain tumors at two participating sites will be studied. Our ultimate goal will be to use the developed protocols and informatics infrastructure to expand the consortium to include a large number of neuro-oncology clinical sites suitable for executing large-scale clinical trials that will facilitate the generation of data to identify which imaging biomarkers are relevant for the personalization of brain tumor treatment and ultimately the improvement of outcomes for patients with this devastating disease.
|
1.009 |
2009 — 2020 |
Marcus, Daniel Scott |
R01
The XNAT Imaging Informatics Platform
PROJECT SUMMARY This proposal aims to continue the development of XNAT, an imaging informatics platform designed to facilitate common management and productivity tasks for imaging and associated data. We will develop the next generation of XNAT technology to support the ongoing evolution of imaging research. Development will focus on modernizing and expanding the current system. In Aim 1, we will implement new web application infrastructure that includes a new archive file management system, a new event bus to manage cross-service orchestration, and a new JavaScript library to simplify user interface development. We will also implement new core services, including a Docker container service, a dynamic scripting engine, and a global XNAT federation. In Aim 2, we will implement two innovative new capabilities that build on the services developed in Aim 1. The XNAT Publisher framework will streamline the process of data sharing by automating the creation and curation of data releases following best practices for data publication and stewardship. The XNAT Machine Learning framework will streamline the development and use of machine learning applications by integrating XNAT with the TensorFlow machine learning environment and implementing provenance and other monitoring features to help avoid the pitfalls that often plague machine learning efforts. For both Aims 1 and 2, all capabilities will be developed and evaluated in the context of real-world scientific programs that are actively using the XNAT platform. In Aim 3, we will provide extensive support to the XNAT community, including training workshops, online documentation, and discussion forums. These activities will be targeted at both XNAT users and developers.
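A minimal sketch of the kind of provenance record such a framework might attach to each training run, so that a model can be audited against the exact data, code version, and hyperparameters that produced it. The field names and hashing scheme here are assumptions for illustration, not the actual XNAT Machine Learning framework API.

```python
import hashlib
import time

def dataset_fingerprint(files):
    """Hash dataset contents so a model can be tied to the exact data it saw.
    `files` maps path -> bytes; sorting makes the hash order-independent.
    (Illustrative: a real system would stream files from disk.)"""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode("utf-8"))
        h.update(files[path])
    return h.hexdigest()

def provenance_record(run_id, files, hyperparams, code_version):
    """Bundle everything needed to reproduce or audit a training run."""
    return {
        "run_id": run_id,
        "data_sha256": dataset_fingerprint(files),
        "hyperparams": dict(hyperparams),
        "code_version": code_version,
        "timestamp": time.time(),
    }
```

Because the fingerprint depends only on file names and contents, two runs over identical data produce identical `data_sha256` values, while any change to the inputs is immediately visible in the record.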
|
1.009 |
2011 — 2019 |
Hershey, Tamara G (co-PI) Marcus, Daniel Scott
P30 Activity Code Description: To support shared resources and facilities for categorical research by a number of investigators from different disciplines who provide a multidisciplinary approach to a joint research effort or from the same discipline who focus on a common research problem. The core grant is integrated with the center's component projects or program projects, though funded independently from them. This support, by providing more accessible resources, is expected to assure a greater productivity than from the separate projects and program projects.
NINDS Center Core For Brain Imaging
Overall Center Summary The NINDS P30 center core at Washington University (WashU), which operates as the Neuroimaging Informatics and Analysis Center (NIAC), was established in 2004 to facilitate neuroimaging as a core research tool in the drive to better understand and treat neurological conditions and diseases. Neuroimaging-based research at Washington University covers the spectrum from preclinical small animal studies to large-scale clinical trials to translational use of novel imaging methods in the clinic. In recent years, this research has become more expansive to include larger patient cohorts, more complex imaging protocols, greater diversity of clinical, genetic, and behavioral measures, and more extensive quantitative post-processing and analysis. This research covers a diversity of neurological diseases and conditions, including stroke, traumatic brain injury, Parkinson's disease, Alzheimer's disease, multiple sclerosis, and epilepsy. The overarching mission of the NIAC is to facilitate the full range of this research through a suite of innovative imaging physics, informatics, and analysis services supported by a rich consulting and educational program. With this continuation proposal, we will expand the Center's services to support the evolving practices of our investigators. Specific Aim 1 will facilitate large-scale, interdisciplinary neuroimaging-based research by providing a comprehensive informatics infrastructure offering data management and automated image processing services for small animal imaging and human imaging obtained at WashU facilities and in multi-center clinical studies. Specific Aim 2 will facilitate the development and adoption of new neuroimaging methods in clinical research and clinical practice by providing active expert consulting, MRI physics support, and advanced software development services. The NIAC will include three cores to achieve these aims.
The Administrative Core will oversee the Center's operations and allocation of resources, operate an active outreach and education program, and ensure a high level of communication between the Center's faculty and staff and the neuroscience community. The Informatics Core will provide software and hardware to integrate the imaging facilities, provide database services, execute automated and semi-automated image processing pipelines, and share data within collaborative networks and the broader research community. The Imaging Core will provide direct user support through consulting services and training programs and will lead the effort to develop and transition new methods to the Center's users. The Center will be overseen by a Steering Committee composed of Center users and domain experts with deep and diverse experience. The three cores, under the Steering Committee's guidance, will work closely together to provide a comprehensive suite of services and resources to support the University's neuroimaging community.
|
1.009 |
2014 |
Marcus, Daniel Scott |
UF1 Activity Code Description: To support a discrete, specific, circumscribed project to be performed by the named investigator(s) in an area representing specific interest and competencies based on the mission of the agency, using standard peer review criteria. This is the multi-year funded equivalent of the U01 but can be used also for multi-year funding of other research project cooperative agreements such as UM1 as appropriate.
DIAN Informatics Core
Autosomal dominant Alzheimer's Disease (ADAD) represents a small fraction (<1%) of all Alzheimer's Disease cases, but it presents a unique window into the disease. Because individuals possessing known ADAD-causing mutations are destined to develop the disease at an early and relatively predictable age, they can be studied from an asymptomatic stage and the progression of the disease can be observed. The Dominantly Inherited Alzheimer's Network (DIAN) will continue its study of ADAD in a systematic and comprehensive manner, acquiring biochemical, neuroimaging, cognitive, and clinical measures from individuals from families with known ADAD mutations. The Informatics Core will be responsible for managing all of the data acquired within DIAN, except for mutation status. A centralized database - the Central Neuroimaging Data Archive (CNDA) - will be deployed to store and make data available to investigators in a user-friendly manner. Data will be acquired at the performance sites and uploaded into the CNDA. Once uploaded, the data will reside in quarantine until passing several rounds of quality control checks to identify missing fields, outliers, and other discrepancies. Imaging data will be distributed to dedicated quality control sites for systematic post-processing and inspection. Once released from quarantine, the data will be made available to DIAN investigators via a data freeze produced and carefully orchestrated with the Biostatistics Core. Participant privacy and overall system security will be addressed with the utmost attention in all aspects of the Core's infrastructure. The Informatics Core will maintain close interactions with each of the other cores. Regular reports will be prepared with the Biostatistics Core and presented to the Administration Core and Steering Committee.
Support for the Imaging Core will include image upload/download tools, image file de-identification procedures, exchange with quality control sites, and implementation of automated processing and analysis workflows. All of these procedures and systems currently are in place in DIAN and function smoothly.
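The image file de-identification step mentioned above can be illustrated with a minimal sketch that strips or pseudonymizes identifying fields in image metadata before upload. The field names below are common DICOM attributes used as examples; a real de-identification profile (e.g., the one defined in DICOM PS3.15) is far more extensive, and this is not the Core's actual procedure.

```python
# Illustrative de-identification over a metadata dict. The listed field
# names are examples of common identifying DICOM attributes, not a
# complete or official de-identification profile.
IDENTIFYING_FIELDS = {"PatientName", "PatientBirthDate", "PatientAddress"}
PSEUDONYMIZED_FIELDS = {"PatientID"}

def deidentify(metadata, subject_label):
    """Return a copy of metadata with identifiers removed or replaced."""
    clean = {}
    for key, value in metadata.items():
        if key in IDENTIFYING_FIELDS:
            continue  # drop the attribute outright
        if key in PSEUDONYMIZED_FIELDS:
            clean[key] = subject_label  # replace with the study-assigned label
        else:
            clean[key] = value
    return clean
```

Working on a copy leaves the source metadata untouched, so the original record can still be audited on the acquisition side while only the cleaned copy leaves the site.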
|
1.009 |
2015 — 2019 |
Marcus, Daniel Scott Van Essen, David C
R24
Connectome Coordination Facility
DESCRIPTION (provided by applicant): This project will establish a Connectome Coordination Facility (CCF) by capitalizing on recent successes of the Human Connectome Project (HCP), which has acquired, analyzed, and shared multimodal neuroimaging data and behavioral data on a large population of healthy adults. Major advances by the HCP include (i) the establishment of data acquisition protocols that yield high quality data across multiple modalities; (ii) the implementation of preprocessing pipelines that take full advantage of the high quality imaging data; and (iii) the establishment of a robust informatics infrastructure that has allowed widespread sharing of the HCP data within the neuroimaging and neuroscience communities. The CCF will build on these accomplishments and serve the human neuroimaging community in three ways. One aim is to provide consultation and support services to the research community for the primary purpose of harmonizing image acquisition protocols with those of the HCP. The effort will establish a help desk whose support functions will include transfer of data acquisition sequences and image reconstruction algorithms; providing updates and improvements for these sequences and algorithms; harmonization of imaging protocols and image reconstruction support for different software platforms and versions; and consultation for potential problems (e.g., image artifacts). A second aim is to provide services that maximize comparability of data acquired by CCF contributors. These services will include pre-data acquisition guidance to contributors to ensure that each project's behavioral data are obtained using HCP-compatible methods. This will entail coordination with data contributors to develop mechanisms to streamline transfers of de-identified data from the study sites to the CCF database.
The data from each study will include the unprocessed images, minimally preprocessed data generated by each project's internal pipelines, and all data associated with the project's behavioral battery. Manual and automated quality control procedures will be implemented based on existing HCP methods to generate quality metrics that will be published with the data. A standardized set of pipelines will be run in order to produce minimally preprocessed data that is fully harmonized with the other data sets in the CCF database. A third aim is to maintain the existing ConnectomeDB data repository infrastructure for Human Connectome Data and expand it to include Connectome data from other research laboratories that are funded as U01 projects under the Connectomes Related to Human Diseases RFA. The platform will be developed and operated following policies and procedures vetted by the HCP to ensure the privacy and security of the data hosted by the CCF. Together, these three aims will establish the CCF as a central hub for connectomics data aggregation and sharing. The CCF's suite of harmonization services from data acquisition through data sharing will ensure an unprecedented level of compatibility across data sets. The resulting database will enable the scientific community to conduct novel analyses to better understand brain function in health and disease.
|
1.009 |
2016 — 2019 |
Marcus, Daniel Scott |
P30
Informatics Core
Informatics Core Summary The Neuroimaging Informatics and Analysis Center (NIAC) Informatics Core will provide services to assist WashU researchers in running increasingly complex neuroimaging studies. These services are integrated into the NIAC's Central Neuroimaging Data Archive (CNDA), which manages an organized workflow designed to ensure that images are correctly acquired, securely archived, carefully documented, systematically processed, and linked with related non-imaging data. The CNDA supports dozens of local and multi-center research studies in diverse fields including neuro-oncology, stroke, Alzheimer's disease, traumatic brain injury, multiple sclerosis, AIDS, and epilepsy. Aim 1 is to operate and expand the Central Neuroimaging Data Archive. The CNDA will be upgraded to XNAT 2.0, which will include a variety of improvements: a completely redesigned user interface, new security features, significant performance improvements, novel visualization capabilities, streamlined project management tools, and additional collaboration and data sharing features. The CNDA will be integrated with all research scanners and radiology information systems to enable streamlined and automated data archiving. Working closely with the Center for Biomedical Informatics (CBMI), we will integrate the CNDA with the REDCap electronic data capture system and I2B2 clinical data research database. The CNDA's capabilities will be expanded to support importing, managing, and processing data from the optical and preclinical imaging devices used at WashU. The core also operates the Translational Imaging Platform, which directly interfaces with clinical workflows to enable access to patient scans and creation of derived imaging metrics for use by clinical investigators working to move laboratory-based methods to the clinic. Aim 2 is to implement and support automated image processing pipelines. 
In collaboration with the Imaging Core, the Informatics Core will develop, test, and deploy pipelines to process data acquired following standard acquisition protocols developed by the Center and its partners in the Neuroimaging Laboratories and Optical Radiology Laboratory. Finally, Aim 3 is to deploy and support the Center's computing infrastructure. The Core operates a scalable, high-availability Linux-based computing infrastructure that supports the Center's database and web servers, pipeline execution cluster, user workstations, and computing lab. This infrastructure utilizes VMware virtualization software and Puppet configuration management software to optimize system utilization and maintain consistent standard system specifications. The Core also provides NIAC users with access to the Center for High Performance Computing, which provides massively scalable resources for the most demanding image analysis tasks. These aims will be executed by a highly experienced team of software developers, research scientists, and support staff. In close coordination with the Administrative Core, the Informatics Core will actively engage with the WashU neuroimaging community to assist them in using the Core's services.
|
1.009 |
2016 — 2020 |
Marcus, Daniel Scott Wahl, Richard Leo |
U24 Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research.
Integrative Imaging Informatics For Cancer Research
DESCRIPTION (provided by applicant): The development of non-invasive imaging procedures that provide surrogate markers for prognosis and early response to therapy has the potential to improve clinicians' ability to effectively treat patients with oncologic disease. However, the incorporation of advanced imaging into oncology research settings has been slowed by a lack of adequate informatics technology. The Integrative Imaging Informatics for Cancer Research (I3CR) program will address this gap in informatics technology by developing two informatics platforms. The I3CR Data Management Platform will serve as an informatics hub that integrates image acquisition, analysis workflows, phenotypic and genomic data sources, and clinical information and treatment systems. The Data Management Platform will be developed as a major new extension of the XNAT imaging informatics system to include additional cancer-specific data types, new web services, support for a number of widely used data formats, and customized user interface components. In addition, I3CR will collaborate with a number of external technology providers to integrate the Data Management Platform with widely used visualization and analysis applications, complementary database systems, and clinical diagnostic and treatment systems. The I3CR Knowledge Management Platform will enable researchers to publish portfolios that document and track the source data, derived data, software applications and scripts, and computing resources used to generate scientific findings. The Knowledge Management Platform will integrate seamlessly with widely used workflow engines and the Data Management Platform to generate knowledge portfolios that can be tied directly to publications to enable reproducible research. The central hypothesis of the I3CR program is that integrative informatics technology will enable cancer researchers to overcome technical hurdles that currently impede scientific progress.
We will evaluate this hypothesis within the context of a federated network of cancer imaging researchers, including members of the NCI Quantitative Imaging Network that will undertake pilot projects in radiation therapy dose optimization and glioblastoma tumor mapping. All software will be open source and supported through a number of outreach channels, including annual technical workshops, monthly teleconferences, and an active web-based discussion forum.
|
1.009 |
2020 — 2021 |
Marcus, Daniel Scott Shoghi, Kooresh Isaac |
U24
Development of An Open-Source Preclinical Imaging Informatics Platform For Cancer Research
ABSTRACT Preclinical imaging is widely used in cancer research to devise novel tumor detection strategies, assess tumor burden and physiology/biology, as well as to validate novel therapeutic strategies and predictive biomarkers of response to therapy. More recently, the use of patient-derived tumor xenografts (PDX) and genetically engineered mouse models (GEMMs) has ushered in an era of co-clinical trials where preclinical studies can inform clinical trials, thus potentially bridging the translational gap in cancer research. However, differences in the deployment of the instruments, differences in imaging formats and protocols, differences in animal models, and variability in analytic pipelines among other factors result in non-tractable data and poor reproducibility. Importantly, current databases are not compatible with the complexity and growing demands in preclinical cancer imaging, which include big data needs and collection of metadata/annotation to support NCI's precision medicine initiative. Thus, there is an unmet need to develop a unifying imaging informatics and workflow management platform to support cancer research, which will ultimately support the premise of translational precision medicine. We propose to develop an open-source preclinical imaging informatics platform, Preclinical Imaging XNAT-enabled Informatics (PIXI), to manage the workflow of preclinical imaging laboratories, harmonize imaging databases, and enable deployment of analytic and computational pipelines in preclinical imaging. PIXI will be based on XNAT as the underlying informatics architecture. XNAT is used by over 200 academic institutions and industry entities as the backbone for data management across a wide range of imaging applications in clinical research, and thus offers a robust platform for the development and deployment of PIXI.
Through this effort, we will 1) develop the PIXI database and server to capture preclinical imaging associated data, metadata, and preclinical imaging workflow and experiments; 2) develop the PIXI "point-of-service" interface, notebook capabilities, and software development kit (SDK); and 3) develop the PIXI container-based application ("App") environment to implement portable analytic pipelines. Overall, the development of a preclinical imaging informatics platform is expected to have a profound impact on the management of preclinical imaging in cancer research, which will ultimately support translational precision medicine.
|
1.009 |
2020 — 2021 |
Marcus, Daniel Scott Van Essen, David C (co-PI)
R24
Connectome Coordination Facility II
Project Summary This project will continue operation of the Connectome Coordination Facility (CCF) by capitalizing on the successes of the Human Connectome Project (HCP), which acquired, analyzed, and shared multimodal neuroimaging data and behavioral data on a large population of healthy adults. Major advances by the HCP include (i) the establishment of data acquisition protocols that yield high quality data across multiple modalities; (ii) the implementation of preprocessing pipelines that take full advantage of the high quality imaging data; and (iii) the establishment of a robust informatics infrastructure that has allowed widespread sharing of the HCP data within the neuroimaging and neuroscience communities. The CCF builds on these accomplishments and serves the human neuroimaging community in three ways. One aim is to provide consultation and support services to the research community for the primary purpose of harmonizing image acquisition protocols with those of the HCP. To this end, we have established a help desk whose support functions include transfer of data acquisition sequences and image reconstruction algorithms; providing updates and improvements for these sequences and algorithms; harmonization of imaging protocols and image reconstruction support for different software platforms and versions; and consultation for potential problems (e.g., image artifacts). A second aim is to provide services that maximize comparability of data acquired by CCF contributors. These services include pre-data acquisition guidance to contributors to ensure that each project's behavioral data are obtained using HCP-compatible methods. This entails coordination with data contributors to develop and maintain mechanisms to streamline transfers of de-identified data from the study sites to the CCF database. The data from each study include the acquired images and all data associated with the project's behavioral battery.
Manual and automated quality control procedures based on existing HCP methods are executed to generate quality metrics that are published with the data. A standardized set of pipelines is then run in order to produce minimally preprocessed data that is fully harmonized with the other data sets in the CCF database. A third aim is to maintain the existing data repository infrastructure for Human Connectome Data and expand it to include connectome data from other research laboratories that are funded under the Connectomes Related to Human Diseases program. Together, these three aims enable the CCF to serve as a central hub for connectomics data aggregation and harmonization. The CCF's suite of harmonization services from data acquisition through data sharing ensures an unprecedented level of compatibility across data sets. The resulting database enables the scientific community to conduct novel analyses to better understand brain function in health and disease.
|
1.009 |
2021 |
Marcus, Daniel Scott |
U24Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research. |
Sustaining the Integrative Imaging Informatics for Cancer Research (I3CR) Center
Abstract Artificial intelligence (AI) and other computationally intensive methods have the potential to revolutionize cancer imaging research and patient care. The broad adoption of these technologies depends on the availability of imaging informatics tools to assist users in managing massive data sets, generating well-curated annotations, and accessing scalable computing resources. The Integrative Imaging Informatics for Cancer Research (I3CR) Center has played a lead role in implementing cancer imaging informatics technology, with a focus on expanding the widely used open source XNAT informatics platform to better support computational workflows in cancer imaging. I3CR has also developed knowledge management tools to better track data processing and analysis, including tools for orchestrating and tracking container-based computing pipelines. As a result of this work, XNAT has emerged as the most widely used imaging informatics platform in cancer research. It is deployed in over 200 organizations across academia and industry and has been adopted across a wide range of research contexts, including preclinical imaging, multi-site clinical trials, and clinical translation. As the I3CR informatics platform has matured and been widely adopted, the Center is now evolving to the next phase of development to more broadly sustain the platform. In the work proposed here, we will sustain the I3CR's ongoing engineering initiatives and expand its outreach efforts. The Center's sustainment activities will build on and extend the I3CR platform's expansive set of data and knowledge management capabilities, with a focus on addressing key emergent needs within our user base. 
In Aim 1, we will continue to develop the XNAT-based data management platform, including adding standards-based clinical interfaces, developing a cohort discovery service with natural language processing support, implementing a task automation service, and extending its container-based computing service to support high performance computing and cloud computing environments. In Aim 2, we will implement a suite of integrations with complementary image analysis and data sharing platforms, including The Cancer Imaging Archive and the NCI Imaging Data Commons. In Aim 3, we will expand the I3CR's outreach initiatives to ensure broad and effective adoption of the I3CR informatics platforms. The Center's training program will utilize the XNAT Academy education platform to host a series of online training programs directed at specific audiences, including developers, data scientists, integrators, and system administrators. A suite of supporting cloud services will be implemented to assist the community in adopting and developing on the I3CR platform.
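Developers integrating with the XNAT platform described above typically work through its REST API, in which projects, subjects, and imaging sessions are addressed by hierarchical resource paths. A minimal sketch follows; the server URL is hypothetical, and the helper functions are illustrative rather than part of any I3CR deliverable, but the `/data/projects/.../subjects/.../experiments/...` path structure and the JSON `ResultSet` wrapper are XNAT's documented REST conventions.

```python
import json
import urllib.request

XNAT_BASE = "https://xnat.example.org"  # hypothetical server URL

def experiment_url(project: str, subject: str, experiment: str) -> str:
    """Build the XNAT REST path addressing one imaging session."""
    return (f"{XNAT_BASE}/data/projects/{project}"
            f"/subjects/{subject}/experiments/{experiment}")

def list_projects() -> list:
    """List project IDs via XNAT's REST API (requires a live server).

    XNAT wraps query results in a JSON ResultSet object.
    """
    with urllib.request.urlopen(f"{XNAT_BASE}/data/projects?format=json") as r:
        payload = json.load(r)
    return [row["ID"] for row in payload["ResultSet"]["Result"]]
```

Integrations like those proposed in Aim 2 (e.g., with The Cancer Imaging Archive) build on this same resource-oriented interface, which is what makes cross-platform data exchange tractable.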
|
1.009 |
2021 |
Marcus, Daniel Scott |
S10Activity Code Description: To make available to institutions with a high concentration of NIH extramural research awards, research instruments which will be used on a shared basis. |
A High Performance Research Image Repository (RIR) for the Washington University Center for High Performance Computing (CHPC)
Project Summary/Abstract: We propose to build a Research Image Repository (RIR) to house large collections of biomedical imaging data. The RIR will include datasets produced locally at Washington University: the Connectome Coordination Facility (CCF) (which itself includes the Human Connectome Project (HCP) Young Adult study, the Lifespan related projects, the Disease related projects, and assorted HCP-related projects), the Knight Alzheimer Disease Research Center (ADRC), the Adolescent Brain Cognitive Development (ABCD) Study, the Comprehensive Neuro-Oncology Data Repository (CONDR), and the clinically-based PACS image repository. In addition, copies of external data collections such as the UK Biobank, the Alzheimer's Disease Neuroimaging Initiative (ADNI), and The Cancer Imaging Archive (TCIA) will be maintained. The RIR includes a data management software solution that will introduce many novel features (such as 'data tagging' to enrich datasets, and advanced search features) and will allow us to leverage existing storage, including the Center for High Performance Computing's (CHPC) 1.4PB of BeeGFS 'scratch' storage, solid-state NVMe drives integrated into the compute nodes, and 10PB of ZFS-based storage. All storage will be presented to the user as a single file system, while data will be migrated to different performance tiers based on the storage requirements of the datasets or processing algorithms. The RIR will be integrated into the CHPC for data processing. The proposal also includes two NVIDIA DGX A100 GPU servers providing state-of-the-art GPU-based processing power. The combination of high-quality, diverse sets of biomedical imaging data with next-generation computing power will have a transformative effect on biomedical image processing pipelines, and nowhere will the effects be more profound than in the emerging field of Deep Learning for image processing.
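The tiered-storage migration described above, where data moves between performance tiers behind a single user-visible file system, can be illustrated with a toy placement policy. The tiers mirror those named in the proposal (compute-node NVMe, BeeGFS scratch, ZFS bulk storage), but the recency thresholds and the policy itself are invented for illustration; real hierarchical storage management also weighs dataset size, pinning, and quotas.

```python
from dataclasses import dataclass

# Illustrative tiers modeled on the proposal's storage hardware.
# Thresholds (days since last access) are made-up values.
TIERS = [
    ("nvme", 1),      # hottest: accessed within the last day
    ("beegfs", 30),   # scratch: accessed within the last month
    ("zfs", None),    # bulk ZFS storage: everything else
]

@dataclass
class Dataset:
    name: str
    days_since_access: int

def place(ds: Dataset) -> str:
    """Pick a storage tier from recency of access.

    A sketch of migration-by-recency only; not the RIR's actual policy.
    """
    for tier, max_age in TIERS:
        if max_age is None or ds.days_since_access <= max_age:
            return tier
    return "zfs"

print(place(Dataset("hcp_young_adult", 0)))  # -> nvme
print(place(Dataset("adni_copy", 400)))      # -> zfs
```

The key design point is that placement is a policy decision separate from the namespace: datasets keep one logical path while the policy migrates their blocks between tiers.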
|
1.009 |