2009 — 2011
Grethe, Jeffrey S; Makeig, Scott
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies.
A Human Electrophysiology, Associated Anatomic Data and Integrated Tool Resource @ University of California San Diego
DESCRIPTION (provided by applicant): Current technology allows recording of brain electrical and/or magnetic activity from 256 or more scalp sites with high temporal resolution, plus concurrent behavioral and other psychophysiological time series, while dense human intracranial data are routinely acquired during some brain surgery and surgery planning procedures. Subject anatomic magnetic resonance (MR), computerized tomography (CT), and/or diffusion tensor (DT) head images may also be available. Standard analysis approaches extract only a small part of the rich information about human brain dynamics contained in these data. We propose a collaboration between the UCSD Swartz Center for Computational Neuroscience (home to the EEGLAB software environment development project), the UCSD Center for Research in Biological Systems (home to the Biomedical Informatics Research Network (BIRN) coordinating center), and leaders in six other human electrophysiological research communities to develop a public Human Electrophysiology, Associated Anatomic Data and Integrated Tool (HeadIT) resource. This framework will be built on the BIRN Data Repository framework (www.nbirn.net/bdr), thereby expanding its scope and capabilities. The HeadIT resource will share existing, high-quality, well-documented data sets, allowing their archival preservation and continued public availability for re-analysis and meta-analysis with increasingly powerful analysis tools. Initially, the HeadIT repository, extending a foundational database within the BIRN Data Repository, will contain a rich collection of human electrophysiological data contributed by SCCN and others, physically distributed across storage nodes hosted by centers focused on seven research fields: epilepsy, neurorehabilitation, attention, magnetic recording, child development, neuroinformatics, and multimodal imaging.
The HeadIT resource will include a software facility for accessing and analyzing repository data in the EEGLAB (sccn.ucsd.edu/eeglab) and other widely used Matlab-based electrophysiological tool environments. EEGLAB will be extended to include a foundational tool set for performing meta-analyses across more than one archived HeadIT study. We will develop minimal information standards and quality assurance tests for contributed HeadIT data and a facility for interactive data visualization, and we will test and validate the operability of the HeadIT resource via named ongoing research collaborations that will serve as the initial user community for tool and data framework development and testing. PUBLIC HEALTH RELEVANCE: The proposed Human Electrophysiology, Associated Anatomic Data and Integrated Tool (HeadIT) Resource will allow re-analysis of freely available recordings of brain activity and associated behavioral and physiologic measures using freely available analysis tools. This will allow large multi-study meta-analyses for patterns not visible in any single study, re-analyses to validate previously published conclusions from existing data, and application of successively more advanced tools to complex and costly data collected in a wide range of clinical and basic research areas.
2013 — 2017
Grethe, Jeffrey S.; Martone, Maryann E
U24 Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research.
NIDDK Network Coordinating Unit @ University of California San Diego
DESCRIPTION (provided by applicant): This application outlines plans for the establishment of the NIDDK Interconnectivity Network Coordinating Center (INCC) to expand and enhance the current NIDDK Consortium Interconnectivity Network (dkCOIN) community and infrastructure. The current dkCOIN was established in recognition of the need to interconnect research communities, both basic and clinical, by providing seamless access to large pools of data relevant to the mission of NIDDK. The aims of the dkCOIN project are similar in scope to those of the existing Neuroscience Information Framework (NIF), a project established in 2008 by the NIH Blueprint for Neuroscience Research Institutes to provide a landscape analysis of the myriad of tools and data available via the web, and to create a portal where these resources could be collectively accessed. The NIF was designed to break down silos of information through its novel data federation technology and its concept-based search. NIF takes a global view of resources such that there are no neuroscience resources or metabolic resources, only biomedical resources that contain information of more or less relevance to different communities. Thus, while the NIF user interface is domain-specific, the NIF information system and the resources it federates are broadly applicable to biomedical science. In this proposal, we outline plans to extend and enhance dkCOIN through the use of the NIF infrastructure, data federation and expertise. Through this merger, we can immediately add considerable value to the current dkCOIN by bringing in NIF's expansive data federation, resource catalog and semantic search services. The portal will continue to present the data and tools according to the needs and customs of the NIDDK community, but the backend will tap into an expanded resource pool that cuts across all domains of biomedical science, rather than a restricted set.
New development will focus on the creation of workflows using tools in use by the NIDDK community with this vast array of integrated data. These tools and workflows will link to cloud computational and storage resources, in order to ensure that the network is sustainable. New development will also focus on more effective means to connect NIDDK researchers with resources that are available to them to support their research projects. Development will be driven by use cases supplied by NIDDK-supported researchers to ensure that it meets NIDDK's objectives. We believe that the strategy outlined here provides a cost-effective and innovative means to ensure that researchers have access to the data and tools they require regardless of where they reside, and provides a sustainable model for future similar efforts.
2015 — 2016
Grethe, Jeffrey S.; Martone, Maryann E
U24 Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research.
Operation, Support and Strategic Enhancement of the Neuroscience Information Framework @ University of California San Diego
DESCRIPTION: The Neuroscience Information Framework (NIF; http://neuinfo.org) is currently managed, maintained, and hosted by researchers in the Center for Research in Biological Systems (CRBS) at the University of California, San Diego. Our group is the principal developer of the NIF system and has overseen its growth since 2008 from a modest catalog of 300 resources developed during the first phase of NIF, to the largest source of neuroscience resources on the web. As defined here, resources include data, databases, software/web-based tools, materials, literature, networks, terminologies, or information that would accelerate the pace of neuroscience research and discovery. NIF was instantiated because many of these valuable tools and services were largely unknown to the scientific community they were meant to serve. With the launch of major brain initiatives in the US and Europe, the amount of neuroscience data and tools will continue to increase. NIF can be viewed as a cost-effective PubMed and PubMed Central for digital assets (e.g., databases, software tools, alternative media), making them collectively searchable and presenting a unified discovery environment for biomedical researchers. The NIF is heavily used, as measured by the number of visitors per month (more than 40,000) to the NIF web resources and the large number of repeat users (~35%) that visit the NIF discovery portal on a regular basis. NIF's services, standards and products are also heavily used, as our web services now regularly receive more than 15 million hits per month. NIF has developed a novel sustainability plan that provides for continued enhancement and population of the resource. NIF has developed a reputation as a trusted community partner, allowing us to gain cooperation with large segments of the neuroscience community, as well as publishers, non-profit and government organizations.
Through these networks, we've been able to launch major initiatives, help launch new collaborative efforts in basic and clinical neuroscience, and implement standards to help transform the way that we cite and track research resources. The work to be performed during the award period will be directed towards operating and maintaining the current NIF system while providing necessary strategic enhancements and undertaking broad outreach and dissemination efforts to encourage utilization of the NIF. A core new area of development will be centered on providing analytic tools to explore and identify lacunae of knowledge in neuroscience. The initial work will focus on developing analytic heatmaps for activation foci from the neuroimaging literature.
2016 — 2019
Stocks, Karen; Gupta, Amarnath (co-PI); Zaslavsky, Ilya; Grethe, Jeffrey
N/A Activity Code Description: No activity code was retrieved.
Earthcube Building Blocks: Collaborative Proposal: Earthcube Data Discovery Hub @ University of California-San Diego
Making data discovery more efficient, comprehensive and user-friendly is a critical challenge articulated by geoscientists as well as researchers in other fields. While information discovery portals and search engines have been developed for many data repositories, and systems that simultaneously search multiple resources have been created, cross-disciplinary data discovery remains a serious issue. It becomes especially acute with rapid increases in the volume and diversity of observations, reflecting different components of the Earth System and collected by multiple research groups, government organizations and commercial companies. The main goal of the EarthCube Data Discovery Hub project is to greatly reduce the time and effort necessary to locate and evaluate geoscience information resources across disciplines, and increase the value of investment in data generation by promoting data reuse and reducing duplication of effort. Project outcomes will benefit a wide group of scientists by providing them a user-friendly and powerful gateway to information resources across multiple data facilities and community contributions, and mechanisms for improving the system to answer their research queries in a consistent manner. The project will also benefit geoscience research in several ecosystems that are being used as examples to test the data tools being developed, including rivers, coral reefs and other marine ecosystems, and the critical zone where rock, soil, water, air and living organisms interact. The EarthCube Data Discovery Hub will be developed as a comprehensive data discovery and content enhancement system, which will leverage improved and community-curated metadata descriptions and integrate previously unregistered information sources. 
The project will further extend, improve and operationalize the inventory catalog developed in an earlier CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project, which currently includes over 2 million metadata documents from multiple sources. The key technological innovations include: pioneering the development of an automated cross-domain metadata augmentation and curation pipeline enabled by a large integrated geoscience ontology; mechanisms for 'deep registration' of geoscience data from different sources based on a novel data type registry; an online use case management system; and a methodology for processing several types of complex geoscience queries that cannot be answered by existing systems. In addition, the project will support scientific progress in several representative cross-disciplinary research scenarios, using the contexts of river geochemistry, coral reef and other marine ecosystem analysis, and critical zone science. The project will implement innovative community engagement mechanisms, including community annotation of automatically curated metadata, iterative improvement of geoscience ontology based on community feedback, and joint development of cross-disciplinary use cases semantically aligned with data descriptions.
2018 — 2021
Grethe, Jeffrey S.; Martone, Maryann E (co-PI)
U24 Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research.
dkNET Coordinating Unit: An Information Network for FAIR Resources and Data @ University of California, San Diego
Project Summary: We outline plans for the next generation NIDDK Information Network (dkNET), a centralized portal for discovery and information about research resources (data, reagents, organisms, tools) available to researchers through NIDDK-supported centers and other relevant projects. In this phase, we will continue to support and maintain aggregated data from biomedical databases. But since the launch of dkNET in 2012, the data and resource landscape has changed: there are new concerns about rigor and reproducibility, and we are seeing data mandates and increased demand for data and data services. We will therefore capitalize upon our considerable success in developing and deploying Research Resource Identifiers (RRIDs), a system for identifying, tracking and aggregating data about research resources in the literature, currently used in over 350 journals. RRIDs are unique identifiers assigned to individual research resources and form the basis of a citation and tracking system for use within the scientific literature. Through RRIDs, we will further develop our Resource Information Network, including analytics tools, to help researchers not only identify appropriate resources, but also obtain up-to-date information about the use and performance of these resources. We will also develop services in support of NIH's new rigor and reproducibility guidelines to help researchers develop plans to identify and validate research resources, and to help support centers in tracking use of their products. In response to user feedback and new NIH mandates and recommendations for data management and sharing, we will also add new services and tools in support of data science. During this past phase, the PIs helped develop the FAIR principles, recommendations for making data Findable, Accessible, Interoperable and Reusable. Major data initiatives in both the US and Europe are now supporting FAIR.
Full implementation of FAIR requires that the community have a way to develop and disseminate best practices and standards for metadata, data formats, etc. NIDDK is uniquely poised to help coordinate and accelerate this process for NIDDK-supported researchers. We believe that, to gain support for proper data management and data publishing, it is important not just to provide help with compliance, but to provide tools and services for integrating and analyzing FAIR data to address critical questions. In conjunction with the Signaling Pathways Project (SPP, formerly the Nuclear Receptor Signaling Atlas, or NURSA), we will develop a new meta-analysis platform for FAIR 'omics data. Through the application of biocuration and consensomic analysis of NIDDK-funded and relevant 'omics data assets, dkNET users will have access to a user-friendly but powerful platform for modeling signaling events in metabolic organs and intersections between cellular signaling pathways and metabolic disease. These capabilities will be accompanied by an aggressive and extensive outreach and dissemination plan to help broaden awareness and use of dkNET and NIDDK-supported centers.
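To make the RRID citation-and-tracking idea concrete, a minimal sketch follows. RRIDs appear in methods sections as prefixed accessions (e.g., RRID:AB_10013382 for the widely used Dako anti-GFAP antibody, RRID:SCR_003070 for ImageJ); extracting and counting them is the basic building block of the usage analytics described above. The regex and helper below are illustrative, not dkNET's actual production pipeline.

```python
import re
from collections import Counter

# Illustrative pattern for RRIDs as they appear in methods sections.
# The prefix encodes the resource type (AB_ = antibody, SCR_ = software,
# CVCL_ = cell line, etc.). This is a sketch, not the dkNET implementation.
RRID_PATTERN = re.compile(r"RRID:\s?([A-Za-z]+_[A-Za-z0-9:_-]+)")

def extract_rrids(text: str) -> list[str]:
    """Return all RRID accessions cited in a block of text."""
    return RRID_PATTERN.findall(text)

methods = (
    "Sections were stained with anti-GFAP (Dako, RRID:AB_10013382) "
    "and analyzed in ImageJ (RRID:SCR_003070). "
    "A second cohort reused anti-GFAP (RRID:AB_10013382)."
)

# Aggregating mentions across papers yields the per-resource usage
# counts that a citation-tracking service could report to researchers.
usage = Counter(extract_rrids(methods))
print(usage)  # Counter({'AB_10013382': 2, 'SCR_003070': 1})
```

Scaled across a literature corpus, counts like these are what allow a resource network to report how often, and in what contexts, a given reagent or tool is actually used.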
2019
Grethe, Jeffrey S.; Martone, Maryann E (co-PI)
U24 Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research.
Conproject-001 @ University of California, San Diego
2021
Ferguson, Adam R; Fouad, Karim (co-PI); Grethe, Jeffrey S.; Lemmon, Vance P (co-PI)
U24 Activity Code Description: To support research projects contributing to improvement of the capability of resources to serve biomedical research.
Pan-Neurotrauma Data Commons @ University of California, San Francisco
PROJECT SUMMARY/ABSTRACT: Trauma to the central nervous system (CNS: spinal cord and brain) affects more than 2.5 million people per year in the US, with economic costs of $80 billion in healthcare and lost productivity. Yet the precise pathophysiological processes impairing recovery remain poorly understood. This lack of knowledge is exacerbated by poor reproducibility of findings in animal models and limits translation of therapeutics across species and into humans. Part of the problem is that neurotrauma is intrinsically complex, involving heterogeneous damage to the CNS, by far the most complex organ system in the body. This results in a multifaceted CNS syndrome reflected across heterogeneous endpoints and multiple scales of analysis. Multi-scale heterogeneity makes traumatic brain injury (TBI) and spinal cord injury (SCI) difficult to understand using traditional analytical approaches that focus on a single endpoint for testing therapeutic efficacy. Single-endpoint testing provides a narrow window into the complex system of changes that describe SCI and TBI. Understanding these disorders involves managing datasets that include high-volume anatomy data, high-velocity physiology decision-support data, and high-variety functional/behavioral data, and assessing correlations among these endpoints. In this sense, neurotrauma is fundamentally a data management problem that involves the classic '3Vs of big data' (volume, velocity, variety). Of these, variety is perhaps the greatest data challenge in neurotrauma research for reproducibility in basic discovery, cross-species translation, and ultimately clinical implementation. For the proposed Data Repositories Cooperative Agreement (U24) we will build on our prior work managing data variety in the Open Data Commons for SCI (odc-sci.org) and TBI (odc-tbi.org) to make neurotrauma data Findable, Accessible, Interoperable, and Reusable (FAIR).
The milestone-driven aims will: 1) further develop and harden our data lifecycle management system with end-to-end data version control and provenance tracking, data certification, and data citation; 2) develop in-cloud data dashboards and visualizations to monitor data quality and to promote data reuse, exploration, and hypothesis generation; 3) establish a pan-neurotrauma (PANORAUMA) data commons that brings together separate data assets currently supported by our multi-PI (MPI) team by aligning a patchwork of governance structures and policies. The goal of the proposed project is to develop a pooled repository for preclinical discovery, reproducibility testing, and translational discovery both within and across neurotrauma types. Our team is well-positioned to execute this project, given that we developed some of the largest multicenter, multispecies neurotrauma data repositories to date (N>10,000 subjects, 20,000 curated variables); the Neuroscience Information Framework (NIF); data terminologies and standards for these fields (MIASCI, NIFSTD); and policy work with the International Neuroinformatics Coordinating Facility (INCF). The PANORAUMA cooperative agreement is highly responsive to PAR-20-089, leveraging early successes in SCI and TBI data sharing to improve quality and sustainability.