2006 — 2008 | Jenkins, Christopher
Collaborative Research: Corewall--Integrated Environment For Interpretation of Geoscientific Data From Sediment and Crystalline Cores @ University of Colorado At Boulder
In broad collaboration with the IODP, ICDP, ice-core, limnogeology and paleolimnology research communities, the CoreWall Consortium (LacCore at the University of Minnesota, the Borehole Research Group at the Lamont-Doherty Earth Observatory of Columbia University, the Electronic Visualization Laboratory at the University of Illinois at Chicago, and INSTAAR at the University of Colorado) will jointly develop the CoreWall Suite, a real-time stratigraphic correlation, core description (CD) and data visualization system to be used by the marine, terrestrial and Antarctic science communities. The work model forged by the CoreWall Suite will significantly alter and enhance current approaches to core description and analysis of sediment and rock cores by providing an integrated environment for these activities, in both field and repository settings. In particular, the CoreWall Suite will incorporate the new NCLIP software, an updated version of CLIP (e.g., Sagan and Splicer) that was developed by the Lamont group more than 10 years ago. The CoreWall Suite will be portable, able to run on a variety of display platforms such as laptops and tiled displays, allowing it to be used in field, laboratory and individual office settings. This project entails a variety of significant broader impacts, including the involvement of undergraduate and graduate students. The CoreWall Suite will also be useful for a variety of outreach activities as well as classroom use.

2010 — 2011 | Jenkins, Christopher
Rapid: Seamless Marine-Wetlands-Coastal Soils Database to Support Urgent Decision-Making Against the Deepwater Horizon Coastal Oiling @ University of Colorado At Boulder
For several years, the PIs have been developing a data system that can represent the soils (substrates, sediments) of the coastal fringe. They have been devising solutions over the last several years by: (a) extending a large marine soils database to onshore landscapes in the Australian outback, (b) researching the geologic/ecologic changes in the Mississippi delta region, and (c) innovating in making numerical and linguistic data co-mappable (the heterogeneous data problem). The coastal zone is heavily populated and carries major investment worldwide. This project will open the way to a better depiction and understanding of geomaterial and environmental patterns over large expanses of the zone. In particular, better availability of data will improve planning and numerical model performance through improved inputs and extended opportunities for validation. With regard to the Deepwater Horizon disaster, this project will immediately provide better information focused on and supporting the cleanup and remediation efforts. The project will strengthen cleanup decision-support systems and numerical models, and will provide a new source of data for the associated research-side scientific mapping and experimental work.
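The heterogeneous data problem noted above, making linguistic and numerical data co-mappable, can be conveyed with a minimal sketch in which free-text sediment descriptions are reduced to a numeric grain-size estimate. The vocabulary and values below are illustrative assumptions, not the project's actual scheme:

```python
# Hypothetical sketch: mapping linguistic sediment descriptors onto a
# numeric grain-size scale so word-based and measured data can be
# co-mapped. Terms and midpoints are illustrative, not project values.

# Approximate grain-size midpoints in millimetres (illustrative)
GRAIN_SIZE_MM = {
    "clay": 0.002,
    "silt": 0.031,
    "sand": 1.0,
    "gravel": 16.0,
}

def description_to_grain_size(description: str):
    """Average the grain sizes of all size terms found in a free-text
    sediment description; return None if no term matches."""
    words = description.lower().replace(",", " ").split()
    hits = [GRAIN_SIZE_MM[w] for w in words if w in GRAIN_SIZE_MM]
    return sum(hits) / len(hits) if hits else None

print(description_to_grain_size("muddy sand with gravel"))  # averages "sand" and "gravel"
```

A real system would of course weight modifiers ("muddy sand") and handle fuzzy membership, but the sketch shows how a linguistic record can land on the same numeric axis as measured data.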

2011 — 2012 | Jenkins, Christopher
Collaborative Research: Population Ecology Models For Carbonate Sediments @ University of Colorado At Boulder
EAGER: Collaborative Research: Population Ecology Models for Carbonate Sediments
Christopher C. Jenkins, EAR-1118297, University of Colorado, Boulder
PI: Potts, Donald C., EAR-1118306, University of California-Santa Cruz
Carbonate sediments are deposited extensively in the oceans and are a significant factor in global carbon budgets. But they are challenging to understand because they are created biologically, then modified geologically. Prior to this project, they had been modeled with rather simple rules-based methods that go under the name "carbonate factory". This project takes the next step and directly applies biological principles, using advanced numerical modeling of the actual processes: the "carbonate farm". The population and community ecology of carbonate-skeleton organisms such as corals, bryozoans and algae is used to build realistic, detailed 3D time-geologic architectures. This opens productive new research paths into the carbon cycle, ocean acidification, rock properties, oil-gas resources, and environmental responses to global change. Perceptions that process modeling is not achievable are wrong: the key is to employ good mathematical tactics on the correct set of processes, for the key organisms, at appropriate time-space scales. In prototype, this project has already produced emergent model features such as high-production "bone yard" ecologies, and correct statistics for the variability of the accumulated sediments. The outputs can be compared even at the core-description level with Integrated Ocean Drilling Program (IODP) core logs and oil-industry reservoir variability statistics, promising cross-discipline results.
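As a hedged illustration of the "carbonate farm" idea, population growth of a carbonate producer can be coupled to sediment accumulation in a few lines. The logistic model and every parameter value here are assumptions chosen for illustration, not the project's actual model:

```python
# Minimal sketch: a carbonate-producer population grows logistically and
# its skeletal output accumulates as sediment thickness each year.
# All parameters are illustrative assumptions.

def simulate_accumulation(r=0.5, K=100.0, production_rate=0.01, years=50):
    """Logistic population growth (yearly time step) with carbonate
    thickness proportional to the standing population each year."""
    population = 1.0   # initial colonisers (arbitrary units)
    thickness = 0.0    # accumulated carbonate (m, illustrative)
    for _ in range(years):
        population += r * population * (1 - population / K)
        thickness += production_rate * population
    return population, thickness

pop, thick = simulate_accumulation()
```

The population converges to the carrying capacity K while the sediment record integrates its history, which is the basic sense in which ecological dynamics generate geologic architecture.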

2012 — 2013 | Jenkins, Christopher
Eager Collaborative Research: Bringing Together Computational and Linguistic Methods to Extract 'Dark' Geosciences Data For the Earthcube Framework @ University of Colorado At Boulder
A large percentage of valuable geoscience data is based on the analysis of discrete samples and is collected manually (e.g., paleontological collections, structural/tectonic data, petrographic/mineralogic data, economic data, geochemical measurements, rock mechanics, etc.). Often, these data are reported only in tables in the published literature or in PDFs or spreadsheets on individual investigator websites. Commonly these data are not registered on or entered into standardized, publicly accessible databases. As a result, for these data to be discovered and used or reused, researchers or other interested parties must manually comb through the text, figures, and appendices of journal articles or the websites of individual investigators, sometimes having to sift through raw experimental data. This process is extremely time-intensive and slows scientific discovery and the verification of research results. Consequently, a vast amount of surface-earth geoscience data is currently inaccessible; such inaccessible data is termed "dark data". This EAGER combines the expertise of top-notch computer scientists and geoscientists whose goal is to create a search algorithm that brings this dark data to light in a way that will enable the next generation of integrative geoscience research. The approach will involve development of an innovative search-engine "crawler" that will comb the geoscience literature and bring dark data to light from the text and figures in this corpus. The cyberinfrastructure tool being developed will be able to interpret the semantics of English text and the concepts of geoscience. The tool will be piloted by examining entries in the Macrostrat database, a structured spatial database of lithologic and geochronologic information, and then employing a geoscience ontology by means of the Hazy framework for information extraction.
Questions to be addressed include to what extent dark data is presently accessible, and whether it can be extracted and placed into an accessible format and repository where it can be discovered by web services or other search engines. Broader impacts of the work include the training of graduate students and increasing the infrastructure for science through the development of a new and much-needed data search tool.
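As a rough illustration of what extracting "dark data" from running text involves (the actual project uses the Hazy framework and a geoscience ontology, not simple patterns), a regular-expression sketch can pull measurement-like triples out of prose:

```python
# Hedged illustration of the "dark data" problem: turning measurements
# buried in running text into structured records. The quantity words and
# unit list are invented for this sketch.
import re

MEASUREMENT = re.compile(r"(\w+)\s+of\s+(\d+(?:\.\d+)?)\s*(Ma|wt%|mm)")

def extract_measurements(text: str):
    """Return (quantity, value, unit) triples found in free text."""
    return [(q, float(v), u) for q, v, u in MEASUREMENT.findall(text)]

sample = "The tuff yielded an age of 12.5 Ma and a K2O content of 3.2 wt%."
print(extract_measurements(sample))
# → [('age', 12.5, 'Ma'), ('content', 3.2, 'wt%')]
```

A production tool must additionally resolve what each quantity refers to (which sample, which stratigraphic unit), which is where the semantic and ontological machinery described above comes in.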

2014 — 2017 | Palmer, Martha (co-PI); Jenkins, Christopher; Martin, James (co-PI); Duerr, Ruth
CIF21 DIBBs: Porting Practical Natural Language Processing (NLP) and Machine Learning (ML) Semantics From Biomedicine to the Earth, Ice and Life Sciences @ University of Colorado At Boulder
Semantics is the study of word-based information. The sciences are filled with word-based descriptive data: field observations, materials and habitat identifications, parameter names and units, events and processes. Semantics are also important in medicine, where the human body and illnesses have to be described. To enhance interoperability among these word-based (semantic) systems, and to more readily explore the rapidly growing quantities of semantic data, there has been a movement towards organizing word-based data in ways that allow machine-assisted, automated analysis. Biomedicine has made great progress in organizing and using semantic information because of substantial funding investments. This project builds upon extensive investments in the biomedical field, providing an opportunity to rapidly develop the organization of semantic concepts for other domain sciences.
The ClearTK toolkit, developed by the Center for Computational Language and Education Research (CLEAR), will be used to build semantic resources (taxonomies, ontologies, and semantic networks) for three science domains (geology, cryology, and biology). ClearTK is a state-of-the-art natural language processing (NLP) and machine learning (ML) system that also has essential tools for machine-assisted annotation, validation, document tagging, and event extraction. The ClearTK system has been used operationally for biomedical semantic applications, including in high-profile hospitals. In this project, development is focused on the science fields of geology, ice and snow, and biology. In these fields, accurate extraction of semantic information from word-based data is required so users can quickly find the data they really need. This project provides a valuable opportunity to expand and evaluate semantic capabilities in conjunction with several scientific domain experts.
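A toy sketch of the kind of semantic resource described above (this is not ClearTK itself; the taxonomy and all terms are invented for illustration) is document tagging against small per-domain term sets:

```python
# Illustrative sketch only: tagging domain terms in text against a tiny
# hand-made taxonomy for the three target domains. Real semantic
# resources would be far larger and ontology-structured.

TAXONOMY = {
    "geology": {"basalt", "granite", "stratigraphy"},
    "cryology": {"firn", "permafrost", "glacier"},
    "biology": {"diatom", "foraminifera", "coral"},
}

def tag_terms(text: str):
    """Return, per domain, the taxonomy terms that occur in the text."""
    words = set(text.lower().split())
    return {domain: terms & words for domain, terms in TAXONOMY.items()}

hits = tag_terms("permafrost cores contained diatom layers over basalt")
```

Even this crude matcher shows why curated domain vocabularies matter: the same tagging step, backed by a proper ontology and ML disambiguation, is what lets users "quickly find the data they really need".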