  • GeoSciML v3 (www.geosciml.org) and EarthResourceML v2 (www.earthresourceml.org) are the latest releases of geoscience data transfer standards from the IUGS-CGI Interoperability Working Group (IWG). The data standards each comprise a UML model and complex-feature GML schemas, extending the spatial standards of the Open Geospatial Consortium (OGC), including GML v3.2, O&M v2, and SWE Common v2. Future development of GeoSciML and EarthResourceML will occur under a collaborative IUGS-OGC arrangement. GeoSciML covers a wide range of geological data, including geological units, structures, earth materials, boreholes, geomorphology, petrophysical properties, and sampling and analytical metadata. The model was refactored from a single application schema in version 2 into a number of smaller, more manageable schemas in version 3. EarthResourceML covers solid earth resources (mineral occurrences, resources and reserves) and their exploitation (mines and mining activities). The model has been extended to accommodate the requirements of the EU INSPIRE data sharing initiative, with the addition of mineral exploration activity and environmental aspects (i.e., mining waste) to the model. GeoSciML-Portrayal is a simple-feature GML application schema based on a simplified core of GeoSciML. It supports presentation of geological map units, contacts, and faults in Web Map Services, and provides a link between simple-feature data delivery and more complex GeoSciML WFS services. The schema establishes naming conventions for fields commonly used to symbolize geological maps, enabling visual harmonization of map services. The IWG has established a vocabulary service at http://resource.geosciml.org, serving geoscience vocabularies in RDF-SKOS format. Vocabularies are not included in GeoSciML and EarthResourceML, but the models recommend a standard pattern for referencing controlled vocabularies using HTTP-URI links.
GeoSciML and EarthResourceML have been adopted or recommended as the data exchange standards in key international interoperability initiatives, including OneGeology, the INSPIRE project, the US Geoscience Information Network, and the Australia/NZ Government Geoscience Information Committee.
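The HTTP-URI vocabulary-linking pattern described above can be sketched as follows. This is an illustrative outline only: the concept path and feature identifiers below are hypothetical examples, not actual entries in the IWG vocabulary service.

```python
# Sketch of the vocabulary-linking pattern recommended by GeoSciML and
# EarthResourceML: features carry HTTP-URI links into externally served
# controlled vocabularies (RDF-SKOS) rather than embedding terms locally.
# The concept path "classifier/cgi/lithology/granite" is a made-up example.

VOCAB_BASE = "http://resource.geosciml.org"  # IWG vocabulary service

def vocab_link(vocabulary: str, concept_id: str) -> dict:
    """Build an xlink-style by-reference property, as a GeoSciML
    feature property would carry it."""
    return {
        "xlink:href": f"{VOCAB_BASE}/classifier/cgi/{vocabulary}/{concept_id}",
        "xlink:title": concept_id.replace("_", " "),
    }

# A hypothetical geological-unit feature referencing a lithology concept.
unit = {
    "gml:id": "unit.example.1",
    "gsml:lithology": vocab_link("lithology", "granite"),
}

print(unit["gsml:lithology"]["xlink:href"])
```

A client resolving that URI against the vocabulary service would retrieve the SKOS concept definition, which is what allows independently produced services to harmonize their terminology.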

  • Quarterly column on issues in Australian stratigraphy

  • We propose an automated capture system that follows the fundamental scientific methodology. It starts with the instrument that captures the data, uses web services to make standardised data reduction programs more widely accessible, and finally uses internationally agreed data transfer standards to make geochemical data seamlessly accessible online from a series of internationally distributed certified repositories. The Australian National Data Service (ANDS, http://www.ands.org.au/) is funding a range of data capture solutions to ensure that the data creation and data capture phases of research are fully integrated to enable effective ingestion into research data and metadata stores at the institution or elsewhere. ANDS is also developing a national discovery service that enables access to data in institutional stores with rich context. No data is stored in this system, only metadata with pointers back to the original data. This enables researchers to keep their own data while providing access to many repositories at once. Such a system will require standardisation at all phases of the process of analytical geochemistry. The geochemistry community needs to work together to develop standards for attributes as the data are collected from the instrument, to develop more standardised processing of the raw data, and to agree on what is required for publishing. An online collaborative workspace such as this would be ideal for geochemical data, and the provision of standardised, open source software would greatly enhance the persistence of individual geochemistry data collections and facilitate reuse and repurposing. This conforms to the guidelines from Geoinformatics for Geochemistry (http://www.geoinfogeochem.org/), which require metadata on how the samples were analysed.
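The metadata-only discovery pattern described above (a national service holding metadata records with pointers back to data kept in institutional repositories) can be sketched minimally. All class and field names here are illustrative assumptions, not ANDS APIs; the sample record is fabricated for demonstration.

```python
# Minimal sketch of a metadata-only discovery service: it registers
# metadata records that point back to data held elsewhere, so researchers
# keep their own data while searches span many repositories at once.
from dataclasses import dataclass

@dataclass
class MetadataRecord:
    title: str
    repository: str   # institutional store that actually holds the data
    data_uri: str     # pointer back to the original data (no copy made)
    method: str       # how samples were analysed (per GeoInfoGeochem guidelines)

class DiscoveryService:
    def __init__(self) -> None:
        self._records: list[MetadataRecord] = []

    def register(self, record: MetadataRecord) -> None:
        """Ingest a metadata record; the data itself is never stored here."""
        self._records.append(record)

    def search(self, term: str) -> list[MetadataRecord]:
        """Search metadata across every registered repository at once."""
        return [r for r in self._records if term.lower() in r.title.lower()]

svc = DiscoveryService()
svc.register(MetadataRecord(
    title="Zircon U-Pb geochronology, example suite",   # hypothetical dataset
    repository="example.edu.au",
    data_uri="https://example.edu.au/data/12345",
    method="LA-ICP-MS",
))
hits = svc.search("zircon")
print(hits[0].data_uri)
```

The design choice mirrored here is the key one in the abstract: because only metadata and pointers are centralised, standardisation effort concentrates on the metadata attributes rather than on moving or re-hosting the data.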

  • Discusses reasons to use the Australian Stratigraphic Units Database (ASUD), and new features of the web query page and reports

  • Geoscience Australia is supporting the exploration and development of offshore oil and gas resources and establishment of Australia's national representative system of marine protected areas through provision of spatial information about the physical and biological character of the seabed. Central to this approach is prediction of Australia's seabed biodiversity from spatially continuous data of physical seabed properties. However, information for these properties is usually collected at sparsely distributed discrete locations, particularly in the deep ocean. Thus, methods for generating spatially continuous information from point samples become essential tools. Such methods are, however, often data- or even variable-specific, and it is difficult to select an appropriate method for any given dataset. Improving the accuracy of these physical data for biodiversity prediction, by searching for the most robust spatial interpolation methods to predict physical seabed properties, is essential to better inform resource management practices. In this regard, we conducted a simulation experiment to compare the performance of statistical and mathematical methods for spatial interpolation using samples of seabed mud content across the Australian margin. Five factors that affect the accuracy of spatial interpolation were considered: 1) region; 2) statistical method; 3) sample density; 4) search neighbourhood; and 5) sample stratification by geomorphic provinces. Bathymetry, distance-to-coast and slope were used as secondary variables. In this study, we only report the results of the comparison of 14 methods (37 sub-methods) using samples of seabed mud content with five levels of sample density across the southwest Australian margin. The results of the simulation experiment can be applied to spatial data modelling of various physical parameters in different disciplines and have application to a variety of resource management applications for Australia's marine region.
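To make the interpolation problem concrete, here is a minimal inverse-distance-weighting (IDW) sketch, one of the simpler mathematical methods of the kind compared in such studies. The sample points and values below are fabricated for illustration; real inputs would be seabed mud-content samples with coordinates, and the abstract's actual 14 methods and sub-methods are not reproduced here.

```python
# Inverse distance weighting: predict a value at (x, y) as a weighted
# average of sample values, with weights decaying as distance ** -power.
import math

def idw(x: float, y: float, samples: list[tuple[float, float, float]],
        power: float = 2.0) -> float:
    """Interpolate at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v           # exact hit on a sample location
        w = d ** -power        # closer samples carry more weight
        num += w * v
        den += w
    return num / den

# Hypothetical mud-content samples (%): (x, y, value)
samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
print(round(idw(0.5, 0.5, samples), 2))
```

Factors from the experiment, such as sample density and search neighbourhood, map directly onto this sketch: density governs how many samples exist, and a search neighbourhood would restrict the loop to samples within some radius or count limit.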

  • Scientific data are being generated at an ever increasing rate. Existing volumes of data can no longer be effectively processed by humans, and efficient and timely processing by computers requires development of standardised machine-readable formats and interfaces. Although there is also a growing need to share data, information and services across multiple disciplines, many standards currently being developed tend to be discipline specific. To enable cross-disciplinary research, a more modular approach to standards development is required so that common components (e.g., location, units of measure, geometric shape, instrument type, etc.) can be identified and standardised across all disciplines. Already international standards bodies such as ISO and OGC (Open Geospatial Consortium) are well advanced in developing technical standards that are applicable for interchange of some of these common components, such as GML (Geography Markup Language), the Observations and Measurements Encoding Standard, SensorML, Spatial Coordinate Systems, Metadata Standards, etc. However, the path for developing the remaining discipline-specific and discipline-independent standards is less coordinated. There is a clear lack of infrastructure and governance not only for the development of the required standards but also for storage, maintenance and extension of these standards over time. There is also no formal mechanism to harmonise decisions made by the various scientific disciplines to avoid unwanted overlap. The National Committee for Data in Science (NCDS) was established in 2008 by the Australian Academy of Science to provide an interdisciplinary focus for scientific data management. In 2008 an informal request from the NCDS was put to the international Committee on Data for Science and Technology (CODATA) to consider taking on a new coordination role on issues related to the development and governance of standards required for the discovery of, and access to, digital scientific data.

  • With the increasing emphasis on electronic rather than paper products, the need for adequate metadata is becoming more and more pressing. The new AGSO Catalog is designed to address this problem at the corporate level. Developed from the AGSO Products Database, the AGSO Catalog is designed to encompass most of AGSO's outputs, datasets and resources. It does this with the help of various intranet and Web interfaces. Projects or authors must initiate Catalog entries, for without acceptable metadata a product cannot be sold by the Sales Centre, nor will permission to publish be granted. The Catalog is the key to future systems of information distribution and sales. It will permit us to go directly from the metadata to the electronically stored objects, thus enabling automated information distribution and electronic commerce.

  • This documentation manual for the national mineral deposits dataset provides the necessary description of AGSO's mineral deposit database (OZMIN): its structure, the main data and authority tables used by OZMIN, database table definitions, details on the Microsoft Access version of the database, and a listing of the deposits in the dataset.