  • Geoscience Australia has collaboratively developed a number of open source software models and tools to estimate hazard, impact and risk to communities for a range of natural hazards to support disaster risk reduction in Australia and the region. These models and tools include ANUGA, EQRM, TCRM, TsuDAT, RICS and FiDAT. This presentation will discuss the drivers for developing these models and tools using open source software and the benefits to end-users in the emergency management and planning community, as well as to the broader research community. Progress and plans for these models and tools will also be outlined, in particular those that take advantage of the availability of high performance computing, cloud computing, web services and global initiatives such as the Global Earthquake Model.

  • Robust methods for generating spatially continuous data from point locations of physical seabed properties are essential for accurate biodiversity prediction. For many national-scale applications, spatially continuous seabed sediment data are typically derived from sparsely and unevenly distributed point locations, particularly in the deep ocean, due to the expense and practical limitations of acquiring samples. Methods for deriving spatially continuous data are usually data- and variable-specific, making it difficult to select an appropriate method for any given physical seabed property. To improve the spatial modelling of physical seabed properties, this study compared the results of a variety of methods for deriving spatially continuous mud content data for the southwest margin of Australia (523,400 km²) based on 177 sparsely and unevenly distributed point samples. For some methods, secondary variables were also used in the analysis, including bathymetry, distance-to-coast, seabed slope, and geomorphic province (i.e., shelf, slope, etc.). Effects of sample density were also investigated. The predictive performance of the methods was assessed using a 10-fold cross-validation and visual examination. A combined method (random forest and ordinary kriging: RFrf) proved the most accurate, with a relative mean absolute error (RMAE) up to 17% less than the control. No threshold sample density was detected; as sample density increased, so did the accuracy of the method. The RMAE of the most accurate method is about 30% lower than that of the best methods in previous publications, further highlighting the robustness of the method developed in this study. The results of this study show that significant improvements in the accuracy of spatially continuous seabed properties can be achieved through the application of an appropriate interpolation method. The outcomes of this study can be applied to the modelling of a wide range of physical properties for improved marine biodiversity prediction.
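A common way to combine random forest with ordinary kriging is residual kriging: the forest models the trend from secondary variables, and its residuals are interpolated by kriging. The sketch below illustrates that idea only; it assumes scikit-learn and pykrige, and the data and variable names are illustrative rather than the study's actual implementation.

```python
# Minimal sketch of a random forest + ordinary kriging hybrid
# (RF models the trend from covariates; OK interpolates the RF residuals).
# Assumes scikit-learn and pykrige; variable names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from pykrige.ok import OrdinaryKriging

def rf_ok_predict(lon, lat, covariates, mud, grid_lon, grid_lat, grid_covariates):
    """Predict mud content at grid points from sparse point samples."""
    # 1. Fit the trend: mud content as a function of secondary variables
    #    (e.g. bathymetry, distance-to-coast, seabed slope).
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(covariates, mud)

    # 2. Krige the residuals of the trend model at the prediction points.
    residuals = mud - rf.predict(covariates)
    ok = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")
    kriged_resid, _ = ok.execute("points", grid_lon, grid_lat)

    # 3. Hybrid prediction = RF trend + kriged residual.
    return rf.predict(grid_covariates) + kriged_resid
```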

  • In response to the devastating Indian Ocean Tsunami (IOT) that occurred on 26 December 2004, Geoscience Australia developed a framework for tsunami risk modelling. The outputs from this methodology have been used by emergency managers throughout Australia to plan and prepare for future events. For Geoscience Australia to be confident in the information that is provided to the various stakeholders, validation of the model and methodology is required. Tsunami modelling at Geoscience Australia employs a hybrid approach which couples two models at the continental shelf. First, we use an elastic dislocation model to simulate the initial sea-floor displacement of an earthquake source. The tsunami is then propagated across the deep ocean using URSGA, a finite difference model that solves the non-linear shallow water wave equation across nested grids. We stop this model at the 100 m water depth contour and couple it to a detailed inundation modelling tool, ANUGA, developed by Geoscience Australia and the Australian National University. ANUGA also solves the non-linear shallow water wave equation, using a finite volume method; it incorporates bottom friction coefficients and can resolve hydraulic shocks and the wetting and drying process. While the huge loss of life from the 2004 Indian Ocean tsunami was tragic, it did provide a unique opportunity to record the impact of a large tsunami event. Information gained from post-tsunami surveys and tide gauge recordings at Patong Bay, Thailand and Geraldton, Western Australia is used to validate our tsunami inundation modelling methodology. By using these two locations we can assess the performance of our models at near-source and distal locations. In addition, wave heights observed in the deep ocean by satellite altimetry are utilised to validate our deep water propagation model.
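ANUGA is distributed as a Python package, and an inundation run follows the pattern below: build a mesh, set elevation, friction and initial stage, attach boundary conditions, then evolve the flow. This is a minimal sketch based on ANUGA's documented example API; the mesh extents, elevation function and boundary values are illustrative, not those used in the validation study.

```python
# Minimal ANUGA sketch: evolve a shallow-water domain with a simple
# incoming-wave boundary. All numbers are illustrative.
import anuga

# Rectangular cross mesh: 20 x 10 cells over a 2 km x 1 km area.
domain = anuga.rectangular_cross_domain(20, 10, len1=2000.0, len2=1000.0)
domain.set_name('tsunami_sketch')  # name of the output .sww file

# Bathymetry/topography: a plane sloping up towards the shore.
domain.set_quantity('elevation', lambda x, y: -10.0 + 0.005 * x)
domain.set_quantity('friction', 0.03)   # Manning bottom friction coefficient
domain.set_quantity('stage', 0.0)       # initial water surface level

# Boundaries: incoming wave on the open-ocean side, reflective elsewhere.
Bw = anuga.Dirichlet_boundary([1.0, 0.0, 0.0])  # 1 m stage, zero momentum
Br = anuga.Reflective_boundary(domain)
domain.set_boundary({'left': Bw, 'right': Br, 'top': Br, 'bottom': Br})

# Evolve the flow, reporting every 60 s of simulated time.
for t in domain.evolve(yieldstep=60, finaltime=1800):
    domain.print_timestepping_statistics()
```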

  • Random forest (RF) is one of the top-performing methods in predictive modelling. Because of its high predictive accuracy, we introduced it into spatial statistics by combining it with existing spatial interpolation methods, resulting in a few hybrid methods with improved prediction accuracy when applied to marine environmental datasets (Li et al., 2011). The superior performance of these hybrid methods was partially attributed to the features of RF, one component of the hybrids. One of these features, inherited from its constituent trees, is the ability to deal with irrelevant inputs. It is also argued that the performance of RF is not much influenced by parameter choices, so the hybrids presumably share this feature. However, these assumptions have not been tested for the spatial interpolation of environmental variables. In this study, we experimentally examined these assumptions using seabed sand and gravel content datasets from the northwest Australian marine margin. Four sets of input variables and two choices of the 'number of variables randomly sampled as candidates at each split' were tested in terms of predictive accuracy. The input variable sets ranged from the six predictors alone to combinations of these predictors and derived variables, including the second and third orders and/or possible two-way interactions of the six predictors. These derived predictors were regarded as redundant and irrelevant variables because they are correlated with the six predictors, and because RF can perform implicit variable selection and can model complex interactions among predictors. The results derived from this experiment are analysed, discussed and compared with previous findings. The outcomes of this study have both practical and theoretical importance for predicting environmental variables.
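The 'number of variables randomly sampled as candidates at each split' is the mtry parameter of R's randomForest, exposed as max_features in scikit-learn. A minimal sketch of this kind of experiment follows, assuming scikit-learn; the synthetic predictor sets, the two max_features choices and the cross-validated RMSE criterion are illustrative stand-ins for the study's actual data and settings.

```python
# Sketch: test RF sensitivity to input-variable sets and to mtry
# (max_features). Data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 177
X_base = rng.normal(size=(n, 6))          # six base predictors
y = X_base[:, 0] + 0.5 * X_base[:, 1]**2 + rng.normal(scale=0.3, size=n)

# Derived (correlated, arguably redundant) variables: squares and cubes.
X_derived = np.hstack([X_base, X_base**2, X_base**3])

for name, X in [('base only', X_base), ('base + derived', X_derived)]:
    # Two mtry choices: the regression default (p/3) and all predictors.
    for mtry in (max(1, X.shape[1] // 3), X.shape[1]):
        rf = RandomForestRegressor(n_estimators=500, max_features=mtry,
                                   random_state=0)
        rmse = -cross_val_score(rf, X, y, cv=10,
                                scoring='neg_root_mean_squared_error').mean()
        print(f'{name:15s} max_features={mtry:2d}  RMSE={rmse:.3f}')
```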

  • The major tsunamis of the last few years have dramatically raised awareness of the possibility of potentially damaging tsunami reaching the shores of Australia and of other countries in the region. Here we present three probabilistic hazard assessments for tsunami generated by megathrust earthquakes in the Indian, Pacific and southern Atlantic Oceans. One of the assessments was done for Australia, one covered the island nations of the Southwest Pacific and one was for all the countries surrounding the Indian Ocean Basin.

  • Geoscience Australia has developed a number of open source risk models to estimate hazard, damage or financial loss to residential communities from natural hazards; these models underpin disaster risk reduction activities. Two of these models will be discussed here: the Earthquake Risk Model (EQRM) and a hydrodynamic model called ANUGA, developed in collaboration with the ANU. Both models have been developed in Python using scientific and GIS packages such as Shapely, Numeric and SciPy. This presentation will outline key lessons learnt in developing scientific software in Python. Methods of maintaining and assessing code quality will be discussed, including: (1) what makes a good unit test; (2) how defects in the code were discovered quickly by being able to visualise the output data; and (3) how characterisation tests, which describe the actual behaviour of a system, are useful for finding unintended system changes. The challenges involved in optimising and parallelising Python code will also be presented. This is particularly important in scientific simulations, as they use considerable computational resources and involve large data sets. The focus will be on profiling, NumPy, using C code, and parallelising applications to run on clusters. Reduction of memory use by using a class to represent a group of items, instead of a single item, will also be discussed.
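The memory-reduction point is the "structure of arrays" pattern: rather than allocating one Python object per item, a single object holds NumPy arrays for the whole collection, so per-object overhead is paid once. The sketch below shows the idea; the class and attribute names are illustrative, not EQRM's actual API.

```python
# "Group of items" pattern: one object holding arrays for all buildings,
# rather than one Python object per building. Names are illustrative.
import numpy as np

class BuildingCollection:
    """All buildings in a study area, stored column-wise in NumPy arrays."""
    def __init__(self, latitudes, longitudes, floor_areas):
        self.latitude = np.asarray(latitudes, dtype=np.float64)
        self.longitude = np.asarray(longitudes, dtype=np.float64)
        self.floor_area = np.asarray(floor_areas, dtype=np.float64)

    def total_floor_area(self):
        # Vectorised operations replace per-object attribute access.
        return self.floor_area.sum()

# One object for a million buildings: ~24 MB of array data, versus the
# far larger overhead of a million individual Python instances.
n = 1_000_000
buildings = BuildingCollection(np.zeros(n), np.zeros(n), np.full(n, 200.0))
print(buildings.total_floor_area())
```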

  • The 2004 Indian Ocean Tsunami raised the profile of tsunami as a significant emergency management issue in Australia. The Australian government responded by initiating a range of measures to help safeguard Australia from tsunami, in particular the Australian Tsunami Warning System (ATWS). In addition, it is supporting fundamental research into understanding the tsunami risk to Australian communities. The Risk and Impact Analysis Group (RIAG) of Geoscience Australia achieves this through the development of computational methods, models and decision support tools for use in assessing the impact and risk posed by hazards. Together with support from Emergency Management Australia, it is developing a national tsunami hazard map based on earthquakes generated in the subduction zones surrounding Australia. These studies have highlighted sections of the coastline that appear vulnerable to events of this type. The risk is determined by the likelihood of the event and the resultant impact. Modelling the impacts of tsunami events is a complex task. The computer model ANUGA is used to simulate the propagation of a tsunami toward the coast and to estimate the level of damage. A simplification is obtained by taking a hybrid approach in which two models are combined: relatively simple and fast models are used to simulate the tsunami event and wave propagation through open water, while the impact from tsunami inundation is simulated with a more complex model. A critical requirement for reliable modelling is an accurate representation of the Earth's surface that extends from the open ocean through the inter-tidal zone into the onshore areas. However, elevation data may come from a number of sources and will have varying reliability.
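Building such a continuous surface typically means stitching together datasets of differing resolution and quality. One common approach is to mosaic co-registered elevation grids in order of reliability, as in the minimal sketch below; the source names and priority order are illustrative, not the project's actual workflow.

```python
# Sketch: combine co-registered, same-shape elevation grids by reliability
# ranking. Higher-priority sources fill cells first; gaps (NaN) fall
# through to the next source. Assumes float arrays with NaN for no-data.
import numpy as np

def mosaic_by_priority(grids):
    """grids: list of same-shape float arrays, most reliable first."""
    result = np.full_like(grids[0], np.nan)
    for grid in grids:
        mask = np.isnan(result) & ~np.isnan(grid)
        result[mask] = grid[mask]
    return result

# e.g. LiDAR onshore, multibeam bathymetry nearshore, coarse global grid last:
# combined = mosaic_by_priority([lidar_dem, multibeam_grid, global_bathy])
```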

  • The information within this document and associated DVD is intended to assist emergency managers in tsunami planning and preparation activities. The Attorney-General's Department (AGD) has supported Geoscience Australia (GA) in developing a range of products to support the understanding of tsunami hazard through the Australian Tsunami Warning System Project. The work reported here is intended to further build the capacity of the Queensland State Government in developing inundation models for prioritised locations. Internally stored data: /nas/cds/internal/hazard_events/sudden_onset_hazards/tsunami_inundation/gold_coast/gold_coast_tsunami_scenario_2009

  • Source: The data was sourced from CSIRO (Victoria) in 2012 by Bob Cechet. It is not known specifically which division of CSIRO provided it, although it is likely to have been the Marine and Atmospheric Research Division (Aspendale), nor are the contact details known of the person who provided the data to Bob. The data was originally produced by CSIRO as their input into the South-East Queensland Climate Adaptation Research Initiative (SEQCARI). Reference, from an email of 16 March 2012 sent from Bob Cechet to Chris Thomas (Appendix 1 of the README doc stored at the parent folder level with the data), is made to 'download NCEP AVN/GFS files' or to source them from the CSIRO archive.

    Content: The data is compressed into 'tar' files. The name content is separated by dots, where the first section is the climatic variable:

    Name       Translation
    rain       24 hr accumulated precipitation
    rh1_3PM    Relative humidity at 3pm local time
    tmax       Maximum temperature
    tmin       Minimum temperature
    tscr_3PM   Screen temperature (2 m above ground) at 3pm local time
    u10_3PM    10 m above-ground eastward wind speed at 3pm local time
    v10_3PM    10 m above-ground northward wind speed at 3pm local time

    The second part of the name is the General Circulation Model (GCM) applied:

    Name              Translation
    gfdlcm21          GFDL CM2.1
    miroc3_2_medres   MIROC 3.2 (medres)
    mpi_echam5        MPI ECHAM5
    ncep              NCEP

    The third, and final, part of the tarball name is the year range that the results relate to: 1961-2000, 1971-2000, 2001-2040 or 2041-2099.

    Data format and extent: Inside each tarball is a collection of NetCDF files covering each simulation that constitutes the year range (12 simulations for each year). A similar naming protocol is used for the NetCDF files, with a two-digit extension added to the year for each of the simulations for that year (e.g. 01-12). The spatial coverage of the NetCDF files is given by the bounding box below (the original metadata labelled latitude as X and longitude as Y):

    Min latitude: -50.0749073028564
    Max latitude: -9.92459297180176
    Min longitude: 134.924812316895
    Max longitude: 155.149784088135

    The cell size is 0.15 degrees by 0.15 degrees (approximately 17 km square at the equator). The data is stored relative to the WGS 1984 Geographic Coordinate System. The GCMs were forced with the Intergovernmental Panel on Climate Change (IPCC) A2 emission scenario, as described in the IPCC Special Report on Emissions Scenarios (SRES), as inputs for the future climate. The GCM results were then downscaled by CSIRO from a 2 degree cell resolution to the 0.15 degree cell resolution using their Conformal Cubic Atmospheric Model (CCAM).

    Use: This data was used within the Rockhampton Project to identify future climate changes based on the IPCC A2 SRES emissions scenario. The relative difference between the current-climate GCM results and the future-climate results was applied to the results of higher-resolution current-climate natural hazard modelling. Refer to GeoCat # 75085 for the details relating to the report and the 59 attached ANZLIC metadata entries for data outputs.
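Given the naming convention above, the archives can be inspected programmatically. The sketch below assumes Python with the standard tarfile module and the netCDF4 package; the example tarball name follows the variable.GCM.years convention described above but is otherwise hypothetical.

```python
# Sketch: list the NetCDF members of one tarball and inspect one file.
# The tarball name is hypothetical (built from the naming convention);
# requires the netCDF4 package.
import tarfile
import netCDF4

with tarfile.open('rain.gfdlcm21.2001-2040.tar') as tar:
    members = [m for m in tar.getnames() if m.endswith('.nc')]
    print(f'{len(members)} NetCDF files, e.g. {members[0]}')
    tar.extract(members[0], path='scratch')

ds = netCDF4.Dataset(f'scratch/{members[0]}')
print(ds.variables.keys())   # inspect variable names and dimensions
```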

  • This folder contains WindRiskTech data used in preliminary stages of the National Wind Risk Assessment. The data are synthetic TC event sets, generated by a statistical-dynamical model of TCs that can be applied to general circulation models to provide projections of TC activity. Output from two GCMs is available here: the NCAR CCSM3 model and the GFDL CM2.1 model. For each, there are a number of scenarios (based on the SRES scenarios from AR4 and previous IPCC reports) and time periods (the time periods are not the same for the A1B scenario). For each model, scenario and time period, the data are a set of 1000 TC track files in tab-delimited format, contained in the huur.zip files in each sub-folder. The output folder contains the output of running TCRM (pre-2011 version) on each of the datasets.
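A tab-delimited track file can be loaded with pandas, as in the minimal sketch below; the file name and column names are hypothetical, since the WindRiskTech track format is not documented here and should be checked against the files themselves.

```python
# Sketch: read one synthetic TC track file. The file name and column
# names are assumptions; verify against the actual WindRiskTech format.
import pandas as pd

columns = ['time', 'lon', 'lat', 'central_pressure', 'max_wind']  # assumed
track = pd.read_csv('track0001.txt', sep='\t', names=columns, comment='#')
print(track.head())
```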