  • The development of the Indian Ocean Tsunami Warning and Mitigation System (IOTWS) has occurred rapidly over the past few years, and there are now a number of centres that perform tsunami modelling within the Indian Ocean, both for risk assessment and for the provision of forecasts and warnings. The aim of this work is to determine to what extent event-specific tsunami forecasts from different numerical forecast systems differ, which has implications for the inter-operability of the IOTWS. Forecasts from eight separate tsunami forecast systems are considered. Eight hypothetical earthquake scenarios within the Indian Ocean and ten output points at a range of depths were defined, and each forecast centre provided, where possible, time series of sea-level elevation for each scenario at each location. Comparison of the resulting time series shows that the main details of the tsunami forecast, such as arrival times and the characteristics of the leading waves, are similar. However, there is considerable variability in the value of the maximum amplitude (hmax) for each event: on average, the standard deviation of hmax is approximately 70% of the mean. This variability is likely due to differences in the implementations of the forecast systems, such as different numerical models, specifications of initial conditions and bathymetry datasets. The results suggest that tsunami forecasts and advisories from different centres for a particular event may conflict with each other; this represents the range of uncertainty that exists in the real-time situation.
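
The 70% figure is a coefficient of variation of hmax across the forecast systems. A minimal sketch of that calculation, using made-up hmax values rather than the study's forecasts:

```python
# Illustrative only: spread of maximum amplitude (hmax) at one output point
# across eight forecast systems, expressed as std/mean. Values are invented.
import numpy as np

hmax = np.array([0.42, 0.95, 0.61, 1.30, 0.38, 0.77, 0.55, 1.10])  # metres
cv = hmax.std(ddof=1) / hmax.mean()
print(f"mean hmax = {hmax.mean():.2f} m, std/mean = {cv:.0%}")
```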

  • Natural hazards such as floods, dam breaks, storm surges and tsunamis impact communities around the world every year. To reduce the impact, accurate modelling is required to predict where water will go, and at what speed, before the event has taken place.

  • The major tsunamis of the last few years have dramatically raised awareness of the possibility of potentially damaging tsunamis reaching the shores of Australia and of other countries in the region. Here we present three probabilistic hazard assessments for tsunamis generated by megathrust earthquakes in the Indian, Pacific and southern Atlantic Oceans. One of the assessments was done for Australia, one covered the island nations of the Southwest Pacific and one was for all the countries surrounding the Indian Ocean Basin.

  • Random forest (RF) is one of the top-performing methods in predictive modelling. Because of its high predictive accuracy, we introduced it into spatial statistics by combining it with existing spatial interpolation methods, resulting in a few hybrid methods with improved prediction accuracy when applied to marine environmental datasets (Li et al., 2011). The superior performance of these hybrid methods was partially attributed to the features of RF, one component of the hybrids. One of these features, inherited from its trees, is the ability to deal with irrelevant inputs. It has also been argued that the performance of RF is not much influenced by parameter choices, so the hybrids presumably share this feature. However, these assumptions have not been tested for the spatial interpolation of environmental variables. In this study, we experimentally examined these assumptions using seabed sand and gravel content datasets from the northwest Australian marine margin. Four sets of input variables and two choices of 'number of variables randomly sampled as candidates at each split' (mtry) were tested in terms of predictive accuracy. The input variables range from the six predictors alone to combinations of these predictors and derived variables, including second- and third-order terms and/or possible two-way interactions of the six predictors. However, these derived predictors were regarded as redundant and irrelevant variables because they are correlated with the six predictors and because RF can do implicit variable selection and can model complex interactions among predictors. The results derived from this experiment are analysed, discussed and compared with previous findings. The outcomes of this study have both practical and theoretical importance for predicting environmental variables.
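
A minimal sketch (not the authors' code) of the kind of experiment described above: comparing RF accuracy with and without redundant derived predictors, and for two choices of mtry (max_features in scikit-learn), on synthetic data standing in for the seabed sand and gravel datasets:

```python
# Hedged sketch: sensitivity of random forest accuracy to mtry and to
# adding derived (correlated) predictors, using synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                           # six primary predictors
y = X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.1 * rng.normal(size=300)

# Derived variables: squares and two-way interactions (redundant inputs).
squares = X ** 2
inter = np.column_stack([X[:, i] * X[:, j]
                         for i in range(6) for j in range(i + 1, 6)])
X_full = np.hstack([X, squares, inter])

for name, data in [("primary only", X), ("with derived", X_full)]:
    for mtry in (data.shape[1] // 3, data.shape[1]):    # two mtry choices
        rf = RandomForestRegressor(n_estimators=500, max_features=mtry,
                                   random_state=0)
        score = cross_val_score(rf, data, y, cv=10,
                                scoring="neg_mean_squared_error").mean()
        print(f"{name:14s} mtry={mtry:2d}  CV MSE={-score:.3f}")
```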

  • Natural hazards such as floods, dam breaks, storm surges and tsunamis impact communities around the world every year. To reduce the impact, accurate modelling is required to predict where water will go, and at what speed, before the event has taken place. ANUGA is free, open-source software created to model water flow arising from these events. The resulting knowledge can be used to reduce loss of life and damage to property in communities affected by such disasters by providing vital input to evacuation plans, structural mitigation options and planning. The software was developed collaboratively by the Australian National University (ANU) and Geoscience Australia (GA) and is available at http://sourceforge.net/projects/anuga. ANUGA solves the non-linear shallow water wave equations using the finite volume method with dynamic time stepping. A major capability of ANUGA is that it can model the process of wetting and drying as water enters and leaves an area, which means it is suitable for simulating water flow onto a beach or dry land and around structures such as buildings. ANUGA is also capable of modelling complex flows involving shock waves and rapidly changing flow speeds (transitions from sub-critical to super-critical flow). ANUGA is a robust software package that contains over 800 unit tests. It has been validated against wave tank experiments [1], and model outputs for the 2004 Indian Ocean tsunami compared very well with a run-up survey at Patong Beach, Thailand. This particular activity has also underpinned the results provided to Australian emergency managers managing tsunami risk. This presentation will outline the key components of ANUGA, examples for a range of hydrodynamic hazards, and a sample of validation outputs.
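
To illustrate the workflow, here is a minimal sketch of an ANUGA run modelled on the 'runup' example in the project's documentation; the mesh size, bathymetry and boundary values are illustrative, and the API calls should be checked against the installed version:

```python
# Minimal ANUGA sketch: a wave running up a linearly sloping beach.
import anuga

# Rectangular cross mesh covering a 10 m x 10 m domain.
domain = anuga.rectangular_cross_domain(40, 40, len1=10.0, len2=10.0)
domain.set_name('runup_sketch')                        # output .sww file name

domain.set_quantity('elevation', lambda x, y: -x / 2)  # sloping bathymetry
domain.set_quantity('friction', 0.01)                  # Manning roughness
domain.set_quantity('stage', -0.4)                     # initial water level

Br = anuga.Reflective_boundary(domain)                 # solid walls
Bw = anuga.Dirichlet_boundary([-0.2, 0.0, 0.0])        # incoming wave stage
domain.set_boundary({'left': Bw, 'right': Br, 'top': Br, 'bottom': Br})

# Evolve using ANUGA's adaptive (dynamic) time stepping.
for t in domain.evolve(yieldstep=1.0, finaltime=20.0):
    print(domain.timestepping_statistics())
```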

  • Obtaining reliable predictions of the subsurface will provide a critical advantage for explorers seeking mineral deposits at depth and beneath cover. A common approach to achieving this goal is to use deterministic property-based inversion of potential field data to predict a 3D subsurface distribution of physical properties that explains the measured gravity or magnetic data. Including all prior geological knowledge as constraints on the inversion ensures that the recovered predictions are consistent with both the geophysical data and the geological knowledge. Physical property models recovered from such geologically-constrained inversion of gravity and magnetic data provide a more reliable prediction of the subsurface than can be obtained without constraints. The non-uniqueness of inversions of potential field data mandates careful and consistent parameterization of the problem to ensure realistic solutions.
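
As a hedged illustration of what "constrained" means here (not GA's production workflow), the sketch below solves a damped least-squares inversion that penalises departures from a prior geological reference model; the sensitivity matrix G and all data are hypothetical:

```python
# Geologically-constrained linear inversion (toy illustration):
#   minimise ||G m - d||^2 + lam * ||W (m - m_prior)||^2
# where m_prior encodes prior geological knowledge (e.g. known density contrasts).
import numpy as np

def constrained_inversion(G, d, m_prior, lam=1.0, W=None):
    """Damped least squares with a reference (prior) model."""
    n = G.shape[1]
    W = np.eye(n) if W is None else W
    A = G.T @ G + lam * W.T @ W
    b = G.T @ d + lam * W.T @ W @ m_prior
    return np.linalg.solve(A, b)

# Toy usage: 20 gravity observations, 50 density cells.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 50))                 # stand-in sensitivity matrix
m_true = np.zeros(50)
m_true[20:25] = 0.3                           # density contrast anomaly
d = G @ m_true + 0.01 * rng.normal(size=20)   # noisy synthetic data
m_prior = np.zeros(50)                        # prior: no contrast expected
m_est = constrained_inversion(G, d, m_prior, lam=0.1)
```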

  • Geoscience Australia often produces spatially continuous marine environmental information products using spatial interpolation methods. The accuracy of such information is critical for well-informed decisions for marine environmental management and conservation. Improving the accuracy of these data products by searching for robust methods is essential, but it is a vexed task since no method is best for all variables. Therefore, we experimentally compared the performance of 32 methods/sub-methods using seabed gravel content data from the Australian continental EEZ. In this study, we have identified and developed several novel and robust methods that significantly increase the accuracy of interpolated spatial information. Moreover, these methods can be applied to various environmental properties in both marine and terrestrial disciplines.
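
A minimal sketch, on synthetic point data rather than the EEZ gravel dataset, of the cross-validated accuracy comparison used to rank candidate interpolation methods; the two candidate models here are simple stand-ins, not the 32 methods of the study:

```python
# Hedged sketch of an accuracy-comparison framework: 10-fold cross-validation
# ranking two candidate interpolators on synthetic seabed-like point data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(500, 2))          # sample locations (x, y)
z = np.sin(xy[:, 0] / 15) + np.cos(xy[:, 1] / 20) + 0.1 * rng.normal(size=500)

candidates = {
    "distance-weighted nearest neighbours": KNeighborsRegressor(
        n_neighbors=8, weights="distance"),
    "random forest on coordinates": RandomForestRegressor(
        n_estimators=500, random_state=0),
}
for name, model in candidates.items():
    mse = -cross_val_score(model, xy, z, cv=10,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name:38s} RMSE = {np.sqrt(mse):.3f}")
```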

  • Keynote presentation covering: the background to tsunami modelling in Australia; what the modelling showed; why the modelling is important to emergency managers; the importance of partnerships; and future challenges.

  • In this study, we aim to identify the most appropriate methods for spatial interpolation of seabed sand content for the AEEZ using samples extracted in August 2010 from Geoscience Australia's Marine Samples Database. The predictive accuracy changes with the methods, input secondary variables, model averaging, search window size and the study region, but not with the choice of mtry. No single method performs best for all the tested scenarios. Of the 18 compared methods, RFIDS and RFOK are the most accurate in all three regions. Overall, of the 36 combinations of input secondary variables, methods and regions, RFIDS, 6RFIDS and RFOK were among the most accurate methods in all three regions. Model averaging further improved the prediction accuracy, and the most accurate methods reduced the prediction error by up to 7%. RFOKRFIDS, with a search window size of 5 and an mtry of 4, produced more realistic predictions than the control and is recommended for predicting sand content across the AEEZ if a single method is required. This study provides suggestions and guidelines for improving the spatial interpolation of marine environmental data.
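
A minimal sketch of the hybrid idea, assuming RFIDS denotes a random forest whose residuals are interpolated by inverse distance squared within a search window of the 5 nearest samples; the data and covariates below are synthetic placeholders, not the Marine Samples Database:

```python
# Hedged sketch of an RF + inverse-distance-squared (IDS) hybrid interpolator.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ids(xy_train, r_train, xy_test, k=5, power=2.0):
    """IDS interpolation of residuals using the k nearest samples (search window)."""
    d = np.linalg.norm(xy_test[:, None, :] - xy_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    dk = np.take_along_axis(d, idx, axis=1)
    rk = r_train[idx]
    w = 1.0 / np.maximum(dk, 1e-9) ** power
    return (w * rk).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
xy = rng.uniform(0, 100, size=(400, 2))                    # sample locations
covars = np.column_stack([xy, rng.normal(size=(400, 4))])  # e.g. depth, slope
sand = 50 + 10 * np.sin(xy[:, 0] / 20) + covars[:, 2] + rng.normal(size=400)

rf = RandomForestRegressor(n_estimators=500, max_features=4,   # mtry = 4
                           random_state=0).fit(covars, sand)
resid = sand - rf.predict(covars)

# Prediction at new locations: RF trend plus IDS-interpolated residual.
xy_new = rng.uniform(0, 100, size=(5, 2))
covars_new = np.column_stack([xy_new, rng.normal(size=(5, 4))])
pred = rf.predict(covars_new) + ids(xy, resid, xy_new, k=5)
print(pred)
```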

  • For the past decade, staff at Geoscience Australia (GA), Australia's Commonwealth Government geoscientific agency, have routinely performed 3D gravity and magnetic modelling as part of our geoscience investigations. For this work, we have used a number of different commercial software programs, all of which have been based on a Cartesian mesh spatial framework. These programs have come as executable files that were compiled to operate in a Windows environment on single-core personal computers (PCs). In recent times, a number of factors have caused us to consider a new approach to this modelling work. The drivers for change include: 1) models with very large lateral extents where the effects of Earth curvature are a consideration, 2) a desire to ensure that the modelling of separate regions is carried out in a consistent and managed fashion, 3) migration of scientific computing to off-site High Performance Computing (HPC) facilities, and 4) development of virtual globe environments for integration and visualization of 3D spatial objects. Our response has been to do the following: 1) form a collaborative partnership with researchers at the Colorado School of Mines (CSM) and the China University of Geosciences (CUG) to develop software for spherical mesh modelling of gravity and magnetic data, 2) ensure that we have access to the source code for any modelling software so that we can customize and compile it for the HPC environment of our choosing, 3) learn about the different types of HPC environments, 4) investigate which type of HPC environment offers the optimum mix of availability to us, compute resources and architecture, and 5) promote the in-house development of a freely available virtual globe application called 'EarthSci', built on the open-source Eclipse Rich Client Platform (RCP), which in turn makes use of the NASA World Wind Software Development Kit (SDK) as the globe rendering engine.