  • One of the important inputs to a probabilistic seismic hazard assessment is the expected rate at which earthquakes occur within the study region. The rate of earthquakes is a function of the rate at which the crust is being deformed, mostly by tectonic stresses. This paper will present two contrasting methods of estimating the strain rate at the scale of the Australian continent. The first method is based on statistically analysing the recently updated national earthquake catalogue, while the second uses a geodynamic model of the Australian plate and the forces that act upon it. For the first method, we show examples of the strain rates predicted across Australia using different statistical techniques. However, no matter which technique is used, the measurable seismic strain rates are typically in the range of 10⁻¹⁶ s⁻¹ to around 10⁻¹⁸ s⁻¹, depending on location. By contrast, the geodynamic model predicts a much more uniform strain rate of around 10⁻¹⁷ s⁻¹ across the continent. The level of uniformity of the true distribution of long-term strain rate in Australia is likely to lie somewhere between these two extremes. Neither estimate is consistent with the Australian plate being completely rigid and free from internal deformation (i.e. a strain rate of exactly zero). This paper will also give an overview of how this kind of work affects the national earthquake hazard map and how future high-precision geodetic estimates of strain rate should help to reduce the uncertainty in this important parameter for probabilistic seismic hazard assessments.
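
The abstract does not spell out the statistical techniques used, so the following is only an illustrative sketch of one common approach, a Kostrov-style moment summation, for converting catalogue magnitudes in a grid cell into a seismic strain rate. All magnitudes, dimensions and constants below are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical catalogue magnitudes (Mw) for one grid cell over the
# observation period; not values from the national catalogue.
magnitudes = np.array([4.1, 4.5, 5.2, 4.8, 5.9])

# Moment magnitude to scalar seismic moment in N m (Hanks & Kanamori, 1979).
seismic_moment = 10.0 ** (1.5 * magnitudes + 9.1)

# Kostrov-style summation: strain rate ~ sum(M0) / (2 * mu * V * T).
mu = 3.0e10                          # shear modulus (Pa), assumed
cell_area = 100e3 * 100e3            # 100 km x 100 km cell (m^2), assumed
seismogenic_thickness = 15e3         # seismogenic thickness (m), assumed
volume = cell_area * seismogenic_thickness
period = 50.0 * 365.25 * 86400.0     # 50-year catalogue duration (s), assumed

strain_rate = seismic_moment.sum() / (2.0 * mu * volume * period)
print(f"seismic strain rate: {strain_rate:.1e} s^-1")
```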

  • Geoscience Australia has developed a number of open source risk models to estimate hazard, damage or financial loss to residential communities from natural hazards; these models are used to underpin disaster risk reduction activities. Two of them will be discussed here: the Earthquake Risk Model (EQRM) and a hydrodynamic model called ANUGA, developed in collaboration with the ANU. Both models have been developed in Python using scientific and GIS packages such as Shapely, Numeric and SciPy. This presentation will outline key lessons learnt in developing scientific software in Python. Methods of maintaining and assessing code quality will be discussed: (1) what makes a good unit test; (2) how defects in the code were discovered quickly by being able to visualise the output data; and (3) how characterisation tests, which describe the actual behaviour of a system, are useful for finding unintended system changes. The challenges involved in optimising and parallelising Python code will also be presented. This is particularly important in scientific simulations, as they use considerable computational resources and involve large data sets. The focus will be on: profiling; NumPy; using C code; and parallelisation of applications to run on clusters. Reduction of memory use by using a class to represent a group of items, instead of one object per item, will also be discussed.
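
The memory-reduction idea mentioned at the end, one class representing a group of items rather than one object per item, is essentially a structure-of-arrays pattern. Below is a minimal sketch of that pattern using NumPy; the class and attribute names are invented for illustration and are not part of the EQRM or ANUGA APIs.

```python
import numpy as np

class BuildingCollection:
    """Hold attributes for many buildings in flat NumPy arrays,
    instead of creating one Python object per building."""

    def __init__(self, latitudes, longitudes, floor_areas):
        self.latitude = np.asarray(latitudes, dtype=float)
        self.longitude = np.asarray(longitudes, dtype=float)
        self.floor_area = np.asarray(floor_areas, dtype=float)

    def total_floor_area(self):
        # Vectorised operations avoid per-object Python overhead.
        return self.floor_area.sum()

# One object holds a million buildings; memory is dominated by the arrays
# themselves rather than by per-instance Python object overhead.
n = 1_000_000
buildings = BuildingCollection(
    latitudes=np.random.uniform(-44, -10, n),
    longitudes=np.random.uniform(112, 154, n),
    floor_areas=np.random.uniform(80, 400, n),
)
print(buildings.total_floor_area())
```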

  • This study tested the performance of 16 species distribution models in predicting the distribution of sponges on the Australian continental shelf using a common set of environmental variables. The models included traditional regression models and more recently developed machine learning models. The results demonstrate that the spatial distribution of sponges as a species group can be successfully predicted. A new method of deriving pseudo-absence data (weighted pseudo-absence) was compared with random pseudo-absence data; the new data improved modelling performance for all models, both in terms of model statistics (~10%) and in the predicted spatial distributions. Overall, machine learning models achieved the best prediction performance. The direct variable of bottom water temperature and the resource variables that describe bottom water nutrient status were found to be useful surrogates for sponge distribution at the broad regional scale. This study demonstrates that predictive modelling techniques can enhance our understanding of processes that influence spatial patterns of benthic marine biodiversity.
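
The abstract does not describe how the weighted pseudo-absences were derived, so the sketch below only illustrates the general idea of drawing pseudo-absence cells either uniformly at random or with probabilities proportional to an arbitrary weight layer; the weight layer and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical background grid cells, each with an arbitrary weight
# (e.g. sampling effort or environmental distance from presences);
# the real weighting scheme used in the study is not reproduced here.
n_cells = 10_000
cell_ids = np.arange(n_cells)
weights = rng.random(n_cells)

n_pseudo_absences = 500

# Random pseudo-absences: every background cell equally likely.
random_pa = rng.choice(cell_ids, size=n_pseudo_absences, replace=False)

# Weighted pseudo-absences: selection probability proportional to the weights.
weighted_pa = rng.choice(cell_ids, size=n_pseudo_absences, replace=False,
                         p=weights / weights.sum())
print(len(random_pa), len(weighted_pa))
```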

  • The quality and type of elevation data used in tsunami inundation models can lead to large variations in the estimated inundation extent and in tsunami flow depths and speeds. In order to give confidence to those who use inundation maps, such as emergency managers and spatial planners, standards and guidelines need to be developed and adhered to. However, at present there are no guidelines for the use of different elevation data types in inundation modelling. One reason for this is that there are many types of elevation data that differ in vertical accuracy, spatial resolution, availability and expense; moreover, the differences in output from inundation models using different elevation data types in different environments are largely unknown. This study involved simulating tsunami inundation scenarios for three sites in Indonesia, of which the results for one, Padang, are reported here. Models were run using several different remotely-sensed elevation data types, including LiDAR, IFSAR, ASTER and SRTM. Model outputs were compared for each data type, including inundation extent, maximum inundation depth and maximum flow speed, as well as computational run-times. While in some cases inundation extents do not differ greatly, maximum depths can vary substantially, which can lead to vastly different estimates of impact and loss. The results of this study will be critical in informing tsunami scientists and emergency managers of the acceptable resolution and accuracy of elevation data for inundation modelling and, subsequently, the development of elevation data standards for inundation modelling in Indonesia.
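
As a rough illustration of the kind of comparison described, the sketch below contrasts inundated area and maximum depth between two model runs on different elevation datasets; the depth grids, wet threshold and cell size are invented for the example and are not outputs of the study.

```python
import numpy as np

# Hypothetical maximum-inundation-depth grids (metres) from two runs over
# the same area, one driven by LiDAR elevation and one by SRTM.
rng = np.random.default_rng(0)
depth_lidar = np.clip(rng.normal(1.0, 0.8, size=(200, 200)), 0, None)
depth_srtm = np.clip(rng.normal(1.4, 1.0, size=(200, 200)), 0, None)

wet_threshold = 0.01   # metres; cells deeper than this count as inundated
cell_area = 10.0 * 10.0  # m^2, assumed grid resolution

extent_lidar = (depth_lidar > wet_threshold).sum() * cell_area
extent_srtm = (depth_srtm > wet_threshold).sum() * cell_area

# Depth difference where both runs predict inundation.
both_wet = (depth_lidar > wet_threshold) & (depth_srtm > wet_threshold)
depth_diff = depth_srtm[both_wet] - depth_lidar[both_wet]

print(f"Inundated area (LiDAR): {extent_lidar:.0f} m^2")
print(f"Inundated area (SRTM):  {extent_srtm:.0f} m^2")
print(f"Mean depth difference where both wet: {depth_diff.mean():.2f} m")
```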

  • Following the tragic events of the Indian Ocean tsunami on 26 December 2004, it became obvious that there were shortcomings in the response and alert systems for the threat of tsunami to Western Australia's (WA) coastal communities. The relative risk of a tsunami event to the towns, remote indigenous communities, and infrastructure for the oil, gas and mining industries was not clearly understood in 2004. Consequently, no detailed response plans existed for a tsunami event in WA coastal areas. The Boxing Day event affected the WA coastline from Bremer Bay on the south coast to areas north of Exmouth on the north-west coast, with a number of people requiring rescue from abnormally strong currents and rips. There were also reports of personal belongings being inundated by wave activity at some beaches. More than 30 cm of water flowed down a coast-side road in Geraldton on the mid-west coast, and Geordie Bay at Rottnest Island (19 km off the coast of Fremantle) experienced five 'tides' in three hours, resulting in boats hitting the ocean bed a number of times. The vivid images of the devastation caused by the 2004 event across a wide geographical area changed the perception of tsunami and fostered an appreciation of the potential enormity of impact from this low-frequency but high-consequence natural hazard. With WA's proximity to the Sunda Arc, which is widely recognised as a high-probability source of large interplate earthquakes, the need to develop a better understanding of tsunami risk and to model the potential social and economic impacts on communities and critical infrastructure along the Western Australian coast became a high priority. Under WA's emergency management arrangements, the Fire and Emergency Services Authority (FESA) has responsibility for ensuring effective emergency management is in place for tsunami events across the PPRR (prevention, preparedness, response and recovery) framework.

  • Tsunami inundation models are computationally intensive and require high resolution elevation data in the nearshore and coastal environment. In general this limits their practical application to scenario assessments at discrete communities. This study explores the use of moderate resolution (250 m) bathymetry data to support computationally cheaper modelling to assess nearshore tsunami hazard. Comparison with high resolution models using the best available elevation data demonstrates that moderate resolution models are valid (errors in wave height < 20%) at depths greater than 10 m in areas of relatively low-gradient, uniform shelf environments. However, in steeper and more complex shelf environments they are only valid at depths of 20 m or greater. Modelled arrival times show much less sensitivity to data resolution than wave heights and current velocities. It is demonstrated that modelling using 250 m resolution data can be useful in assisting emergency managers and planners to prioritise communities for more detailed inundation modelling, by reducing uncertainty surrounding the effects of shelf morphology on tsunami propagation. However, it is not valid for modelling tsunami inundation. Further research is needed to define minimum elevation data requirements for modelling inundation and to inform decisions to acquire high quality elevation data.
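
A minimal sketch of how the stated validity rule (wave-height errors below 20% at depths greater than 10 m) might be checked against a high-resolution reference model; the depths and wave heights below are invented, not results from the study.

```python
import numpy as np

# Hypothetical co-located nearshore outputs: water depth (m) and maximum
# tsunami wave height (m) from a high-resolution reference model and a
# 250 m resolution model.
depth = np.array([5.0, 8.0, 12.0, 18.0, 25.0, 40.0])
hmax_fine = np.array([2.4, 2.1, 1.6, 1.2, 0.9, 0.6])
hmax_coarse = np.array([1.5, 1.6, 1.5, 1.1, 0.85, 0.58])

# Relative error of the moderate-resolution model against the reference.
rel_error = np.abs(hmax_coarse - hmax_fine) / hmax_fine

# Apply the rule of thumb: accept points deeper than 10 m where the
# wave-height error stays below 20%.
acceptable = (depth > 10.0) & (rel_error < 0.20)
for d, e, ok in zip(depth, rel_error, acceptable):
    print(f"depth {d:5.1f} m  error {e:5.1%}  acceptable: {ok}")
```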

  • The development of the Indian Ocean Tsunami Warning and Mitigation System (IOTWS) has occurred rapidly over the past few years, and there are now a number of centres that perform tsunami modelling within the Indian Ocean, both for risk assessment and for the provision of forecasts and warnings. The aim of this work is to determine to what extent event-specific tsunami forecasts from different numerical forecast systems differ. This has implications for the inter-operability of the IOTWS. Forecasts from eight separate tsunami forecast systems are considered. Eight hypothetical earthquake scenarios within the Indian Ocean and ten output points at a range of depths were defined. Each forecast centre provided, where possible, time series of sea-level elevation for each of the scenarios at each location. Comparison of the resulting time series shows that the main details of the tsunami forecast, such as arrival times and the characteristics of the leading waves, are similar. However, there is considerable variability in the value of the maximum amplitude (hmax) for each event: on average, the standard deviation of hmax is approximately 70% of the mean. This variability is likely due to differences in the implementations of the forecast systems, such as different numerical models, specification of initial conditions and bathymetry datasets. The results suggest that tsunami forecasts and advisories from different centres for a particular event may conflict with each other. This variability also represents the range of uncertainty that exists in the real-time situation.
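
The spread in hmax across forecast systems could be summarised with a coefficient of variation, as sketched below; the eight amplitudes are invented for illustration and do not reproduce the approximately 70% figure reported in the study.

```python
import numpy as np

# Hypothetical maximum amplitudes (m) reported by eight forecast systems
# for one scenario at one output point.
hmax = np.array([0.42, 0.65, 1.10, 0.38, 0.95, 0.51, 1.60, 0.70])

mean_hmax = hmax.mean()
std_hmax = hmax.std(ddof=1)
cv = std_hmax / mean_hmax  # coefficient of variation (std / mean)

print(f"mean hmax = {mean_hmax:.2f} m, std = {std_hmax:.2f} m, "
      f"std/mean = {cv:.0%}")
```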

  • Effective disaster risk reduction is founded on knowledge of the underlying risk. While methods and tools for assessing risk from specific hazards or to individual assets are generally well developed, our ability to holistically assess risk to a community across a range of hazards and elements at risk remains limited. Developing a holistic view of risk requires interdisciplinary collaboration amongst a wide range of hazard scientists, engineers and social scientists, as well as engagement of a range of stakeholders. This paper explores these challenges and examines some of the common and contrasting issues sampled from a range of applications addressing earthquake, tsunami, volcano, severe wind, flood and sea-level rise in projects in Australia, Indonesia and the Philippines. Key issues range from the availability of appropriate risk assessment tools and data to the ability of communities to implement appropriate risk reduction measures. Quantifying risk requires information on the hazard, the exposure and the vulnerability. Often the knowledge of the hazard is reasonably well constrained, but exposure information (e.g., people and their assets) and measures of vulnerability (i.e., susceptibility to injury or damage) are inconsistent or unavailable. In order to fill these gaps, Geoscience Australia has developed computational models and tools which are open and freely available. As the knowledge gaps become smaller, the need grows to go beyond the quantification of risk to the provision of tools that aid in selecting the most appropriate risk reduction strategies (e.g., evacuation plans, building retrofits, insurance, or land use) to build community resilience.
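
As a toy illustration of how hazard, exposure and vulnerability combine into a quantitative risk estimate, the sketch below computes an annualised expected loss for a handful of buildings; the values, probability and damage ratios are invented and are not drawn from any of the projects described.

```python
import numpy as np

# Expected loss = hazard probability x exposure value x vulnerability
# (damage ratio at the scenario's hazard intensity). All numbers invented.
replacement_value = np.array([250_000.0, 400_000.0, 180_000.0])  # exposure ($)
annual_probability = 0.002          # chance of the hazard scenario in a year
damage_ratio = np.array([0.05, 0.30, 0.60])   # vulnerability per building

annual_expected_loss = annual_probability * (replacement_value * damage_ratio).sum()
print(f"Annualised expected loss: ${annual_expected_loss:,.0f}")
```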

  • Geoscience Australia is supporting the exploration and development of offshore oil and gas resources and the establishment of Australia's national representative system of marine protected areas through the provision of spatial information about the physical and biological character of the seabed. Central to this approach is prediction of Australia's seabed biodiversity from spatially continuous data of physical seabed properties. However, information for these properties is usually collected at sparsely-distributed discrete locations, particularly in the deep ocean. Thus, methods for generating spatially continuous information from point samples become essential tools. Such methods are, however, often data- or even variable-specific, and it is difficult to select an appropriate method for any given dataset. Improving the accuracy of these physical data for biodiversity prediction, by searching for the most robust spatial interpolation methods to predict physical seabed properties, is essential to better inform resource management practices. In this regard, we conducted a simulation experiment to compare the performance of statistical and mathematical methods for spatial interpolation using samples of seabed mud content across the Australian margin. Five factors that affect the accuracy of spatial interpolation were considered: 1) region; 2) statistical method; 3) sample density; 4) search neighbourhood; and 5) sample stratification by geomorphic provinces. Bathymetry, distance-to-coast and slope were used as secondary variables. In this study, we report only the results of the comparison of 14 methods (37 sub-methods) using samples of seabed mud content with five levels of sample density across the southwest Australian margin. The results of the simulation experiment can be applied to spatial data modelling of various physical parameters in different disciplines and have application to a variety of resource management applications for Australia's marine region.
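
As a simple illustration of comparing interpolation methods for point samples, the sketch below cross-validates inverse distance weighting against nearest-neighbour prediction on synthetic mud-content samples; it does not reproduce any of the 14 methods' implementations from the study, and all data are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical point samples of seabed mud content (%) at scattered locations.
xy = rng.uniform(0, 100, size=(300, 2))
mud = 40 + 20 * np.sin(xy[:, 0] / 15.0) + rng.normal(0, 5, 300)

def idw(train_xy, train_val, query_xy, k=8, power=2.0):
    """Inverse-distance-weighted prediction from the k nearest samples."""
    dist, idx = cKDTree(train_xy).query(query_xy, k=k)
    weights = 1.0 / np.maximum(dist, 1e-9) ** power
    return (weights * train_val[idx]).sum(axis=1) / weights.sum(axis=1)

# Simple hold-out cross-validation to compare two interpolators.
test = rng.choice(len(xy), size=60, replace=False)
train = np.setdiff1d(np.arange(len(xy)), test)

pred_idw = idw(xy[train], mud[train], xy[test])
_, nn_idx = cKDTree(xy[train]).query(xy[test], k=1)
pred_nn = mud[train][nn_idx]

for name, pred in [("IDW", pred_idw), ("nearest neighbour", pred_nn)]:
    mae = np.abs(pred - mud[test]).mean()
    print(f"{name}: mean absolute error = {mae:.2f}% mud content")
```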

  • Obtaining reliable predictions of the subsurface will provide a critical advantage for explorers seeking mineral deposits at depth and beneath cover. A common approach in achieving this goal is to use deterministic property-based inversion of potential field data to predict a 3D subsurface distribution of physical properties that explains measured gravity or magnetic data. Including all prior geological knowledge as constraints on the inversion ensures that the recovered predictions are consistent with both the geophysical data and the geological knowledge. Physical property models recovered from such geologically-constrained inversion of gravity and magnetic data provide a more reliable prediction of the subsurface than can be obtained without constraints. The non-uniqueness of inversions of potential field data mandates careful and consistent parameterisation of the problem to ensure realistic solutions.
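
As a toy analogue of geologically-constrained inversion, the sketch below solves a small linear problem d = Gm with Tikhonov regularisation and bounds on the recovered physical property standing in for prior geological knowledge; the operator, model and bounds are invented, and the example is far simpler than a real 3D potential-field inversion.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)

# Toy linear forward problem d = G m: 20 surface measurements sensitive to
# 50 subsurface density-contrast cells. G, the true model and the bounds
# are all invented for illustration.
n_data, n_cells = 20, 50
G = rng.normal(size=(n_data, n_cells)) * np.exp(-np.arange(n_cells) / 20.0)
m_true = np.zeros(n_cells)
m_true[15:25] = 0.3                      # a buried dense body (g/cm^3 contrast)
d_obs = G @ m_true + rng.normal(0, 0.01, n_data)

# Tikhonov (smallness) regularisation: append alpha * I to the system.
alpha = 0.1
A = np.vstack([G, alpha * np.eye(n_cells)])
b = np.concatenate([d_obs, np.zeros(n_cells)])

# "Geological" prior knowledge expressed as bounds on the recovered property:
# density contrasts assumed to lie between -0.1 and 0.5 g/cm^3.
result = lsq_linear(A, b, bounds=(-0.1, 0.5))
print("recovered contrasts (first 30 cells):")
print(np.round(result.x[:30], 2))
```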