Results 1 - 10 of 217
  • Since the 2004 Sumatra-Andaman Earthquake, understanding the potential for tsunami impact on coastlines has become a high priority for Australia and other countries in the Asia-Pacific region. Tsunami warning systems need to rapidly assess the potential impact of specific events, and hazard assessments require an understanding of all potential events that might be of concern. Both of these needs can be addressed through numerical modelling, but there are often significant uncertainties associated with the three physical processes that culminate in tsunami impact: excitation, propagation and runup. This talk will focus on the first of these, and attempt to establish that seismic models of the tsunami source are adequate for rapidly and accurately establishing initial conditions for forecasting tsunami impacts at regional and teletsunami distances. Specifically, we derive fault slip models via inversion of teleseismic waveform data, and use these slip models to compute the seafloor deformation that serves as the initial condition for tsunami propagation. The resulting tsunami waveforms are compared with observed waveforms recorded by ocean bottom pressure recorders (BPRs). We show that, at least for the large megathrust earthquakes that are the most frequent source of damaging tsunami, the open-ocean tsunami recorded by the BPRs are well predicted by the seismic source models. For smaller earthquakes, or those which occur on steeply dipping faults, however, the excitation and propagation of the resulting tsunami can be significantly influenced by 3D hydrodynamics and by dispersion, respectively, which makes it more difficult to predict the tsunami waveforms.
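As a rough illustration of the waveform comparison described above, the sketch below computes a few simple goodness-of-fit measures between a modelled open-ocean tsunami time series and a BPR record. The synthetic data, alignment and choice of metrics are illustrative assumptions, not those used in the study.

```python
# A minimal sketch of comparing a modelled open-ocean tsunami waveform with a
# BPR record, assuming both series are already de-tided, resampled to a common
# time step, and aligned in time. Metrics are illustrative, not from the study.
import numpy as np

def waveform_misfit(observed, modelled):
    """Return simple goodness-of-fit measures between two equal-length series."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    rms = np.sqrt(np.mean((observed - modelled) ** 2))               # RMS misfit (m)
    corr = np.corrcoef(observed, modelled)[0, 1]                     # waveform similarity
    amp_ratio = np.max(np.abs(modelled)) / np.max(np.abs(observed))  # amplitude recovery
    return rms, corr, amp_ratio

# Synthetic series standing in for a BPR record and a model prediction.
t = np.linspace(0, 3600, 3601)                     # one hour at 1 s sampling
obs = 0.05 * np.sin(2 * np.pi * t / 900) * np.exp(-t / 2400)
mod = 0.045 * np.sin(2 * np.pi * (t - 30) / 900) * np.exp(-t / 2400)
print(waveform_misfit(obs, mod))
```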

  • Geoscience Australia has recently released the 2012 version of the National Earthquake Hazard Map of Australia. Among other applications, the map is a key component of Australia's earthquake loading code AS1170.4. In this presentation we will provide an overview of the new maps and how they were put together. The new maps take advantage of the significant improvements in both the data sets and the models used for earthquake hazard assessment in Australia since the current map in AS1170.4 was produced. These include:
    - an additional 20+ years of earthquake observations;
    - improved methods of declustering earthquake catalogues and calculating earthquake recurrence;
    - ground motion prediction equations (i.e. attenuation equations) based on observed strong motions instead of intensity;
    - revised earthquake source zones;
    - improved maximum magnitude earthquake estimates based on palaeoseismology; and
    - the use of open source software for undertaking probabilistic seismic hazard assessment, which promotes testability and repeatability.
    Hazard maps will be presented for a range of response spectral acceleration (RSA) periods between 0.0 and 1.0 s and for multiple return periods between a few hundred and a few thousand years. These maps will be compared with the current earthquake hazard map in AS1170.4. For a return period of 500 years, the hazard values in the 0.0 s RSA period map are generally lower than the hazard values in the current AS1170.4 map; by contrast, the 0.2 s RSA period hazard values are generally higher.
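For readers unfamiliar with the return periods quoted above, the sketch below shows the standard conversion between a return period and a probability of exceedance over an exposure time, assuming Poisson (time-independent) occurrence; the 50-year exposure time is an illustrative choice, not a value from the map.

```python
# A minimal sketch of the return-period arithmetic behind the hazard maps,
# assuming a Poisson (time-independent) occurrence model. Numbers are illustrative.
import math

def prob_exceedance(return_period_years, exposure_years):
    """Probability of at least one exceedance in `exposure_years` for a given return period."""
    annual_rate = 1.0 / return_period_years
    return 1.0 - math.exp(-annual_rate * exposure_years)

# A 500-year return period corresponds to roughly a 10% chance of exceedance
# in 50 years, the level often quoted for building-code hazard.
print(prob_exceedance(500, 50))   # ~0.095
```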

  • This paper describes the methods used to define earthquake source zones and calculate their recurrence parameters (a, b, Mmax). These values, along with the ground motion relations, effectively define the final hazard map. Definition of source zones is a highly subjective process, relying on seismology and geology to provide some quantitative guidance. Similarly, the determination of Mmax is often subjective. Whilst the calculation of a and b is quantitative, the assumptions inherent in the available methods need to be considered when choosing the most appropriate one. For the new map we have maximised the quantitative input into the definition of zones and their parameters. The temporal and spatial Poisson statistical properties of Australia's seismicity, along with models of intra-plate seismicity based on results from neotectonic, geodetic and computer modelling studies of stable continental crust, suggest that a multi-layer source zonation model is required to account for the seismicity. Accordingly, we propose a three-layer model consisting of three large background seismicity zones covering 100% of the continent, 25 regional-scale source zones covering ~50% of the continent, and 44 hotspot zones covering 2% of the continent. A new algorithm was developed to calculate a and b. This algorithm was designed to minimise the problems with both the maximum likelihood method (which is sensitive to the effects of varying magnitude completeness at small magnitudes) and the least squares regression method (which is sensitive to the presence of outlier large-magnitude earthquakes), and it enabled fully automated calculation of the a and b parameters for all source zones. The assignment of Mmax for the zones was based on the results of a statistical analysis of neotectonic fault scarps.
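As background to the recurrence calculation, the sketch below implements the two standard Gutenberg-Richter estimators mentioned above (Aki maximum likelihood and least squares on cumulative counts). It is a generic illustration of those methods, not the new hybrid algorithm developed for the map; the synthetic catalogue and completeness magnitude are assumptions.

```python
# Two standard Gutenberg-Richter estimators: Aki maximum likelihood and
# least squares on log10 cumulative counts. Generic illustration only.
import numpy as np

def b_maximum_likelihood(mags, m_complete):
    """Aki (1965) maximum-likelihood b-value (a half-bin correction to
    m_complete is usually added when magnitudes are binned)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_complete]
    return np.log10(np.e) / (m.mean() - m_complete)

def a_b_least_squares(mags, m_complete, dm=0.1):
    """a and b from a least-squares fit to log10 cumulative counts,
    which is easily pulled around by a few large outlier events."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_complete]
    bins = np.arange(m_complete, m.max() + dm, dm)
    cum_counts = np.array([(m >= edge).sum() for edge in bins])
    keep = cum_counts > 0
    slope, intercept = np.polyfit(bins[keep], np.log10(cum_counts[keep]), 1)
    return intercept, -slope   # a, b

# Synthetic catalogue with a true b-value of 1.0 for illustration.
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(scale=np.log10(np.e), size=2000)
print(b_maximum_likelihood(mags, 3.0))
print(a_b_least_squares(mags, 3.0))
```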

  • This paper discusses two of the key inputs used to produce the draft National Earthquake Hazard Map for Australia: 1) the earthquake catalogue and 2) the ground-motion prediction equations (GMPEs). The composite catalogue draws upon information from three key catalogues of Australian and regional earthquakes: a catalogue of Australian earthquakes provided by Gary Gibson, Geoscience Australia's QUAKES, and the International Seismological Centre. A complex logic is then applied to select the preferred location and magnitude of each earthquake depending on spatial and temporal criteria. Because disparate local magnitude equations were used through time, we performed first-order magnitude corrections to standardise magnitude estimates so that they are consistent with the attenuation of contemporary local magnitude (ML) formulae. Whilst most earthquake magnitudes do not change significantly, our methodology can result in reductions of up to one local magnitude unit in certain cases. ML-MW (moment magnitude) corrections were subsequently applied. The catalogue was declustered using a magnitude-dependent spatio-temporal filter. Previously identified blasts were removed, and a time-of-day filter was developed to further deblast the catalogue.
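The sketch below illustrates one simple form a time-of-day deblasting heuristic can take: flagging small events that occur during local working hours for review. The column names, daytime window and magnitude threshold are hypothetical assumptions for illustration, not the filter actually applied to the catalogue.

```python
# A minimal sketch of a time-of-day deblasting heuristic, assuming a catalogue
# with local origin time and ML columns. Thresholds are illustrative assumptions.
import pandas as pd

def flag_probable_blasts(catalogue, day_start=7, day_end=18, max_blast_ml=3.5):
    """Flag small daytime events as probable blasts for manual review or removal."""
    hours = pd.to_datetime(catalogue["local_time"]).dt.hour
    daytime = (hours >= day_start) & (hours < day_end)
    small = catalogue["ml"] <= max_blast_ml
    return catalogue.assign(probable_blast=daytime & small)

# Example usage with a toy catalogue.
cat = pd.DataFrame({
    "local_time": ["2001-03-05 10:15", "2001-03-05 23:40", "2002-07-01 14:02"],
    "ml": [2.1, 4.6, 3.0],
})
print(flag_probable_blasts(cat))
```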

  • Severe wind is one of the major natural hazards affecting Australia. The main wind hazards contributing to economic loss in Australia are tropical cyclones, thunderstorms and mid-latitude storms. Geoscience Australia's Risk and Impact Analysis Group (RIAG) has developed mathematical models to study a number of natural hazards, including wind hazard. In this paper, we describe a model of the 'combined' gust wind hazard produced by thunderstorms and mid-latitude or synoptic storms. The model is aimed at applications in regions where these two wind types dominate the hazard spectrum across all return periods (most of the Australian continent apart from the coastal region stretching north from about 27 degrees south). Each of these severe wind types is generated by different physical phenomena and poses a different hazard to the built environment; for these reasons, it is necessary to model them separately. The return period calculated for each wind type is then combined probabilistically to produce the combined gust wind return period, the indicator used to quantify severe wind hazard. The combined wind hazard model utilises climate-simulated wind speeds and hence allows wind analysts to assess the impact of climate change on future wind hazard. It is intended for the study of severe wind hazard in the non-cyclonic regions of Australia (region 'A', as defined in the Australian/NZ Wind Loading Standard, AS/NZS 1170.2:2002), which are dominated by thunderstorm and synoptic winds.
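The probabilistic combination of the two wind types can be sketched as below, assuming the thunderstorm and synoptic gust hazards are statistically independent; the return periods used are illustrative only, and this is a generic formulation rather than the exact procedure in the model.

```python
# A minimal sketch of combining thunderstorm and synoptic gust hazards for a
# given gust speed, assuming the two wind types are statistically independent.
def combined_return_period(rp_thunderstorm, rp_synoptic):
    """Combined return period for a gust speed exceeded by either wind type."""
    p_t = 1.0 / rp_thunderstorm
    p_s = 1.0 / rp_synoptic
    p_combined = 1.0 - (1.0 - p_t) * (1.0 - p_s)   # exceeded by at least one type
    return 1.0 / p_combined

# If a given gust speed has a 100-year return period as a thunderstorm gust and
# a 150-year return period as a synoptic gust, the combined return period is ~60 years.
print(combined_return_period(100, 150))
```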

  • One of the important inputs to a probabilistic seismic hazard assessment is the expected rate at which earthquakes occur within the study region. The rate of earthquakes is a function of the rate at which the crust is being deformed, mostly by tectonic stresses. This paper will present two contrasting methods of estimating the strain rate at the scale of the Australian continent. The first method is based on statistically analysing the recently updated national earthquake catalogue, while the second uses a geodynamic model of the Australian plate and the forces that act upon it. For the first method, we show a couple of examples of the strain rates predicted across Australia using different statistical techniques. However, no matter which technique is used, the measurable seismic strain rates are typically in the range of 10^-16 s^-1 to around 10^-18 s^-1, depending on location. By contrast, the geodynamic model predicts a much more uniform strain rate of around 10^-17 s^-1 across the continent. The level of uniformity of the true distribution of long-term strain rate in Australia is likely to be somewhere between these two extremes. Neither estimate is consistent with the Australian plate being completely rigid and free from internal deformation (i.e. a strain rate of exactly zero). This paper will also give an overview of how this kind of work affects the national earthquake hazard map and how future high-precision geodetic estimates of strain rate should help to reduce the uncertainty in this important parameter for probabilistic seismic hazard assessments.
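One standard way of turning an earthquake catalogue into a seismic strain rate is a Kostrov-style moment summation, sketched below as a point of reference; the specific statistical techniques used in the paper are not reproduced here, and the catalogue, crustal volume and shear modulus are illustrative assumptions.

```python
# A minimal sketch of a Kostrov-style seismic strain-rate estimate. The
# catalogue, volume and shear modulus below are illustrative, not study values.
import numpy as np

MU = 3.0e10   # shear modulus, Pa (assumed)

def moment_from_mw(mw):
    """Seismic moment (N m) from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

def kostrov_strain_rate(mw_list, area_m2, thickness_m, years):
    """Scalar strain rate (1/s) from summed seismic moment in a crustal volume."""
    total_moment = sum(moment_from_mw(mw) for mw in mw_list)
    volume = area_m2 * thickness_m
    seconds = years * 365.25 * 24 * 3600
    return total_moment / (2.0 * MU * volume * seconds)

# e.g. a handful of moderate events in a 500 km x 500 km x 30 km volume over 50 years,
# which yields a strain rate of order 10^-18 s^-1, within the range quoted above.
print(kostrov_strain_rate([5.0, 5.5, 6.0, 4.8], 500e3 * 500e3, 30e3, 50))
```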

  • An assumption of probabilistic seismic hazard assessment is that, within each source zone, the random earthquakes of the past are a good predictor of future seismicity. Random earthquakes suggest a Poisson process; if a source zone does not follow a Poisson process then the resulting PSHA might not be valid. The tectonics of a region will affect its spatial distribution of seismicity: earthquakes occurring on a single fault, distributed uniformly, clustered, or scattered at random will each have a distinctive spatial distribution. Here we describe a method for identifying and delineating earthquake clusters and then characterising them. We divide the region into N cells and, by counting the number of earthquakes in each cell, obtain a distribution of the number of cells versus the number of earthquakes per cell. This can then be compared to the theoretical Poisson distribution, and areas which deviate from it can be delineated. This suggests a statistically robust method for determining source zones. Preliminary results suggest that areas of clustering (e.g. the SWSZ) can also be modelled as a Poisson process which differs from the larger regional Poisson process. The effects of aftershocks and swarms are also investigated.
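The cell-count comparison described above can be sketched as follows: bin epicentres into a regular grid, tabulate how many cells contain k events, and compare that with a Poisson distribution of the same mean. The grid size and the synthetic catalogue are illustrative assumptions.

```python
# A minimal sketch of comparing events-per-cell counts with a Poisson model.
import numpy as np
from scipy.stats import poisson

def cell_count_distribution(lons, lats, n_bins=20):
    """Observed and Poisson-expected numbers of cells containing k events."""
    counts, _, _ = np.histogram2d(lons, lats, bins=n_bins)
    counts = counts.ravel()
    k = np.arange(counts.max() + 1)
    observed = np.array([(counts == kk).sum() for kk in k])
    expected = poisson.pmf(k, counts.mean()) * counts.size
    return k, observed, expected

# Synthetic catalogue: a uniform background plus one dense cluster.
rng = np.random.default_rng(1)
lons = np.concatenate([rng.uniform(110, 155, 900), rng.normal(117, 0.3, 300)])
lats = np.concatenate([rng.uniform(-44, -10, 900), rng.normal(-31, 0.3, 300)])
k, obs, exp = cell_count_distribution(lons, lats)
print(np.column_stack([k, obs, exp.round(1)]))   # cells with k events: observed vs Poisson
```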

  • We present a probabilistic tectonic hazard analysis of a site in the Otway Basin, Victoria, Australia, as part of the CO2CRC Otway Project for CO2 storage risk. The study involves estimating the likelihood of future strong earthquake shaking and associated fault displacements from natural tectonic processes that could adversely impact the storage process at the site. Three datasets are used to quantify the tectonic hazards at the site: (1) active faults; (2) historical seismicity; and (3) GPS surface velocities. Our analysis of GPS data reveals strain rates at the limit of detectability and not significantly different from zero; consequently, we do not develop a GPS-based source model for the Otway Basin model. We construct logic trees to capture epistemic uncertainty in both the fault and seismicity source parameters, and in the ground motion prediction. A new feature for seismic hazard modelling in Australia, and one rarely dealt with in low-seismicity regions elsewhere, is the treatment of fault episodicity (long-term activity versus inactivity) in the Otway model. Seismic hazard curves for the combined (fault and distributed seismicity) source model show that hazard is generally low, with peak ground acceleration estimates of less than 0.1 g at annual probabilities of 10^-3 to 10^-4/yr. The annual probability of tectonic displacements of greater than or equal to 1 m at the site is even lower, in the vicinity of 10^-8 to 10^-9/yr. The low hazard is consistent with the intraplate tectonic setting of the region and is unlikely to pose a significant threat to CO2 containment and infrastructure.
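The combination of fault and distributed-seismicity sources into a single hazard curve can be sketched as below, assuming each source behaves as an independent Poisson process so that annual exceedance rates simply add; the rates shown are illustrative and are not results from the Otway study.

```python
# A minimal sketch of combining hazard curves from two independent Poisson
# source models by summing annual exceedance rates. Rates are illustrative.
import numpy as np

pga = np.array([0.01, 0.02, 0.05, 0.1, 0.2])            # ground-motion levels (g)
rate_faults = np.array([2e-4, 1e-4, 3e-5, 8e-6, 1e-6])  # annual exceedance rates
rate_distributed = np.array([1e-3, 4e-4, 8e-5, 1e-5, 9e-7])

total_rate = rate_faults + rate_distributed              # combined source model
annual_prob = 1.0 - np.exp(-total_rate)                  # annual probability of exceedance

for g, p in zip(pga, annual_prob):
    print(f"PGA >= {g:.2f} g: annual probability ~ {p:.1e}")
```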

  • The cost of landslides is underestimated in Australia because their impacts and losses are not readily reported or captured. There is no reliable source of data which highlights the cost of landslides to communities and explains who currently pays for the hazard and how large those costs are. The aim of this document is to investigate and analyse landslide costs within a Local Government Area (LGA) in order to highlight the varied landslide-associated costs met by the local government, state traffic and rail authorities, and the public. It is anticipated this may assist in developing a baseline awareness of the range of landslide costs that are experienced at a local level in Australia. Initial estimates in this study indicate that cumulative costs associated with some landslide sites are well beyond the budget capacity of a local government to manage. Furthermore, unplanned remediation works can significantly disrupt the budget for planned mitigation works over a number of years. Landslide costs also continue to be absorbed directly by individual property owners as well as by infrastructure authorities and local governments. This is in marked distinction to how disaster costs arising from other natural hazard events, such as flood, bushfire, cyclone and earthquake, are absorbed at a local level. It was found that many generic natural hazard cost models are inappropriate for determining landslide costs because of the differences in the types of landslide movement and damage. Further work is recommended to develop a cost data model suitable for capturing consistent landslide cost data. Better quantification of landslide costs is essential to allow comparisons to be made with other natural hazard events at appropriate levels, which may allow for more informed policy development and decision making across all levels.

  • Abstract is too large to be pasted here. See TRIM link: D2011-144613