  • Since the 2004 Sumatra-Andaman Earthquake, understanding the potential for tsunami impact on coastlines has become a high priority for Australia and other countries in the Asia-Pacific region. Tsunami warning systems need to rapidly assess the potential impact of specific events, and hazard assessments require an understanding of all potential events that might be of concern. Both of these needs can be addressed through numerical modelling, but there are often significant uncertainties associated with the three physical processes that culminate in tsunami impact: excitation, propagation and runup. This talk will focus on the first of these, and attempt to establish that seismic models of the tsunami source are adequate for rapidly and accurately establishing initial conditions for forecasting tsunami impacts at regional and teletsunami distances. Specifically, we derive fault slip models via inversion of teleseismic waveform data, and use these slip models to compute the seafloor deformation that serves as the initial condition for tsunami propagation. The resulting tsunami waveforms are compared with observed waveforms recorded by ocean-bottom pressure recorders (BPRs). We show that, at least for the large megathrust earthquakes that are the most frequent source of damaging tsunamis, the open-ocean tsunamis recorded by the BPRs are well predicted by the seismic source models. For smaller earthquakes, or those which occur on steeply dipping faults, however, the excitation and propagation of the resulting tsunami can be significantly influenced by 3D hydrodynamics and by dispersion, respectively, making the tsunami waveforms more difficult to predict.
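
    The workflow summarised above (fault slip model → seafloor deformation → initial condition for propagation) can be illustrated with a minimal sketch. The snippet below is not the authors' code: a 2D Gaussian uplift stands in for the deformation field an Okada-type dislocation calculation would produce, and the grid size, uplift amplitude and water depth are illustrative placeholders.

    ```python
    import numpy as np

    # Gaussian uplift as a stand-in for a dislocation-model (e.g. Okada-type)
    # seafloor deformation field computed from an inverted fault slip model.
    nx, ny = 200, 200
    x = np.linspace(-100e3, 100e3, nx)   # metres
    y = np.linspace(-100e3, 100e3, ny)
    X, Y = np.meshgrid(x, y)
    uplift = 1.0 * np.exp(-((X / 30e3) ** 2 + (Y / 15e3) ** 2))  # uplift (m)

    # Standard long-wave approximation: the initial sea-surface displacement
    # is taken equal to the (instantaneous) seafloor deformation.
    eta0 = uplift.copy()

    # Long-wave phase speed for a representative open-ocean depth.
    g, depth = 9.81, 4000.0              # m/s^2, m
    print(f"max initial elevation: {eta0.max():.2f} m")
    print(f"long-wave speed at {depth:.0f} m depth: {np.sqrt(g * depth):.0f} m/s")
    ```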

  • In early 2011 a series of natural disasters impacted a large number of communities in Queensland. The flooding in the Brisbane region and the severe wind and storm surge experienced in the Tully region caused widespread damage to infrastructure and disrupted both households and businesses. The full recovery costs over the next few years are expected to be considerable and will be a major drain on the resources of all levels of government and the insurance industry.

  • On 23 March 2012, at 09:25 GMT, an Mw 5.4 earthquake occurred in the eastern Musgrave Ranges region of north-central South Australia, near the community of Ernabella (Pukatja). Several small communities in this remote part of central Australia reported the tremor, but there were no reports of injury or significant damage. This was the largest earthquake to be recorded on mainland Australia in the preceding 15 years and resulted in the formation of a 1.6 km-long surface deformation zone comprising reverse fault scarps with a maximum vertical displacement of over 0.5 m, extensive ground cracking, and numerous rock falls. The maximum ground-shaking is estimated to have been of the order of MMI VI. The earthquake occurred in non-extended Stable Continental Region (SCR) cratonic crust, over 1900 km from the nearest plate boundary. Fewer than fifteen historic earthquakes worldwide are documented to have produced co-seismic surface deformation (i.e. faulting or folding) in the SCR setting. The record of surface deformation relating to the Ernabella earthquake therefore provides an important constraint on models relating surface rupture length to earthquake magnitude. Such models may be employed to better interpret Australia's rich prehistoric record of seismicity, and contribute to improved estimates of seismic hazard.
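
    The abstract does not state which rupture-length scaling model is used, but the kind of check such observations enable can be sketched with one commonly cited empirical regression (Wells & Coppersmith, 1994, all slip types), quoted here from memory and treated as illustrative rather than definitive.

    ```python
    import math

    # Illustrative scaling relation: M = 5.08 + 1.16 * log10(SRL), with SRL the
    # surface rupture length in km (Wells & Coppersmith, 1994, all slip types).
    # This is a generic example, not the model used in the study.
    def magnitude_from_srl(srl_km: float) -> float:
        return 5.08 + 1.16 * math.log10(srl_km)

    # For the ~1.6 km Ernabella scarp this regression predicts roughly M 5.3,
    # broadly consistent with the observed Mw 5.4.
    print(f"{magnitude_from_srl(1.6):.1f}")
    ```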

  • We report four lessons from experience gained in applying the multiple-mode spatially-averaged coherency method (MMSPAC) at 25 sites in Newcastle (NSW) for the purpose of establishing shear-wave velocity profiles as part of an earthquake hazard study. The MMSPAC technique is logistically viable for use in urban and suburban areas, both on grass sports fields and parks, and on footpaths and roads. A set of seven earthquake-type recording systems and a team of three personnel is sufficient to survey three sites per day. Local noise sources such as adjacent road traffic or service pipes contribute to the loss of low-frequency SPAC data in a way that is difficult to predict in survey design. Coherencies between individual pairs of sensors should be studied as a quality-control measure with a view to excluding noise-affected sensors prior to interpretation; useful data can still be obtained at a site where one sensor is excluded. The combined use of both SPAC and HVSR data in inversion and interpretation is required in order to make effective use of low-frequency data (typically 0.5 to 2 Hz at these sites) and thus resolve shear-wave velocities in basement rock below 20 to 50 m of soft transported sediments.
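
    The pair-wise coherency check recommended above rests on the basic SPAC idea: the azimuthally averaged coherency between two sensors a distance r apart follows a zeroth-order Bessel function J0(2πfr/c(f)) for a surface-wave dominated wavefield. The sketch below is not the MMSPAC implementation; it only shows the mechanics of estimating a coherency spectrum and comparing it with a J0 model, using synthetic noise records and placeholder values for the sampling rate, sensor spacing and trial phase velocity.

    ```python
    import numpy as np
    from scipy.signal import csd, welch
    from scipy.special import j0

    # Synthetic stand-ins for two field records (a shared signal plus
    # independent noise). Their coherency is roughly flat; real microtremor
    # data would trace out the oscillatory J0 shape that is fit to recover c(f).
    fs = 100.0                                  # sampling rate (Hz)
    t = np.arange(0, 600.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    common = rng.standard_normal(t.size)
    tr1 = common + 0.5 * rng.standard_normal(t.size)
    tr2 = common + 0.5 * rng.standard_normal(t.size)

    # Real part of the normalised cross-spectrum (the coherency used in SPAC).
    f, Pxy = csd(tr1, tr2, fs=fs, nperseg=4096)
    _, Pxx = welch(tr1, fs=fs, nperseg=4096)
    _, Pyy = welch(tr2, fs=fs, nperseg=4096)
    coh = np.real(Pxy) / np.sqrt(Pxx * Pyy)

    # Theoretical SPAC curve for a trial phase velocity and sensor spacing;
    # in practice c(f) (or a layered Vs model) is adjusted to match the data.
    r, c_trial = 50.0, 400.0                    # metres, m/s
    spac_model = j0(2.0 * np.pi * f * r / c_trial)
    print(coh[:5], spac_model[:5])
    ```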

  • This paper describes the methods used to define earthquake source zones and calculate their recurrence parameters (a, b, Mmax). These values, along with the ground motion relations, effectively define the final hazard map. Definition of source zones is a highly subjective process, relying on seismology and geology to provide some quantitative guidance. Similarly the determination of Mmax is often subjective. Whilst the calculation of a and b is quantitative, the assumptions inherent in the available methods need to be considered when choosing the most appropriate one. For the new map we have maximised quantitative input into the definition of zones and their parameters. The temporal and spatial Poisson statistical properties of Australia's seismicity, along with models of intra-plate seismicity based on results from neotectonic, geodetic and computer modelling studies of stable continental crust, suggest a multi-layer source zonation model is required to account for the seismicity. Accordingly we propose a three-layer model consisting of three large background seismicity zones covering 100% of the continent, 25 regional-scale source zones covering ~50% of the continent, and 44 hotspot zones covering 2% of the continent. A new algorithm was developed to calculate a and b. This algorithm was designed to minimise the problems with both the maximum likelihood method (which is sensitive to the effects of varying magnitude completeness at small magnitudes) and the least squares regression method (which is sensitive to the presence of outlier large magnitude earthquakes). This enabled fully automated calculation of a and b parameters for all source zones. The assignment of Mmax for the zones was based on the results of a statistical analysis of neotectonic fault scarps.
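
    For context, the two standard approaches the abstract contrasts can be sketched as follows. This is not the new algorithm the paper develops; the synthetic catalogue, completeness magnitude and bin width are illustrative placeholders.

    ```python
    import numpy as np

    # Synthetic Gutenberg-Richter catalogue (true b = 1) above a completeness
    # magnitude mc, to exercise the two standard estimators.
    rng = np.random.default_rng(1)
    mc = 3.0
    mags = mc + rng.exponential(1.0 / (np.log(10.0) * 1.0), size=2000)
    dm = 0.1                                         # magnitude bin width

    # (1) Maximum-likelihood (Aki) estimate: sensitive to the choice of mc
    #     (a bin-width correction, mc - dm/2, is usually applied for binned data).
    b_ml = np.log10(np.e) / (mags.mean() - mc)

    # (2) Least-squares fit to log10 cumulative counts (log10 N = a - b*M):
    #     sensitive to the few largest (outlier) events controlling the tail.
    m_bins = np.arange(mc, mags.max() + dm, dm)
    cum_counts = np.array([(mags >= m).sum() for m in m_bins])
    ok = cum_counts > 0
    slope, intercept = np.polyfit(m_bins[ok], np.log10(cum_counts[ok]), 1)
    b_ls, a_ls = -slope, intercept

    print(f"b (max. likelihood) = {b_ml:.2f}")
    print(f"b (least squares)   = {b_ls:.2f}, a = {a_ls:.2f}")
    ```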

  • One of the important inputs to a probabilistic seismic hazard assessment is the expected rate at which earthquakes within the study region. The rate of earthquakes is a function of the rate at which the crust is being deformed, mostly by tectonic stresses. This paper will present two contrasting methods of estimating the strain rate at the scale of the Australian continent. The first method is based on statistically analysing the recently updated national earthquake catalogue, while the second uses a geodynamic model of the Australian plate and the forces that act upon it. For the first method, we show a couple of examples of the strain rates predicted across Australia using different statistical techniques. However no matter what method is used, the measurable seismic strain rates are typically in the range of 10-16s-1 to around 10-18s-1 depending on location. By contrast, the geodynamic model predicts a much more uniform strain rate of around 10-17s-1 across the continent. The level of uniformity of the true distribution of long term strain rate in Australia is likely to be somewhere between these two extremes. Neither estimate is consistent with the Australian plate being completely rigid and free from internal deformation (i.e. a strain rate of exactly zero). This paper will also give an overview of how this kind of work affects the national earthquake hazard map and how future high precision geodetic estimates of strain rate should help to reduce the uncertainty in this important parameter for probabilistic seismic hazard assessments.
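
    The order of magnitude of a catalogue-based seismic strain rate can be illustrated with a back-of-the-envelope Kostrov-type summation. The sketch below is not the paper's method, and every number in it (shear modulus, area, seismogenic thickness, observation window, event list) is an illustrative placeholder.

    ```python
    import numpy as np

    # Kostrov-style estimate: strain_rate ~ sum(M0) / (2 * mu * V * T),
    # where sum(M0) is the seismic moment released in crustal volume V over time T.
    mu = 3.0e10                            # shear modulus (Pa)
    area = 7.7e12                          # continental area (m^2), illustrative
    thickness = 15e3                       # seismogenic thickness (m), illustrative
    volume = area * thickness
    T = 50.0 * 365.25 * 24 * 3600          # observation window: 50 years (s)

    mags = np.array([6.5, 6.0, 5.8, 5.5])  # hypothetical largest events in the window
    M0 = 10.0 ** (1.5 * mags + 9.05)       # seismic moment (N*m) from Mw

    strain_rate = M0.sum() / (2.0 * mu * volume * T)
    print(f"seismic strain rate ~ {strain_rate:.1e} 1/s")   # of order 1e-18 here
    ```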

  • This set of Australian landslide images illustrates the causes of landslides, both large and small, and other earth movements. A set of 15 slides with explanatory text; includes images of Thredbo, NSW, Sorrento Vic., Gracetown WA and Tasmania.

  • Abstract is too large to be pasted here. See TRIM link: D2011-144613

  • The cost of landslides is underestimated in Australia because their impacts and losses are not readily reported or captured. There is no reliable source of data which highlights landslide costs to communities and explains who currently pays for the hazard and how large those costs are. The aim of this document is to investigate and analyse landslide costs within a Local Government Area (LGA) in order to highlight the varied landslide-associated costs met by the local government, state traffic and rail authorities, and the public. It is anticipated this may assist in developing a baseline awareness of the range of landslide costs that are experienced at a local level in Australia. Initial estimates in this study indicate that cumulative costs associated with some landslide sites are well beyond the budget capacity of a local government to manage. Furthermore, unplanned remediation works can significantly disrupt the budget for planned mitigation works over a number of years. Landslide costs also continue to be absorbed directly by individual property owners as well as by infrastructure authorities and local governments. This is a marked distinction from how disaster costs arising from other natural hazard events, such as flood, bushfire, cyclone and earthquake, are absorbed at a local level. It was found that many generic natural hazard cost models are inappropriate for determining landslide costs because of the differences in the types of landslide movement and damage. Further work is recommended to develop a cost data model suitable for capturing consistent landslide cost data. Better quantification of landslide costs is essential to allow comparisons to be made with other natural hazard events at appropriate levels. This may allow for more informed policy development and decision making across all levels.

  • We present a probabilistic tectonic hazard analysis of a site in the Otway Basin, Victoria, Australia, as part of the CO2CRC Otway Project for CO2 storage risk. The study involves estimating the likelihood of future strong earthquake shaking and associated fault displacements from natural tectonic processes that could adversely impact the storage process at the site. Three datasets are used to quantify the tectonic hazards at the site: (1) active faults; (2) historical seismicity; and (3) GPS surface velocities. Our analysis of GPS data reveals strain rates at the limit of detectability and not significantly different from zero. Consequently, we do not develop a GPS-based source model for the Otway Basin model. We construct logic trees to capture epistemic uncertainty in both the fault and seismicity source parameters, and in the ground motion prediction. A new feature for seismic hazard modelling in Australia, and one rarely dealt with in low-seismicity regions elsewhere, is the treatment of fault episodicity (long-term activity versus inactivity) in the Otway model. Seismic hazard curves for the combined (fault and distributed seismicity) source model show that hazard is generally low, with peak ground acceleration estimates of less than 0.1 g at annual probabilities of 10⁻³ to 10⁻⁴/yr. The annual probability of tectonic displacements greater than or equal to 1 m at the site is even lower, in the vicinity of 10⁻⁸ to 10⁻⁹/yr. The low hazard is consistent with the intraplate tectonic setting of the region, and is unlikely to pose a significant threat to CO2 containment and infrastructure.
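
    Two generic pieces of PSHA bookkeeping mentioned above, combining logic-tree branch hazard curves and converting annual exceedance rates to probabilities over an exposure period, can be sketched as follows. This is not the CO2CRC Otway model; the curves, weights and exposure period are illustrative placeholders.

    ```python
    import numpy as np

    # Two hypothetical logic-tree branches, each giving annual exceedance rates
    # at a set of peak ground acceleration (PGA) levels.
    pga = np.array([0.01, 0.02, 0.05, 0.1, 0.2])          # PGA (g)
    branch_rates = np.array([
        [1e-2, 3e-3, 5e-4, 8e-5, 6e-6],                   # branch 1 (per year)
        [2e-2, 5e-3, 8e-4, 1e-4, 1e-5],                   # branch 2 (per year)
    ])
    weights = np.array([0.6, 0.4])                        # branch weights (sum to 1)

    mean_rate = weights @ branch_rates                    # weighted-mean hazard curve

    # Poisson assumption: probability of at least one exceedance in t years.
    t = 50.0
    prob_50yr = 1.0 - np.exp(-mean_rate * t)

    for a, lam, p in zip(pga, mean_rate, prob_50yr):
        print(f"PGA {a:>5.2f} g: {lam:.1e}/yr, {p:.2%} in {t:.0f} yr")
    ```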