  • Abstract: Tsunami inundation is rare on most coastlines, but large events can have devastating consequences for life and infrastructure. There is demand for inundation hazard maps to guide risk-management actions, such as the design of tsunami evacuation zones, tsunami-resilient infrastructure, and insurance. But the frequency of tsunami-generating processes (e.g., large earthquakes, landslides, and volcanic collapses) is usually very uncertain. This reflects limitations in scientific knowledge, and the short duration of historical records compared to the long inter-event times of dangerous tsunamis. Consequently, tsunami hazards are subject to large uncertainties which should be clearly communicated to inform risk-management decisions. Probabilistic Tsunami Hazard Assessment (PTHA) offers a structured approach to quantifying tsunami hazards and the associated uncertainties, while integrating data, models, and expert opinion. For earthquake-generated tsunamis, several national and global-scale PTHAs provide databases of hypothetical scenarios, scenario occurrence-rates, and their uncertainties. Because these “offshore PTHAs” represent the coast at coarse spatial resolutions (~ 1-2 km), they are not directly suitable for onshore risk management and can only simulate tsunami waveforms accurately in deep water, far from the coast. Yet because offshore PTHAs can use earthquake and tsunami data at global scales, they offer relatively well-tested representations of earthquake-tsunami sources, occurrence-rates, and uncertainties. Furthermore, by combining an offshore PTHA with a high-resolution coastal inundation model, the resulting onshore tsunami hazard can in principle be derived at spatial resolutions appropriate for risk management (~ 10 m) for any site of interest. This study considers the computational problem of rigorously transforming offshore PTHAs into site-specific onshore PTHAs. 
In theory this can be done by using a high-resolution hydrodynamic model to simulate inundation for every scenario in the offshore PTHA. In practice this is computationally prohibitive, because modern offshore PTHAs contain too many scenarios (on the order of 1 million) and inundation models are computationally demanding. Monte-Carlo sampling offers a rigorous alternative that requires less computation, because inundation simulations are only required for a random subset of scenarios, and it is known to converge to the correct solution as the number of scenarios is increased. This study develops several approaches to reduce Monte-Carlo errors at the onshore site of interest, for a given computational cost. Compared with existing Monte-Carlo approaches for offshore-to-onshore PTHA, the key novel idea is to use deep-water tsunami wave heights (modelled by the offshore PTHA) to estimate the relative “importance” of each scenario near the onshore site of interest, prior to inundation simulation. Scenarios are randomly sampled from the offshore PTHA in a way that over-represents the “important” scenarios, and the theory of importance sampling enables weighting these scenarios to correct for the sampling bias. This can greatly reduce Monte-Carlo errors for a given sampling effort. In addition, because importance sampling is analytically tractable, the variance of the Monte-Carlo errors can be estimated at offshore sites prior to sampling. This helps modellers assess the adequacy of a proposed Monte-Carlo sampling scheme before expensive inundation computation. The analytical variance result also enables the theory of optimal sampling to be applied in a way that reduces the Monte-Carlo variance, by non-uniformly sampling from earthquakes of different magnitudes. The new techniques are applied to an onshore earthquake-tsunami PTHA in Tongatapu, the main island of Tonga. 
In combination, the new techniques yield efficiency improvements equivalent to simulating 4-18 times more scenarios, as compared with commonly used Monte-Carlo methods for onshore PTHA. They also enable the hazard uncertainties in the offshore PTHA to be translated onshore, where they are of most significance to risk-management decision-making. The greatest accuracy improvements occur for large tsunamis, and for computations that represent uncertainties in the hazard.
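The importance-sampling idea described above can be sketched with synthetic data. Everything here is illustrative (the scenario rates, wave-height distribution, proposal weights, and the 0.5 offshore-to-onshore scaling are invented for the example, not taken from the study): scenarios are drawn with probability proportional to occurrence rate times deep-water wave height, the sampled results are re-weighted to undo the sampling bias, and the analytic Monte-Carlo variance is computed before any "inundation" step.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical offshore PTHA: N scenarios, each with an annual occurrence
# rate and a modelled deep-water wave height near the site of interest.
N = 100_000
rates = rng.gamma(shape=0.5, scale=1e-5, size=N)          # scenario rates (1/yr)
offshore_h = rng.lognormal(mean=-1.0, sigma=1.0, size=N)  # deep-water heights (m)

# Importance (proposal) distribution: over-represent scenarios with large
# offshore waves, used as a proxy for onshore "importance".
importance = rates * offshore_h
p = importance / importance.sum()

# Draw a random subset of scenarios for expensive inundation modelling.
n = 500
idx = rng.choice(N, size=n, replace=True, p=p)

# Importance-sampling weights correct for the biased sampling:
# an unbiased estimate of sum_i rates_i * I_i is sum_j rates_j * I_j / (n * p_j).
w = rates[idx] / (n * p[idx])

# Stand-in for the inundation model: here onshore depth is just a fixed
# fraction of the offshore height (purely illustrative).
onshore_depth = 0.5 * offshore_h[idx]

# Monte-Carlo estimate of the rate at which onshore depth exceeds 1 m.
rate_estimate = np.sum(w * (onshore_depth > 1.0))

# Because p is known analytically, the Monte-Carlo variance of an
# exceedance-rate estimate can be computed at offshore sites before any
# inundation modelling, to check the sampling scheme is adequate:
g = rates * (offshore_h > 2.0)               # rate contribution of exceeders
exact_rate = g.sum()
analytic_var = (np.sum(g**2 / p) - exact_rate**2) / n
```

The last two lines use the standard importance-sampling variance, Var = (Σᵢ gᵢ²/pᵢ − λ²)/n, which is computable in closed form precisely because the proposal p is known; this is the property the abstract exploits to judge a sampling scheme offshore, prior to inundation computation.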

  • At far-field coasts the largest tsunami waves often occur many hours after arrival, and hazardous waves may persist for more than a day. To simulate tsunamis at far-field coasts it is common to combine high-resolution nonlinear shallow water models (covering sites of interest) with low-resolution reduced-physics global-scale models (to efficiently simulate propagation). The global propagation models often ignore friction and are mathematically energy conservative, so in theory the modelled tsunami will persist indefinitely. In contrast, real tsunamis exhibit slow dissipation at the global scale with an energy e-folding time of approximately one day. How strongly do these global-scale approximations affect nearshore tsunamis simulated at far-field coasts? To investigate this issue we compare modelled and observed tsunamis at sixteen nearshore tide-gauges in Australia, which were generated by the following earthquakes: Mw 9.5 Chile 1960; Mw 9.2 Sumatra 2004; Mw 8.8 Chile 2010; Mw 9.1 Tohoku 2011; and Mw 8.3 Chile 2015. Each historical tsunami is represented with multiple earthquake source models from the literature, to prevent bias in any single source from dominating the results. Each tsunami is simulated for 60 hours with a nested global-to-local model. On the nearshore grids we solve the nonlinear shallow water equations with Manning-friction, while on the global grid we test three reduced-physics propagation models which combine the linear shallow water equations with alternative treatments of friction: 1) frictionless; 2) nonlinear Manning-friction; and 3) constant linear-friction. In comparison with data, the frictionless global model works well for simulating nearshore tsunami maxima for ~ 8 hours after tsunami arrival, and Manning-friction gives similar predictions in this period. Constant linear-friction is found to under-predict the size of early arriving waves. 
As the simulation duration is increased from 36 to 60 hours, the frictionless global model increasingly over-estimates the observed tsunami maxima; whereas both models with global-scale friction perform relatively well. The constant linear-friction model can be improved using delayed linear-friction, where propagation is simulated with an initial frictionless period (12 hours herein). This prevents the systematic underestimation of early nearshore wave heights. While nonlinear Manning-friction offers comparably good performance, a practical advantage of the linear-friction models in this study is that their solutions can be computed, to high accuracy, with a simple transformation of frictionless solutions. This offers a pragmatic approach to improving unit-source-based global tsunami simulations at late times.
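The final point, that linear-friction solutions follow from frictionless solutions by a simple transformation, can be illustrated with a hedged sketch. Assuming wave amplitude decays as exp(−(t − t₀)/(2τ_E)) after a frictionless delay t₀, where τ_E is the global energy e-folding time (roughly one day, per the abstract; amplitude decays at half the energy rate since energy scales with amplitude squared), a delayed linear-friction series is obtained from a frictionless one as follows. The function and parameter names are illustrative, and the study's exact transformation may differ:

```python
import numpy as np

def delayed_linear_friction(t_hours, eta_frictionless,
                            efolding_hours=24.0, delay_hours=12.0):
    """Apply an approximate delayed linear-friction transformation to a
    frictionless tsunami time series (a sketch, not the study's exact method).

    Assumes wave amplitude decays as exp(-(t - delay) / (2 * tau_E)) once
    t exceeds the frictionless delay, where tau_E is the energy e-folding
    time (observed to be roughly one day at the global scale).
    """
    t = np.asarray(t_hours, dtype=float)
    # No decay during the initial frictionless period; exponential decay after.
    decay = np.exp(-np.clip(t - delay_hours, 0.0, None) / (2.0 * efolding_hours))
    return np.asarray(eta_frictionless) * decay
```

For example, with the defaults above, a wave at t = 36 h (24 h past the delay) is damped by a factor exp(−0.5) ≈ 0.61 relative to the frictionless solution, while waves arriving before 12 h are untouched, matching the stated goal of avoiding underestimation of early arrivals.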