The tasseled cap index percentiles provide statistical summaries (10th, 50th and 90th percentiles) of the tasseled cap wetness index from 1987 to 2017. They are intended as inputs to classification algorithms that identify potential wetlands and groundwater-dependent ecosystems, and that characterise salt flats, clay pans, salt lakes and coastal landforms. For Landsat 5 TM, Landsat 7 ETM+ and Landsat 8 OLI data, the tasseled cap transforms are described in "Crist, E.P., 1985. A TM tasseled cap equivalent transformation for reflectance factor data. Remote Sensing of Environment, 17(3), pp. 301-306."
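As an illustrative sketch, the percentile summaries can be reproduced in outline as follows. The wetness coefficients are those commonly quoted from Crist (1985) for Landsat TM reflectance-factor data (verify against the paper before use), and the pixel time series here is synthetic:

```python
import numpy as np

# Tasseled cap wetness coefficients for Landsat 5 TM reflectance-factor
# data, as commonly quoted from Crist (1985) for bands 1, 2, 3, 4, 5, 7.
WETNESS_COEFFS = np.array([0.0315, 0.2021, 0.3102, 0.1594, -0.6806, -0.6109])

def tc_wetness(reflectance):
    """reflectance: array of shape (..., 6) holding bands 1, 2, 3, 4, 5, 7."""
    return np.asarray(reflectance) @ WETNESS_COEFFS

def wetness_percentiles(wetness_series, axis=0):
    """10th, 50th and 90th percentile summaries along the time axis."""
    return np.percentile(wetness_series, [10, 50, 90], axis=axis)

# Example: 30 synthetic annual observations of a single pixel.
rng = np.random.default_rng(0)
series = tc_wetness(rng.uniform(0.0, 0.4, size=(30, 6)))
p10, p50, p90 = wetness_percentiles(series)
```

In practice the percentiles would be computed per pixel over the full 1987-2017 image stack; the same call applies with `axis` set to the time dimension.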
-
<div>Forecasting large earthquakes along active faults is of critical importance for seismic hazard assessment. Statistical models of recurrence intervals based on compilations of paleoseismic data provide a forecasting tool. Here we compare five models and use Bayesian model-averaging to produce time-dependent, probabilistic forecasts of large earthquakes along 93 fault segments worldwide. This approach allows better use of the measurement errors associated with paleoseismic records and accounts for the uncertainty around model choice. Our results indicate that although the majority of fault segments (65/93) in the catalogue favour a single best model, 28 benefit from a model-averaging approach. We provide earthquake rupture probabilities for the next 50 years and forecast the occurrence times of the next rupture for all the fault segments. Our findings suggest that there is no universal model for large earthquake recurrence, and an ensemble forecasting approach is desirable when dealing with paleoseismic records with few data points and large measurement errors. <b>Citation:</b> Wang, T., Griffin, J.D., Brenna, M. et al. Earthquake forecasting from paleoseismic records. <i>Nat Commun</i><b> 15</b>, 1944 (2024). https://doi.org/10.1038/s41467-024-46258-z</div>
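A minimal sketch of the model-averaging idea: a time-dependent rupture probability is computed for each recurrence model as a conditional probability given the elapsed open interval, then the models are combined with posterior weights. The two models and all numerical values below are hypothetical; the paper's actual five models and Bayesian weights are not reproduced here:

```python
import numpy as np

def cond_prob(cdf, elapsed, window):
    """P(rupture within `window` years | no rupture for `elapsed` years)."""
    survival = 1.0 - cdf(elapsed)
    return (cdf(elapsed + window) - cdf(elapsed)) / survival

# Two illustrative recurrence models with closed-form CDFs
# (synthetic parameters, mean recurrence roughly 500 years):
exp_cdf = lambda t: 1.0 - np.exp(-t / 500.0)              # memoryless
weib_cdf = lambda t: 1.0 - np.exp(-((t / 550.0) ** 2.0))  # quasi-periodic

# Hypothetical posterior model weights; in a Bayesian model average these
# come from how well each model fits the paleoseismic record.
weights = {"exponential": 0.35, "weibull": 0.65}

elapsed, window = 300.0, 50.0  # years since last rupture; forecast horizon
p_avg = (weights["exponential"] * cond_prob(exp_cdf, elapsed, window)
         + weights["weibull"] * cond_prob(weib_cdf, elapsed, window))
```

The averaged probability `p_avg` sits between the per-model forecasts, which is the point of the ensemble: no single model is trusted outright when the data cannot discriminate between them.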
-
<div>This abstract was submitted and presented at the 2022 International Conference on Coastal Engineering (ICCE), 4-9 December 2022 (https://icce2022.com/).</div>
-
<p>Lu-Hf isotopic analysis of zircon is becoming a common way to characterise the source signature of granite. The data are collected by MC-LA-ICP-MS (multi-collector laser ablation inductively coupled plasma mass spectrometry) as a series of spot analyses on a number of zircons from a single sample. These data are often plotted as spot analyses, and variable significance is attributed to extreme values and the amount of scatter. <p>Lu-Hf data are used to understand the origin of granites, and a distribution of εHf values is often interpreted to derive from heterogeneity in the source or from mixing processes. As with any physical measurement, however, before the data are used to describe geological processes, care ought to be taken to account for sources of analytical variability. The null hypothesis of any dataset is that there is no difference between measurements that cannot be explained by analytical uncertainty. This null hypothesis must be disproven using common statistical methods before geological interpretations are made. <p>There are many sources of uncertainty in any analytical method. First is the uncertainty associated with the counting statistics of each analysis, usually recorded as the SE (standard error) attributed to each spot. This uncertainty commonly underestimates the total uncertainty of the population, as it only contains information about the consistency of the measurement within a single analysis. The other source of uncertainty that needs to be characterised is reproducibility across multiple analyses. This is very difficult to assess in an unknown material, but can be assessed by measuring well-understood reference zircons. <p>Reference materials are characterised by homogeneity in the isotope of interest, and multiple analyses of such a material should produce a single statistical population.
Where these populations display significant excess scatter, manifested as an MSWD value that far exceeds 1, counting statistics are not the sole source of uncertainty. This can be addressed by expanding the uncertainty on the analyses until the standard zircons form a coherent statistical population. The same expansion should then be applied to the unknown zircons to accommodate this ‘spot-to-spot uncertainty’ or ‘repeatability’ factor. This approach is routinely applied to SHRIMP U-Pb data, and is here applied in the same way to Lu-Hf data from granites of the northeast Lachlan Orogen. <p>By applying these uncertainty factors appropriately, it is then possible to assess the homogeneity of unknown materials by calculating weighted means and MSWD values. The MSWD is a measure of scatter away from a single population (McIntyre et al., 1966; Wendt and Carl, 1991). Where the MSWD is 1, the scatter in the data can be explained by analytical uncertainty alone. The higher the MSWD, the less likely it is that the data can be described as a single population. Data that disperse over several εHf units can still be attributed to a single population if the uncertainty envelopes of the analyses largely overlap one another. These concepts are illustrated using the data presented in Figure 1. Four of the five εHf datasets on zircons from granites form statistically coherent populations (MSWD = 0.69 to 2.4). <p>A high MSWD does not necessarily imply that variation is due to processes occurring during granite formation. Although zircon is a robust mineral, isotopic disturbances are still possible. In the U-Pb system, there is often evidence of post-crystallisation ‘Pb-loss’, which leads to erroneously young apparent U-Pb ages. The Lu-Hf system in zircon is generally thought to be more robust than the U-Pb system, but that does not mean it is impervious to such effects.
In the dataset presented in Figure 1, the sample with the most scatter in Lu-Hf (Glenariff Granite, εHf = -0.2 ± 1.5, MSWD = 7.20) is also the sample with the most rejections in the SHRIMP U-Pb data due to Pb-loss. The subsequent Hf analyses targeted only those grains that fell within the magmatic population (i.e., no observed Pb-loss), but the larger volume excavated by laser ablation means it is likely that disturbed regions of these grains were incorporated into the measurement. This provides an explanation for the scatter that has nothing to do with geological source characteristics. <p>This line of logic can similarly be applied to all types of multi-spot analyses, including O-isotope analyses. While most of the εHf datasets presented here form coherent populations, the O-isotope data are significantly more scattered (MSWD = 2.8 to 9.4). The analyses on the unknowns scatter much more than those on the co-analysed TEMORA2 reference zircon, implying a source of scatter additional to those described above: O-isotope analysis by SIMS is extremely sensitive to topography on the surface of the epoxy into which the zircons are mounted (Ickert et al., 2008). O isotopes may also be susceptible to post-formation disturbance, so care should be taken when interpreting O data before assigning geological meaning. <p>While it is possible for Lu-Hf and O analyses of zircons in granites to reflect heterogeneous sources and/or complex formation processes, it is important to first exclude other sources of heterogeneity, such as analytical uncertainty and post-formation isotopic disturbance.
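The weighted-mean, MSWD and uncertainty-expansion workflow described above can be sketched as follows. All εHf values and uncertainties here are synthetic, and the expansion uses a simple bisection on a quadratically added spot-to-spot term (one of several reasonable implementations, not a prescribed algorithm):

```python
import numpy as np

def weighted_mean_mswd(values, sigmas):
    """Inverse-variance weighted mean and MSWD of a set of spot analyses."""
    w = 1.0 / np.asarray(sigmas) ** 2
    mean = np.sum(w * values) / np.sum(w)
    mswd = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
    return mean, mswd

def expand_uncertainty(values, sigmas, tol=1e-6):
    """Find a 'spot-to-spot' term s, added in quadrature, that brings the
    MSWD of a reference material to ~1 (assumes the unexpanded MSWD > 1)."""
    lo, hi = 0.0, 10.0 * np.max(sigmas)
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        _, mswd = weighted_mean_mswd(values, np.hypot(sigmas, s))
        if mswd > 1.0:
            lo = s  # still over-dispersed: expand more
        else:
            hi = s  # over-expanded: back off
    return 0.5 * (lo + hi)

# Synthetic reference-zircon εHf spots whose reported SE understates
# the true spread (the situation described in the text):
rng = np.random.default_rng(1)
eps_hf = rng.normal(5.0, 0.8, size=20)  # true scatter ~0.8 εHf units
se = np.full(20, 0.4)                   # reported per-spot SE only 0.4
mean0, mswd0 = weighted_mean_mswd(eps_hf, se)       # MSWD well above 1
s_extra = expand_uncertainty(eps_hf, se)
mean1, mswd1 = weighted_mean_mswd(eps_hf, np.hypot(se, s_extra))  # MSWD ~1
```

The `s_extra` term derived from the reference material would then be propagated onto the unknowns, after which their weighted means and MSWDs can be interpreted as in the text.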
-
<div>Offshore probabilistic tsunami hazard assessments (PTHAs) are increasingly available for earthquake-generated tsunamis. They provide standardized representations of tsunami scenarios, their uncertain occurrence rates, and models of the deep-ocean waveforms. To quantify onshore hazards it is natural to combine this information with a site-specific inundation model, but doing so accurately is computationally challenging, especially if accounting for uncertainties in the offshore PTHA. This study reviews an efficient Monte Carlo method recently proposed to solve this problem. The efficiency comes from preferential sampling of scenarios that are likely to be important near the site of interest, using a user-defined importance measure derived from the offshore PTHA. The theory of importance sampling enables this to be done without biasing the final results. Techniques are presented to help design and test Monte Carlo schemes for a site of interest (before inundation modelling) and to quantify errors in the final results (after inundation modelling). The methods are illustrated with examples from studies in Tongatapu and Western Australia.</div> Abstract submitted/presented to the International Conference on Coastal Engineering (ICCE) 2022 - Sydney (https://icce2022.com/). Citation: Davies, G. (2023). From offshore to onshore probabilistic tsunami hazard assessment with quantified uncertainty: Efficient Monte Carlo techniques. <i>Coastal Engineering Proceedings</i>, (37), papers.18. https://doi.org/10.9753/icce.v37.papers.18
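A minimal sketch of the importance-sampling idea, with a fully synthetic scenario set and a hypothetical offshore importance measure (not the study's actual scheme): scenarios are sampled preferentially by an offshore proxy, and the inverse of the sampling probability reweights each sample so the onshore exceedance-rate estimate remains unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic offshore PTHA: scenario occurrence rates and onshore depths.
# In a real study the depths come from expensive inundation simulations,
# run only for the sampled scenarios; here they are generated directly.
n = 10_000
rates = rng.exponential(1e-4, size=n)            # events / year
offshore_size = rng.lognormal(0.0, 1.0, size=n)  # cheap offshore proxy
depth = 0.5 * offshore_size + rng.normal(0.0, 0.1, size=n)

threshold = 1.5
exact = np.sum(rates * (depth > threshold))      # full-sum reference answer

# Importance sampling: preferentially pick scenarios whose offshore proxy
# suggests they matter onshore, then reweight to remove the sampling bias.
q = rates * offshore_size
q = q / q.sum()                                  # sampling probabilities
m = 500                                          # inundation-model budget
idx = rng.choice(n, size=m, replace=True, p=q)
estimate = np.mean(rates[idx] * (depth[idx] > threshold) / q[idx])
```

With only 500 of the 10,000 scenarios "simulated", the reweighted estimate approximates the full sum; a uniform sample of the same size would waste most of its budget on scenarios that never exceed the threshold.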
-
Hazardous tsunamis are rare in Australia but could be generated by several mechanisms, including large plate-boundary earthquakes in locations that efficiently direct wave energy to our coast. With only a few hours between detection and tsunami arrival, prior planning is important to guide emergency response and risk mitigation. This drives interest in tsunami hazard information: which areas could be inundated, how likely is inundation, and how confident can we be? In practice the hazard is uncertain because historical records are short relative to tsunami frequencies, while long-term sedimentary records are sparse. Hazard assessments therefore often follow a probabilistic approach in which many alternative tsunami scenarios are simulated and assigned uncertain occurrence rates. This relies on models of stochastic earthquakes and their occurrence rates, which are not standardised but depend on the scenario earthquake magnitude and other information from the source region. In this study we test three different stochastic tsunami models from the 2018 Australian Probabilistic Tsunami Hazard Assessment (PTHA18), an open-source database of earthquake-tsunami scenarios and return periods. The three models are tested against observations from twelve historical tsunamis at multiple tide gauges in Australia. For each historical tsunami, and each of the three models, sixty scenarios with similar earthquake location and magnitude are sampled from the PTHA18 database. A nonlinear shallow water model is used to simulate their effects at tide gauges in NSW, Victoria and Western Australia. The performance and statistical biases of the three models are assessed by comparing the observations with the sixty modelled scenarios across the twelve tsunamis. Presented at the 30th Conference of the Australian Meteorological and Oceanographic Society (AMOS) 2024.
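The scenario-selection step can be sketched as follows, with a synthetic scenario table standing in for the PTHA18 database and a hypothetical rate-weighted sampling rule (the study's actual selection criteria are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scenario table for one source zone: magnitudes and
# occurrence rates (all values synthetic).
mags = rng.uniform(7.0, 9.5, size=5000)
rates = rng.exponential(1e-5, size=5000)

target_mw, half_width, n_samples = 8.0, 0.1, 60  # illustrative event

# Keep scenarios near the historical event's magnitude, then draw sixty
# of them with probability proportional to their occurrence rates.
near = np.flatnonzero(np.abs(mags - target_mw) <= half_width)
p = rates[near] / rates[near].sum()
chosen = rng.choice(near, size=n_samples, replace=True, p=p)
```

Each chosen scenario would then be run through the nonlinear shallow water model and compared with the tide-gauge observations.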
-
This internal-use-only dataset contains the latest available Administrative Boundaries data from PSMA - the Australian Bureau of Statistics (ABS) Australian Statistical Geography Standard (ASGS) Main Structure (Mesh Blocks and Statistical Areas) - which is updated every three months. The data are open to the general public and can be downloaded. For more information about the PSMA licence agreement, and to access the metadata statement, please refer to the Confluence page (http://intranet.ga.gov.au/confluence/display/NGIG/PSMA+Data+and+Cloud+Services).