  • This report provides background information about the Ginninderra controlled release Experiment 2, including a description of the environmental and weather conditions during the experiment, the groundwater levels, and a brief description of all the monitoring techniques that were trialled during the experiment. Release of CO2 began at 2:25 PM on 26 October 2012 and stopped at 1:30 PM on 21 December 2012. The total CO2 release rate during Experiment 2 was 218 kg/d. The aim of the second Ginninderra controlled release was to artificially simulate the leakage of CO2 along a line source, representing leakage along a fault. Multiple methods and techniques were then trialled in order to assess their ability to:
    - detect that a leak was present
    - pinpoint the location of the leak
    - identify the strength of the leak
    - monitor how the CO2 behaves in the sub-surface
    - assess the effects it may have on plant health
    Several monitoring and assessment techniques were trialled for their effectiveness in quantifying and qualifying the CO2 that was released. The experiment focused on plant health indicators to address the aims listed above, in order to evaluate the effectiveness of monitoring plant health and of using geophysical methods to identify that a CO2 leak may be present. The methods described in this report include:
    - soil gas
    - airborne hyperspectral surveys
    - plant health (PhenoMobile)
    - soil CO2 flux
    - electromagnetic (EM-31)
    - electromagnetic (EM-38)
    - ground penetrating radar (GPR)
    This report is a reference guide describing the details of Ginninderra Experiment 2. Only the methods are described in this report; the results of the study are published in conference papers and will appear in future journal articles.

  • CO2CRC Symposium 2013: oral presentation as part of a tag-team Ginninderra presentation. As part of the controlled release experiments at the Ginninderra test site, a total of 14 soil flux surveys were conducted: 12 during the first experiment (March 2012 - June 2012) and 2 during the second experiment (October - December 2012). The aim was to determine what proportion of the known CO2 release could be measured using the soil flux method as a quantification tool. The results of this study enabled us to use the soil flux measurements as a proxy for other CO2 quantification methods and to gain an understanding of how the CO2 migrated within the sub-surface. For experiment one, baseline surveys were conducted pre-release, followed by surveys several times a week during the first stages of the release. CO2 'breakthrough' was detected only 1 day after the release began. Surveys were then conducted weekly to monitor the flux rate over time. The soil CO2 flux gradually increased in magnitude until it almost reached the expected release rate (128 kg/day measured while the release rate was 144 kg/day) after approximately 4 weeks, and then receded quickly once the controlled release was stopped. Soil gas wells confirmed that there was significant lateral migration of the CO2 in the sub-surface, suggesting a degree of accumulation of CO2 in the sub-surface during the experiment.
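
    A minimal sketch (not the authors' code) of how point soil flux measurements can be aggregated into a whole-of-plot emission estimate for comparison with a known release rate, as the abstract above describes. The grid size, cell area and flux values are hypothetical.

```python
# Sketch: integrate gridded point soil CO2 flux readings (g CO2 m^-2 d^-1)
# over a survey area to estimate a total emission rate in kg CO2 per day.
import numpy as np

def total_emission_kg_per_day(flux_g_m2_d, cell_area_m2):
    """Each point flux is taken to represent one grid cell of area cell_area_m2."""
    flux = np.asarray(flux_g_m2_d, dtype=float)
    return np.nansum(flux) * cell_area_m2 / 1000.0  # g -> kg

# Hypothetical 10 x 10 survey grid with 5 m spacing (25 m^2 cells)
rng = np.random.default_rng(0)
flux_grid = rng.gamma(shape=2.0, scale=30.0, size=(10, 10))
print(f"Estimated emission: {total_emission_kg_per_day(flux_grid, 25.0):.1f} kg/day")
```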

  • Poster for IAH 2013. A major concern for regulators and the public with geological storage of CO2 is the potential for migration of CO2 via a leaky fault or well into potable groundwater supplies. Given sufficient CO2, an immediate effect on groundwater would be a decrease in pH, which could lead to accelerated weathering, an increase in alkalinity and the release of major and minor ions. Laboratory and core studies have demonstrated that, on contact with CO2, heavy metals can be released under low pH and high CO2 conditions (particularly Pd, Ni and Cr). There is also a concern that trace organic contaminants could be mobilised due to the high solubility of many organics in supercritical CO2. These scenarios would potentially occur in a high CO2 leakage event; therefore, detection of a small leak, although barely perceptible, could provide an important early warning of a subsequent and more substantial impact.
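
    To illustrate the acidification reasoning above, a hedged worked example: the pH of otherwise pure water equilibrated with CO2 at a given partial pressure, using Henry's law and only the first dissociation of carbonic acid (constants at roughly 25 °C). Real groundwaters are buffered, so this is an illustrative calculation, not a site model.

```python
# Sketch: [H+] ~ sqrt(Ka1 * K_H * pCO2), neglecting the second dissociation
# and water autoionisation (valid while [H+] >> 1e-7 mol/L).
import math

K_H = 0.034    # mol L^-1 atm^-1, Henry's law constant for CO2 (~25 C)
KA1 = 4.45e-7  # first dissociation constant of dissolved CO2 / H2CO3*

def ph_from_pco2(pco2_atm):
    h = math.sqrt(KA1 * K_H * pco2_atm)
    return -math.log10(h)

print(round(ph_from_pco2(4e-4), 2))  # ambient air: ~5.6
print(round(ph_from_pco2(1.0), 2))   # CO2-saturated water: ~3.9
```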

  • The Walloon Coal Measures (WCM) in the Clarence-Moreton and Surat basins in Queensland and northern NSW contain up to approximately 600 m of mudstone, siltstone, sandstone and coal. Widespread exploration for coal seam gas (CSG) within both basins has led to concerns that the depressurisation associated with resource development may impact water resources in adjacent aquifers. In order to predict potential impacts, a detailed understanding of sedimentary basin hydrodynamics that integrates geology, hydrochemistry and environmental tracers is important. In this study, we show how different hydrochemical parameters and isotopic tracers (i.e. major ion chemistry, dissolved gas concentrations, 13C-DIC, 18O, 87Sr/86Sr, 3H, 14C, and 2H and 13C of CH4) can help to improve knowledge of groundwater recharge and flow patterns within the coal-bearing strata and their connectivity with over- or underlying formations. Dissolved methane concentrations in groundwaters of the WCM in the Clarence-Moreton Basin range from below the reporting limit (10 µg/L) to approximately 50 mg/L, and samples collected from nested bore sites show that there is also a high degree of vertical variability. Other parameters, such as groundwater age measurements collected along distinct flow paths, are also highly variable. In contrast, 87Sr/86Sr isotope ratios of WCM groundwaters are very uniform and distinct from groundwaters contained in other sedimentary bedrock units, suggesting that 87Sr/86Sr ratios may be a suitable tracer for studying hydraulic connectivity of the Walloon Coal Measures with over- or underlying aquifers, although more studies on the systematics are required. Overall, the complexity of recharge processes, aquifer connectivity and within-formation variability confirms that a single tracer cannot provide all the information necessary to understand aquifer connectivity in these sedimentary basins, and that a multi-tracer approach is required.

  • Monitoring is a regulatory requirement for all carbon dioxide capture and geological storage (CCS) projects, to verify containment of injected carbon dioxide (CO2) within a licensed geological storage complex. Carbon markets require CO2 storage to be verified, and the public wants assurance that CCS projects will not cause harm to people, the environment or other natural resources. In the unlikely event that CO2 leaks from a storage complex into groundwater, or to the surface, atmosphere or ocean, monitoring methods will be required to locate, assess and quantify the leak, and to inform the community about the risks and impacts on health, safety and the environment. This paper considers strategies to improve the efficiency of monitoring the large surface area overlying onshore storage complexes. We provide a synthesis of findings from monitoring for CO2 leakage at geological storage sites, both natural and engineered, and from monitoring controlled releases of CO2 at four shallow release facilities: ZERT (USA), Ginninderra (Australia), Ressacada (Brazil) and the CO2 Field Lab (Norway).

  • Abstract: Land surface temperature (Ts) is an important boundary condition in many land surface modelling schemes. It is also important in other application areas such as hydrology, urban environmental monitoring, agriculture, ecology and bushfire monitoring. Many studies have shown that it is possible to retrieve Ts on a global scale using thermal infrared data from satellites. Development of standard methodologies that generate Ts products routinely would be of broad benefit to the application of remote sensing data in areas such as hydrology and urban monitoring. AVHRR and MODIS datasets are routinely used to deliver Ts products. However, these data have 1 km spatial resolution, which is too coarse to detect the detailed variation in land surface change of concern in many applications, especially in heterogeneous areas. Higher resolution thermal data from Landsat is a possible option in such cases. To derive Ts, two scientific problems need to be resolved: removing the atmospheric effects to derive surface brightness temperature (TB), and separating the emissivity and Ts effects in TB. For single thermal band sensors, such as Landsat 5, Landsat 7 and (due to a faulty dual-band thermal instrument) Landsat 8, the split window methods used for NOAA AVHRR data (Becker & Li, 1990) and the day/night pairs of multi-band thermal infrared data used for MODIS (Wan et al., 2002) are not available for correcting atmospheric effects. The retrieval of surface brightness temperature TB from Landsat data therefore needs more care, as the accuracy of the TB retrieval depends critically on the ancillary data, such as atmospheric water vapour (precipitable water). In this paper, a feasible operational method to remove the atmospheric effects and retrieve surface brightness temperature from Landsat data is presented. The method uses the MODTRAN 5 radiative transfer model and global atmospheric profile data sets, such as the NASA MERRA (Modern Era Retrospective-Analysis for Research and Applications) atmospheric profiles, the NOAA NCEP (National Centers for Environmental Prediction) reanalysis product and ECMWF (European Centre for Medium-Range Weather Forecasts) profiles, to correct for the atmospheric effects. The results derived from the global atmospheric profiles are assessed against the TB product estimated using (accurate) ground-based radiosonde (balloon) data. This study found that the global data sets NCEP1, NCEP2, MERRA and ECMWF can all generally give satisfactory TB products and can meet the levels of accuracy demanded by many practitioners, such as 1 K. Among the global data sets, the ECMWF data set performs best: the root mean square differences (RMSD) for the 9 days and 3 test sites are all within 0.4 K when compared with the TB products estimated using ground radiosonde measurements.
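
    A hedged sketch of the final step of a retrieval like the one described above (not the paper's code): given atmospheric parameters of the kind MODTRAN provides (upwelling path radiance and transmittance, hypothetical numbers here), correct Landsat 5 TM band 6 at-sensor radiance and invert the Planck relation to obtain surface brightness temperature TB, plus the RMSD used to compare TB products. K1 and K2 are the published Landsat 5 band 6 calibration constants.

```python
# Sketch: atmospherically corrected surface brightness temperature from
# Landsat 5 TM band 6 radiance, and an RMSD comparison helper.
import numpy as np

K1 = 607.76   # W m^-2 sr^-1 um^-1 (Landsat 5 TM band 6)
K2 = 1260.56  # K

def brightness_temperature(l_toa, l_up, tau):
    """TB (K) from at-sensor radiance, path radiance l_up and transmittance tau."""
    l_surf = (np.asarray(l_toa, dtype=float) - l_up) / tau  # surface-leaving radiance
    return K2 / np.log(K1 / l_surf + 1.0)

def rmsd(a, b):
    """Root mean square difference between two TB products."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical values: l_up and tau would come from MODTRAN with a profile
print(round(float(brightness_temperature(l_toa=9.5, l_up=1.2, tau=0.85)), 2))
```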

  • This presentation will provide an overview of geological storage projects and research in Australia.

  • Wind multipliers are factors that transform regional wind speeds into local wind speeds, accounting for local effects including topographic, terrain and shielding influences. Wind multipliers have been successfully utilized in various wind-related activities such as wind hazard assessment (engineering building code applications), event-based wind impact assessments (tropical cyclones) and national scale wind risk assessment. The work of McArthur in developing the Forest Fire Danger Index (FFDI; Luke and McArthur, 1978) indicates that the contribution of wind speed to the FFDI is about 45% of its magnitude, indicating the importance of determining an accurate local wind speed in bushfire hazard and spread calculations. For bushfire spread modeling, local site variation (at 100 metre and also 25 metre horizontal resolution) has been considered through the use of wind multipliers, and this has resulted in a significant difference from the currently utilized regional '10 metre height' wind speed (and consequently in the impact analysis). A series of wind multipliers has been developed for three historic bushfire case study areas: the 2009 Victorian fires (Kilmore fire), the 2005 Wangary fire (Eyre Peninsula), and the 2001 Warragamba - Mt. Hall fire (Western Sydney). This paper describes the development of the wind multiplier computation methodology and the application of wind multipliers to bushfire hazard and impact analysis. The efficacy of using wind multipliers within a bushfire spread hazard model is evaluated by comparing case study fire extent, shape and impact against post-disaster impact assessments. The analysis determined that it is important to consider wind multipliers for local wind speed determination in order to achieve reliable fire spread and impact results. From AMSA 2013 conference.
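
    An illustrative sketch of the two steps discussed above (not the paper's code): scaling a regional 10 m wind speed to a local wind speed with site multipliers (the multiplier values are hypothetical), then feeding the result into the McArthur FFDI using the Noble et al. (1980) formulation.

```python
# Sketch: local wind speed from wind multipliers, then McArthur FFDI.
import math

def local_wind_speed(regional_kmh, m_terrain, m_topo, m_shield):
    """Local wind speed = regional wind speed x combined site multipliers."""
    return regional_kmh * m_terrain * m_topo * m_shield

def ffdi(temp_c, rh_pct, wind_kmh, drought_factor):
    """McArthur Forest Fire Danger Index (Noble et al. 1980 approximation)."""
    return 2.0 * math.exp(-0.450 + 0.987 * math.log(drought_factor)
                          - 0.0345 * rh_pct + 0.0338 * temp_c + 0.0234 * wind_kmh)

# Hypothetical multipliers for an exposed hilltop site
v_local = local_wind_speed(45.0, m_terrain=0.95, m_topo=1.25, m_shield=0.90)
print(round(ffdi(temp_c=38.0, rh_pct=12.0, wind_kmh=v_local, drought_factor=10.0), 1))
```

    The wind speed enters the FFDI exponent directly, which is why an error in the local wind speed propagates strongly into the fire danger rating.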

  • We have developed a Building Fire Impact Model (BFIM) to evaluate the probability that a building located in a peri-urban region of a community is affected or destroyed by a forest fire. The methodology is based on a well-known mathematical technique called event tree (ET) modeling, which is a useful graphical way of representing the dependency of events. The tree nodes are the events themselves, and the branches carry the probability of each event happening. If an event can be represented by a discrete random variable, the number of possible realisations of the event and their corresponding probabilities of occurring, conditional on the realisations of the previous event, are given by the branches. As the probability of each event is displayed conditional on the occurrence of events that precede it in the tree, the joint probability of the simultaneous occurrence of events that constitute a path is found by multiplication (Hasofer et al., 2007). BFIM contains a basic implementation of the main elements of bushfire characteristics, house vulnerability and human intervention. In the first pass of the BFIM model, the characteristics of the bushfire in the region neighboring the house are considered, as well as the characteristics of the house and its occupants. In the second pass, the number of embers impacting on the house is adjusted for human intervention and wind damage. In the third pass, the model examines house-by-house conditions to determine which houses have been burnt and their impact on neighboring houses. To illustrate the model application, a community involved in the 2009 Victorian bushfires has been studied, and the post-disaster impact assessment for that event is used to validate the model outcomes. MODSIM 2013 Conference.
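
    A minimal sketch of the event-tree multiplication described above: the joint probability of a path through the tree is the product of the conditional probabilities along its branches. The events and probabilities below are hypothetical, not taken from the BFIM.

```python
# Sketch: joint probability of one event-tree path by multiplying
# conditional branch probabilities.
from functools import reduce
from operator import mul

def path_probability(conditional_probs):
    """Product of the conditional probabilities along a single path."""
    return reduce(mul, conditional_probs, 1.0)

# Hypothetical path: ember attack occurs (0.8), ignition on the house (0.3),
# no occupant intervention (0.5), ignition grows to house loss (0.6).
print(path_probability([0.8, 0.3, 0.5, 0.6]))  # 0.072
```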

  • The dry tropics of central Queensland have an annual bushfire threat season that generally extends from September to November. Fire weather hazard is quantified using either the Forest Fire Danger Index (FFDI) or the Grassland Fire Danger Index (GFDI) (Luke and McArthur, 1978). Weather observations (temperature, relative humidity and wind speed) are combined with an estimate of the fuel state to predict likely fire behaviour if an ignition eventuates. A high resolution numerical weather model (dynamic downscaling) was utilised to provide spatial texture over the Rockhampton region for a range of historical days when bushfire hazard (as measured at the Rockhampton Airport meteorological station) was known to be severe to extreme. From the temperature, relative humidity and wind speeds generated by the model, the maximum FFDI for each simulated day was calculated using a maximum drought factor. Each of these FFDI maps was then normalised to the value of the FFDI at the grid point corresponding to Rockhampton Airport, producing an ensemble. The average recurrence interval (ARI) of FFDI at Rockhampton Airport for the current climate was calculated from observations by fitting Generalised Extreme Value (GEV) distributions. For future climate, we considered three downscaled General Circulation Models (GCMs) forced by the A2 emission scenario for atmospheric greenhouse gas emissions. The spatial pattern of the 50 and 100 year ARI fire danger rating for the Rockhampton region (current and future climate) was determined. In general, a small spatial increase in the fire danger rating is reflected in the ensemble model average for the 2090 climate. This increase is evident throughout the Rockhampton region, in both magnitude and extent, from 2050 to 2090. Cluster areas of higher (future climate) bushfire hazard were mapped for planning applications. MODSIM2013 Conference Handbook.
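
    A hedged sketch of the ARI estimation step described above (not the study's code): fit a GEV distribution to annual maximum FFDI values (hypothetical data here) and read off the 50- and 100-year return levels as the quantiles with annual exceedance probabilities 1/50 and 1/100.

```python
# Sketch: GEV fit to annual maximum FFDI and return-level (ARI) estimates.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max_ffdi = 60 + 15 * rng.gumbel(size=40)  # hypothetical annual maxima

shape, loc, scale = genextreme.fit(annual_max_ffdi)

for ari in (50, 100):
    # Return level = quantile with annual exceedance probability 1/ARI
    level = genextreme.ppf(1.0 - 1.0 / ari, shape, loc, scale)
    print(f"{ari}-year ARI FFDI ~ {level:.0f}")
```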