As we discussed in Study design considerations for ecosystem receptors, ecosystem receptors vary widely in their nature. Some can only be measured in laboratory experiments and others only from field surveys. Here we discuss some issues regarding ecosystem receptors that are common to all types of monitoring studies, as well as analytical issues specific to certain types of monitoring program objectives.
Using decision criteria to assess change
For most indicators of ecosystem receptor lines of evidence, the notion of a guideline value is framed around the effect size decided when planning the study design.
Effect size usually expresses the maximum departure from control or reference conditions that is acceptable given the management goals. Subtle effects that are more difficult to detect will need more intensive sampling than large effects (Fox 2001, Ryan 2013).
You need to understand the trade-offs between Type I and Type II errors when making a decision about effect size. Determining relevant effect sizes and negotiating error rates must be part of the study design process because inadequacies in sampling design cannot be rectified during data analysis.
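As a design-stage illustration, the normal-approximation formula for a two-sample comparison shows how strongly the required sample size depends on the chosen effect size. The effect sizes, error rates and the approximation itself are illustrative only, not a substitute for professional statistical advice:

```python
from scipy.stats import norm

def samples_per_group(d, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sided two-sample test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # Type I error criterion
    z_beta = norm.ppf(power)            # Type II error criterion
    return 2 * ((z_alpha + z_beta) / d) ** 2

n_subtle = samples_per_group(0.3)   # subtle standardised effect
n_large = samples_per_group(1.0)    # large standardised effect
print(round(n_subtle), round(n_large))
```

Detecting the subtle effect here needs roughly ten times the sampling effort of the large effect, which is the trade-off the negotiation over effect size and error rates must confront.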
Indicators within the toxicity line of evidence can be used to derive guideline values for single toxicants.
In Step 6 of the Water Quality Management Framework, monitoring data for each relevant indicator are used to assess ecosystem condition and changes against the water/sediment quality objectives.
Results of these assessments are assembled for each line of evidence and combined in an evaluation of the weight of evidence.
For indicators in the biodiversity line of evidence, the assessment will consist of applying the relevant statistical tests from the study design. Significant changes or trends away from reference conditions signify that the minimum critical effect size has been exceeded. This exceedance will trigger recording a response for this line of evidence in the weight-of-evidence evaluation table.
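A minimal sketch of this kind of assessment, using hypothetical taxon richness counts and an assumed 20% critical effect size (a two-sample t-test stands in for whatever test the study design actually specifies):

```python
import numpy as np
from scipy import stats

# Hypothetical taxon richness counts at reference and test sites.
reference_sites = np.array([48, 52, 50, 47, 53, 51, 49, 50])
test_sites = np.array([36, 40, 33, 38, 35, 39, 34, 37])

t_stat, p_value = stats.ttest_ind(test_sites, reference_sites)

# Relative departure from the reference mean.
relative_change = (reference_sites.mean() - test_sites.mean()) / reference_sites.mean()

critical_effect = 0.20  # agreed minimum critical effect size (20%)

# Record this line of evidence only if the departure is both
# statistically significant and larger than the critical effect size.
trigger = bool(p_value < 0.05 and relative_change > critical_effect)
print(trigger)
```

Requiring both conditions avoids triggering on statistically significant but ecologically trivial departures.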
Similarly, for indicators in the biomarkers of exposure line of evidence, statistical tests that show the effect size has been exceeded will result in this line of evidence being recorded in the weight-of-evidence evaluation table.
Bioaccumulation biomarkers will be recorded in the weight-of-evidence evaluation table if the concentrations statistically exceed those of the agreed guideline values, or the reference or control site concentrations.
Indicators within the toxicity line of evidence are triggered in the weight-of-evidence evaluation table if relevant statistical tests show that toxicity of the test sample (preferably identified as a statistically significant effect of 20% or greater compared to a control) has been demonstrated for 1 or more species in a suite of at least 3 test species, but preferably 5 or more.
Greater weight should be given to results that show:
- higher toxicity (e.g. a 20 to 50% effect would carry less weight than > 50% effect relative to control), or
- more species exhibiting toxicity.
If only a small number of species are tested (e.g. 1 to 3), then the higher potential uncertainty associated with the toxicity line of evidence means that:
- the need for additional lines of evidence may be increased
- even just one species exhibiting toxicity should carry significant weight (requiring substantial management response)
- other lines of evidence should carry greater weight than the toxicity line of evidence if no species exhibit toxicity.
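The triggering and weighting guidance above can be sketched as a simple decision helper. The 'low'/'high' weight labels and the ranking logic are invented for illustration, not prescribed categories:

```python
def toxicity_weight(effects, min_effect=0.20):
    """effects: fractional effects relative to control, one per
    species, including only statistically significant effects.
    Returns (triggered, weight); the 'low'/'high' weight labels
    are an invented illustration of the weighting guidance."""
    toxic = [e for e in effects if e >= min_effect]
    if not toxic:
        return False, None
    # More affected species, or a >50% effect, carries more weight.
    if len(toxic) >= 2 or max(toxic) > 0.5:
        return True, "high"
    return True, "low"

# One of three species shows a significant 35% effect: the line of
# evidence is triggered, but with less weight than a >50% effect.
print(toxicity_weight([0.35, 0.05, 0.10]))
```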
All these analyses assume that the study design is powerful enough (sample size and variability are acceptable) to detect the agreed deviation from reference or control conditions. For guidance on the use of direct toxicity assessment (DTA), refer to the relevant sections of the ANZECC & ARMCANZ (2000) guidelines, including Section 8.3.6. At present, there is no more recent or more comprehensive guidance on DTA for Australia and New Zealand.
Statistical analysis issues
Environmental change related to water quality
The monitoring program objective here is usually to provide evidence of environmental change related to water quality, or to identify the source of contaminants or stressors. (In Monitoring for 7 typical uses of the framework, we provide examples for assessing a waste discharge, investigating an unexpected event and implementing a broadscale monitoring program.)
Model-based study designs will be most suitable for these types of study objectives.
Earlier literature (e.g. Downes et al. 2002) referred to conventional analysis of variance (ANOVA) and regression approaches, such as the generalised linear model (GLM), to analyse such data for ecosystem receptors. But you can use readily available modern analytical tools, such as generalised linear mixed models (GLMMs) and Bayesian equivalents, to avoid awkward transformation of response variables and other issues that affect the robustness of older methods.
Newer analytical methods may need increased sample sizes to better justify the choice of distribution used. This is why we strongly encourage you to seek the support of a professional statistician at the design phase of the investigation.
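As a sketch of the model-based approach, count data can be analysed with a Poisson GLM fitted by maximum likelihood rather than by log-transforming the response. The data are simulated and the hand-rolled fit is purely illustrative; in practice use dedicated GLM/GLMM software or Bayesian tools, with statistical advice:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated abundance counts: exposure roughly halves mean abundance.
rng = np.random.default_rng(1)
x = np.repeat([0.0, 1.0], 30)              # 0 = reference, 1 = exposed
y = rng.poisson(np.exp(np.log(20) - 0.7 * x))

def neg_log_lik(beta):
    """Poisson negative log-likelihood (log link), up to a constant."""
    mu = np.exp(beta[0] + beta[1] * x)
    return np.sum(mu - y * np.log(mu))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
b0_hat, b1_hat = fit.x                     # intercept, exposure effect
print(b0_hat, b1_hat)
```

The log link keeps zero counts unproblematic and models the multiplicative effect of exposure directly, avoiding the awkward transformations mentioned above.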
Licensing and compliance
Here the monitoring study usually focuses on site-specific assessments for licensing and compliance (e.g.
assessing a waste discharge).
Historically, licensing and compliance objectives have been framed in terms of guideline values for toxicants and PC stressors, or biological analogues (e.g. a diversity index or biotic score). Follow our
analytical methods for assessing condition against guideline values.
More modern approaches to licensing and compliance should be framed in terms of deviation from reference or desired conditions for ecosystem receptors. Follow our procedures for:
Assessing change where there are no baseline data
Assessing change without any baseline data is a very common scenario for ecosystem receptors, especially when investigating an unexpected event, such as a fish kill or an accidental spill.
Conclusions drawn from inadequate baseline data can be strengthened by increasing the spatial and temporal coverage of the study design. Follow our analytical approaches for:
Data demands for such analyses can be high so you should seek professional statistical support at the planning stage of the monitoring study design to ensure that your planned analyses are feasible.
Reporting on environmental conditions across a broad spatial scale is often associated with State of the Environment reporting and similar ‘audits’, but broadscale assessments are also used for:
Candidate variables for ecosystem receptors are usually multivariate response indicators (e.g. community composition of plants, invertebrates or fishes), including new-generation methods using genomics (Chariton et al. 2016).
Analysing datasets from multivariate response indicators typically involves nonparametric randomisation procedures (also known as ‘permutation procedures’) (Anderson & Robinson 2001, McArdle & Anderson 2001, Anderson 2001, Anderson & Ter Braak 2003), often based on a similarity measure of:
- species (Legendre & Legendre 2012) or genomic composition (e.g. Bourlat et al. 2013, Saxena et al. 2015), or
- community traits (Dray et al. 2014).
For multivariate responses involving abundances of species or traits, procedures based on multivariate parametric modelling are being developed (Warton et al. 2015a, 2015b, 2015c).
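A toy stand-in for these randomisation procedures: compare the observed mean between-group Bray-Curtis dissimilarity against its distribution under random reassignment of sites to groups. The abundance matrix is invented and real analyses should use dedicated PERMANOVA-style software:

```python
import numpy as np

rng = np.random.default_rng(0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

# Invented abundance matrix: 6 sites x 4 taxa; first 3 sites are
# reference sites, last 3 are test sites.
sites = np.array([[10, 5, 0, 2], [12, 4, 1, 3], [9, 6, 0, 1],
                  [2, 1, 8, 7], [1, 2, 9, 6], [3, 0, 7, 8]])
groups = np.array([0, 0, 0, 1, 1, 1])

def between_group_stat(labels):
    """Mean dissimilarity over all between-group site pairs."""
    d = [bray_curtis(sites[i], sites[j])
         for i in range(len(sites)) for j in range(i + 1, len(sites))
         if labels[i] != labels[j]]
    return float(np.mean(d))

observed = between_group_stat(groups)
perm_stats = [between_group_stat(rng.permutation(groups))
              for _ in range(999)]
p_value = (1 + sum(s >= observed for s in perm_stats)) / 1000
print(observed, p_value)
```

Note that with only 6 sites there are just 20 distinct ways to split them into two groups of 3, so the smallest achievable p-value is limited; real designs need more sites for the randomisation test to have useful resolution.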
Indicators scored as either present or absent across the study locations should be analysed using occupancy modelling procedures if detection of the indicator is imperfect or likely to vary between surveys. Repeat surveys can be implemented cost-effectively to estimate detection probabilities. Elsewhere we cited the survey designs needed for this procedure, and the analytical procedures are introduced by MacKenzie et al. (2006).
Repeat surveys required for occupancy modelling may not be feasible for some organisms (e.g. fish surveys in rivers). Alternative design and analytical methods are highlighted by Gwinn et al. (2016). These techniques continue to be developed so consult recent literature to ensure that you are using up-to-date methods in your analysis.
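A toy occupancy-model fit illustrates how repeat visits let you separate true absence from non-detection: occupancy (psi) and per-visit detection probability (p) are estimated jointly by maximum likelihood. The data are simulated; see MacKenzie et al. (2006) and dedicated packages for real analyses:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated truth: occupancy psi = 0.7, per-visit detection p = 0.4.
rng = np.random.default_rng(7)
n_sites, n_visits = 200, 4
occupied = rng.random(n_sites) < 0.7
detections = (rng.random((n_sites, n_visits)) < 0.4) & occupied[:, None]

def neg_log_lik(params):
    psi, p = 1 / (1 + np.exp(-params))     # logit scale -> probabilities
    k = detections.sum(axis=1)             # detections per site
    seen = psi * p**k * (1 - p)**(n_visits - k)
    # A site with no detections may be unoccupied or occupied-but-missed.
    lik = np.where(k > 0, seen, seen + (1 - psi))
    return -np.log(np.clip(lik, 1e-300, None)).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(psi_hat, p_hat)
```

A naive occupancy estimate (fraction of sites with any detection) would be biased low here because some occupied sites are never detected in 4 visits; the likelihood corrects for that.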
Australian River Assessment System (AUSRIVAS) (Davies 2000) found widespread use as a rapid biological assessment tool in some Australian states and territories. It was developed chiefly for broadscale rapid bioassessment with applications in State of the Environment reporting and environmental audits (Norris et al. 2007, Nichols et al. 2017).
AUSRIVAS explains the analytical and reporting procedures (available to registered users), and it is further described in Protocols for Biological Assessment.
Other approaches to rapid biological assessment convert taxonomic information into scores or indexes of community condition or tolerance of environmental disturbance, such as SIGNAL (Chessman 2003) and MCI (Stark & Maxted 2007). These are essentially univariate measures of community structure and can be analysed using conventional statistical methods. Some authors caution that such indexes may have unusual distributional properties, so pay particular attention to checking that the data conform to the distributional assumptions of the analyses (Green 1979).
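A biotic index of this kind is essentially an average of per-taxon sensitivity grades over the taxa recorded at a site. The grades below are placeholders for illustration, not the published SIGNAL grades:

```python
def biotic_index(families, grades):
    """Mean sensitivity grade over the graded families at a site."""
    scored = [grades[f] for f in families if f in grades]
    return sum(scored) / len(scored)

# Placeholder sensitivity grades (NOT the published SIGNAL grades).
grades = {"Leptophlebiidae": 8, "Chironomidae": 3,
          "Gripopterygidae": 8, "Physidae": 1}
site_families = ["Leptophlebiidae", "Chironomidae", "Physidae"]
print(biotic_index(site_families, grades))
```

The result is a single number per site, which is why such indexes can be fed into conventional univariate analyses, subject to the distributional caveats above.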
Assessing ecosystem recovery can range in scope from assessing remediation at local scales (e.g. rehabilitation of a mine site) to broadscale interventions (e.g. mitigating dryland salinity), over timescales of months to centuries. Assessing recovery can also be a component of a larger program, such as:
Recovery over shorter-term periods would ideally be handled by methods based on bioequivalence testing (McBride 1999). McBride et al. (2014) explained the technique more fully for environmental applications.
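A minimal two one-sided tests (TOST) sketch of the bioequivalence idea: recovery is accepted only if the remediated site's mean is demonstrably within an agreed margin of the reference mean. The data, the margin and the simple pooled degrees of freedom are all illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical indicator values at reference and remediated sites.
reference = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.2])
remediated = np.array([10.4, 10.0, 10.6, 10.3, 10.1, 10.5, 10.2, 10.4])
delta = 0.8   # agreed equivalence margin on the original scale

n1, n2 = len(remediated), len(reference)
diff = remediated.mean() - reference.mean()
se = np.sqrt(remediated.var(ddof=1) / n1 + reference.var(ddof=1) / n2)
df = n1 + n2 - 2   # simple df; a Welch correction would be more careful

# Reject BOTH one-sided nulls to conclude equivalence.
t_lower = (diff + delta) / se          # H0: true diff <= -delta
t_upper = (diff - delta) / se          # H0: true diff >= +delta
p_tost = max(1 - stats.t.cdf(t_lower, df), stats.t.cdf(t_upper, df))
equivalent = bool(p_tost < 0.05)
print(p_tost, equivalent)
```

Unlike a conventional test, where failing to detect a difference is weak evidence of recovery, TOST puts the burden of proof on demonstrating similarity.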
If recovery is likely to be long term (e.g. 10 years or longer), then the performance of the recovery will be better assessed in terms of the trend or rate of change over time. You can test for these features of data by using temporal analysis techniques.
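One simple sketch of such a temporal analysis: Kendall's tau tests for a monotonic trend, and the Theil-Sen estimator gives a robust rate of change. The annual condition scores below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical annual condition scores for a recovering site.
years = np.arange(2010, 2022)
condition = np.array([3.1, 3.3, 3.2, 3.6, 3.8, 3.7,
                      4.0, 4.2, 4.1, 4.4, 4.6, 4.5])

tau, p_value = stats.kendalltau(years, condition)      # monotonic trend
slope, intercept, lo, hi = stats.theilslopes(condition, years)
print(tau, p_value, slope)
```

Both statistics are rank-based or median-based, so they are less sensitive to occasional outlying years than an ordinary least-squares trend line would be.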
Anderson MJ 2001,
Permutation tests for univariate or multivariate analysis of variance and regression, Canadian Journal of Fisheries & Aquatic Sciences 58: 626–639.
Anderson MJ & Robinson J 2001,
Permutation tests for linear models, Australian & New Zealand Journal of Statistics 43: 75–88.
Anderson MJ & Ter Braak CJF 2003,
Permutation tests for multi-factorial analysis of variance, Journal of Statistical Computation and Simulation 73: 85–113.
Bourlat SJ, Borja A, Gilbert J, Taylor MI, Davies N, Weisberg SB, et al. 2013,
Genomics in marine monitoring: new opportunities for assessing marine health status, Marine Pollution Bulletin 74: 19–31.
Chariton AA, Sun M, Gibson J, Webb JA, Leung KMY, Hickey CW, et al. 2016,
Emergent technologies and analytical approaches for understanding the effects of multiple stressors in aquatic environments, Marine and Freshwater Research 67: 414.
Chessman BC 2003,
New sensitivity grades for Australian macroinvertebrates, Marine and Freshwater Research 54: 95–103.
Davies PE 2000, Development of a national river bioassessment system (AUSRIVAS) in Australia, in: Wright JF, Sutcliffe DW & Furse MT (ed.) 2000,
Assessing the Biological Quality of Fresh Waters: RIVPACS and other techniques, Freshwater Biological Association, Ambleside, pp. 113–125.
Downes BJ, Barmuta LA, Fairweather PG, Faith DP, Keough MJ, Lake PS, et al. 2002,
Monitoring Ecological Impacts: Concepts and practice in flowing waters, Cambridge University Press, Cambridge.
Dray S, Choler P, Dolédec S, Peres-Neto PR, Thuiller W, Pavoine S, et al. 2014,
Combining the fourth-corner and the RLQ methods for assessing trait responses to environmental variation, Ecology 95: 14–21.
Fox DR 2001,
Environmental power analysis — a new perspective, Environmetrics 12: 437–449.
Green RH 1979,
Sampling Design and Statistical Methods for Environmental Biologists, John Wiley and Sons, New York.
Gwinn DC, Beesley LS, Close P, Gawne B & Davies PM 2016,
Imperfect detection and the determination of environmental flows for fish: challenges, implications and solutions, Freshwater Biology 61: 172–180.
Legendre P & Legendre L 2012,
Numerical Ecology, 3rd Edition, Elsevier, Amsterdam.
MacKenzie DI, Nichols JD, Royle JA, Pollock KH, Bailey LL & Hines JE 2006,
Occupancy Estimation and Modeling: Inferring patterns and dynamics of species occurrence, 1st Edition, Elsevier, Burlington.
McArdle BH & Anderson MJ 2001,
Fitting multivariate models to community data: a comment on distance-based redundancy analysis, Ecology 82: 290–297.
McBride G, Cole RG, Westbrooke I & Jowett I 2014,
Assessing environmentally significant effects: a better strength-of-evidence than a single P value? Environmental Monitoring and Assessment 186: 2729–2740.
McBride GB 1999,
Applications: equivalence tests can enhance environmental science and management, Australian & New Zealand Journal of Statistics 41(1): 19–29.
Nichols SJ, Barmuta LA, Chessman BC, Davies PE, Dyer FJ, Harrison ET et al. 2017,
The imperative need for nationally coordinated bioassessment of rivers and streams, Marine and Freshwater Research 68(4): 599–613.
Norris RH, Linke S, Prosser I, Young WJ, Liston P, Bauer N et al. 2007,
Very-broad-scale assessment of human impacts on river condition, Freshwater Biology 52: 959–976.
Ryan TP Jr 2013,
Sample Size Determination and Power, John Wiley & Sons, Hoboken.
Saxena G, Marzinelli EM, Naing NN, He Z, Liang Y, Tom L et al. 2015,
Ecogenomics reveals metals and land-use pressures on microbial communities in the waterways of a megacity, Environmental Science & Technology 49: 1462–1471.
Stark JD & Maxted JR 2007,
A User Guide for the Macroinvertebrate Community Index, prepared for the New Zealand Ministry for the Environment by Cawthron Institute, Nelson.
Warton DI, Blanchet FG, O’Hara RB, Ovaskainen O, Taskinen S, Walker SC et al. 2015a,
So many variables: joint modeling in community ecology, Trends in Ecology & Evolution 30: 766–779.
Warton DI, Foster SD, De’ath G, Stoklosa J & Dunstan PK 2015b,
Model-based thinking for community ecology, Plant Ecology 216: 669–682.
Warton DI, Shipley B & Hastie T 2015c,
CATS regression — a model-based approach to studying trait-based community assembly, Methods in Ecology and Evolution 6: 389–398.