Abstract
Most Environmental Impact Assessments (EIAs) fail to generate effective monitoring and forecast triggers because appropriate baseline data and forecasting are lacking, especially for biotic endpoints. Herein, we provide an example of how to develop monitoring and forecast triggers with biotic data, specifically fish populations, to assess impacts of a planned refurbishment of the Mactaquac Hydroelectric Generating Station, a large hydroelectric facility. We recommend strategies for developing interim monitoring triggers until sufficient biological data are collected, including default critical effect sizes or data percentiles when there are only a few years of data. When sufficient data are available, the monitoring trigger can be based on the predicted normal range, i.e., the grand mean ± 2 standard deviations of the yearly means. We generated forecast triggers with a general linear model, partial least squares regression, and elastic net regression. We demonstrate that four consecutive years of sampling fish population characteristics were insufficient to characterize interannual variability for meaningful monitoring and forecast trigger development. Collecting sufficient baseline data for new projects in an undeveloped area will be challenging given costs and regulatory and economic time frames, as current practice is generally 1 or 2 years of baseline sampling. Changes to existing projects, such as in this study, or new projects near existing development should have existing baseline data, provided forethought is given to choosing effective endpoints. The alignment of monitoring requirements between developments within a watershed will improve monitoring, modelling, and prediction over the long term and support the consideration of future developments.
Keywords: Monitoring trigger, Forecast trigger, Adaptive monitoring, Effects-based approach, Fish population characteristics
Introduction
All development of environments will cause some change and most often a negative impact on the natural system (Somers et al. 2018; Noble 2021). Most development has some post-development monitoring that attempts to determine if there is change and if it is significant; however, these efforts rarely, if ever, detect change or the effectiveness of mitigation (Kilgour et al. 2007; Lindenmayer and Likens 2010; Greig and Duinker 2011; Brown et al. 2024). An adaptive monitoring plan is the preferred approach for evaluating change using monitoring results because it allows for quick responses to small environmental changes before impacts become difficult to reverse (if appropriate monitoring endpoints are chosen; Arciszewski et al. 2017b; Munkittrick et al. 2019). Effective adaptive monitoring plans include tiers representing different levels of effort (intensity, focus, or questions monitored) and defined thresholds or triggers which are values for chemical/physical/biological response or metrics that prompt a change in the monitoring effort (Arciszewski and Munkittrick 2015; Somers et al. 2018). An adaptive monitoring plan provides a mechanism to maximize monitoring effectiveness by directing effort and resources to indicators that may be changing, and thus, reduce monitoring intensity as confidence builds that there is no change. An example of an adaptive monitoring plan is the Canadian Environmental Effects Monitoring (EEM) programme (Government of Canada 2017) that consists of tiers of surveillance (initial) monitoring, confirmatory monitoring, investigation of cause, and investigation of solution. If the trigger of two confirmed effects is set and a cause of effects from the facility is found, management decisions are made on whether to modify the facility and the programme returns to surveillance monitoring again.
There are multiple types of triggers such as performance triggers (set by engineers to indicate if development is working as designed), compliance triggers (developed from regulatory permits), and management triggers (developed as limits from land use plans or environmental quality criteria; Somers et al. 2018). Here, we summarize how two other types of triggers – monitoring and forecast – that prompt a change in effort or focus can improve the assessment of the potential impacts of development. A forecast trigger is a prediction (forecast) of what the endpoint will look like given the predicted changes the development will have on the environment. Within monitoring planning, a monitoring trigger represents a threshold that indicates that the environment is changing outside of a normal range defined from baseline or historical data (range of natural/current variability). Importantly, whether impacted or pristine, baseline is defined as the recent pre-development conditions for the project being assessed. In an adaptive monitoring approach, the consequence of conditions moving outside of the normal range is altering the monitoring programme’s level of effort and/or design (the tier) to focus resources to provide more detailed information on changes of concern. This alteration can shift monitoring from surveillance mode to an alternate tier of effort such as to confirm results or determine the cause, depending on the design of the adaptive monitoring plan.
Unfortunately, baseline data (before development) is typically inadequate to develop effective monitoring triggers. This is either due to a lack of temporal and/or spatial scale to characterize natural or recent variability (Dubé 2003; Kilgour et al. 2007; Arnold et al. 2019) and/or baseline sampling focused on select stressors (stressor-based approach) versus biotic metrics that could encompass effects that are unknown (effects-based approach; Dubé and Munkittrick 2001; Dubé 2003; Kilgour et al. 2007). Brown et al. (2022) discuss a suite of effects-based indicators that could improve the ability of baseline data to provide meaningful and effective aquatic monitoring programmes for development projects.
Using the available baseline data, modelling, and risk assessments done during preparation for the development approval process typically focus on estimating the potential impacts from changes in stressor levels associated with a proposed facility. These stressor-based evaluations usually predict post-construction levels of stressors associated with the Project, produce a forecast of anticipated levels of stressors post-development, and determine the acceptability of the changes. For example, in the case of developments with anticipated potential impacts on water quality, the predicted receiving environment levels are compared to water quality criteria, and the approval process uses the predicted levels of change to assess the acceptability of the proposed development alongside additional socioeconomic considerations. In an adaptive monitoring context, the forecast from the environmental impact assessment (EIA) could be used as a trigger or threshold to prompt mitigation or compensation should a completed development exceed its forecasted (and accepted during the approval process) level of impact.
In current practice, quantitative forecasting of potential future levels of biotic variables is typically not possible, as projects either lack sufficient baseline data or lack models that can predict the development’s impact on biotic endpoints, resulting in qualitative statements instead of quantitative, testable forecasts (Dubé 2003; Greig and Duinker 2011; Brown et al. 2024). The risk assessments may also not adequately consider relevant sources of variability. In aquatic systems, ecological drivers of variability between years include, for example, climate, flow, temperature, and the cumulative effect of existing stressors. If we understand the dominant drivers of variability pre-development, it should be possible to develop models that predict how biotic endpoints will vary post-development (Kilgour et al. 2019; Arciszewski et al. 2022). The collection of adequate baseline data and the integration of ecological endpoints into the modelling process during project evaluation will enable prediction of potential ecological responses during engineering evaluations and allow the design to be modified to reduce ecological impacts. In addition to their role in comparing potential development scenarios, once the development design is decided, these ecological forecast models can be used as post-construction monitoring tools that both inform the adequacy of the modelling done during development projections and provide predictions of expected ecological performance post-construction.
Although recommendations for improving EIA by enhancing baseline data collection and developing effective predictions have been discussed for decades, most projects are still approved by regulators without an effects-based assessment and monitoring approach. Proponents generally argue that the stressor-based approach is sufficient, which regulators have accepted because, in part, effects-based approaches take time and are more expensive, challenging regulators who are under pressure to allow and promote development (Morrison-Saunders and Bailey 2003; Connelly 2011). The effects-based approach is, however, the most effective assessment of real environmental change (Dubé 2003; Kilgour et al. 2007). This paper provides an example of how to develop monitoring and forecast triggers with biotic data, using fish populations for a refurbishment of a large hydroelectric facility as a case study. Multiple methods for calculating monitoring triggers, such as the grand mean ± 2 standard deviations (SD), percentiles, critical effect size, and normal range, are demonstrated. Three models, a general linear model, partial least squares regression, and elastic net regression, are used with multiple environmental variables to predict fish endpoints and develop forecast triggers. These methods for developing monitoring and forecast triggers are compared and the effectiveness of their use in monitoring programmes is discussed.
Methods
Sampling Area
The Mactaquac Hydroelectric Generating Station (MGS), on the Wolastoq | Saint John River, NB, is a large dam (55 m height; 1100 m total length; 670 MW) and will reach the end of its service life shortly due to a concrete expansion issue known as an alkali aggregate reaction (Curry et al. 2020). The New Brunswick Electric Power Corporation (NB Power) decided to renew the dam infrastructure based on research completed during Phase 1 (2014–2019) of the Mactaquac Aquatic Ecosystem Study (MAES), as well as engagement with the public and the Indigenous Wolastoqey communities (Curry et al. 2020). In Canada, large dam renewal requires a provincial and potentially a federal EIA (GC, 2019; GNB, 2019). The MGS renewal provides a case study to demonstrate a baseline effects-based monitoring approach using fish populations. Sampling was conducted on the Wolastoq | Saint John River, NB, 3–17 km downstream of MGS (Fig. 1).
Fig. 1.
Sampling location on Wolastoq | Saint John River, NB from 2015 to 2021. Star on insert indicates sampling location in watershed (grey area). Hatched area indicates where Yellow Perch were collected
Fish Collection
Mature Yellow Perch from the fish community studies completed in 2015 to 2019 (Phase 1) and 2021 (Phase 2; unpublished) of MAES were included in this dataset (in 2020, the fish community was not sampled due to COVID-19 restrictions). In 2020, a separate fishing effort that targeted Yellow Perch was made. Gonad and liver weight were measured and fish were aged starting in 2018. Fish community sampling used an electrofishing boat, fyke nets (3 by 75 ft lead with two 3 by 25 ft wings and ¼” mesh), and beach seine nets (typically 50 ft long, 6 ft tall, with ¼” mesh with bag) at multiple sites from summer through fall within a set of four designated sub-areas defined by different river habitats (Tailrace, Side Channel, Islands, and Main Channel). The data from three sub-areas for mature Yellow Perch are used in this analysis (we excluded “Side Channel” as it was not fished in 2020 and had few mature fish during 2015–2019 sampling). Sampling in 2020 targeted Yellow Perch using an electrofishing boat. In 2021, additional electrofishing was conducted until the target number of mature Yellow Perch was collected. The electrofishing boat was an SR-18H-Model Electrofishing Boat equipped with a 5.0 GPP Electrofisher, typically set at 60 Hz, 21% duty cycle, low range at 30%, 950–1050 W. Sample timing is in Supplementary Table 1.
During fish sampling, captured fish were placed in a live well until the fishing effort was complete. Captured fish were externally examined (species, sex, size) and Yellow Perch were retained, and the remainder returned alive to the river. Retained fish, starting in 2018, were transported in a cooler on ice to an indoor processing area. Fish handling and euthanasia followed university animal care protocols (15027, 16029, 17019, 18017, 19022, 20012, and 21014 for the University of New Brunswick, R120002 for the Wilfrid Laurier University, and AC20-0091 for the University of Calgary).
In 2015–2017, all fish were measured for length and weight and returned live as part of a fish community monitoring plan looking at microhabitat selection (Dolson-Edge et al. 2019a; Dolson-Edge et al. 2019b). All endpoints were measured in subsequent years. Lethal sampling during MAES targeted 20 mature male and 20 mature female Yellow Perch in each sub-area in 2018 and 2019, adjusted to 25 in 2020 based on a power analysis of MAES Phase 1 data (α = 0.05, power = 0.8, effect size of 25% for all endpoints except condition, which was 10%; Green 1989; Munkittrick et al. 2009). Re-analysis based on 2020 results set a new target of 30 mature males and 30 mature females for 2021 sampling. Fish from 2019 were collected in the summer, and reproduction-related endpoints were excluded for this year because gonads were not fully developed (GSI < 1%).
Measurements
Fish fork length (±1 mm) and total weight (±2 g in 2015–2020; ±0.2 g in 2021) were measured on all fish. Liver weight (±0.003 g), gonad weight (±0.003 g), and age were measured on lethally sampled fish. Age was determined from otoliths, which were collected, wiped with a paper towel, and stored dry in a labelled paper envelope or plastic vial.
Age was determined by counting the annuli of the otoliths (aged by Mark Gautreau, 2018–2020, and North-South Consulting, 2021, with some additional analysis of 2018 and 2019 fish). The otoliths were coated in epoxy, allowed to dry, and a thin slice (~1 mm) was cut on the transverse plane with a diamond-blade saw. The slice was mounted on a slide, then sanded and polished. The annuli were counted under a dissection scope (Stevenson and Chapman 1992). All quality assurance targets were met. Supplementary Tables 2–4 provide summary statistics for the sampled fish.
Environmental Variables
Air temperature and precipitation were obtained from the Environment and Climate Change Canada “Fredericton Intl A 8101505” station. Dam discharge was provided by NB Power for MGS and used as a surrogate for river discharge. Water temperature was recorded almost continuously for multiple years with water temperature loggers for various MAES projects. Where water temperature data were missing (<8% of the time in this study), they were estimated using air2stream (Toffolon and Piccolroaz 2015; Piccolroaz et al. 2016), calibrated by the particle swarm optimization method with inertia weight using water temperature, air temperature, and river discharge from 2015 to 2020. The 8-parameter version of the model was run with the Crank–Nicolson scheme to solve the model equation, which resulted in a root mean squared error of 1.27 °C. Averages of environmental variables are in Supplementary Table 5.
Analysis
Fish Population Measures
Body weight at length, condition factor (K), fork length at age, liver weight at body weight, liver somatic index (LSI), gonad weight at body weight, and gonadosomatic index (GSI) are common fish population endpoints (Pope et al. 2010; Government of Canada 2017; Table 1). K, GSI, and LSI were calculated as follows (weight in g and length in mm):
K = 100,000*[body weight/(fork length)³];
GSI = 100*gonad weight/body weight; and,
LSI = 100*liver weight/body weight.
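These three indices are simple ratios; a minimal sketch in Python (function names and the example fish are ours, for illustration only):

```python
def condition_factor(body_weight_g: float, fork_length_mm: float) -> float:
    """Fulton-type condition factor K (weight in g, fork length in mm)."""
    return 100_000 * body_weight_g / fork_length_mm ** 3

def gsi(gonad_weight_g: float, body_weight_g: float) -> float:
    """Gonadosomatic index (%)."""
    return 100 * gonad_weight_g / body_weight_g

def lsi(liver_weight_g: float, body_weight_g: float) -> float:
    """Liver somatic index (%)."""
    return 100 * liver_weight_g / body_weight_g

# Example (hypothetical fish): 150 g, 230 mm, 3 g gonad, 1.5 g liver
k = condition_factor(150, 230)   # ≈ 1.23
```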
Table 1.
Response variables modelled and when a covariate was included
| Response Variable | Covariate | All | Female | Male |
|---|---|---|---|---|
| Body Weight | Fork Length | Y | Y | Y |
| Liver Weight | Body Weight | N | Y | Y |
| Gonad Weight | Body Weight | N | Y | Y |
| Fork Length | Age | N | Y | Y |
| Condition (K) | - | Y | Y | Y |
| Liver Somatic Index (LSI) | - | N | Y | Y |
| Gonadosomatic Index (GSI) | - | N | Y | Y |
| Body weight adjusted for Fork Length | - | Y (190) | Y (204) | Y (166) |
| Liver Weight adjusted for Body Weight | - | N | Y (116) | Y (67) |
| Gonad Weight adjusted for Body Weight | - | N | Y (116) | Y (67) |
| Fork Length adjusted for Age | - | N | Y (5) | Y (5) |
Length is in mm, weight in g, and age in years. Values in brackets are what the response variable was standardized to. N—no; Y—yes
If the GSI was <1%, fish were considered immature and excluded from analysis following the protocol of EEM (Government of Canada 2017). The size at which males and females became mature was determined for the years when gonad size was known (2018–2021), and for the years when gonad size was unknown (2015–2017), fish smaller than the size at maturity were excluded from analysis.
Annual variability in growth and body size is expected. Measurements were standardized to a common age or size to allow for comparisons between years and locations with a common regression slope adjustment (residual added to calculated dependent given chosen standard independent value; Reist 1985; Somers and Jackson 1993). Values used for standardization are presented in Table 1.
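The common-slope adjustment can be sketched as follows: fit a pooled log-log regression, then add each fish's residual to the value predicted at the chosen standard length. This is a simplified, single-slope illustration of the Reist (1985)/Somers and Jackson (1993) approach; the function name and example values are ours:

```python
import numpy as np

def standardize_to_length(weights_g, lengths_mm, std_length_mm):
    """Adjust body weights to a common (standard) fork length using a
    pooled log-log regression; each fish keeps its own residual."""
    logw, logl = np.log10(weights_g), np.log10(lengths_mm)
    slope, intercept = np.polyfit(logl, logw, 1)           # common slope
    predicted = intercept + slope * logl                   # fitted values
    residuals = logw - predicted                           # individual deviation
    adjusted_log = intercept + slope * np.log10(std_length_mm) + residuals
    return 10 ** adjusted_log

# hypothetical fish of different sizes, adjusted to a 190 mm standard length
w = np.array([80.0, 150.0, 260.0])
l = np.array([160.0, 200.0, 245.0])
w_std = standardize_to_length(w, l, 190.0)
```

A fish that is heavy for its length keeps a positive residual and therefore a higher adjusted weight, so between-year comparisons at the standard length preserve relative condition.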
Data were examined for outliers (i.e., visualization with graphs, mean ± 3 SD, elevated studentized residuals and/or Cook’s distance) and outliers were removed (0.4% to 7.5% of observations, depending on endpoint). All statistical procedures, unless otherwise stated, were carried out in R using RStudio (Posit team 2023; various packages). Means are presented with standard error (SE).
Developing a Monitoring Trigger
A monitoring trigger represents a threshold that indicates that the endpoint is changing outside of a normal range defined from baseline or historical data (range of natural/current variability). We typically define monitoring triggers as an estimate of the upper and lower values of the “normal range”, including natural or recent variability (Kilgour et al. 2017; Arciszewski et al. 2017a). Often a normal range (values enclosing 95% of population reference values) is calculated as the grand mean ± 2 SD or percentiles (Kilgour et al. 1998; Barrett et al. 2015). To demonstrate how to calculate the monitoring trigger with the fish population data in this paper, multiple methods were used to compare these different triggers.
The 2.5th and 97.5th percentiles and the mean ± 2 SD (of yearly means) were calculated for comparison. Normal ranges were calculated following the guidance provided by Barrett et al. (2015). Data were first tested for normality (Shapiro–Wilk test) and, if not normal, a Box–Cox transformation was applied. If the transformation did not achieve normality, a bootstrap with 1000 resamples was used and the resampled means were tested for normality. If data were normal (either initially, post-transformation, or post-resampling), Eq. 1 was used to calculate the normal range. For data that were still not normal, the 2.5th and 97.5th percentiles of the resampled data were calculated.
Normal range = x̄ ± t(1−α/2, n−1) · s · √(1 + 1/n)    (1)

where x̄ is the sample mean (of yearly means), t(1−α/2, n−1) is the (1−α/2) fractile of a t distribution with n−1 degrees of freedom, s is the sample standard deviation, and n is the sample size.
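The decision flow above can be sketched with SciPy (a simplified illustration; the branching follows Barrett et al. (2015) as described in the text, but the function names are ours):

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def normal_range(yearly_means, alpha=0.05):
    """Eq. 1: mean ± t * s * sqrt(1 + 1/n), i.e. a prediction
    interval for the next yearly mean."""
    x = np.asarray(yearly_means, dtype=float)
    n = len(x)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half = t * x.std(ddof=1) * np.sqrt(1 + 1 / n)
    return x.mean() - half, x.mean() + half

def trigger_range(yearly_means, alpha=0.05, n_boot=1000, seed=1):
    """Eq. 1 if data pass Shapiro-Wilk (raw or Box-Cox transformed);
    otherwise percentiles of bootstrapped means."""
    x = np.asarray(yearly_means, dtype=float)
    if stats.shapiro(x)[1] > alpha:                  # normal as-is
        return normal_range(x, alpha)
    xt, lam = stats.boxcox(x)                        # needs positive data
    if stats.shapiro(xt)[1] > alpha:                 # normal after transform
        lo, hi = normal_range(xt, alpha)
        return inv_boxcox(lo, lam), inv_boxcox(hi, lam)
    rng = np.random.default_rng(seed)
    boot = rng.choice(x, size=(n_boot, len(x)), replace=True).mean(axis=1)
    if stats.shapiro(boot)[1] > alpha:               # resampled means normal
        return normal_range(boot, alpha)
    return np.percentile(boot, 2.5), np.percentile(boot, 97.5)
```

For example, five yearly means of 1.0, 1.1, 0.9, 1.05, and 0.95 give a normal range of roughly 0.76 to 1.24, noticeably wider than the mean ± 2 SD because of the small-sample t fractile and the √(1 + 1/n) term.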
In the Canadian EEM Programme (Government of Canada 2017), a meaningful (biologically significant) difference between an exposure area and reference area is termed a critical effect size (CES). For most fish endpoints, a 25% difference of the means is used except for condition where a 10% difference is used, based on the CESs (Munkittrick et al. 2002, 2009). The CESs of yearly means were also compared to the other approaches for developing monitoring triggers in this paper (Table 2 summarizes the monitoring triggers considered in this paper).
Table 2.
List of monitoring triggers in this research
| Monitoring Triggers | Description | Potentially Calculated on Resampled Data |
|---|---|---|
| Grand Mean ± 2 SD | The mean of three or more years ± 2 SD of the means for those years | No |
| 2.5th and 97.5th percentiles | The 2.5th and 97.5th percentiles of data in years examined | Yes |
| Normal Range | Equation 1a | Yes |
| Critical Effect Size | 10% (condition only) or 25% of mean of one or more years | No |
SD: standard deviation.
ᵃEquation 1 from Barrett et al. (2015) as shown in this paper.
The monitoring trigger is the upper and lower bound of the range we expect the next sampling event to fall within. With multiple years of baseline data, a monitoring trigger is calculated to compare with the post-development monitoring data, providing a clear before-after comparison. Where there is not enough baseline data, or the before-after contrast is less clear, interim monitoring triggers can be calculated from the grand mean and variability of three years of data (with less data, the options are limited to calculating 95% confidence limits of individuals); three years is the minimum. If the result from the 4th year is within the expected range, it is considered typical (“normal”) and can be included in the estimation for the 5th year. If it is outside the range, this is unexpected and considered an exceedance. The interim monitoring trigger is then held fixed to allow confirmation of the exceedance. If the 5th year is within the range, the 4th year is no longer considered an exceedance (there was likely insufficient understanding of the variability of the system) and is included, along with the 5th year, in the monitoring trigger calculation for the 6th year. But if the 5th year is also outside the range, this confirms that something is different (Arciszewski and Munkittrick 2015; Arciszewski et al. 2017a). Which tier the programme moves to next depends on the design of the adaptive monitoring programme. In situations where endpoints (Table 1) continually fluctuate in and out of “normal”, effort should be expended to understand the drivers of the variability, shifting prediction toward regression or control chart (a statistical process to monitor change over time) approaches (Arciszewski et al. 2018).
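The year-by-year decision rule described above can be expressed as a short routine (a sketch of the include/exceed/reinstate logic only, assuming a grand mean ± 2 SD trigger; the yearly values are illustrative, not the study's data):

```python
from statistics import mean, stdev

def run_interim_triggers(yearly_means):
    """Apply the interim monitoring-trigger rules: start with 3 baseline
    years; an out-of-range year is flagged and held out (trigger frozen);
    it is reinstated if the next year falls back inside the range."""
    baseline = list(yearly_means[:3])      # minimum of 3 baseline years
    pending = None                         # unconfirmed exceedance, if any
    flags = []
    for value in yearly_means[3:]:
        mu, sd = mean(baseline), stdev(baseline)
        lo, hi = mu - 2 * sd, mu + 2 * sd  # trigger (frozen while pending)
        if lo <= value <= hi:
            if pending is not None:        # previous year is reinstated
                baseline.append(pending)
                pending = None
            baseline.append(value)
            flags.append("within")
        elif pending is None:
            pending = value                # unexpected: hold trigger fixed
            flags.append("exceedance")
        else:
            flags.append("confirmed")      # two years out: escalate tier
    return flags, baseline

# year 4 exceeds the trigger, year 5 falls back inside, so year 4 is reinstated
flags, baseline = run_interim_triggers([1.00, 1.02, 0.98, 1.30, 1.01])
```

Because the baseline set is only extended when a year is accepted, the trigger is automatically "frozen" while an exceedance awaits confirmation, matching the rule in the text.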
Developing a Forecast Trigger
A forecast trigger is a prediction (forecast) of what the endpoint will look like given the predicted changes the development will have on the environment. Fish population characteristics show substantial year-to-year variability and in long-term studies between year comparisons at reference sites can be statistically significantly different almost 50% of the time in the absence of changes in anthropogenic stressors (Ussery et al. 2021). Effectively modelling year-to-year variability depends on identifying the major drivers of variability pre-development. For fish populations in river systems, temperature and flow can often explain a large component of the inter-annual variability (Kilgour et al. 2019).
Although water temperature and discharge can have a large influence on fish performance both directly and indirectly (e.g., Clarke et al. 2008; Kilgour et al. 2019; Arciszewski et al. 2022), there are a number of ways these aspects can be represented. Fish are ectotherms, and seasonal variation in water temperature influences growth: cold temperatures (i.e., winter) reduce growth (lower metabolic rates, digestion, and swimming capacity; Marsden et al. 2021), and even within the optimal range, fish at slightly lower temperatures will have lower somatic growth than those at higher temperatures (Dhillon and Fox 2004). Sustained cooler winter temperatures are also important for the reproductive success of many temperate fish (Farmer et al. 2015). Higher flow may result in higher energy expenditure for swimming and thus lower condition (Torralva et al. 1997; Benejam et al. 2016), and flow fluctuations often lead to nest abandonment or shifts in the timing of spawning (Clarke et al. 2008; Young et al. 2011). Therefore, multiple environmental variables were calculated (Supplemental Table 5) based on previous studies of fish populations in impacted rivers (Brasfield et al. 2015; Kilgour et al. 2019) and hydroecological variables (Monk et al. 2011), which resulted in the evaluation of 72 potential variables. Many of these candidate variables may be redundant or irrelevant to a particular endpoint, so selection is needed. To address the issues with variable selection in ecological model development (Burnham and Anderson 2002; Heinze et al. 2018), three approaches were chosen: a general linear model using two environmental variables that previously explained a good portion of variation in multiple fish responses (Kilgour et al. 2019), partial least squares regression (O’Sullivan et al. 2020; Schull et al. 2023), and elastic net regression (McMillan et al. 2022; Arciszewski et al. 2022).
The forecast triggers for each model were based on the model prediction ±2*SDR (standard deviation of the residuals) because it could be calculated for all models. For 2022 predictions, the average covariate (Table 1) values were used (if in model) and ten fall sampling dates were used to calculate the environmental variables.
Models were compared using the Akaike information criterion corrected for small sample size (AICc) and the Bayesian information criterion (BIC; Burnham and Anderson 2004); the lowest value indicated the better model.
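For Gaussian models fitted by least squares, both criteria can be computed from the residual sum of squares. This generic sketch (our function, not the package output, which may differ by an additive constant) shows the comparison:

```python
import math

def aicc_bic(rss: float, n: int, k: int):
    """AICc and BIC from the residual sum of squares of a least-squares
    fit with n observations and k estimated parameters (incl. variance)."""
    # Gaussian log-likelihood at the MLE of the error variance (rss / n)
    log_lik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    aic = 2 * k - 2 * log_lik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = k * math.log(n) - 2 * log_lik
    return aicc, bic

# two candidate models on the same data (hypothetical RSS values):
a1, b1 = aicc_bic(rss=10.0, n=50, k=3)
a2, b2 = aicc_bic(rss=9.8, n=50, k=6)   # slightly better fit, more params
```

Here the small improvement in fit does not offset the extra parameters, so the simpler model has the lower AICc and BIC and would be preferred.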
The General Linear Model
Previously, a general linear model (LM) that included average summer air temperature (Tair), average river discharge over 60 days prior to fish collection (Q60dp), and year explained a good portion of variation for fish health endpoints on the Athabasca River (Kilgour et al. 2019). In this study LMs were run with the response variable and covariate (when applicable; Table 1) as shown in Eq. 2.
Y = β0 + β1(Tair) + β2(Q60dp) + β3(Year) [+ β4(covariate)]    (2)

where Y is the response variable, Tair is the average summer air temperature in the year the fish was collected, Q60dp is the average river discharge over the 60 days prior to fish collection, Year is the year the fish was collected, and β are the coefficients. Covariates (Table 1) were only included if the response variable had one.
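Eq. 2 and the forecast band of ±2·SDR can be sketched with an ordinary least-squares fit on synthetic data (all variable names and values here are assumptions for illustration, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
t_air = rng.normal(18, 2, n)              # average summer air temperature
q60 = rng.normal(900, 150, n)             # mean discharge, 60 d pre-sampling
year = rng.integers(2015, 2021, n).astype(float)
y = 2.0 + 0.05 * t_air - 0.001 * q60 + 0.01 * (year - 2015) \
    + rng.normal(0, 0.1, n)               # synthetic fish endpoint

# design matrix for Eq. 2: Y = b0 + b1*Tair + b2*Q60dp + b3*Year
X = np.column_stack([np.ones(n), t_air, q60, year])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sdr = resid.std(ddof=X.shape[1])          # standard deviation of residuals

# forecast trigger for a hypothetical new year: prediction ± 2 * SDR
x_new = np.array([1.0, 18.0, 900.0, 2022.0])
pred = x_new @ beta
lower, upper = pred - 2 * sdr, pred + 2 * sdr
```

An observed mean falling outside (lower, upper) would exceed the forecast trigger; the width of the band is driven entirely by the unexplained residual variation.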
Using all Yellow Perch for body weight and length-related endpoints, models were run first with only three years of data (2015–2017), then four (2015–2018), five (2015–2019), six (2015–2020), and seven (2015–2021) to test the change in model fit and the ability to predict the 2021 fish endpoints, and with only one, two, or three predictor variables to see how much each improved explained variance (Supplemental Table 6), similar to stepwise approaches used elsewhere for model selection (Muhling et al. 2020; Korman et al. 2021). For endpoints based on males or females alone, the first three years of available data (only three to four years in total) were used to estimate the 2021 endpoints.
Partial Least Squares Regression
Partial least squares regression (PLS) was selected as it is good at avoiding overfitting models with highly correlated predictor parameters (Carrascal et al. 2009). It is a covariance-based regression method that finds a linear model by projecting the predictor and response variables into a new space and finding the components that explain the most variance (O’Sullivan et al. 2020). XLStat in Microsoft Excel was used to conduct PLS. All 72 environmental variables, year, and the covariate (if applicable; Table 1) were included in an initial run. Environmental variables with a Variable Importance in the Projection (VIP) > 1 were selected for the final PLS run as they are highly influential (Eriksson et al. 1995). Q² is used as a model selection criterion, and a cut-off of (1 − 0.95²) was applied as it corresponds to p < 0.05 (O’Sullivan et al. 2020). For response variables where Q² did not reach that cut-off, 5 components were selected to have a model to compare to. When all fish were analysed, 2015–2020 data were used; for endpoints based on males or females alone, the first three years (only three to four years in total available) were used to assess how well the model estimated the 2021 fish endpoints.
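The study used XLStat, so its exact computation is not reproduced here, but a single-response PLS with VIP scores can be sketched in plain NumPy (a NIPALS-style deflation; this is our illustration of the technique, not XLStat's algorithm):

```python
import numpy as np

def pls1_vip(X, y, n_components=2):
    """Fit a PLS1 model by NIPALS deflation and return the Variable
    Importance in the Projection (VIP) for each predictor."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    n, p = X.shape
    Xk, yk = X.copy(), y.copy()
    weights, ss_explained = [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)                  # unit weight vector
        t = Xk @ w                              # component scores
        q = (yk @ t) / (t @ t)                  # y-loading
        pload = Xk.T @ t / (t @ t)              # x-loadings
        Xk -= np.outer(t, pload)                # deflate X
        yk = yk - q * t                         # deflate y
        weights.append(w)
        ss_explained.append(q ** 2 * (t @ t))   # y-variance explained
    W = np.array(weights)                       # (components, p)
    s = np.array(ss_explained)
    return np.sqrt(p * (s @ W ** 2) / s.sum())

# synthetic predictors: only the first two actually drive the response
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 3 * X[:, 0] + 1 * X[:, 1] + rng.normal(0, 0.5, 100)
vip = pls1_vip(X, y)
influential = np.where(vip > 1)[0]              # the VIP > 1 rule
```

A useful check on any VIP implementation is that the squared scores average to 1 (Σ VIPⱼ² = p), which is why VIP > 1 marks "above-average" influence.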
Elastic Net Regression
Elastic net regression (EN) is a regularized regression that combines lasso (feature selection) and ridge (shrinkage) penalties to deal with multicollinearity and overfitting (Zou and Hastie 2005; McMillan et al. 2022). The glmnet package in R was used to conduct EN with a Gaussian distribution and α (penalty ratio) of 0.5. Lambda was determined through leave-one-out cross-validation on mean squared error, taking the largest value of lambda such that error was within 1 standard error of the minimum (lambda.1se) with the “cv.glmnet()” function. Coefficients that were nonzero (≠ 0) were included in the model. When all fish were analysed, 2015–2020 data were used; for endpoints based on males or females alone, the first three years (only three to four years in total available) were used to assess how well the model estimates the 2021 fish endpoints.
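The fitting itself was done in glmnet; the elastic-net penalty can nevertheless be illustrated with a small coordinate-descent solver (a didactic sketch using glmnet-style α and λ parameters on synthetic data; not a substitute for cv.glmnet):

```python
import numpy as np

def elastic_net(X, y, lam, alpha=0.5, n_iter=500):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1
    + 0.5*(1-alpha)*||b||_2^2); assumes standardized X and centred y."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n + lam * (1 - alpha)
            # soft-thresholding: small correlations are set exactly to 0
            beta[j] = np.sign(rho) * max(abs(rho) - lam * alpha, 0.0) / z
    return beta

# synthetic data: two informative predictors out of five
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X = (X - X.mean(0)) / X.std(0)
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 200)
y = y - y.mean()

beta_small = elastic_net(X, y, lam=0.01)   # weak penalty: near-OLS
beta_big = elastic_net(X, y, lam=10.0)     # strong penalty: all zero
```

The soft-thresholding step is what makes EN drop variables entirely, which is why only the nonzero coefficients enter the final model, as described above.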
Results
Monitoring Trigger
A typical monitoring trigger is the mean of the endpoint ± 2 SD, as demonstrated in Fig. 2. The mean ± 2 SD of the first 3 years is calculated as the 2018 trigger. The 2018 measured value of condition is outside the mean ± 2 SD range, so the 2019 trigger calculation does not include the 2018 data, since the 2018 value is an unexpected exceedance. But as the 2019 measured value of condition is within the monitoring trigger range, the 2020 trigger calculation includes the 2018 data, as there had been insufficient data to characterize the variability of condition (Arciszewski and Munkittrick 2015; Arciszewski et al. 2017a).
Fig. 2.

Mean annual condition (±SE) for all Yellow Perch from 2015 to 2021. The grey boxes represent the calculated mean ± 2 SD (monitoring trigger) based on previous data. When the observation falls outside of the anticipated range in a small dataset, the monitoring trigger is not adjusted. The empty grey box represents the anticipated condition for Yellow Perch in a future sampling period
The triggers for the mean ± 2 SD and the normal ranges based on Eq. 1 were very similar and usually provided the smallest estimate of normal range (Fig. 3A/B/D). Where a resampling method was needed, the ranges based on those percentiles were sometimes narrower and sometimes wider than the previously mentioned methods (Fig. 3C). The CES ranges were the next largest, and the regular 2.5th and 97.5th percentiles usually had the largest range, although occasionally the CES was similar to the percentiles (Fig. 3D). In Fig. 3C the data were not normal and the resampled data were also not normal, so the 2.5th and 97.5th percentiles of the resampled data were used as the monitoring trigger. In the other panels of Fig. 3 the data were normal and a normal range using Eq. 1 was used. In Fig. 3B–D there are years when a measured average is outside the range of the monitoring trigger; in the subsequent year the same trigger value is used when this occurs (as explained above).
Fig. 3.
Annual means (±SE) and monitoring triggers of (A) body weight for female Yellow Perch, (B) LSI for female Yellow Perch, (C) condition for all Yellow Perch, and D) fork length for male Yellow Perch. The monitoring triggers (lines) were calculated based on previous data. When the observation falls outside of the anticipated range in a small dataset the monitoring trigger is not adjusted. The lines for 2022 represent the anticipated endpoints for Yellow Perch in a future sampling period
Forecast Trigger
Models
Results for the general linear models fit with varying years of data and numbers of variables are provided in the Supplementary Material.
The final model selections for LM, PLS, and EN had similar R2, adjR2, MSE, and SDR for the response variables that included all Yellow Perch (pooled sexes) for 2015–2020 (Table 3). PLS selected more environmental variables than EN (an average of 39 versus 7 of 72), and coefficients for most environmental variables in all model types were small (Supplementary Tables 9–11), indicating negligible relationships. Only female GSI had an environmental variable (Q60dp) that was the same in all three models. Some of the EN models had larger coefficients for median water temperature (e.g., all Yellow Perch body weight), average water and/or air temperature (e.g., gonad-related endpoints for both sexes), or the number of increases in river discharge (e.g., gonad-related endpoints for both sexes). For PLS, maximum air temperature, minimum water temperature, and the number of times river discharge was below the 25th percentile had larger coefficients for several of the response variables.
Table 3.
Results of general linear model, partial least squares, and elastic net regression with All, Female, and Male Yellow Perch
| Endpoint | Model | R2 (All) | adjR2 (All) | MSE (All) | SDR (All) | R2 (Female) | adjR2 (Female) | MSE (Female) | SDR (Female) | R2 (Male) | adjR2 (Male) | MSE (Male) | SDR (Male) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Body Weight | LM | 0.956 | 0.956 | 0.00178 | 0.0422 | 0.957 | 0.956 | 0.00172 | 0.0416 | 0.962 | 0.960 | 0.00116 | 0.0343 |
| | PLS | 0.952 | 0.951 | 0.00193 | 0.0441 | 0.955 | 0.934 | 0.00177 | 0.0427 | 0.965 | 0.918 | 0.00106 | 0.0340 |
| | EN | 0.956 | 0.956 | 0.00181 | 0.0421 | 0.953 | 0.950 | 0.00202 | 0.0432 | 0.954 | 0.948 | 0.00158 | 0.0376 |
| Condition | LM | 0.0884 | 0.0860 | 0.0160 | 0.126 | 0.197 | 0.185 | 0.01752 | 0.1328 | 0.202 | 0.178 | 0.0121 | 0.111 |
| | PLS | 0.102 | 0.0572 | 0.0157 | 0.126 | 0.197 | −0.111 | 0.01751 | 0.1333 | 0.113 | −0.259 | 0.0125 | 0.113 |
| | EN | 0.101 | 0.0900 | 0.0160 | 0.126 | 0.138 | 0.0912 | 0.01956 | 0.1376 | 0.131 | 0.0631 | 0.0140 | 0.116 |
| Body Weight adjusted for Fork Length | LM | 0.100 | 0.0976 | 0.00177 | 0.0420 | 0.209 | 0.197 | 0.00168 | 0.0412 | 0.278 | 0.256 | 0.00114 | 0.0341 |
| | PLS | 0.128 | 0.0818 | 0.00171 | 0.0414 | 0.218 | −0.0452 | 0.00188 | 0.0437 | 0.242 | −0.341 | 0.00120 | 0.0351 |
| | EN | 0.116 | 0.109 | 0.00175 | 0.0417 | 0.115 | 0.0824 | 0.00219 | 0.0463 | 0.163 | 0.068 | 0.00141 | 0.0367 |
| Fork Length | LM | - | - | - | - | 0.371 | 0.356 | 0.00342 | 0.0587 | 0.288 | 0.255 | 0.00256 | 0.0510 |
| | PLS | - | - | - | - | 0.491 | 0.146 | 0.00276 | 0.0537 | 0.391 | −0.557 | 0.00219 | 0.0490 |
| | EN | - | - | - | - | 0.393 | 0.374 | 0.00348 | 0.0576 | 0.197 | 0.173 | 0.00304 | 0.0541 |
| Fork Length adjusted for Age | LM | - | - | - | - | 0.028 | 0.0138 | 0.00343 | 0.0588 | 0.003 | −0.0270 | 0.00249 | 0.0503 |
| | PLS | - | - | - | - | 0.242 | −0.473 | 0.00268 | 0.0529 | 0.202 | −2.44 | 0.00199 | 0.0467 |
| | EN | - | - | - | - | 0.0733 | 0.0524 | 0.00338 | 0.0574 | 0 | −0.0299 | 0.00259 | 0.0504 |
| Liver Weight | LM | - | - | - | - | 0.797 | 0.791 | 0.0107 | 0.104 | 0.377 | 0.336 | 0.0216 | 0.148 |
| | PLS | - | - | - | - | 0.807 | 0.695 | 0.0102 | 0.103 | 0.386 | 0.0794 | 0.0213 | 0.149 |
| | EN | - | - | - | - | 0.773 | 0.768 | 0.0125 | 0.110 | 0 | −0.0313 | 0.0359 | 0.188 |
| Liver Somatic Index | LM | - | - | - | - | 0.0560 | 0.0342 | 0.0557 | 0.237 | 0.0510 | 0.00512 | 0.0488 | 0.223 |
| | PLS | - | - | - | - | 0.119 | −0.351 | 0.0561 | 0.242 | 0.193 | −1.76 | 0.0415 | 0.214 |
| | EN | - | - | - | - | 0 | −0.0150 | 0.0649 | 0.253 | 0 | −0.0317 | 0.0538 | 0.228 |
| Liver Weight adjusted for Body Weight | LM | - | - | - | - | 0.0563 | 0.0346 | 0.0103 | 0.102 | 0.0385 | −0.00592 | 0.0107 | 0.104 |
| | PLS | - | - | - | - | 0.115 | −0.299 | 0.00979 | 0.101 | 0.159 | −1.12 | 0.00933 | 0.101 |
| | EN | - | - | - | - | 0 | −0.0150 | 0.0113 | 0.106 | 0 | −0.0303 | 0.0116 | 0.106 |
Models for All Yellow Perch include data from 2015 to 2020. Models for female and male Yellow Perch include data from 2018–2020, except gonad-related ones, which use 2018, 2020, and 2021
See Table 1 for response variables and covariates
adjR2 adjusted R2, EN elastic net regression, LM general linear model, MSE mean squared error, PLS partial least squares regression, SDR standard deviation of the residuals
For body weight (BW), K, body weight adjusted for fork length (BWadj), and fork length (FL) for female and male Yellow Perch (Table 3), results were also similar among LM, PLS, and EN. But for other response variables, models using PLS and EN (when a model could be fit) had higher R2 and lower MSE than models using LM. This was most noticeable for male gonad weight (GW). Also of interest, GSI and gonad weight adjusted for body weight (GWadj) had R2 values much higher than their liver or body size counterparts.
PLS often had lower adjR2, largely because of the higher number of environmental variables in its models. It was not possible to develop a relationship for female LSI or liver weight adjusted for body weight (LWadj), or for male FLadj, liver weight (LW), LSI, or LWadj using EN, possibly because only three years of data were available for these models.
Testing the Predictability of 2021 Values
As a reminder, models for all mature Yellow Perch use six years of data, whereas models based on males or females alone use three years of data. Many models estimated 2021 response variable values close to the actual value (Figs. 4 and 5A), but in some cases models estimated 2021 values poorly (Fig. 5B). PLS provided a slightly better prediction for male Yellow Perch gonad weight (Fig. 5A), whereas the LM provided the best prediction for female Yellow Perch liver weight (Fig. 5B). Predictions for gonad weight were likely better than for some other response variables because those models included 2021 data. Confidence intervals were wider than the SDR for all LMs (Fig. 4), although for some LM models (such as when year was included, not shown) they were unrealistically large.
Fig. 4.

The prediction of Yellow Perch body weight (blue solid line) based on a general linear model (fork length, average discharge over 60 days prior to fish collection, and average summer air temperature) using All Yellow Perch from 2015–2020. Two forecast triggers (dashed lines) were calculated from the model: ±2SDR and 95% confidence intervals. The average of the actual 2021 body weights (±SE) is shown as the red dot
Fig. 5.
Estimated 2021 A male Yellow Perch gonad weight using data from 2018, 2020, and 2021 and B female Yellow Perch liver weight using data from 2018–2020 with the general linear model, partial least squares, and elastic net regression. The forecast triggers (dashed lines) for each model (by respective colour) are ±2SDR. The actual 2021 values are shown as red dots
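The test of predictability above can be sketched as a hold-out check: fit on the baseline years, predict the held-out year, and flag the observed mean against prediction ± 2·SDR. The sketch below uses an ordinary least squares fit as a stand-in for the study's models; the function and data are illustrative assumptions, not the authors' code.

```python
# Hold-out check of a forecast trigger: does the observed mean for a
# new year fall within the model prediction ± 2 * SDR?
import numpy as np

def forecast_trigger_check(X_train, y_train, x_new, y_new_mean):
    """OLS fit; returns (prediction, lower, upper, within)."""
    # design matrix with an intercept column
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    resid = y_train - A @ coef
    sdr = float(np.std(resid, ddof=A.shape[1]))  # residual SD
    pred = float(np.concatenate(([1.0], np.atleast_1d(x_new))) @ coef)
    lower, upper = pred - 2 * sdr, pred + 2 * sdr
    return pred, lower, upper, lower <= y_new_mean <= upper
```

A ±2·SDR band built this way is narrower than a proper prediction interval because it ignores parameter uncertainty, which is consistent with the observation above that confidence intervals were wider than the SDR.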
Developing the Trigger
The SDR for the three models using 2015–2020 data is shown in Fig. 6A/B for condition and body weight for all Yellow Perch. The R2 value was lower and the SDR higher for condition than for body weight (Supplementary Table 9). The forecast trigger range was wide for condition and similar to the 2.5th and 97.5th percentile monitoring triggers. For body weight, the forecast trigger range was more similar to the CES monitoring trigger. All actual values fell between the upper and lower triggers, although for body weight the actual values were very close to the upper trigger for the PLS model.
Fig. 6.
Annual means (±SE) and estimated values of A condition for all, B body weight of all, C gonad weight for male and D liver weight for female Yellow Perch. The monitoring triggers (solid and black dashed lines) were calculated based on previous data. When the observation falls outside of the anticipated range in a small dataset the monitoring trigger is not adjusted. The forecast triggers (dashed coloured lines) for each model (by respective colour) are ± 2SDR. Models used two to six years of data (depending on what was available). The lines for 2022 represent the anticipated endpoints for Yellow Perch in a future sampling period
For male gonad weight (Fig. 6C) the actual values in 2020 and 2021 were within the ±2 SD range for all models, but estimates for 2022 were extremely low for EN and PLS and extremely high for LM (not shown). For liver weight, the values included in the models were within the ±2 SD range of the monitoring trigger, but the 2021 value was above the forecast trigger for PLS and EN and within it for LM (Fig. 6D).
Discussion
It is challenging to use simple statistical differences between sites as a tool to make management decisions because the power to detect change is strongly affected by sample size and variability. Munkittrick et al. (2009) reviewed approaches for defining critical effect sizes for environmental monitoring, and Kilgour et al. (2017) recommended using a “normal range” to define ecologically relevant effect sizes. The effects-based approach is the most effective assessment of real environmental change (Dubé 2003; Kilgour et al. 2007), and we demonstrated how to develop monitoring (range of natural/current variability) and forecast (predicted future range of variability) triggers for fish populations, an effects-based approach. The fish population data in this paper demonstrated that for Yellow Perch, more than four years of data are needed to have confidence in monitoring triggers (Fig. 3) and predictive models (Fig. 6): three of the four response variables presented had a fourth year of data outside the monitoring trigger range, and models developed with only three years of data could not be verified with a fourth year. In other studies, fish population variability stabilized with 8 to 12 years of data (Arciszewski and Munkittrick 2015; Arciszewski et al. 2017b). Site-specific triggers are most useful, but we recognize that baseline studies will seldom have sufficient pre-development data to satisfy these requirements. When possible, space can be substituted for time to create regional triggers until sufficient site-specific data are available.
Several considerations apply when deciding what to use as a monitoring trigger. If the monitoring trigger from Barrett et al. (2015) is used, the normal ranges will be wider with smaller sample sizes, and if resampling (bootstrap) is needed, the resulting trigger is less sensitive to each added year of data than the other methods (Fig. 3C). Using percentiles as monitoring triggers may be appropriate when there are only a few years of data, but given their wide range, ± 2 SD or CES can be used with confidence once more years of data are available.
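The recommendation above can be condensed into a simple decision sketch. The year thresholds here are illustrative assumptions for demonstration, not values prescribed by the authors:

```python
# Minimal decision helper for selecting a monitoring-trigger method by
# data availability (thresholds are illustrative assumptions).
def choose_trigger_method(n_years, data_normal=True):
    """Suggest a trigger method given years of baseline data."""
    if n_years < 3:
        return "default critical effect size (CES)"
    if n_years < 5:
        return "percentiles of the observed data"
    # with more years of data, parametric ranges can be used with confidence
    return "mean ± 2 SD / normal range" if data_normal else "bootstrap percentiles"
```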
With seven years of data, the three models demonstrated here provided similar estimates. Given that PLS selected many environmental variables (on average 54% of variables) and that the three models were similar in results and predictability, PLS may not be the best method for this study. EN and PLS often had larger coefficients for variables correlated with Tair and Q60dp than for those two variables themselves, which were in the LM; this may mean that one of those correlated environmental variables would explain more of the variance. As there are issues with variable selection processes for ecological model development (Burnham and Anderson 2002; Heinze et al. 2018), regularization techniques such as EN are suggested as alternatives (McMillan et al. 2022). EN selected on average 7 environmental variables, with common ones for models with six years of data being average and median water temperature, number of days water temperature was above 10 °C, number of days average air temperature was above 10 °C, minimum river discharge, average precipitation, and the number of times precipitation was >15 mm before fish collection day. With more years of data or spring sampling, it is possible that different environmental variables would be more important for predicting those response variables, such as winter length for egg development (e.g., Farmer et al. 2015).
The potential MGS refurbishment may include consideration of changes to limit ecological impacts, such as facilitating fish movement by installing a fish ladder and providing the ecological flows needed to support downstream river reaches. The monitoring triggers presented here will indicate when there is a difference from the current pre-refurbishment conditions. The modelling in this case study predicted conditions of a known year, but if environmental factors are predicted from design changes to the dam as part of the EIA, those values can be input into the model to predict the Yellow Perch response to those conditions after the dam is modified.
In an ideal EIA, baseline data would be collected for multiple years on multiple physicochemical and biotic components of the environment (Dubé 2003; Kilgour et al. 2007; Arnold et al. 2019). During the pre-construction phase, year-to-year variability of these components necessitates the collection of more than a few years of data (Fig. 3) to adequately document the natural/recent variability in the system (Dubé 2003; Kilgour et al. 2007). In addition to understanding the “normal range”, an understanding of the factors that drive this natural/recent variability can assist in modelling potential ecological impacts of development (Kilgour et al. 2019; Arciszewski et al. 2022). These models then forecast the potential ecological implications of the development as it changes the environment, such as changes to river discharge or temperature (Duinker and Greig 2007). Post-construction monitoring would continue, with results compared to baseline (monitoring trigger) and model predictions (forecast trigger). If triggers are exceeded, it does not necessarily mean the assessment needs to change to another tier in the adaptive monitoring plan, as there are multiple questions that should be addressed to ensure a firm understanding of whether impacts are likely occurring (Table 4).
Table 4.
Questions to ask if monitoring or forecast trigger exceeded (modified from Munkittrick and Arciszewski 2017; Somers et al. 2018)
| Monitoring Trigger Exceeded | Forecast Trigger Exceeded |
|---|---|
| Can we confirm the changes? Is the study design adequate? Are the reference sites adequate? Are there no potential confounding factors? | Is the change beyond expected? Does the model have enough data to make a reliable forecast? Do the conditions of the receiving environment match what was used to develop the model? |
If we answer yes to all the above questions, then a change in tier is warranted to better understand the cause and, therefore, potential mitigation to reduce impacts. But if the model is not considered reliable, either because of insufficient data or incorrect input data, then new data can be used to refine the model. An iterative feedback loop is needed so that model development improves along with our understanding of the system (Greig and Duinker 2011). A failure to make an adequate prediction post-construction does not invalidate the EIA: the model helped navigate discussions about the consequences of the development and the design changes that could reduce or prevent impacts (scenario building; Duinker and Greig 2007). Post-construction monitoring thus provides a feedback loop to improve both modelling and monitoring, so that future development in the area benefits from a model better able to make predictions (Dubé 2003; Greig and Duinker 2011).
When there is minimal baseline data, the dominant tool in the post-construction monitoring plan will be the monitoring triggers while efforts continue to improve forecast capability. When the model is fully developed, exceeding the forecast trigger is a signal that conditions may be changing (Somers et al. 2018). But if the forecast triggers lie within the monitoring triggers and results also remain within the monitoring triggers (despite an exceedance of a forecast trigger), the appropriate response might be to continue monitoring, or increase its frequency, instead of investigating cause/solution.
The case study here relates to a potential dam refurbishment. As infrastructure ages, EIAs for large refurbishment or upgrade projects will increase, as will projects to remove aging infrastructure. Collecting baseline data for assessments of modifications to existing facilities may be easier because data collection may already be occurring and sampling logistics, such as easy access to the site (i.e., roads to the facility), are established. Although the data in this paper were limited, with seven or fewer years at one location and inconsistent sample timing, refurbishment projects can obtain multiple years of data at multiple locations (i.e., reference and exposure areas) far more easily than new projects in remote areas. It is essential that existing monitoring programmes consider including endpoints relevant to post-development, post-refurbishment, or post-removal monitoring decisions.
Data on water quality, water quantity, and ecological health are widely collected, but the information is inconsistent between programmes (Dipper 1998) and between EIA phases (Brown et al. 2024). Management decisions regarding the acceptability of a proposed development involve a process with various assessment components, including aspects of ecological risk assessment, cumulative effects assessment, and EIA. Although the power of before-after control-impact (BACI) designs has been recognized for decades (Green 1979), interpretations of potential post-development impacts continue to be hampered by the inability to develop consistent monitoring programmes across the phases of development. In the case of hydroelectric dams, few assessments integrate before-and-after designs into pre-development and post-development monitoring requirements (Brown et al. 2024), which limits the ability to detect environmental change from the project and to inform management and mitigation decisions. In addition, those who decide whether there is an impact post-development are often separate from the EIA process, which may partly explain the inconsistency of monitored components between programmes and EIA phases.
It is important to consider the ecological relevance of monitoring components, because we often lack a good understanding of the habitat requirements of biota (Kilgour et al. 2007) and often do not know all the potential stressors of a project or how they will interact with other existing or potential future developments (Dubé and Munkittrick 2001). Choosing the right indicators requires balancing ecological relevance, time lag tolerance, the ability to indicate the cause of change, and whether a change is reversible once observed (Munkittrick et al. 2019). Multiple years of baseline data are needed to know when conditions are different; as such, considering post-development monitoring early in the EIA process can help with the assessment and monitoring process. Consistency across the phases of monitoring can be achieved by considering future monitoring needs early in the consideration of the development (Brown et al. 2022).
Decision-making is challenged by a lack of mechanisms for making decisions related to environmental indicators. Developing a process for incorporating and iteratively improving monitoring and forecast triggers (Somers et al. 2018) would greatly improve the capability to assess the impacts of future developments and to implement management to correct mistakes in design. Having triggers developed before post-development monitoring begins, so that comparisons can occur as soon as results are received, shortens the time to knowing whether there is an issue and whether management is needed. While monitoring and forecast triggers will necessarily be interim until sufficient data are available to establish normal ranges (Kilgour et al. 2017), the alignment of monitoring requirements between developments within a watershed will improve monitoring, modelling, and prediction over the long term and for consideration of future developments.
Conclusion
Interim triggers based on consistent sampling at the appropriate time are useful for detecting meaningful change, but the strength of triggers will improve with additional data. We recommend using percentiles as monitoring triggers when there are only a few years of data, and a normal range or CES with more years of data. We used the SDR as the forecast trigger, which had a narrower range than the confidence interval and could be calculated for all the models demonstrated in this paper. Developing monitoring and forecast triggers helps an adaptive monitoring programme detect actual impacts post-construction. Most EIAs do not have adequate data to incorporate effects-based endpoints into their monitoring plan; this paper demonstrates with a fish population that it is possible, but that multiple years of data are needed.
Acknowledgements
This field study was conducted on the unceded and unsurrendered Wolastoqey Territory, which encompasses the Wolastoq | Saint John River watershed. Please support Indigenous priorities as we work together for reconciliation. We would like to thank members of the Mactaquac Aquatic Ecosystem Study team and St. Mary’s First Nation for support with fieldwork and providing data. This research project is a part of the Mactaquac Aquatic Ecosystem Study (MAES) funded by New Brunswick Power and Natural Sciences and Engineering Research Council’s Collaborative Research and Development programme (CRDPJ: 462708-13).
Author Contributions
CJM Brown was the primary investigator and created the manuscript. KR Munkittrick provided feedback and direction in shaping the manuscript. TJ Arciszewski and DS Smith provided specific feedback on modelling. All authors reviewed the manuscript.
Data Availability
Some data used in this study are publicly available. Air temperature and precipitation were taken from “Fredericton Intl A 8101505” at [https://climate.weather.gc.ca/](https://climate.weather.gc.ca). Water temperature and fish data are available upon request from MAES at [https://www.canadianriversinstitute.com/maes](https://www.canadianriversinstitute.com/maes).
Compliance with Ethical Standards
Conflict of Interest
The authors declare no competing interests.
Footnotes
Gamma distribution with a log-link was attempted initially but convergence warnings consistently occurred with liver weight and gonad weight relative to body weight.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00267-025-02260-9.
References
- Arciszewski TJ, Hazewinkel RR, Munkittrick KR, Kilgour BW (2018) Developing and applying control charts to detect changes in water chemistry parameters measured in the Athabasca River near the oil sands: a tool for surveillance monitoring. Environ Toxicol Chem 37:2296–2311. 10.1002/etc.4168. [DOI] [PubMed] [Google Scholar]
- Arciszewski TJ, Munkittrick KR (2015) Development of an adaptive monitoring framework for long-term programs: an example using indicators of fish health. Integr Environ Assess Manag 11:701–718. 10.1002/ieam.1636. [DOI] [PubMed] [Google Scholar]
- Arciszewski TJ, Munkittrick KR, Kilgour BW et al. (2017a) Increased size and relative abundance of migratory fishes observed near the Athabasca oil sands. Facets 2:833–858. 10.1139/FACETS-2017-0028. [Google Scholar]
- Arciszewski TJ, Munkittrick KR, Scrimgeour GJ et al. (2017b) Using adaptive processes and adverse outcome pathways to develop meaningful, robust, and actionable environmental monitoring programs. Integr Environ Assess Manag 13:877–891. 10.1002/ieam.1938. [DOI] [PubMed] [Google Scholar]
- Arciszewski TJ, Ussery EJ, McMaster ME (2022) Incorporating industrial and climatic covariates into analyses of fish health indicators measured in a stream in Canada’s oil sands region. Environments 9:73. 10.3390/environments9060073. [Google Scholar]
- Arnold LM, Hanna K, Noble BF (2019) Freshwater cumulative effects and environmental assessment in the Mackenzie Valley, Northwest Territories: challenges and decision-maker needs. Impact Assess Proj Apprais 37:516–525. 10.1080/14615517.2019.1596596. [Google Scholar]
- Barrett TJ, Hille KA, Sharpe RL et al. (2015) Quantifying natural variability as a method to detect environmental change: Definitions of the normal range for a single observation and the mean of m observations. Environ Toxicol Chem 34:1185–1195. 10.1002/ETC.2915. [DOI] [PubMed] [Google Scholar]
- Benejam L, Saura-Mas S, Bardina M et al. (2016) Ecological impacts of small hydropower plants on headwater stream fish: From individual to community effects. Ecol Freshw Fish 25:295–306. 10.1111/eff.12210. [Google Scholar]
- Brasfield SM, Hewitt LM, Chow L et al. (2015) Assessing the contribution of multiple stressors affecting small-bodied fish populations through a gradient of agricultural inputs in northwestern New Brunswick, Canada. Water Qual Res J Can 50:182–197. 10.2166/wqrjc.2014.126. [Google Scholar]
- Brown CJM, Curry RA, Gray MA et al. (2022) Considering fish as recipients of ecosystem services provides a framework to formally link baseline, development, and post-operational monitoring programs and improve aquatic impact assessments for large scale developments. Environ Manag 70:350–367. 10.1007/S00267-022-01665-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brown CJM, Noble BF, Munkittrick KR (2024) Examination of recent hydroelectric dam projects in Canada for alignment of baseline studies, predictive modeling, and postdevelopment monitoring phases of aquatic environmental impact assessments. Integr Environ Assess Manag 20:616–644. 10.1002/IEAM.4823. [DOI] [PubMed] [Google Scholar]
- Burnham KP, Anderson DR (2004) Multimodel inference: Understanding AIC and BIC in model selection. Socio Methods Res 33:261–304. 10.1177/0049124104268644. [Google Scholar]
- Burnham KP, Anderson DR (eds) (2002) Model selection and multimodel inference: a practical information-theoretic approach, 2nd edn. Springer
- Carrascal LM, Galván I, Gordo O (2009) Partial least squares regression as an alternative to current regression methods used in ecology. Oikos 118:681–690. 10.1111/J.1600-0706.2008.16881.X. [Google Scholar]
- Clarke KD, Pratt TC, Randall RG, et al (2008) Validation of the flow management pathway: effects of altered flow on fish habitat and fisheries downstream from a Hydropower Dam. Canadian Technical Report of Fisheries and Aquatic Sciences 2784. Fisheries and Oceans Canada
- Connelly R (2011) Canadian and international EIA frameworks as they apply to cumulative effects. Environ Impact Assess Rev 31:453–456. 10.1016/j.eiar.2011.01.007. [Google Scholar]
- Curry RA, Yamazaki G, Linnansaari T et al. (2020) Large dam renewals and removals — Part 1: Building a science framework to support a decision-making process. River Res Appl 36:1460–1471. 10.1002/rra.3680. [Google Scholar]
- Dhillon RS, Fox MG(2004) Growth-independent effects of temperature on age and size at maturity in Japanese Medaka (Oryzias latipes) Copeia 2004:37–45. 10.1643/CI-02-098R1 [Google Scholar]
- Dipper B (1998) Monitoring and post-auditing in environmental impact assessment: a review. J Environ Plan Manag 41:731–747. 10.1080/09640569811399. [Google Scholar]
- Dolson-Edge, R, Bruce, M, Samways, KM et al., 2019a. Baseline Biological Conditions in the Saint John River. Mactaquac Aquatic Ecosystem Study Report Series 2019-066. Canadian Rivers Institute, University of New Brunswick, Fredericton, Canada. [Google Scholar]
- Dolson-Edge R, Tar C, Gautreau M, Curry RA (2019b) Fish Community of the Saint John River in 2018. Mactaquac Aquatic Ecosystem Study Report Series 2019-074. Canadian Rivers Institute, University of New Brunswick, Fredericton, Canada. [Google Scholar]
- Dubé MG (2003) Cumulative effect assessment in Canada: a regional framework for aquatic ecosystems. Environ Impact Assess Rev 23:723–745. 10.1016/S0195-9255(03)00113-6. [Google Scholar]
- Dubé MG, Munkittrick KR (2001) Integration of effects-based and stressor-based approaches into a holistic framework for cumulative effects assessment in aquatic ecosystems. Hum Ecol Risk Assess 7:247–258. 10.1080/20018091094367. [Google Scholar]
- Duinker PN, Greig LA (2007) Scenario analysis in environmental impact assessment: Improving explorations of the future. Environ Impact Assess Rev 27:206–219. 10.1016/j.eiar.2006.11.001. [Google Scholar]
- Eriksson L, Hermens JLM, Johansson E et al. (1995) Multivariate analysis of aquatic toxicity data with PLS. Aquat Sci 57:217–241. 10.1007/BF00877428/METRICS. [Google Scholar]
- Farmer TM, Marschall EA, Dabrowski K, Ludsin SA (2015) Short winters threaten temperate fish populations. Nat Commun 6:1–10. 10.1038/ncomms8724. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Government of Canada, 2019. Impact Assessment Act. Department of Justice, Canada. [Google Scholar]
- Government of Canada (2017) Environmental effects monitoring. In: Environ. Clim. Chang. Canada. https://www.canada.ca/en/environment-climate-change/services/managing-pollution/environmental-effects-monitoring.html. Accessed 22 Nov 2022
- Government of New Brunswick., 2019. Environmental Impact Assessment Regulation. Department of Envrionment and Local Government, Canada. [Google Scholar]
- Green RH (1989) Power analysis and practical strategies for environmental monitoring. Environ Res 50:195–205. 10.1016/S0013-9351(89)80058-1. [DOI] [PubMed] [Google Scholar]
- Green, RH, 1979. Sampling design and statistical methods for environmental biologists. John Wiley & Sons, Inc., USA. [Google Scholar]
- Greig LA, Duinker PN (2011) A proposal for further strengthening science in environmental impact assessment in Canada. Impact Assess Proj Apprais 29:159–165. 10.3152/146155111X12913679730557. [Google Scholar]
- Heinze G, Wallisch C, Dunkler D (2018) Variable selection—a review and recommendations for the practicing statistician. Biometrical J 60:431–449. 10.1002/BIMJ.201700067. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kilgour BW, Dubé MG, Hedley K et al. (2007) Aquatic environmental effects monitoring guidance for environmental assessment practitioners. Environ Monit Assess 130:423–436. 10.1007/s10661-006-9433-0. [DOI] [PubMed] [Google Scholar]
- Kilgour BW, Munkittrick KR, Hamilton L et al. (2019) Developing triggers for environmental effects monitoring programs for Trout-Perch in the lower Athabasca River (Canada). Environ Toxicol Chem 38:1890–1901. 10.1002/etc.4469. [DOI] [PubMed] [Google Scholar]
- Kilgour BW, Somers KM, Barrett TJ et al. (2017) Testing against “normal” with environmental data. Integr Environ Assess Manag 13:188–197. 10.1002/ieam.1775. [DOI] [PubMed] [Google Scholar]
- Kilgour BW, Somers KM, Matthews DE (1998) Using the normal range as a criterion for ecological significance in environmental monitoring and assessment. Écoscience 5:542–550. 10.1080/11956860.1998.11682485. [Google Scholar]
- Korman J, Yard MD, Dzul MC et al. (2021) Changes in prey, turbidity, and competition reduce somatic growth and cause the collapse of a fish population. Ecol Monogr 91: e01427. 10.1002/ECM.1427. [Google Scholar]
- Lindenmayer, DB, Likens, GE, 2010. Effective Ecological Montioring. CSIRO Publishing, Collingwood, Australia. [Google Scholar]
- Marsden JE, Blanchfield PJ, Brooks JL et al. (2021) Using untapped telemetry data to explore the winter biology of freshwater fish. Rev Fish Biol Fish 31:115–134. 10.1007/s11160-021-09634-2
- McMillan PG, Feng ZZ, Deeth LE, Arciszewski TJ (2022) Improving monitoring of fish health in the oil sands region using regularization techniques and water quality variables. Sci Total Environ 811:152301. 10.1016/j.scitotenv.2021.152301
- Monk WA, Peters DL, Curry RA, Baird DJ (2011) Quantifying trends in indicator hydroecological variables for regime-based groups of Canadian rivers. Hydrol Process 25:3086–3100. 10.1002/hyp.8137
- Morrison-Saunders A, Bailey J (2003) Practitioner perspectives on the role of science in environmental impact assessment. Environ Manag 31:683–695. 10.1007/s00267-003-2709-z
- Muhling BA, Brodie S, Smith JA et al. (2020) Predictability of species distributions deteriorates under novel environmental conditions in the California current system. Front Mar Sci 7:542465. 10.3389/fmars.2020.00589
- Munkittrick KR, Arciszewski TJ (2017) Using normal ranges for interpreting results of monitoring and tiering to guide future work: a case study of increasing polycyclic aromatic compounds in lake sediments from the Cold Lake oil sands (Alberta, Canada) described in Korosi et al. (2016). Environ Pollut 231:1215–1222. 10.1016/j.envpol.2017.07.070
- Munkittrick KR, Arciszewski TJ, Gray MA (2019) Principles and challenges for multi-stakeholder development of focused, tiered, and triggered, adaptive monitoring programs for aquatic environments. Diversity 11:155. 10.3390/d11090155
- Munkittrick KR, Arens CJ, Lowell RB, Kaminski GP (2009) A review of potential methods of determining critical effect size for designing environmental monitoring programs. Environ Toxicol Chem 28:1361–1371. 10.1897/08-376.1
- Munkittrick KR, McGeachy SA, McMaster ME, Courtenay SC (2002) Overview of freshwater fish studies from the pulp and paper environmental effects monitoring program. Water Qual Res J Can 37:49–77. 10.2166/wqrj.2002.005
- Noble BF (2021) Introduction to environmental impact assessment: a guide to principles and practice, 4th edn. Oxford University Press, Don Mills, Canada
- O’Sullivan AM, Wegscheider B, Helminen J et al. (2020) Catchment-scale, high-resolution, hydraulic models and habitat maps—a salmonid’s perspective. J Ecohydraulics 1–16. 10.1080/24705357.2020.1768600
- Piccolroaz S, Calamita E, Majone B et al. (2016) Prediction of river water temperature: a comparison between a new family of hybrid models and statistical approaches. Hydrol Process 30:3901–3917. 10.1002/hyp.10913
- Pope KL, Lochmann SE, Young MK (2010) Methods for assessing fish populations. In: Hubert WA, Quist MC (eds) Inland fisheries management in North America, 3rd edn. American Fisheries Society, Bethesda, Maryland, USA, pp 325–351
- Posit team (2023) RStudio: integrated development environment for R. Posit Software, PBC, Boston, MA, USA
- Schull Q, Beauvieux A, Viblanc VA et al. (2023) An integrative perspective on fish health: environmental and anthropogenic pathways affecting fish stress. Mar Pollut Bull 194:115318. 10.1016/j.marpolbul.2023.115318
- Somers KM, Kilgour BW, Munkittrick KR, Arciszewski TJ (2018) An adaptive environmental effects monitoring framework for assessing the influences of liquid effluents on benthos, water, and sediments in aquatic receiving environments. Integr Environ Assess Manag 14:552–566. 10.1002/ieam.4060
- Stevenson DK, Chapman SE (eds) (1992) Otolith microstructure examination and analysis. Canadian Special Publication of Fisheries and Aquatic Sciences 117. Ottawa, Canada
- Toffolon M, Piccolroaz S (2015) A hybrid model for river water temperature as a function of air temperature and discharge. Environ Res Lett 10:114011. 10.1088/1748-9326/10/11/114011
- Torralva MDM, Angeles Puig M, Fernandez-delgado C (1997) Effect of river regulation on the life-history patterns of Barbus sclateri in the Segura river basin (south-east Spain). J Fish Biol 51:300–311. 10.1111/j.1095-8649.1997.tb01667.x
- Ussery EJ, McMaster ME, Servos MR et al. (2021) A 30-year study of impacts, recovery, and development of critical effect sizes for endocrine disruption in White Sucker (Catostomus commersonii) exposed to bleached-kraft pulp mill effluent at Jackfish Bay, Ontario, Canada. Front Endocrinol (Lausanne) 12:664157. 10.3389/fendo.2021.664157
- Young PS, Cech JJ, Thompson LC (2011) Hydropower-related pulsed-flow impacts on stream fishes: a brief review, conceptual model, knowledge gaps, and research needs. Rev Fish Biol Fish 21:713–731. 10.1007/s11160-011-9211-0
- Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc Ser B Stat Methodol 67:301–320. 10.1111/j.1467-9868.2005.00503.x
Associated Data
Data Availability Statement
Some data used in this study are publicly available. Air temperature and precipitation data were obtained from the Environment and Climate Change Canada station “Fredericton Intl A 8101505” (https://climate.weather.gc.ca/). Water temperature and fish data are available on request from MAES (https://www.canadianriversinstitute.com/maes).