Biological Conservation. 2021 Jan 20;254:108974. doi: 10.1016/j.biocon.2021.108974

Table A3.

Fixed-effect predictor variables for each model in the set of seven models fitted separately to data from each of the four political units, used to identify the best-supported model(s) describing inter-annual variation in observer effort (separately, duration of count periods and distances travelled for non-stationary counts). In these models each of three moments of the distribution (mean, variance and skewness) could vary independently as a function of its own set of predictor variables. An “X” in a column indicates that the predictor variable was included in the model described by that row. The fixed-effect predictor “Intercept” indicates that the corresponding moment was constant across years. “Year” was a categorical predictor variable describing variation among years that is arbitrary and potentially non-systematic in pattern; when “Year” was included as a predictor of a moment, the year 2020 was treated as the intercept. Note that several models failed to converge, both for count durations and for travel distances, in spite of our efforts to adapt the model-fitting process^a. See the footnotes to Tables A8 and A9 for lists of the models that did not converge for each political unit.

Model number   Mean               Variance           Skewness
               Intercept   Year   Intercept   Year   Intercept   Year
1 X X X
2 X X X
3 X X X
4 X X X
5 X X X
6 X X X
7 X X X
^a We adapted the default model-fitting process in two ways. First, we increased the maximum number of iterations allowed for model fitting from the default of 20 to 100, and halved the “step.length” value used by the algorithm to adjust parameter values in each iteration; the convergence criterion was never altered. Second, at times model convergence failed in fewer than the 20 iterations allowed by default; in these cases we changed the fitting algorithm from the default (“method = RS”) to a mixture of the two available algorithms (“method = mixed”), varying the number of iterations run with the first algorithm before switching to the second.
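
For readers wishing to reproduce this procedure, the sketch below shows how these adjustments could be specified with the gamlss package in R, which we assume was the software used given the “method = RS” and “method = mixed” arguments quoted above. The response variable (duration), data frame (counts), distribution family (BCCG) and iteration counts are placeholders rather than values taken from the paper, and representing the halved “step.length” through the per-parameter step arguments of gamlss.control() is our interpretation of the footnote.

library(gamlss)

## Hypothetical data frame: one row per checklist, with the count duration and a
## categorical year term; 2020 is set as the reference level (the intercept).
## counts$year <- relevel(factor(counts$year), ref = "2020")

## Adapted control settings: raise the iteration limit from the default 20 to 100
## and halve the step length used when updating each distribution parameter
## (our reading of the "step.length" adjustment described in footnote a).
ctrl <- gamlss.control(n.cyc = 100,
                       mu.step = 0.5, sigma.step = 0.5, nu.step = 0.5)

## A model in the spirit of Table A3 in which all three moments
## (mu = mean, sigma = variance-related, nu = skewness-related) vary among
## years; BCCG is an example three-parameter family, not the one used in the paper.
fit_rs <- gamlss(duration ~ year,
                 sigma.formula = ~ year,
                 nu.formula    = ~ year,
                 family  = BCCG,
                 data    = counts,
                 method  = RS(),        # default fitting algorithm
                 control = ctrl)

## If the default algorithm fails to converge, refit with a mixture of the two
## available algorithms: here 10 RS iterations before switching to CG.
fit_mixed <- gamlss(duration ~ year,
                    sigma.formula = ~ year,
                    nu.formula    = ~ year,
                    family  = BCCG,
                    data    = counts,
                    method  = mixed(10, 100),
                    control = ctrl)

The other models in Table A3 would differ only in which of the three formulas include “year” rather than an intercept alone.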