Table 2.
Parameterization | The process of selecting the values or distributions of the model parameters based on empirical data, usually with a random component. Rigorous parameterization is fundamental, since the parameter values largely determine the behaviour and predictions of the model.
Sensitivity and uncertainty analysis | The study of the influence of the parameter values on the model outcomes. Sensitivity analysis can vary one parameter at a time (univariate) or several parameters simultaneously (multivariate). Comparing the model predictions obtained with the baseline parameter values against those obtained with modified values gives an idea of how sensitive the model is to a given parameter (a minimal univariate sketch is given after the table). Sensitivity analysis is useful because it enhances the communication of the model, tests the robustness of the results (allowing the evaluation of our confidence in the predictions), increases our understanding of the system and allows the detection of implementation errors. Uncertainty analysis evaluates the model response over the plausible range of the parameters; it indicates which parameters generate the most uncertainty in the model and can help to direct data-collection efforts.
Validation | The process of investigating whether model predictions are likely to be accurate. Two main types of validation can be distinguished: structural and predictive validation [29]. Structural validity requires that the model reproduces the observed system behaviour and is constructed in accordance with the way the real system operates, i.e. that it is consistent and based on theory. Predictive validation requires that the model accurately predicts data that were not used in its construction. It has also been argued that the credibility of a model may rest on the credentials of the model-building techniques, which sometimes involve contrary-to-fact principles that increase the reliability of the results [30].
Least squares | Standard data-fitting procedure that consists of minimizing the sum of the squared differences between the observed data points and the fitted values provided by the model (the criterion is written out after the table).
Maximum likelihood estimation | Method to estimate the parameters of a model from data. It chooses the parameter values for which the probability of generating the observed data, given the model, is highest (see the formula after the table).
Bayesian inference | Method of statistical inference that estimates the parameters of a model by combining prior beliefs with the observed evidence. As more evidence is gathered, the prior distribution is updated into the posterior distribution, which represents the uncertainty over the parameter values (Bayes' theorem is written out after the table).
Markov chain Monte Carlo (MCMC) | MCMC algorithms can be used to sample from the posterior distribution in Bayesian inference; they are useful because they make it possible to sample from multi-dimensional distributions (a minimal sampler is sketched after the table).
Particle filtering | A parameterization technique based on the simulation and sequential weighting of a sample of parameter values according to their consistency with the observed data. Particle filters are normally used to parameterize Bayesian models in which unobserved variables are inferred through their connection in a Markov chain (a bootstrap filter is sketched after the table).
Calibration | Here we define calibration as an iterative comparison between model predictions and observed data (e.g. attack rates, R0) without the use of standard statistical inference methods. After each comparison, the model is simulated with different parameter values and the new predictions are compared with the previous ones to check whether the agreement with the data has improved (a simple search loop is sketched after the table).
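
The following is a minimal sketch of the univariate (one-at-a-time) sensitivity analysis described in the table; the `run_model` function and the baseline parameter values are hypothetical placeholders for whatever simulation is being analysed.

```python
# Hypothetical one-at-a-time (univariate) sensitivity analysis.
# `run_model` stands in for any simulation that maps parameter values to an outcome.

def run_model(params):
    # Toy model: the outcome grows with beta and shrinks with gamma.
    return params["beta"] ** 2 / params["gamma"]

baseline = {"beta": 0.3, "gamma": 0.1}
baseline_output = run_model(baseline)

# Perturb each parameter by +/-20% while holding the others at their baseline
# values, and record the relative change in the model outcome.
for name, value in baseline.items():
    for factor in (0.8, 1.2):
        perturbed = dict(baseline, **{name: value * factor})
        relative_change = (run_model(perturbed) - baseline_output) / baseline_output
        print(f"{name} x {factor}: relative change in outcome = {relative_change:+.2f}")
```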
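
For reference, the least-squares criterion can be written as follows, where $y_i$ are the observed data points and $f(x_i; \theta)$ the corresponding values fitted by a model with parameters $\theta$ (the notation is generic, not taken from the source):

\[ \hat{\theta}_{LS} = \arg\min_{\theta} \sum_{i=1}^{n} \big( y_i - f(x_i; \theta) \big)^2 \]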
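
In the same generic notation, maximum likelihood estimation chooses the parameter values that maximise the probability of the observed data $y$ under the model:

\[ \hat{\theta}_{MLE} = \arg\max_{\theta} \; p(y \mid \theta) \]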
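
Bayesian inference combines the prior $p(\theta)$ with the likelihood $p(y \mid \theta)$ via Bayes' theorem to obtain the posterior distribution over the parameters:

\[ p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \]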
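
A minimal random-walk Metropolis sampler, one of the simplest MCMC algorithms, is sketched below; the one-parameter log-posterior is a hypothetical placeholder and the tuning choices (step size, number of draws) are illustrative only.

```python
# Minimal random-walk Metropolis sampler (a simple MCMC algorithm).
import math
import random

def log_posterior(theta):
    # Hypothetical un-normalised log-posterior: a standard normal density.
    return -0.5 * theta ** 2

def metropolis(n_draws, step=0.5, theta0=0.0):
    theta, logp = theta0, log_posterior(theta0)
    draws = []
    for _ in range(n_draws):
        proposal = theta + random.gauss(0.0, step)      # symmetric proposal
        logp_proposal = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio); otherwise keep theta.
        if random.random() < math.exp(min(0.0, logp_proposal - logp)):
            theta, logp = proposal, logp_proposal
        draws.append(theta)
    return draws

samples = metropolis(5000)
print(sum(samples) / len(samples))  # posterior mean estimate, close to 0
```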
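
A sketch of a bootstrap particle filter for a toy random-walk state-space model; the noise levels and the observation sequence are hypothetical and only illustrate the propagate-weight-resample steps.

```python
# Bootstrap particle filter sketch for a toy random-walk state with noisy observations.
import math
import random

def particle_filter(observations, n_particles=500, state_sd=1.0, obs_sd=1.0):
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    filtered_means = []
    for y in observations:
        # Propagate each particle through the (random-walk) state model.
        particles = [x + random.gauss(0.0, state_sd) for x in particles]
        # Weight each particle by its consistency with the observed value.
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample particles in proportion to their weights.
        particles = random.choices(particles, weights=weights, k=n_particles)
        filtered_means.append(sum(particles) / n_particles)
    return filtered_means

print(particle_filter([0.5, 1.0, 1.8, 2.2, 3.1]))  # filtered state estimates
```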
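
Finally, the informal calibration loop described in the last entry can be sketched as a simple search over candidate parameter values; the toy model mapping a transmission parameter to an attack rate and the target value of 0.4 are hypothetical.

```python
# Sketch of an informal calibration loop: simulate the model for candidate
# parameter values and keep the one whose prediction best matches the data.
import math

def calibrate(candidates, run_model, observed):
    best_value, best_error = None, float("inf")
    for value in candidates:
        error = abs(run_model(value) - observed)   # simple discrepancy measure
        if error < best_error:
            best_value, best_error = value, error
    return best_value

# Hypothetical example: match an observed attack rate of 0.4 with a toy model
# that converts a transmission parameter into a final attack rate.
def toy_model(beta):
    return 1.0 - math.exp(-2.0 * beta)

print(calibrate([0.1, 0.2, 0.3, 0.4, 0.5], toy_model, 0.4))
```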