BMC Public Health. 2010 Nov 18;10:710. doi: 10.1186/1471-2458-10-710

Table 1.

Recommendations for gathering evidence of model credibility

Evidence from examining model development process
Conceptual model

Underlying theories: The conceptual model should be based on an accepted theory of the phenomena under study. The lack of an adequate theoretical basis is a serious limitation that may compromise the model's credibility.

Definitions of variables: Definitions of the variables in the model should be justified. Evidence that the definitions are acceptable should be provided (e.g., a reference to published and/or generally accepted clinical criteria or results from validation studies).

Model content and structure: Evidence should be provided that the model is sufficiently complete and that the relationships between the variables in the model are correctly specified. If some variables or interactions are omitted, explanations should be given why this is acceptable and does not invalidate the results.

Parameters

Parameters obtained from experts: The process of parameter elicitation should be described (number of experts, their areas of expertise, the questions asked, and how the responses were converted to a parameter). Plausibility of the parameter value(s) should be assessed by independent experts. Comparisons should be made with other sources (if available) and the differences explained.
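
As a purely illustrative sketch of one way such a conversion might be done (the method and all expert values below are assumptions, not taken from the paper), elicited point estimates of a probability can be pooled and moment-matched to a Beta distribution:

```python
# Hypothetical sketch: pooling elicited probability estimates into a Beta
# distribution by moment matching. Expert values are invented for illustration.
expert_estimates = [0.12, 0.18, 0.15, 0.10, 0.20]  # elicited probabilities

n = len(expert_estimates)
mean = sum(expert_estimates) / n
var = sum((x - mean) ** 2 for x in expert_estimates) / (n - 1)

# Method of moments for Beta(a, b): mean = a/(a+b), var = ab/((a+b)^2 (a+b+1)).
common = mean * (1 - mean) / var - 1
a, b = mean * common, (1 - mean) * common

print(f"pooled mean = {mean:.3f}, Beta parameters a = {a:.2f}, b = {b:.2f}")
```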

Parameters obtained from the literature: Quality of the source should be ascertained. If available, a published meta-analysis should be used, but a single high-quality study may be an alternative. If information from several sources is combined, the methodology should be explained. Comparisons should be made with alternative sources and discrepancies explained. If alternative sources are not available, plausibility of the parameter values should be assessed by independent experts.
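
For illustration, a minimal sketch of one common pooling approach, fixed-effect inverse-variance weighting, applied to hypothetical study estimates (the paper does not mandate a specific combination method):

```python
# Hypothetical sketch: fixed-effect inverse-variance pooling of several
# published estimates of the same parameter (all values are invented).
estimates = [0.42, 0.38, 0.45]        # point estimates from three studies
std_errors = [0.05, 0.08, 0.04]       # their reported standard errors

weights = [1 / se ** 2 for se in std_errors]       # inverse-variance weights
pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled estimate = {pooled:.3f} (SE {pooled_se:.3f})")
```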

Parameters obtained from data analysis: Validity evidence regarding the data and methods of analysis should be equivalent to that required for publication in a peer-reviewed scientific journal. The results should be compared with estimates from other sources or, if none are available, with expert opinion. Evidence to support generalizability of the parameters to the population modeled should be provided.

Parameters obtained through calibration: Calibration methodology should be reported in detail (target data, search algorithm, goodness-of-fit metrics, acceptance criteria, and stopping rule). Plausibility of the parameters derived through calibration should be evaluated by independent experts and their values compared with external data (if available).
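
A minimal sketch of such a calibration loop, assuming a placeholder model, invented target data, random search, a sum-of-squares fit metric, and an explicit acceptance criterion:

```python
import random

# Hypothetical target data (e.g., observed prevalence at three time points).
target = [0.10, 0.15, 0.18]

def run_model(beta):
    """Placeholder model: predicted prevalence at three time points."""
    return [1 - (1 - beta) ** t for t in (5, 8, 10)]

def goodness_of_fit(pred, obs):
    """Sum of squared deviations; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs))

random.seed(1)
best_beta, best_fit = None, float("inf")
for _ in range(10_000):                      # search algorithm: random search
    beta = random.uniform(0.0, 0.1)
    fit = goodness_of_fit(run_model(beta), target)
    if fit < best_fit:
        best_beta, best_fit = beta, fit

ACCEPT = 1e-3                                # pre-specified acceptance criterion
print(f"beta = {best_beta:.4f}, fit = {best_fit:.5f}, "
      f"accepted = {best_fit < ACCEPT}")
```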

Computer implementation

Selection of model type: A justification for the selected model type should be provided (stochastic vs. deterministic, micro- vs. macro-level simulation, discrete- vs. continuous-time models, interacting agents vs. non-interactive models, etc.). Whether the type of model is appropriate should be determined by independent experts.

Simulation software: Information should be provided on the simulation software and programming language. The choice of software/language should be justified.

Computer program: Independent experts should evaluate the key programming decisions and approaches used. The results of debugging tests should be documented, and the equations underlying the model should be made open to scrutiny by external experts.
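
As one illustrative example of a documented debugging test (the specific check and matrix are hypothetical, not from the paper), a unit test can assert that every row of a transition-probability matrix sums to one:

```python
# Illustrative debugging test: every row of a (hypothetical) Markov
# transition-probability matrix must sum to 1 within numerical tolerance.
TRANSITIONS = [
    [0.90, 0.08, 0.02],   # healthy -> healthy / ill / dead
    [0.20, 0.70, 0.10],   # ill     -> healthy / ill / dead
    [0.00, 0.00, 1.00],   # dead is an absorbing state
]

def test_rows_sum_to_one():
    for i, row in enumerate(TRANSITIONS):
        assert abs(sum(row) - 1.0) < 1e-9, f"row {i} sums to {sum(row)}"

test_rows_sum_to_one()
print("transition matrix checks passed")
```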

Evidence from examining model performance

Output plausibility: Plausibility (face validity) should be evaluated by subject-matter experts for a wide range of input conditions and output variables, over varying time horizons.

Internal consistency: Internal consistency should be assessed by considering functional and logical relationships between different output variables. It should be tested under a wide range of conditions, including extreme values of the input parameters.
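
A sketch of how such checks might be automated, using a toy compartmental model assumed for illustration: compartments must always sum to the population, and cumulative incidence must never decrease, even at extreme parameter values.

```python
# Illustrative internal-consistency checks on a toy SIR-style model.
def run_sir(beta, gamma, n=1000.0, i0=1.0, steps=100):
    s, i, r, cum = n - i0, i0, 0.0, i0
    series = []
    for _ in range(steps):
        new_inf = min(beta * s * i / n, s)   # cannot infect more than S
        new_rec = min(gamma * i, i)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        cum += new_inf
        series.append((s, i, r, cum))
    return series

# Test a wide range of conditions, including extreme parameter values.
for beta in (0.0, 0.1, 0.5, 5.0):
    for gamma in (0.01, 0.2, 1.0):
        prev_cum = 0.0
        for s, i, r, cum in run_sir(beta, gamma):
            assert abs((s + i + r) - 1000.0) < 1e-6  # conservation of people
            assert cum >= prev_cum - 1e-12           # cumulative incidence monotone
            prev_cum = cum
print("internal-consistency checks passed")
```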

Parameter sensitivity analysis: Model validation should include uncertainty and sensitivity analyses of key parameters. Screening methods should be used to select the most influential parameters for more extensive analysis. If feasible, probabilistic uncertainty/sensitivity analysis is recommended. If parameters are estimated through calibration, the model should be recalibrated as part of uncertainty/sensitivity analysis. In probabilistic models, the Monte Carlo error should be estimated.
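
To make the probabilistic part concrete, the following hypothetical sketch samples uncertain parameters from assumed distributions, propagates them through a placeholder model, and reports the Monte Carlo standard error of the mean output:

```python
import random
import statistics

def model_output(beta, gamma):
    """Placeholder model: final epidemic size proxy (illustrative only)."""
    r0 = beta / gamma
    return 1.0 - 1.0 / r0 if r0 > 1 else 0.0

random.seed(42)
outputs = []
for _ in range(5000):
    beta = random.gauss(0.30, 0.05)          # assumed parameter distributions
    gamma = random.gauss(0.10, 0.02)
    if beta > 0 and gamma > 0:
        outputs.append(model_output(beta, gamma))

mean = statistics.mean(outputs)
mc_error = statistics.stdev(outputs) / len(outputs) ** 0.5  # Monte Carlo SE
print(f"mean output = {mean:.3f}, Monte Carlo error = {mc_error:.4f}")
```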

Between-model comparisons: Comparing the results of different models provides important evidence of validity. Between-model comparisons should take into account the extent to which the models were developed independently. If feasible, the impact of different elements of model structure, assumptions, and computer implementation on the results should be evaluated in a systematic fashion.
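
A minimal sketch of a systematic comparison, assuming two independently implemented placeholder models that share an interface; here the structural difference is discrete- vs. continuous-time risk accumulation:

```python
import math

# Illustrative comparison of two independently implemented models on a
# shared grid of inputs; both implementations are placeholders.
def model_a(rate, years):
    return 1 - (1 - rate) ** years            # discrete-time implementation

def model_b(rate, years):
    return 1 - math.exp(-rate * years)        # continuous-time implementation

for rate in (0.01, 0.05, 0.10):
    for years in (1, 5, 10):
        a, b = model_a(rate, years), model_b(rate, years)
        rel_diff = abs(a - b) / max(a, b)
        print(f"rate={rate:.2f} years={years:2d} "
              f"A={a:.4f} B={b:.4f} rel. diff={rel_diff:.1%}")
```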

Comparisons with external data: Ideally, prospective data should be used for external validation. If prospective validation is not feasible, ex-post forecasting and backcasting based on historical data should be used to support predictive validity. The data used for validation should be different from the data used in model development and calibration; cross-validation and bootstrap methods can be considered as alternatives. Criteria for model acceptability should be specified in advance.
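
One way the pre-specified acceptability criterion might be operationalized (the metric, data, and threshold below are assumptions, not from the paper):

```python
# Illustrative external-validation check: compare model forecasts against
# held-out observed data using mean absolute percentage error (MAPE).
observed = [120, 135, 150, 160]     # hypothetical held-out observations
forecast = [115, 140, 148, 170]     # model forecasts for the same periods

mape = sum(abs(f - o) / o for f, o in zip(forecast, observed)) / len(observed)

MAX_MAPE = 0.10                      # acceptance criterion, specified in advance
print(f"MAPE = {mape:.1%}; model {'meets' if mape <= MAX_MAPE else 'fails'} "
      "the pre-specified criterion")
```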

Evidence from examining the consequences of model-based decisions

Quality of decisions: Quality of decisions based on the model should be evaluated and compared with those based on alternative approaches to decision making, using both subjective and objective criteria.

Model usefulness: Uptake of a given model by policy makers should be monitored to assess its usefulness.