TABLE 2.
Activity | Recommended documentation
---|---
Equations and model description | All model equations with initial conditions, dosing regimens, parameter values and distributions, rationale for included mechanisms, derivations, sources for parameter values and mechanisms |
QC and QA | Results of code verification and record of any changes needed |
Units | Units for all model components as well as all data |
Mass balance | Results of mass balance analysis |
Unit tests | Commented, executable code for each unit test with anticipated and actual results (quantitative or qualitative) |
Reproducibility | Software and version (e.g., MATLAB R2020b, R 4.0.2), ODE solver, tolerances, operating system details; share all necessary executable code to allow key figures or predictions to be reproduced, including a fixed random seed |
**Sensitivity analysis** |
LSA | Information on input parameters and model outputs used, method details (e.g., normalization, solver type), LSA results and interpretation |
Morris method – GSA | Information on input parameters and model outputs used, method details (e.g., normalization, solver type), results and interpretation; reliability/sensitivity analysis plot |
PRCC – GSA | Information on input parameters and model outputs used, method details, results and interpretation |
Sobol – GSA | Information on input parameters and model outputs used, method details, results and interpretation |
**Identifiability analysis** |
Structural identifiability (using software such as DAISY, COMBOS, or GenSSI) | Method used and rationale for choosing it; list of identifiable parameters and/or combinations of identifiable parameters
MCMC – practical identifiability | Two‐dimensional heat maps of MCMC simulation outputs for two parameters at a time; interpretation of results (identifiable parameters or relationships between parameters) |
Profile likelihood – practical identifiability | Profile likelihood plots and interpretation of results |
Aliasing score – practical and structural identifiability | Inputs and outputs to analysis, similar to LSA; aliasing score heat map and time‐dependent aliasing score results; interpretation of results |
**Parameter estimation and model selection** |
Local optimization | List of parameters to be estimated, optimization algorithm and settings, error model; parameter estimates with confidence intervals, diagnostic plots; if optimization is a multistep process, documentation of the sequence |
Global optimization | Same as for local optimization, plus any algorithm-specific settings (e.g., number of starting points, population size, convergence criteria, random seed)
vPop generation | List of parameters to be included and their distributions, constraints, sampling method, prevalence weighting method, objective function; resulting parameter ranges and distributions, virtual population statistics, and comparison to data |
Quantitative model selection (using a criterion such as AIC, AICc, or BIC) | Model selection criterion, list of models considered during the selection and their results |
**Uncertainty quantification** |
Parameter confidence intervals | Parameter confidence intervals, preferably from bootstrap or profile likelihood methods, or by plotting virtual population parameter distributions |
Prediction intervals | Prediction interval plots, preferably with confidence intervals for the simulation percentiles |
vPop simulation (sampling) | The spread in model output, shown by plotting percentiles (e.g., 5th, 50th, and 95th) together with the data
**Comparison with data** |
External validation | Plot of model predictions overlaid with external data; comparison of external data and data used for model calibration; may include, e.g., 2‐fold and 5‐fold discrepancy curves around the model prediction curve |
Hold‐out validation | Plots of model predictions overlaid with hold‐out data; plots of predictions vs observations for hold‐out data; may include, e.g., 2‐fold and 5‐fold discrepancy curves around the model prediction curves |
K‐fold cross‐validation | Value of k; mean and variance of the mean squared errors across the k folds; comparison to the error from parameter estimation with the whole dataset
The first column of this table lists examples of the model evaluation activities discussed in this work; the second column details the documentation recommended for each activity. Brief, illustrative code sketches for several of these activities follow the table.
Abbreviations: AIC, Akaike information criterion; AICc, corrected Akaike information criterion; BIC, Bayesian information criterion; GSA, global sensitivity analysis; LSA, local sensitivity analysis; MCMC, Markov chain Monte Carlo; ODE, ordinary differential equation; PRCC, partial rank correlation coefficient; QA, quality assurance; QC, quality control; vPop, virtual population.
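To illustrate the mass balance and unit test rows, the following minimal Python sketch checks conservation of total drug amount in a hypothetical closed two-compartment model; the rate constants, dose, and tolerances are placeholders chosen for illustration, not prescriptions.

```python
# Hypothetical closed two-compartment model: drug transfers between
# compartments with no elimination, so total amount must be conserved.
import numpy as np
from scipy.integrate import solve_ivp

K12, K21 = 0.5, 0.2  # assumed transfer rates (1/h)

def rhs(t, y):
    a1, a2 = y
    return [-K12 * a1 + K21 * a2, K12 * a1 - K21 * a2]

def test_mass_balance():
    y0 = [100.0, 0.0]  # dose placed entirely in compartment 1
    sol = solve_ivp(rhs, (0.0, 48.0), y0, rtol=1e-8, atol=1e-10)
    total = sol.y.sum(axis=0)
    # Anticipated result: total amount stays at 100 at all time points.
    assert np.allclose(total, 100.0, rtol=1e-6)

if __name__ == "__main__":
    test_mass_balance()
    print("mass balance test passed")
```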
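For the reproducibility row, a minimal sketch of the kind of machine-generated record that can accompany shared code is shown below; the solver settings and seed printed here are placeholders mirroring the example above.

```python
# Minimal reproducibility record; fields mirror the table's
# Reproducibility row (software versions, OS, solver, tolerances, seed).
import platform
import numpy, scipy

print("Python:", platform.python_version())
print("OS:", platform.platform())
print("numpy:", numpy.__version__, "| scipy:", scipy.__version__)
print("ODE solver: solve_ivp (RK45), rtol=1e-8, atol=1e-10")
print("Random seed: 42")
```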
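For the Sobol GSA row, a minimal sketch using the SALib package (its saltelli sampler and sobol analyzer) follows; the parameter names, bounds, and scalar output are hypothetical placeholders. The Morris and PRCC analyses follow the same sample-simulate-analyze pattern with different SALib modules.

```python
# Sobol sensitivity indices for a toy scalar output using SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["k_abs", "k_el", "V"],  # hypothetical parameter names
    "bounds": [[0.1, 2.0], [0.01, 0.5], [5.0, 50.0]],
}

def model_output(x):
    k_abs, k_el, v = x
    t = 2.0  # placeholder observation time (h)
    # Placeholder concentration-like quantity at time t.
    return 100.0 / v * np.exp(-k_el * t) * (1.0 - np.exp(-k_abs * t))

X = saltelli.sample(problem, 1024)            # N*(2D+2) parameter sets
Y = np.array([model_output(row) for row in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
print(dict(zip(problem["names"], Si["ST"])))  # total-order indices
```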
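For the profile likelihood row, the sketch below fixes one parameter of a toy exponential decay model on a grid and re-optimizes the remaining parameter at each grid point; the model, synthetic data, and noise level are illustrative assumptions.

```python
# Profile likelihood sketch for a toy one-parameter-of-interest problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)                 # fixed seed
t = np.linspace(0.5, 24, 12)
true_k, true_c0 = 0.3, 10.0                    # assumed "true" values
data = true_c0 * np.exp(-true_k * t) * rng.lognormal(0, 0.1, t.size)

def nll(params):
    # Gaussian negative log-likelihood on log scale, up to a constant.
    k_el, c0 = params
    resid = np.log(data) - np.log(c0 * np.exp(-k_el * t))
    return 0.5 * np.sum(resid**2) / 0.1**2

k_grid = np.linspace(0.15, 0.5, 25)
profile = []
for k_fix in k_grid:
    # Re-optimize the nuisance parameter c0 with k_el held fixed.
    res = minimize(lambda c: nll((k_fix, c[0])), x0=[8.0],
                   bounds=[(1e-3, None)])
    profile.append(res.fun)

# A well-defined minimum supports practical identifiability of k_el;
# a flat profile suggests it is practically non-identifiable.
print(k_grid[int(np.argmin(profile))], min(profile))
```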
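For bootstrap parameter confidence intervals, the sketch below resamples observations with replacement and refits a toy model; the model, data, seed, and number of replicates are assumptions for illustration.

```python
# Nonparametric bootstrap CI for the decay rate of a toy model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)                # fixed seed for reproducibility
t = np.linspace(0.5, 24, 20)
y = 10.0 * np.exp(-0.3 * t) * rng.lognormal(0, 0.1, t.size)

def model(t, k_el, c0):
    return c0 * np.exp(-k_el * t)

estimates = []
for _ in range(1000):
    idx = rng.integers(0, t.size, t.size)      # resample with replacement
    popt, _ = curve_fit(model, t[idx], y[idx], p0=(0.2, 8.0))
    estimates.append(popt[0])

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"k_el 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```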
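For the vPop simulation (sampling) row, this sketch draws parameters from assumed lognormal population distributions, simulates each virtual patient, and summarizes the spread of the output with the 5th, 50th, and 95th percentiles; the distributions and population size are placeholders.

```python
# Virtual population sketch: sample, simulate, summarize percentiles.
import numpy as np

rng = np.random.default_rng(7)                 # fixed seed for reproducibility
t = np.linspace(0, 24, 49)
n_vpatients = 500

# Assumed population distributions (illustrative only).
k_el = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n_vpatients)
c0 = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n_vpatients)

sims = c0[:, None] * np.exp(-k_el[:, None] * t)  # one row per virtual patient
p5, p50, p95 = np.percentile(sims, [5, 50, 95], axis=0)
# These percentile curves would be plotted together with the data.
print(p5[::12], p50[::12], p95[::12])
```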
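For k‐fold cross‐validation, the sketch below reports the value of k together with the mean and variance of the per-fold mean squared errors, as the table recommends; it assumes scikit-learn for the fold splits, and the model and data are again toy placeholders.

```python
# K-fold cross-validation sketch for a toy exponential decay fit.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
t = np.linspace(0.5, 24, 24)
y = 10.0 * np.exp(-0.3 * t) + rng.normal(0, 0.3, t.size)

def model(t, k_el, c0):
    return c0 * np.exp(-k_el * t)

k = 5
mses = []
for train, test in KFold(n_splits=k, shuffle=True, random_state=1).split(t):
    popt, _ = curve_fit(model, t[train], y[train], p0=(0.2, 8.0))
    mses.append(np.mean((y[test] - model(t[test], *popt)) ** 2))

print(f"k={k}, mean MSE={np.mean(mses):.4f}, variance={np.var(mses):.4f}")
```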