Table 1. Glossary
Term | Definition
---|---
Calibration | Agreement between observed outcome risks and the risks predicted by the model. |
Calibration slope | Slope of the regression line obtained when the observed outcomes are regressed on the model's linear predictor. The calibration slope ideally equals 1. A calibration slope <1 indicates that predictions are too extreme (e.g. low-risk individuals receive predicted risks that are too low, and high-risk individuals receive predicted risks that are too high). Conversely, a slope >1 indicates that predictions are not extreme enough [26]. |
Concordance c-statistic | Statistic that quantifies the chance that, for any two individuals of whom one developed the outcome and the other did not, the former has a higher predicted risk according to the model than the latter. A c-statistic of 1 indicates perfect discriminative ability, whereas a model with a c-statistic of 0.5 performs no better than flipping a coin [27]. The c-statistic is highly dependent on the case-mix of the population (i.e. c-statistics are generally lower in homogeneous populations than in heterogeneous populations) [28,29]. |
Discrimination | Ability of the model to distinguish between people who did and did not develop the outcome of interest, often quantified by the concordance c-statistic. |
External validation | Evaluating the predictive performance of a prediction model in a study population other than the one used to develop the model. |
OE ratio | Ratio of the total number of participants observed to develop the outcome within a specific time frame (e.g. 1 y) to the total number of participants expected to develop the outcome according to the model's predictions. |
Prediction horizon | Time frame over which the model predicts the outcome (e.g. predicting 10-y risk of developing cardiovascular disease). |
Predictive performance | Accuracy of the predictions made by a prediction model, often expressed in terms of calibration and discrimination (see the illustrative sketch after this table). |
OE, observed/expected.
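
The performance measures defined above can be illustrated with a short computation. The sketch below is a minimal example for a binary outcome, ignoring censoring and the prediction horizon for simplicity; it assumes the scikit-learn and statsmodels libraries, and the variable names (`predicted_risk`, `y_observed`) are hypothetical placeholders rather than anything taken from the source.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical data: model-predicted risks and simulated observed outcomes
rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0.01, 0.60, size=1000)   # risks predicted by the model
y_observed = rng.binomial(1, predicted_risk)           # observed binary outcomes

# Discrimination: concordance c-statistic (for a binary outcome this equals the
# area under the ROC curve) -- the probability that a case has a higher
# predicted risk than a non-case.
c_statistic = roc_auc_score(y_observed, predicted_risk)

# Calibration slope: regress the observed outcomes on the model's linear
# predictor (logit of the predicted risk); the fitted slope ideally equals 1.
linear_predictor = np.log(predicted_risk / (1 - predicted_risk))
calib_fit = sm.GLM(y_observed, sm.add_constant(linear_predictor),
                   family=sm.families.Binomial()).fit()
calibration_slope = calib_fit.params[1]

# OE ratio: total observed events divided by total expected (predicted) events;
# a value of 1 indicates agreement between observed and predicted risk overall.
oe_ratio = y_observed.sum() / predicted_risk.sum()

print(f"c-statistic:       {c_statistic:.3f}")
print(f"calibration slope: {calibration_slope:.3f}")
print(f"OE ratio:          {oe_ratio:.3f}")
```

Because the outcomes here are simulated directly from the predicted risks, all three measures should come out close to their ideal values (calibration slope and OE ratio near 1); with real validation data, departures from these values indicate miscalibration or limited discrimination.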