Prev Med Rep. 2021 Nov 18;24:101647. doi: 10.1016/j.pmedr.2021.101647

Table 4.

Overview of models’ discrimination and overall performance in the validation.

| Method/model | AUC (95% CI) | AUC_bias (95% CI) | Brier_scaled | Slope (95% CI) | Intercept (95% CI) |
|---|---|---|---|---|---|
| 0 – no adjustments | 0.726 (0.719, 0.733) | – | 1.47% | 0.781 (0.752, 0.811) | 0.669 (0.539, 0.800) |
| 1 – calibration-in-the-large | 0.726 (0.719, 0.733) | – | 5.26% | 0.781 (0.752, 0.811) | −0.531 (−0.618, −0.444) |
| 2 – logistic calibration | 0.726 (0.719, 0.733) | – | 5.89% | 1.000 (0.962, 1.038) | 0.000 (−0.106, 0.106) |
| 3 – refitting | 0.738 (0.731, 0.745) | 0.737 (0.731, 0.744) | 6.53% | 1.000 (0.965, 1.035) | 0.000 (−0.098, 0.098) |
| 4 – refitting with different predictor assessment | 0.738 (0.731, 0.745) | 0.737 (0.731, 0.745) | 6.53% | 1.000 (0.965, 1.035) | 0.000 (−0.098, 0.098) |
| 5 – refitting with numerical predictors as continuous | 0.741 (0.734, 0.748) | 0.741 (0.734, 0.748) | 6.53% | 1.000 (0.966, 1.034) | 0.000 (−0.097, 0.097) |
| AUSDRISK | 0.723 (0.716, 0.730) | – | 4.42% | 0.956 (0.920, 0.991) | −0.514 (−0.600, −0.430) |

Abbreviations: AUC = area under the receiver operating characteristic curve; AUC_bias = bias-corrected AUC for refitted models; Brier_scaled = scaled Brier score; CI = confidence interval; – = not applicable.
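For context, the numbered rows follow a standard model-updating hierarchy: method 0 applies the original model unchanged; method 1 (calibration-in-the-large) re-estimates only the intercept; method 2 (logistic calibration) re-estimates the intercept and an overall slope for the original linear predictor; and methods 3–5 refit all regression coefficients, differing only in how the predictors are assessed or coded. The sketch below illustrates these updates; it is not taken from the paper, and the names `y`, `lp`, `X`, and `update_model` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the model-updating strategies
# named in the table rows, assuming `y` = observed 0/1 outcomes, `lp` = the
# original model's linear predictor (logit scale), and `X` = predictor
# matrix, all taken from the validation data.
import numpy as np
import statsmodels.api as sm

def expit(z):
    return 1 / (1 + np.exp(-z))

def update_model(y, lp, X=None, method=0):
    if method == 0:
        # 0 - no adjustments: apply the original model unchanged.
        return lambda lp_new: expit(lp_new)
    if method == 1:
        # 1 - calibration-in-the-large: re-estimate the intercept only,
        # keeping the original linear predictor as an offset.
        a = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(),
                   offset=lp).fit().params[0]
        return lambda lp_new: expit(a + lp_new)
    if method == 2:
        # 2 - logistic calibration: re-estimate the intercept and an
        # overall slope for the original linear predictor.
        a, b = sm.Logit(y, sm.add_constant(lp)).fit(disp=0).params
        return lambda lp_new: expit(a + b * lp_new)
    # 3-5 - refitting: re-estimate all coefficients; variants 4 and 5
    # differ only in how the predictors are assessed or coded.
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    return lambda X_new: fit.predict(sm.add_constant(X_new))
```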
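The reported columns can be reproduced from the observed outcomes and predicted probabilities in the validation data: the AUC measures discrimination, the scaled Brier score (1 − Brier/Brier_max, with Brier_max the Brier score of predicting the event rate for everyone) measures overall performance, and the calibration slope and intercept come from logistic regressions of the outcome on the logit of the predicted risk. A minimal sketch, assuming arrays `y` and `p` (not taken from the paper):

```python
# Minimal sketch (not the authors' code) of the validation metrics reported
# in the table, assuming `y` holds observed 0/1 outcomes and `p` holds the
# model's predicted probabilities in the validation cohort.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def validation_metrics(y, p, eps=1e-8):
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    logit_p = np.log(p / (1 - p))  # predicted risk on the logit scale

    # Discrimination: area under the ROC curve.
    auc = roc_auc_score(y, p)

    # Overall performance: scaled Brier score = 1 - Brier / Brier_max,
    # where Brier_max is the Brier score of predicting the event rate for all.
    brier = np.mean((p - y) ** 2)
    brier_max = np.mean((y.mean() - y) ** 2)
    brier_scaled = 1 - brier / brier_max

    # Calibration slope: logistic regression of the outcome on the logit of
    # the predicted risk; a slope of 1 indicates no over- or underfitting.
    slope = sm.Logit(y, sm.add_constant(logit_p)).fit(disp=0).params[1]

    # Calibration intercept (calibration-in-the-large): intercept-only
    # logistic model with the logit of the predicted risk as an offset.
    intercept = sm.GLM(y, np.ones((len(y), 1)),
                       family=sm.families.Binomial(),
                       offset=logit_p).fit().params[0]

    return {"AUC": auc, "Brier_scaled": brier_scaled,
            "slope": slope, "intercept": intercept}
```

The bias-corrected AUC (AUC_bias) reported for the refitted models would additionally require an optimism correction, typically obtained by bootstrapping the refitting procedure; that step is not shown here.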