Table 2. Comparison of model performance in terms of AUC, Brier score, calibration intercept, and calibration slope, averaged over the 200 testing samples in the repeated split-sample validation.
| Methods | AUC | Brier score | Calibration intercept | Calibration slope |
|---|---|---|---|---|
| Including Hospital Use Variables | | | | |
| Logistic | 0.9610 | 0.0265 | -0.0303 | 0.9827 |
| LASSO | 0.9622 | 0.0261 | -0.0109 | 0.9958 |
| GAM | 0.9620 | 0.0262 | -0.0620 | 0.9592 |
| LDA | 0.9559 | 0.0471 | -1.6859 | 0.3630 |
| Tree | 0.9450 | 0.0271 | -0.0893 | 0.9378 |
| RF | 0.9552 | 0.0270 | -0.2993 | 0.5798 |
| XGBoost | 0.9669 | 0.0251 | 0.0464 | 1.0287 |
| Excluding Hospital Use Variables | | | | |
| Logistic | 0.9423 | 0.0296 | -0.0179 | 0.9879 |
| LASSO | 0.9424 | 0.0295 | 0.0145 | 1.0087 |
| GAM | 0.9425 | 0.0295 | -0.0474 | 0.9692 |
| LDA | 0.9348 | 0.0536 | -1.6425 | 0.4178 |
| Tree | 0.9276 | 0.0299 | -0.0463 | 0.9697 |
| RF | 0.9190 | 0.0317 | -0.7027 | 0.4783 |
| XGBoost | 0.9461 | 0.0288 | 0.0235 | 1.0090 |
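
The sketch below shows how the four metrics in Table 2 can be computed for a single testing sample. It is not the authors' code: the function name `validation_metrics` is illustrative, and it assumes one common convention for calibration measures, namely logistic recalibration on the logit of the predicted probabilities (the intercept from a model with the logit as an offset, i.e. with the slope fixed at 1, and the slope from a model with the logit as the sole covariate).

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, brier_score_loss


def validation_metrics(y_true, p_hat, eps=1e-8):
    """Compute AUC, Brier score, calibration intercept, and calibration
    slope for one testing sample (a sketch under the logistic-recalibration
    convention; conventions for the intercept vary across papers)."""
    y = np.asarray(y_true, dtype=float)
    # Clip predictions away from 0/1 so the logit transform stays finite.
    p = np.clip(np.asarray(p_hat, dtype=float), eps, 1 - eps)
    lp = np.log(p / (1 - p))  # logit of the predicted probabilities

    auc = roc_auc_score(y, p)
    brier = brier_score_loss(y, p)

    # Calibration intercept: logit(p_hat) enters as an offset, fixing the
    # slope at 1, so the intercept measures calibration-in-the-large.
    intercept = sm.Logit(y, np.ones((len(y), 1)), offset=lp).fit(disp=0).params[0]

    # Calibration slope: coefficient on logit(p_hat) in a free recalibration.
    slope = sm.Logit(y, sm.add_constant(lp)).fit(disp=0).params[1]

    return auc, brier, intercept, slope
```

Averaging these four quantities over the 200 repeated train/test splits yields numbers in the layout of Table 2 (one row per model, with and without the hospital use variables).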