Table 4. Performance of the machine learning models and clinical risk scores at their optimal cutoffs.
| Model | Optimal cutoff | Specificity | Precision | Recall | F1 measure | AUC^a |
| --- | --- | --- | --- | --- | --- | --- |
| Logistic regression | 0.157 | 0.642 | 0.857 | 0.729 | 0.773 | 0.741 |
| Naïve Bayes | 0.220 | 0.666 | 0.855 | 0.685 | 0.740 | 0.720 |
| Alternating decision tree | 0.298 | 0.662 | 0.857 | 0.705 | 0.755 | 0.732 |
| Random forest | 0.122 | 0.747 | 0.862 | 0.611 | 0.680 | 0.726 |
| XGBoost^b | 0.175 | 0.611 | 0.856 | 0.759 | 0.794 | 0.743 |
| Neural network | 0.125 | 0.686 | 0.858 | 0.681 | 0.737 | 0.735 |
| HOSPITAL score | 4 | 0.564 | 0.838 | 0.694 | 0.745 | 0.688 |
| LACE index | 11 | 0.469 | 0.830 | 0.745 | 0.779 | 0.675 |
| LACE-rt index | 7 | 0.542 | 0.833 | 0.688 | 0.740 | 0.668 |
^a AUC: area under the curve.

^b XGBoost: extreme gradient boosting.
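For readers who want to reproduce this style of evaluation on their own data, the sketch below shows one way to derive the threshold-dependent metrics in Table 4 (specificity, precision, recall, F1) at a given probability cutoff, together with the threshold-independent AUC, using scikit-learn. It is an illustrative example only, not the study's code: the labels, probabilities, and the `metrics_at_cutoff` helper are hypothetical, and the study's actual cutoff-selection criterion is not shown here.

```python
# Illustrative sketch (not the study's pipeline): metrics at a probability cutoff.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, roc_auc_score)

def metrics_at_cutoff(y_true, y_prob, cutoff):
    """Specificity, precision, recall, F1 at a cutoff, plus threshold-free AUC."""
    y_pred = (y_prob >= cutoff).astype(int)          # binarize predicted risks
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "specificity": tn / (tn + fp),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),        # uses probabilities, not the cutoff
    }

# Hypothetical test-set labels and predicted readmission probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.62, 0.18, 0.71, 0.33, 0.12, 0.45, 0.80, 0.09])
print(metrics_at_cutoff(y_true, y_prob, cutoff=0.157))  # cutoff shown for logistic regression in Table 4
```

In practice, the "optimal" cutoff would be chosen on a validation set (for example, by maximizing F1 or Youden's J over candidate thresholds) and then applied unchanged to the test set, as the fixed cutoffs in Table 4 suggest.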