2022 Dec 19;28(3):1232–1239. doi: 10.1038/s41380-022-01918-8

Table 1.

AUC in the training and testing sets (95% CI for the testing set), balanced accuracy, and AUPRC for the different trained models.

Model | AUC (training) | AUC (testing, 95% CI) | Balanced accuracy | AUPRC
Logistic regression | 0.819 | 0.742 (0.732–0.753) | 0.673 | 0.162
Random forest | 0.930 | 0.678 (0.667–0.689) | 0.620 | 0.189
Gradient boosting (GB) | 0.874 | 0.726 (0.715–0.737) | 0.663 | 0.177
XGBoost (XGB) | 0.925 | 0.688 (0.676–0.699) | 0.632 | 0.209
Naïve Bayes (NB) | 0.806 | 0.710 (0.698–0.721) | 0.655 | 0.179
Logistic regression, L1 and L2 penalty (elastic net, L1L2) | 0.816 | 0.745 (0.735–0.755) | 0.675 | 0.179
Deep neural network (DNN) | 0.800 | 0.753 (0.743–0.763) | 0.684 | 0.218
Ensemble (XGB, GB, NB, L1L2) | 0.887 | 0.743 (0.732–0.752) | 0.667 | 0.208
Ensemble (XGB, GB, NB, DNN) | 0.898 | 0.750 (0.739–0.760) | 0.671 | 0.212

AUC: area under the receiver operating characteristic curve; AUPRC: area under the precision–recall curve.

The deep neural network achieved the best value for each testing metric (testing AUC, balanced accuracy, and AUPRC).
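The metrics reported in the table can be reproduced in a standard way with scikit-learn; a common choice for the 95% CI on the testing AUC is bootstrap resampling of the test set. The sketch below is illustrative only: it uses synthetic, class-imbalanced data and a plain logistic regression, not the paper's cohort, features, or model configurations.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             balanced_accuracy_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (the paper's dataset is not reproduced here).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]

# Testing-set metrics as in the table: AUC, AUPRC, balanced accuracy.
auc = roc_auc_score(y_te, prob)
auprc = average_precision_score(y_te, prob)
bal_acc = balanced_accuracy_score(y_te, (prob >= 0.5).astype(int))

# Bootstrap 95% CI for the testing AUC: resample the test set with
# replacement and take the 2.5th/97.5th percentiles of the AUCs.
rng = np.random.default_rng(0)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) < 2:  # AUC needs both classes present
        continue
    boot.append(roc_auc_score(y_te[idx], prob[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"AUC {auc:.3f} ({lo:.3f}-{hi:.3f}), "
      f"balanced accuracy {bal_acc:.3f}, AUPRC {auprc:.3f}")
```

With a threshold of 0.5 on an imbalanced test set, balanced accuracy is typically well below the AUC, which matches the gap between the AUC and balanced-accuracy columns in the table.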