2023 Jul 10;47(1):71. doi: 10.1007/s10916-023-01966-9

Table 2.

Performance metrics of all models with their optimal hyperparameters, based on k-fold cross-validation

| Classification Model | AUC (95% CI) | AUC (95% CI), SMOTE | Specificity (95% CI) | Specificity (95% CI), SMOTE | Sensitivity (95% CI) | Sensitivity (95% CI), SMOTE |
|---|---|---|---|---|---|---|
| Neural Network | 0.586 (0.557, 0.615) | 0.628 (0.583, 0.673) | 0.969 (0.943, 0.994) | 0.880 (0.835, 0.925) | 0.204 (0.137, 0.271) | 0.375 (0.282, 0.467) |
| XGBoost | 0.663 (0.624, 0.702) | **0.685 (0.652, 0.718)** | 0.964 (0.948, 0.979) | 0.917 (0.905, 0.929) | 0.363 (0.287, 0.439) | 0.452 (0.387, 0.517) |
| Random Forest Classifier | 0.625 (0.584, 0.666) | 0.637 (0.600, 0.676) | 0.969 (0.965, 0.973) | **0.952 (0.932, 0.972)** | 0.279 (0.197, 0.361) | 0.209 (0.156, 0.262) |
| Logistic Regression | 0.589 (0.560, 0.618) | 0.667 (0.628, 0.706) | **0.971 (0.961, 0.980)** | 0.744 (0.704, 0.783) | 0.209 (0.156, 0.262) | **0.591 (0.526, 0.656)** |
| Balanced Bagging Classifier | 0.672 (0.627, 0.717) | 0.657 (0.624, 0.690) | 0.814 (0.784, 0.843) | 0.858 (0.842, 0.874) | 0.529 (0.456, 0.602) | 0.457 (0.396, 0.518) |
| Balanced Random Forest Classifier | **0.684 (0.653, 0.715)** | 0.681 (0.642, 0.720) | 0.727 (0.709, 0.744) | 0.819 (0.792, 0.846) | **0.642 (0.577, 0.707)** | 0.542 (0.466, 0.618) |

AUC, area under the receiver operating characteristic curve; CI, confidence interval; SMOTE, Synthetic Minority Oversampling Technique. Bolded numbers indicate the best performance for each metric.
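As a rough illustration of how fold-wise metrics of this kind can be produced, the sketch below runs stratified k-fold cross-validation with SMOTE applied only to each training fold (so test folds keep the original class imbalance) and aggregates AUC, sensitivity, and specificity across folds. This is a generic sketch, not the authors' pipeline: the synthetic dataset, 5 folds, 0.5 decision threshold, and the percentile interval over fold scores are all assumptions, and the paper's exact CI method is not specified here.

```python
# Minimal sketch (assumed setup, not the authors' code): stratified k-fold CV
# with SMOTE fitted on each training fold only, reporting AUC / sensitivity /
# specificity with a crude percentile interval across folds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from imblearn.over_sampling import SMOTE

# Hypothetical imbalanced dataset standing in for the study cohort.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

aucs, sens, spec = [], [], []
for train_idx, test_idx in StratifiedKFold(
    n_splits=5, shuffle=True, random_state=0
).split(X, y):
    # Oversample the minority class in the training fold only, so the
    # held-out fold retains the original class distribution.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X[train_idx], y[train_idx])
    model = RandomForestClassifier(random_state=0).fit(X_res, y_res)

    prob = model.predict_proba(X[test_idx])[:, 1]
    tn, fp, fn, tp = confusion_matrix(y[test_idx], prob >= 0.5).ravel()
    aucs.append(roc_auc_score(y[test_idx], prob))
    sens.append(tp / (tp + fn))  # sensitivity = recall on the positive class
    spec.append(tn / (tn + fp))  # specificity = recall on the negative class

for name, vals in [("AUC", aucs), ("Sensitivity", sens), ("Specificity", spec)]:
    lo, hi = np.percentile(vals, [2.5, 97.5])  # crude interval over fold scores
    print(f"{name}: {np.mean(vals):.3f} (95% CI {lo:.3f}, {hi:.3f})")
```

Applying SMOTE inside each fold, rather than before splitting, avoids leaking synthetic copies of test-fold minority samples into training, which would otherwise inflate the reported sensitivity.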