PeerJ. 2024 Mar 14;12:e17098. doi: 10.7717/peerj.17098

Table 4. Predictive performance of different models on training and validation sets.

Model      AUC (SD)      Cutoff (SD)   Accuracy (SD)  Sensitivity (SD)  Specificity (SD)  F1 score (SD)

Training set
XGBoost    0.913(0.002)  0.200(0.025)  0.810(0.015)   0.858(0.023)      0.800(0.022)      0.600(0.012)
LR         0.739(0.001)  0.162(0.002)  0.625(0.009)   0.757(0.013)      0.599(0.013)      0.402(0.001)
LightGBM   0.738(0.001)  0.167(0.000)  0.834(0.000)   0.726(0.003)      0.750(0.001)      NaN
AdaBoost   0.805(0.001)  0.397(0.005)  0.725(0.006)   0.782(0.009)      0.711(0.008)      0.485(0.003)
GNB        0.794(0.001)  0.134(0.003)  0.738(0.002)   0.744(0.005)      0.737(0.003)      0.486(0.001)
MLP        0.768(0.022)  0.170(0.008)  0.697(0.039)   0.735(0.016)      0.689(0.048)      0.448(0.026)
SVM        0.508(0.059)  0.180(0.044)  0.607(0.189)   0.411(0.306)      0.646(0.287)      0.205(0.116)

Validation set
XGBoost    0.860(0.008)  0.200(0.025)  0.781(0.021)   0.837(0.046)      0.742(0.041)      0.556(0.026)
LR         0.738(0.009)  0.162(0.002)  0.625(0.009)   0.766(0.045)      0.601(0.046)      0.402(0.010)
LightGBM   0.738(0.010)  0.167(0.000)  0.834(0.000)   0.726(0.024)      0.750(0.013)      NaN
AdaBoost   0.804(0.010)  0.397(0.005)  0.723(0.012)   0.764(0.050)      0.736(0.044)      0.481(0.017)
GNB        0.793(0.010)  0.134(0.003)  0.738(0.010)   0.748(0.037)      0.740(0.025)      0.487(0.014)
MLP        0.767(0.020)  0.170(0.008)  0.696(0.039)   0.736(0.043)      0.698(0.041)      0.449(0.034)
SVM        0.501(0.057)  0.180(0.044)  0.603(0.191)   0.486(0.297)      0.578(0.287)      0.217(0.094)

Notes.

XGBoost, extreme gradient boosting; LR, logistic regression; LightGBM, light gradient boosting machine; AdaBoost, adaptive boosting; GNB, Gaussian naive Bayes; MLP, multilayer perceptron; SVM, support vector machine.
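The cutoff-dependent metrics in the table (accuracy, sensitivity, specificity, F1) can all be derived from predicted probabilities binarized at the reported cutoff. A minimal sketch of that calculation follows; the helper `metrics_at_cutoff` is hypothetical, not from the paper, and assumes binary 0/1 labels. Note that F1 is undefined (NaN) whenever a model predicts no positives at the chosen cutoff, which is one way NaN entries like LightGBM's can arise.

```python
import numpy as np

def metrics_at_cutoff(y_true, y_prob, cutoff):
    """Binarize probabilities at `cutoff` and compute the table's metrics."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= cutoff).astype(int)

    # Confusion-matrix cells for the positive class.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")  # recall / TPR
    specificity = tn / (tn + fp) if tn + fp else float("nan")  # TNR
    precision = tp / (tp + fp) if tp + fp else float("nan")    # NaN if no positive predictions
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (tp + fp) and (tp + fn) else float("nan"))
    return accuracy, sensitivity, specificity, f1
```

AUC, by contrast, is cutoff-free: it is computed from the full ranking of probabilities, which is why a model such as LightGBM can pair a competitive AUC with an undefined F1.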