Table 2.
Model | AUC | Sensitivity | Accuracy | Positive predictive value | Negative predictive value | Recall | F1 score |
---|---|---|---|---|---|---|---|
AdaBoost | 0.898 | 0.7895 (0.5443, 0.9395) | 0.8158 (0.6567, 0.9226) | 0.8333 (0.5858, 0.9642) | 0.8000 (0.5634, 0.9427) | 0.6842 | 0.7428 |
LogitBoost | 0.802 | 0.6842 (0.4345, 0.8742) | 0.6842 (0.5135, 0.8250) | 0.6842 (0.4345, 0.8742) | 0.6842 (0.4345, 0.8742) | 0.6842 | 0.6842 |
XGBoost | 0.927 | 0.7895 (0.5443, 0.9395) | 0.8421 (0.6875, 0.9398) | 0.8824 (0.6356, 0.9854) | 0.8095 (0.5809, 0.9455) | 0.7894 | 0.8333 |
LR | 0.934 | 0.7895 (0.5443, 0.9395) | 0.8421 (0.6875, 0.9398) | 0.8824 (0.6356, 0.9854) | 0.8095 (0.5809, 0.9455) | 0.7894 | 0.8333 |
RF | 0.909 | 0.7895 (0.5443, 0.9395) | 0.8158 (0.6567, 0.9226) | 0.8333 (0.5858, 0.9642) | 0.8000 (0.5634, 0.9427) | 0.7894 | 0.8108 |
SVM | 0.950 | 0.7895 (0.5443, 0.9395) | 0.8421 (0.6875, 0.9398) | 0.8824 (0.6356, 0.9854) | 0.8095 (0.5809, 0.9455) | 0.7894 | 0.8333 |
NN | 0.953 | 0.7895 (0.5443, 0.9395) | 0.8421 (0.6875, 0.9398) | 0.8824 (0.6356, 0.9854) | 0.8095 (0.5809, 0.9455) | 0.7894 | 0.8333 |
KNN | 0.945 | 0.8421 (0.6042, 0.9662) | 0.8684 (0.7191, 0.9559) | 0.8889 (0.6529, 0.9862) | 0.8500 (0.6211, 0.9679) | 0.8421 | 0.8648 |
DT C5.0 | 0.880 | 0.7895 (0.5443, 0.9395) | 0.7368 (0.5690, 0.8660) | 0.7143 (0.4782, 0.8872) | 0.7647 (0.5010, 0.9319) | 0.7894 | 0.7500 |
NB | 0.956 | 0.8947 (0.6686, 0.9870) | 0.8421 (0.6875, 0.9398) | 0.8095 (0.5809, 0.9455) | 0.8824 (0.6356, 0.9854) | 0.8947 | 0.8500 |
GBM | 0.900 | 0.7895 (0.5443, 0.9395) | 0.8158 (0.6567, 0.9226) | 0.8333 (0.5858, 0.9642) | 0.8000 (0.5634, 0.9427) | 0.7894 | 0.8108 |
MLP | 0.917 | 0.7895 (0.5443, 0.9395) | 0.8421 (0.6875, 0.9398) | 0.8824 (0.6356, 0.9854) | 0.8095 (0.5809, 0.9455) | 0.7894 | 0.8333 |
Values in parentheses are confidence intervals. AUC, area under the curve; LR, logistic regression; RF, random forest; SVM, support vector machine; NN, neural network; KNN, k-nearest neighbors; DT, decision tree; NB, naive Bayes; GBM, gradient boosting machine; MLP, multilayer perceptron.
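As a check on how the columns relate, the point estimates in the table can all be derived from a 2×2 confusion matrix. The sketch below is illustrative only: the counts (TP=15, FP=3, FN=4, TN=16) are an assumption chosen to reproduce the RF row, since the underlying confusion matrices are not given here.

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute the metrics reported in Table 2 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # also reported as recall
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)                         # positive predictive value (precision)
    npv = tn / (tn + fn)                         # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {
        "sensitivity": round(sensitivity, 4),
        "accuracy": round(accuracy, 4),
        "ppv": round(ppv, 4),
        "npv": round(npv, 4),
        "f1": round(f1, 4),
    }

# Hypothetical counts that reproduce the RF row of the table.
metrics = classification_metrics(tp=15, fp=3, fn=4, tn=16)
print(metrics)
# {'sensitivity': 0.7895, 'accuracy': 0.8158, 'ppv': 0.8333, 'npv': 0.8, 'f1': 0.8108}
```

Note that F1 is the harmonic mean of PPV and sensitivity, which is why rows with identical sensitivity and PPV (e.g., XGBoost, LR, SVM, NN, MLP) also share the same F1.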