Table 2. Performance of the machine learning models on the training and test sets.
Model | Accuracy | AUC | AUC 95% CI | Sensitivity | Specificity | PPV | NPV | Task |
---|---|---|---|---|---|---|---|---|
LR | 0.839 | 0.858 | 0.8007–0.9158 | 0.870 | 0.735 | 0.919 | 0.621 | Label-train |
LR | 0.636 | 0.822 | 0.7036–0.9398 | 0.535 | 1.000 | 1.000 | 0.375 | Label-test |
SVM | 0.927 | 0.954 | 0.9122–0.9948 | 0.917 | 0.959 | 0.987 | 0.770 | Label-train |
SVM | 0.800 | 0.806 | 0.6509–0.9615 | 0.791 | 0.833 | 0.944 | 0.526 | Label-test |
KNN | 0.789 | 0.854 | 0.8052–0.9035 | 0.817 | 0.694 | 0.902 | 0.523 | Label-train |
KNN | 0.764 | 0.811 | 0.6964–0.9257 | 0.791 | 0.667 | 0.895 | 0.471 | Label-test |
RandomForest | 0.995 | 1.000 | 0.9996–1.0000 | 0.994 | 1.000 | 1.000 | 0.980 | Label-train |
RandomForest | 0.836 | 0.785 | 0.6274–0.9424 | 0.884 | 0.667 | 0.905 | 0.615 | Label-test |
ExtraTrees | 1.000 | 1.000 | 1.0000–1.0000 | 1.000 | 1.000 | 1.000 | 1.000 | Label-train |
ExtraTrees | 0.800 | 0.673 | 0.4840–0.8629 | 0.907 | 0.455 | 0.848 | 0.556 | Label-test |
XGBoost | 1.000 | 1.000 | 1.0000–1.0000 | 1.000 | 1.000 | 1.000 | 1.000 | Label-train |
XGBoost | 0.764 | 0.771 | 0.6251–0.9176 | 0.791 | 0.667 | 0.895 | 0.471 | Label-test |
LightGBM | 0.876 | 0.952 | 0.9267–0.9777 | 0.852 | 0.959 | 0.986 | 0.653 | Label-train |
LightGBM | 0.618 | 0.767 | 0.6213–0.9136 | 0.535 | 0.917 | 0.958 | 0.355 | Label-test |
MLP | 0.908 | 0.932 | 0.8940–0.9703 | 0.941 | 0.796 | 0.941 | 0.796 | Label-train |
MLP | 0.782 | 0.818 | 0.6994–0.9363 | 0.791 | 0.750 | 0.919 | 0.500 | Label-test |
AUC, area under the curve; CI, confidence interval; KNN, k-nearest neighbor; LightGBM, light gradient boosting machine; LR, logistic regression; MLP, multilayer perceptron; NPV, negative predictive value; PPV, positive predictive value; SVM, support vector machine.
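As a point of reference, the sketch below shows one common way the metrics reported in Table 2 (accuracy, AUC with a percentile-bootstrap 95% CI, sensitivity, specificity, PPV, and NPV) can be computed for a binary classifier such as the SVM; the synthetic data, model settings, and bootstrap routine are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (assumed workflow, not the study code): compute the Table 2
# metrics for one model on train and test splits using scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score


def classification_metrics(y_true, y_pred, y_score):
    """Accuracy, AUC, sensitivity, specificity, PPV, NPV from binary predictions."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_score),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }


def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap 95% CI for the AUC (one common choice of interval)."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    low, high = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return low, high


# Synthetic placeholder data standing in for the study cohort.
X, y = make_classification(n_samples=300, n_features=20, weights=[0.3, 0.7], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=42)

model = SVC(probability=True, random_state=42).fit(X_train, y_train)

for split, Xs, ys in [("train", X_train, y_train), ("test", X_test, y_test)]:
    score = model.predict_proba(Xs)[:, 1]
    pred = model.predict(Xs)
    metrics = {k: round(v, 3) for k, v in classification_metrics(ys, pred, score).items()}
    ci = tuple(round(c, 4) for c in bootstrap_auc_ci(ys, score))
    print(split, metrics, "AUC 95% CI:", ci)
```

The same loop can be repeated over the other classifiers listed in Table 2 (LR, KNN, random forest, extra trees, XGBoost, LightGBM, MLP) to reproduce a table of this form on one's own data.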