Table 3.
| Model | Sensitivity | 1-Specificity | PPV | NPV | Accuracy | AUROC |
|---|---|---|---|---|---|---|
| Training dataset (n = 165) | | | | | | |
| ANN | 0.93 | 0.84 | 0.87 | 0.90 | 0.87 | 0.79 |
| KNN | 0.81 | 0.64 | 0.86 | 0.64 | 0.78 | 0.72 |
| SVM | 0.91 | 0.57 | 0.85 | 0.57 | 0.64 | 0.73 |
| NBC | 0.91 | 0.49 | 0.75 | 0.87 | 0.75 | 0.50 |
| MLR | 0.90 | 0.47 | 0.83 | 0.39 | 0.80 | 0.79 |
| Testing dataset (n = 71) | | | | | | |
| ANN | 0.94 | 0.87 | 0.89 | 0.88 | 0.86 | 0.81 |
| KNN | 0.89 | 0.49 | 0.87 | 0.46 | 0.84 | 0.72 |
| SVM | 0.90 | 0.82 | 0.85 | 0.71 | 0.85 | 0.74 |
| NBC | 0.90 | 0.85 | 0.82 | 0.75 | 0.78 | 0.51 |
| MLR | 0.84 | 0.61 | 0.88 | 0.69 | 0.85 | 0.77 |
ANN artificial neural network, KNN K-nearest neighbor, SVM support vector machine, NBC Naive Bayes classifier, MLR multiple logistic regression, PPV positive predictive value, NPV negative predictive value, AUROC area under the receiver operating characteristic curve.
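
For readers who wish to reproduce summary statistics of this form from raw model outputs, the sketch below shows how sensitivity, specificity, PPV, NPV, accuracy, and AUROC are conventionally derived from a confusion matrix and predicted probabilities. It is a minimal, illustrative Python example using scikit-learn; the arrays `y_true` and `y_prob` and the 0.5 threshold are assumptions for demonstration only and do not correspond to the study's data.

```python
# Minimal sketch: computing the metrics reported in Table 3 from model outputs.
# y_true (binary labels) and y_prob (predicted probabilities) are hypothetical
# placeholders, not the study's dataset.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_prob = np.array([0.9, 0.3, 0.8, 0.6, 0.4, 0.7, 0.2, 0.5, 0.65, 0.85])
y_pred = (y_prob >= 0.5).astype(int)  # dichotomize at an assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)           # true positive rate
specificity = tn / (tn + fp)           # true negative rate
ppv = tp / (tp + fp)                   # positive predictive value
npv = tn / (tn + fn)                   # negative predictive value
accuracy = (tp + tn) / (tp + tn + fp + fn)
auroc = roc_auc_score(y_true, y_prob)  # threshold-independent discrimination

print(f"Sens {sensitivity:.2f}  Spec {specificity:.2f}  PPV {ppv:.2f}  "
      f"NPV {npv:.2f}  Acc {accuracy:.2f}  AUROC {auroc:.2f}")
```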