Table 2.
Comparison of models trained on the training dataset and evaluated on the testing dataset (n = 106).
| Model | True negative | False positive | False negative | True positive | Sensitivity | Specificity | Accuracy | Balanced accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A. Raw data | | | | | | | | |
| CNN | 75 | 29 | 0 | 2 | 1 | 0.72 | 0.73 | 0.86 |
| ANN | 66 | 38 | 0 | 2 | 1 | 0.63 | 0.64 | 0.82 |
| Forest (J48) | 82 | 22 | 1 | 1 | 0.5 | 0.78 | 0.78 | 0.64 |
| Random forest | 88 | 16 | 0 | 2 | 1 | 0.85 | 0.85 | 0.93 |
| Random tree | 78 | 26 | 1 | 1 | 0.5 | 0.75 | 0.75 | 0.63 |
| REPTree | 51 | 53 | 0 | 2 | 1 | 0.49 | 0.50 | 0.75 |
| BayesNet | 85 | 19 | 0 | 2 | 1 | 0.82 | 0.82 | 0.91 |
| Naïve Bayes | 85 | 19 | 0 | 2 | 1 | 0.82 | 0.82 | 0.91 |
| Logistic | 22 | 82 | 1 | 1 | 0.5 | 0.21 | 0.23 | 0.36 |
| SMO | 92 | 12 | 0 | 2 | 1 | 0.89 | 0.89 | 0.95 |
| Median (balanced accuracy) | | | | | | | | 0.84 |
| B. Normalized data | | | | | | | | |
| CNN | 48 | 56 | 0 | 2 | 1 | 0.46 | 0.47 | 0.73 |
| ANN | 62 | 42 | 0 | 2 | 1 | 0.60 | 0.60 | 0.80 |
| Forest (J48) | 81 | 23 | 1 | 1 | 0.5 | 0.78 | 0.77 | 0.64 |
| Random forest | 82 | 22 | 0 | 2 | 1 | 0.79 | 0.79 | 0.90 |
| Random tree | 88 | 16 | 0 | 2 | 1 | 0.85 | 0.85 | 0.93 |
| REPTree | 73 | 31 | 0 | 2 | 1 | 0.70 | 0.71 | 0.85 |
| BayesNet | 86 | 18 | 0 | 2 | 1 | 0.83 | 0.83 | 0.92 |
| Naïve Bayes | 86 | 18 | 0 | 2 | 1 | 0.83 | 0.83 | 0.92 |
| Logistic | 75 | 29 | 0 | 2 | 1 | 0.72 | 0.73 | 0.86 |
| SMO | 94 | 10 | 1 | 1 | 0.5 | 0.90 | 0.90 | 0.70 |
| Median (balanced accuracy) | | | | | | | | 0.86 |
| C. Models from Ko et al. | | | | | | | | |
| XGBoost | 80 | 24 | 0 | 2 | 1 | 0.77 | 0.77 | 0.88 |
| AdaBoost | 81 | 23 | 0 | 2 | 1 | 0.78 | 0.78 | 0.89 |
| Random forest | 87 | 17 | 0 | 2 | 1 | 0.84 | 0.84 | 0.92 |
| Deep neural network | 95 | 9 | 1 | 1 | 0.5 | 0.91 | 0.91 | 0.71 |
| DNN + XGBoost | 80 | 24 | 0 | 2 | 1 | 0.77 | 0.77 | 0.88 |
| DNN + AdaBoost | 96 | 8 | 1 | 1 | 0.5 | 0.92 | 0.92 | 0.71 |
| Yan et al. model | 36 | 68 | 0 | 2 | 1 | 0.35 | 0.36 | 0.67 |
| EDRnet | 95 | 9 | 0 | 2 | 1 | 0.91 | 0.92 | 0.96 |
| Median (balanced accuracy) | | | | | | | | 0.88 |
∗The 95% CI of AUC (= 0.9) for the testing dataset.
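For readers checking or reproducing the table, the four right-hand columns follow directly from the confusion-matrix counts in each row, and each panel's median is the median of its balanced-accuracy column. The sketch below uses the standard definitions (sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), balanced accuracy = the mean of the two); the function name `confusion_metrics` is ours for illustration and does not come from the original analysis.

```python
from statistics import median

def confusion_metrics(tn, fp, fn, tp):
    """Derive the table's four summary columns from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tn + tp) / (tn + fp + fn + tp)  # overall fraction correct
    balanced = (sensitivity + specificity) / 2  # mean of the two rates
    return sensitivity, specificity, accuracy, balanced

# Check one row: random forest on raw data (TN=88, FP=16, FN=0, TP=2).
print(confusion_metrics(88, 16, 0, 2))
# -> (1.0, 0.8462..., 0.8491..., 0.9231...), i.e. 1, 0.85, 0.85, 0.93 after rounding.

# Each panel's median is taken over its balanced-accuracy column, e.g. panel A:
panel_a = [0.86, 0.82, 0.64, 0.93, 0.63, 0.75, 0.91, 0.91, 0.36, 0.95]
print(round(median(panel_a), 2))  # -> 0.84
```

Note that with only two positive cases in the testing set, sensitivity can take only the values 0, 0.5, or 1, which is why balanced accuracy separates the models far more sharply than raw accuracy does.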