Table 1.
Comparison of model accuracy using raw and normalized training data (n = 361).
| Model | Sensitivity | Specificity | Accuracy | Balanced accuracy | AUC∗ |
| --- | --- | --- | --- | --- | --- |
| A. Raw data | | | | | |
| CNN | 0.90 | 0.86 | 0.88 | 0.88 | 0.85 |
| ANN | 0.95 | 0.88 | 0.91 | 0.92 | 0.89 |
| Forest (J48) | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| Random forest | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Random tree | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| REPT tree | 0.88 | 0.91 | 0.89 | 0.90 | 0.88 |
| BayesNet | 0.92 | 0.92 | 0.92 | 0.92 | 0.90 |
| Naïve Bayes | 0.78 | 0.95 | 0.87 | 0.87 | 0.86 |
| Logistic | 0.94 | 0.92 | 0.93 | 0.93 | 0.91 |
| SMO | 0.89 | 0.91 | 0.90 | 0.90 | 0.88 |
| Median | | | | 0.92 | |
| B. Normalized data | | | | | |
| CNN | 0.93 | 0.93 | 0.92 | 0.93 | 0.91 |
| ANN | 0.95 | 0.98 | 0.97 | 0.97 | 0.96 |
| Forest (J48) | 0.99 | 0.97 | 0.98 | 0.98 | 0.97 |
| Random forest | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Random tree | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| REPT tree | 0.92 | 0.93 | 0.93 | 0.93 | 0.91 |
| BayesNet | 0.92 | 0.92 | 0.92 | 0.92 | 0.90 |
| Naïve Bayes | 0.80 | 0.92 | 0.87 | 0.86 | 0.84 |
| Logistic | 0.93 | 0.92 | 0.93 | 0.93 | 0.91 |
| SMO | 0.87 | 0.93 | 0.91 | 0.90 | 0.88 |
| Median | | | | 0.93 | |
| C. Ko et al[1] | | | | | |
| Random forest | 0.89 | 0.89 | 0.89 | 0.89 | 0.87 |
| Deep neural network | 0.91 | 0.93 | 0.92 | 0.92 | 0.90 |
| EDRnet | 0.92 | 0.93 | 0.93 | 0.93 | 0.91 |
| Median | | | | 0.92 | |
∗The 95% CI of the AUC (= 0.9) is ±0.03 for the training dataset.
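For reference, the sketch below (not the authors' code) shows one way the tabulated metrics and the footnoted AUC confidence interval could be reproduced in Python with scikit-learn. The toy labels and scores, the roughly even class split assumed for the n = 361 training cases, and the use of the Hanley-McNeil approximation are all assumptions made for illustration; the original analysis does not state its method.

```python
# Minimal sketch (not the authors' code): how the tabulated metrics and the
# footnoted AUC confidence interval could be reproduced. The toy labels/scores
# and the ~50/50 class split assumed for the n = 361 training cases are hypothetical.
import math
import numpy as np
from sklearn.metrics import confusion_matrix, balanced_accuracy_score, roc_auc_score

# Hypothetical ground-truth labels, hard predictions, and positive-class scores.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred  = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6, 0.85, 0.15])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                         # true-positive rate
specificity = tn / (tn + fp)                         # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
balanced = balanced_accuracy_score(y_true, y_pred)   # (sensitivity + specificity) / 2
auc = roc_auc_score(y_true, y_score)
print(sensitivity, specificity, accuracy, balanced, auc)

def auc_ci_half_width(auc, n_pos, n_neg, z=1.96):
    """95% CI half-width for an AUC via the Hanley-McNeil (1982) standard error."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    se = math.sqrt((auc * (1 - auc)
                    + (n_pos - 1) * (q1 - auc ** 2)
                    + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg))
    return z * se

# AUC = 0.9 with an assumed near-even split of the 361 training cases
# gives a half-width of ~0.03, consistent with the table footnote.
print(round(auc_ci_half_width(0.90, 180, 181), 3))
```

With an AUC of 0.90 and roughly 180 cases per class, the computed half-width is about 0.03, in line with the ±0.03 reported in the footnote.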