TABLE II. Performance of the Different Trained Models for the COVID-19 Diagnosis and Clinical Condition Classification Problems.
| Method | #Features | Accuracy (%) | F1 Score (%) | Sensitivity (%) | Precision (%) | Specificity (%) |
|---|---|---|---|---|---|---|
| M-I | | | | | | |
| MLP (ReLU) | 16 | 81.99 | 62.43 | 56.22 | 70.19 | 91.31 |
| SVM (Linear) | 15 | 79.81 | 65.03 | 70.76 | 60.41 | 83.21 |
| Decision Tree using Entropy | 9 | 74.75 | 59.10 | 68.65 | 52.72 | 77.22 |
| Random Forest using Entropy | 11 | 75.21 | 59.52 | 68.50 | 52.71 | 77.68 |
| Logistic Regression | 15 | 78.97 | 64.47 | 71.84 | 58.60 | 81.63 |
| SVM (Polynomial) | 14 | 77.05 | 62.29 | 71.22 | 55.43 | 79.20 |
| M-II | | | | | | |
| SVM (Linear) | 30 | 72.34 | 79.14 | 80.29 | 78.24 | 57.12 |
| Decision Tree using Entropy | 6 | 67.59 | 74.53 | 72.32 | 76.93 | 58.54 |
| Logistic Regression | 28 | 72.01 | 77.88 | 75.32 | 80.84 | 65.71 |
| MLP (ReLU) | 10 | 72.17 | 80.32 | 86.53 | 74.95 | 44.69 |
| AdaBoost | 7 | 71.36 | 77.36 | 74.56 | 80.38 | 65.21 |
| Bagging KNN | 23 | 73.32 | 80.46 | 83.77 | 77.45 | 53.25 |
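
For reference, the sketch below shows one way the metrics reported in Table II can be computed from a binary classifier's predictions using scikit-learn. This is not the authors' code; the `y_true` and `y_pred` arrays are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: computing accuracy, F1 score, sensitivity, precision, and
# specificity for a binary classification problem with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, recall_score,
                             precision_score, confusion_matrix)

# Hypothetical ground-truth labels and model predictions (placeholders only).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

accuracy = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)   # recall of the positive class
precision = precision_score(y_true, y_pred)

# Specificity (true-negative rate) is derived from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)

for name, value in [("Accuracy", accuracy), ("F1 Score", f1),
                    ("Sensitivity", sensitivity), ("Precision", precision),
                    ("Specificity", specificity)]:
    print(f"{name}: {100 * value:.2f}%")
```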