Table 11.
| Labels | Researcher | Model | Accuracy (%) |
|---|---|---|---|
| Binary labels | Wehbe et al. [22] | DeepCOVID-XR | 83 |
| | Sethy et al. [24] | ResNet50+SVM | 95.38 |
| | Loey et al. [26] | ResNet50 with augmentation | 82.91 |
| | Kawsher Mahbub et al. [27] | DNN | 99.87 |
| | Mukherjee et al. [28] | DNN | 96.28 |
| | Das et al. [33] | TIN | 98.77 |
| Multiple labels | Ozturk [6] | DarkCovidNet | 87.02 |
| | Apostolopoulos et al. [7] | VGG-19 | 93.48 |
| | Al-Falluji [8] | Modified ResNet18-based | 96.73 |
| | Wang et al. [23] | COVID-Net | 92.4 |
| | Dev et al. [25] | HCN-DML | 96.67 |
| | Das et al. [33] | TIN | 97.4 |
| | Kawsher Mahbub et al. [27] | DNN | 95.7 |
| | Proposed study | CovidViT | 98.0 |
Denotes an accuracy that we obtained ourselves by training the corresponding model on the same dataset used for CovidViT.