Table 3.
Performance metrics obtained for the evaluated models
| Models | Accuracy (%) | Recall (%) | Specificity (%) | Precision (%) | F1-score (%) |
|---|---|---|---|---|---|
| AlexNet | 96.15 | 95.24 | 91.04 | 96.54 | 91.57 |
| DenseNet201 | 91.92 | 94.28 | 95.56 | 99.21 | 87.44 |
| GoogleNet | 90.25 | 94.74 | 92.47 | 96.48 | 96.21 |
| InceptionV3 | 97.76 | 90.61 | 94.68 | 90.64 | 95.47 |
| ResNet18 | 88.64 | 90.49 | 96.11 | 97.27 | 95.43 |
| ResNet50 | 95.91 | 89.55 | 93.09 | 99.64 | 92.05 |
| ResNet101 | **98.50** | **100** | **97.20** | **100** | **98.40** |
| VGG16 | 97.87 | 90.41 | 97.14 | 96.16 | 97.21 |
| VGG19 | 90.22 | 89.99 | 90.37 | 97.94 | 92.40 |
| XceptionNet | 90.10 | 95.47 | 92.61 | 90.54 | 90.55 |
| InceptionResNetV2 | 98.17 | 92.58 | 96.94 | 95.11 | 93.49 |
The best results are in bold
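For reference, the metrics in Table 3 follow the standard confusion-matrix definitions. The sketch below is illustrative only (the function name, label encoding, and rounding are assumptions, not taken from the paper); it shows how the five reported metrics can be computed for a binary classification task.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Compute the five metrics of Table 3 for a binary task (1 = positive, 0 = negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)

    # Confusion-matrix counts.
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))

    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    recall      = tp / (tp + fn)          # sensitivity / true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    precision   = tp / (tp + fp)
    f1_score    = 2 * precision * recall / (precision + recall)

    # Report as percentages, matching the table.
    return {name: round(100 * value, 2) for name, value in
            [("accuracy", accuracy), ("recall", recall),
             ("specificity", specificity), ("precision", precision),
             ("f1_score", f1_score)]}
```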