
Table 11. Average performance comparison of all the models across all experiments

| Model name | Average F1 (all experiments) | Average AUC (all experiments) | Average accuracy (all experiments) |
|---|---|---|---|
| VGG-16 | 0.920733 | 0.888575 | 0.914426 |
| VGG-19 | 0.915139 | 0.903742 | 0.921005 |
| MobileNet | 0.911087 | 0.919878 | 0.925677 |
| InceptionResNetV2 | 0.904686 | 0.913151 | 0.920659 |
| InceptionV3 | 0.896025 | 0.885667 | 0.882325 |
| ResNet-101 | 0.899679 | 0.900224 | 0.899411 |
| ResNet50V2 | 0.880875 | 0.910234 | 0.919958 |
| ResNet-101V2 | 0.8971 | 0.9036 | 0.8989 |
| Xception | 0.917826 | 0.902268 | 0.91461 |
| SqueezeNet | 0.521689 | 0.530245 | 0.51925 |
| DarkNet-53 | 0.529491 | 0.54358 | 0.532157 |
| SqueezeNet + DarkNet-53 + MobileNetV2 + Xception + ShuffleNet | 0.897339 | 0.902605 | 0.894416 |
| EfficientNetB7 | 0.513148 | 0.532155 | 0.532913 |
| DCGAN | 0.949983 | 0.940607 | 0.957902 |
| LSTMCNN | 0.966804 | 0.964544 | 0.964641 |
| U-Net (Glaucoma) | 0.962745 | 0.959091 | 0.950293 |
| Proposed Model | 0.945833 | 0.960826 | 0.946373 |
| COVID research paper | 0.961163 | 0.957247 | 0.962947 |
| NASNetLarge | 0.959736 | 0.957844 | 0.954607 |
| DarkNet-53 + MobileNetV2 + ResNet-101 + NASNetLarge + Xception + GoogLeNet | 0.968263 | 0.95837 | 0.965129 |
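Each value in Table 11 is the mean of the corresponding metric over every experiment in which a model was evaluated. The exact evaluation code is not reproduced here; the sketch below is only a minimal illustration of how per-experiment F1, AUC, and accuracy can be computed with scikit-learn and then averaged. The helper name `average_metrics` and the toy experiment data are hypothetical.

```python
# Minimal sketch: average F1, AUC, and accuracy over several experiments.
# Assumes each experiment provides ground-truth labels, hard predictions,
# and predicted probabilities for the positive class.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score


def average_metrics(experiments):
    """Return (mean F1, mean AUC, mean accuracy) over a list of experiments.

    Each experiment is a (y_true, y_pred, y_prob) tuple, where y_pred holds
    hard class labels and y_prob holds probabilities for class 1.
    """
    f1s, aucs, accs = [], [], []
    for y_true, y_pred, y_prob in experiments:
        f1s.append(f1_score(y_true, y_pred))
        aucs.append(roc_auc_score(y_true, y_prob))
        accs.append(accuracy_score(y_true, y_pred))
    return np.mean(f1s), np.mean(aucs), np.mean(accs)


# Toy usage with two fabricated "experiments" (illustration only):
experiments = [
    (np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0]), np.array([0.2, 0.9, 0.4, 0.1])),
    (np.array([1, 0, 1, 1]), np.array([1, 0, 1, 0]), np.array([0.8, 0.3, 0.7, 0.45])),
]
avg_f1, avg_auc, avg_acc = average_metrics(experiments)
print(f"Average F1: {avg_f1:.4f}, AUC: {avg_auc:.4f}, accuracy: {avg_acc:.4f}")
```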