BMC Bioinformatics. 2023 Jul 1;24:273. doi: 10.1186/s12859-023-05398-7

Table 3. Comparative analysis of the deep ensemble designs against individual deep learning architectures, using a 70:20:10 (train:validation:test) data split.

| Model | Method | Validation loss | Validation accuracy (%) | Test accuracy (%) |
|---|---|---|---|---|
| Proposed model I (MobileNetV2 + InceptionV3) | FT on ADS-ALUF | 0.0275 | 99.41 | 99.91 |
| Proposed model II (GoogleNet + SqueezeNet) | FT on ADS-ALUF | 0.0953 | 97.14 | 99.79 |
| AlexNet | Trained on ODS-NPTW | 4.1254 | 13.53 | 11.98 |
| | PT on ODS-NPTW | 0.8999 | 75.47 | 76.43 |
| | FT on ODS-PTW | 0.2375 | 93.10 | 93.10 |
| | FT on ADS-ALUF | 0.1056 | 96.23 | 96.09 |
| SqueezeNet | Trained on ODS-NPTW | 3.1779 | 6.51 | 8.07 |
| | PT on ODS-NPTW | 0.3370 | 79.83 | 82.16 |
| | FT on ODS-PTW | 0.3102 | 87.51 | 87.89 |
| | FT on ADS-ALUF | 0.2614 | 94.60 | 93.62 |
| GoogleNet | Trained on ODS-NPTW | 4.1299 | 2.93 | 1.95 |
| | PT on ODS-NPTW | 0.3924 | 87.77 | 88.41 |
| | FT on ODS-PTW | 0.5174 | 90.83 | 90.63 |
| | FT on ADS-ALUF | 0.2897 | 95.64 | 93.75 |
| MobileNetV2 | Trained on ODS-NPTW | 3.1169 | 5.27 | 5.86 |
| | PT on ODS-NPTW | 2.1408 | 44.05 | 39.84 |
| | FT on ODS-PTW | 0.2569 | 93.10 | 92.45 |
| | FT on ADS-ALUF | 0.0212 | 97.85 | 97.79 |
| InceptionV3 | Trained on ODS-NPTW | 3.4025 | 1.69 | 0.91 |
| | PT on ODS-NPTW | 2.4302 | 56.93 | 57.03 |
| | FT on ODS-PTW | 0.4128 | 96.62 | 96.74 |
| | FT on ADS-ALUF | 0.0446 | 98.39 | 97.92 |

Here, ADS = augmented dataset, ODS = original dataset, PTW = pre-trained weights, NPTW = no pre-trained weights, ALUF = all layers un-frozen, PT = parameter-tuning, FT = fine-tuning.
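
To make the best-performing configuration concrete, below is a minimal sketch of the idea behind Proposed model I: two ImageNet-pretrained backbones (MobileNetV2 and InceptionV3), each fine-tuned with all layers un-frozen (the FT on ADS-ALUF setting), combined by averaging their softmax outputs. This assumes TensorFlow/Keras for illustration; the table does not specify the framework, fusion rule, class count, or hyperparameters, so `NUM_CLASSES`, the learning rate, and the averaging fusion below are labeled assumptions rather than the authors' implementation.

```python
import tensorflow as tf

NUM_CLASSES = 10                 # hypothetical; set to the task's class count
IMG_SHAPE = (224, 224, 3)        # assumed input size

def build_branch(app, preprocess):
    """One ensemble branch: ImageNet weights, fine-tuned with all
    layers un-frozen (the 'FT on ADS-ALUF' setting in the table)."""
    inputs = tf.keras.Input(shape=IMG_SHAPE)
    x = preprocess(inputs)                       # backbone-specific scaling
    base = app(include_top=False, weights="imagenet",
               input_shape=IMG_SHAPE, pooling="avg")
    base.trainable = True                        # ALUF: no frozen layers
    x = base(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # assumed LR
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

mobilenet = build_branch(tf.keras.applications.MobileNetV2,
                         tf.keras.applications.mobilenet_v2.preprocess_input)
inception = build_branch(tf.keras.applications.InceptionV3,
                         tf.keras.applications.inception_v3.preprocess_input)

# Each branch would be fitted on the augmented dataset (ADS) under the
# 70:20:10 train/validation/test split; the ensemble prediction then
# averages the two softmax vectors (one plausible fusion rule).
def ensemble_predict(x):
    return (mobilenet.predict(x) + inception.predict(x)) / 2.0
```

Consistent with the table, the fully un-frozen fine-tuning rows (FT on ADS-ALUF) give the lowest validation loss for every backbone, which is why the sketch unfreezes the entire network rather than only the classification head.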