Sensors. 2021 Jun 12;21(12):4054. doi: 10.3390/s21124054

Table 8.

The test accuracies of DenseNet-121 and DenseNet-169 trained with each optimization method on the CIFAR-10 image classification task. "W (Win)", "T (Tie)", and "L (Loss)" denote the number of compared methods against which HyAdamC-Basic (or HyAdamC-Scale) achieved a better, equal, or worse test accuracy, respectively. The first- and second-best results are highlighted in red and orange, respectively.

Methods                 DenseNet-121            DenseNet-169
                        Batch 64    Batch 128   Batch 64    Batch 128
SGD                     0.865       0.835       0.866       0.830
RMSProp                 0.906       0.923       0.925       0.921
Adam                    0.933       0.934       0.933       0.937
AdamW                   0.928       0.929       0.932       0.928
Adagrad                 0.925       0.921       0.920       0.919
AdaDelta                0.937       0.931       0.932       0.938
Rprop                   0.114       0.416       0.104       0.367
Yogi                    0.933       0.928       0.927       0.916
Fromage                 0.907       0.916       0.907       0.907
TAdam                   0.933       0.931       0.937       0.937
diffGrad                0.936       0.935       0.937       0.933
HyAdamC-Basic           0.939       0.938       0.944       0.942
HyAdamC-Scale           0.945       0.943       0.943       0.944
HyAdamC-Basic: W/T/L    11/0/0      11/0/0      11/0/0      11/0/0
HyAdamC-Scale: W/T/L    11/0/0      11/0/0      11/0/0      11/0/0
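For context, the sketch below outlines the kind of experiment summarized in the table: training torchvision's DenseNet-121 on CIFAR-10 with one of the compared optimizers and reporting the final test accuracy. This is a minimal illustration, not the authors' exact protocol; the learning rate, preprocessing, and epoch count are assumptions, and HyAdamC itself is the paper's proposed optimizer rather than a torch.optim method, so a standard baseline (Adam) is shown in its place.

    # Illustrative sketch: one optimizer, one batch size, one architecture.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    batch_size = 64  # the table compares batch sizes 64 and 128

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)),
    ])
    train_set = datasets.CIFAR10("data", train=True, download=True, transform=transform)
    test_set = datasets.CIFAR10("data", train=False, download=True, transform=transform)
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=batch_size)

    model = models.densenet121(num_classes=10).to(device)
    # Any baseline from the table that exists in torch.optim (SGD, RMSprop, Adam,
    # AdamW, Adagrad, Adadelta, Rprop) can be swapped in here.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(50):  # epoch count is an assumption
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
    print(f"test accuracy: {correct / len(test_set):.3f}")

Repeating this run for each optimizer, architecture (DenseNet-121/169), and batch size (64/128) yields one column of the table; the W/T/L rows are then a per-column count of how many of the 11 baselines each HyAdamC variant outperforms, ties, or loses to.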