Table 3. Training and validation performance (accuracy, loss, precision, recall, and F1-score) of the cascaded AlexNet with GoogLeNet, improved GoogLeNet, and Xception models under different optimizers.
Optimizers | Training Accuracy | Validation Accuracy | Training Loss | Validation Loss | Precision | Recall | F1-score |
---|---|---|---|---|---|---|---|
Cascaded AlexNet with GoogLeNet | | | | | | | |
SGD | 0.9931 | 0.9818 | 0.0229 | 0.0592 | 0.9749 | 0.9751 | 0.9750 |
RMSProp | 0.9894 | 0.9757 | 0.0482 | 0.1479 | 0.9746 | 0.9613 | 0.9679 |
Adagrad | 0.9956 | 0.9824 | 0.0153 | 0.0547 | 0.9815 | 0.9782 | 0.9798 |
Adamax | 0.9990 | 0.9859 | 0.0029 | 0.0574 | 0.9828 | 0.9795 | 0.9811 |
Adam | 0.9989 | 0.9857 | 0.0039 | 0.0750 | 0.9836 | 0.9836 | 0.9836 |
Adadelta | 0.9993 | 0.9873 | 0.0024 | 0.0696 | 0.9846 | 0.9856 | 0.9851 |
Improved GoogLeNet | | | | | | | |
SGD | 0.9829 | 0.9521 | 0.0522 | 0.1038 | 0.9528 | 0.9539 | 0.9533 |
RMSProp | 0.9723 | 0.9685 | 0.1780 | 0.2272 | 0.9692 | 0.9666 | 0.9679 |
Adagrad | 0.9889 | 0.9718 | 0.0350 | 0.0930 | 0.9651 | 0.9618 | 0.9634 |
Adamax | 0.9998 | 0.9847 | 8.782 × 10⁻⁴ | 0.0875 | 0.9792 | 0.9826 | 0.9809 |
Adam | 0.9992 | 0.9904 | 0.0026 | 0.0434 | 0.9859 | 0.9872 | 0.9864 |
Adadelta | 0.9991 | 0.9905 | 0.0022 | 0.0567 | 0.9828 | 0.9879 | 0.9861 |
Xception | | | | | | | |
SGD | 0.9990 | 0.9798 | 0.0140 | 0.0621 | 0.9764 | 0.9767 | 0.9765 |
RMSProp | 0.9998 | 0.9924 | 6.922 × 10⁻⁴ | 0.0433 | 0.9877 | 0.9920 | 0.9900 |
Adagrad | 0.9987 | 0.9621 | 0.0164 | 0.1460 | 0.9682 | 0.9505 | 0.9593 |
Adamax | 1.0000 | 0.9889 | 0.0012 | 0.0415 | 0.9902 | 0.9874 | 0.9888 |
Adam | 1.0000 | 0.9981 | 6.890 × 10⁻⁴ | 0.0178 | 0.9981 | 0.9975 | 0.9978 |
Adadelta | 1.0000 | 0.9906 | 8.407 × 10⁻⁴ | 0.0364 | 0.9926 | 0.9887 | 0.9906 |
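
The precision, recall, and F1-score columns in Table 3 follow their standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN), restated below for reference; for a multi-class task these are typically averaged across classes (e.g., macro-averaged), although the table does not state the averaging scheme:

$$
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
$$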