Brain Sci. 2020 Jul 3;10(7):427. doi: 10.3390/brainsci10070427

Table 5.

Training accuracy of ten optimizers on the proposed patch-wise CNN architecture.

Epooch →      50     100    150    200    250    300
Optimizer ↓
Adam          0.97   0.98   0.98   0.98   0.99   0.99
Adagrad       0.95   0.96   0.96   0.96   0.96   0.96
AdaDelta      0.95   0.96   0.96   0.96   0.96   0.96
SGD           0.95   0.967  0.968  0.97   0.97   0.97
NAG           0.94   0.94   0.94   0.94   0.95   0.95
RMSProp       0.95   0.95   0.95   0.95   0.95   0.95
Momentum      0.96   0.96   0.97   0.97   0.974  0.97
Adamax        0.95   0.95   0.95   0.96   0.96   0.96
CLR           0.96   0.96   0.96   0.96   0.96   0.96
Nadam         0.96   0.96   0.96   0.96   0.97   0.97
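The kind of sweep behind this table — training the same model under several optimizers and recording accuracy at fixed epoch checkpoints — can be sketched in miniature. The snippet below is a hedged illustration, not the paper's actual patch-wise CNN setup: it implements SGD, Momentum, and Adam updates from their standard formulas and compares them on a toy quadratic loss, logging the loss at the same checkpoints (50, 100, ..., 300) used in the table. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sgd_update(w, g, state, lr=0.1):
    # Plain gradient descent: w <- w - lr * g
    return w - lr * g, state

def momentum_update(w, g, state, lr=0.1, beta=0.9):
    # Classical momentum: accumulate a velocity term across steps
    v = beta * state.get("v", 0.0) + g
    state["v"] = v
    return w - lr * v, state

def adam_update(w, g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: bias-corrected first/second moment estimates of the gradient
    t = state.get("t", 0) + 1
    m = b1 * state.get("m", 0.0) + (1 - b1) * g
    v = b2 * state.get("v2", 0.0) + (1 - b2) * g * g
    state.update(t=t, m=m, v2=v)
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), state

def run(update, steps=300, checkpoints=(50, 100, 150, 200, 250, 300)):
    # Toy problem standing in for training: minimize f(w) = (w - 3)^2
    # from w = 0; the gradient is 2 * (w - 3). At each checkpoint we
    # record the loss, mirroring the per-epoch accuracy columns above.
    w, state, trace = 0.0, {}, {}
    for step in range(1, steps + 1):
        g = 2.0 * (w - 3.0)
        w, state = update(w, g, state)
        if step in checkpoints:
            trace[step] = (w - 3.0) ** 2
    return trace

if __name__ == "__main__":
    for name, upd in [("SGD", sgd_update),
                      ("Momentum", momentum_update),
                      ("Adam", adam_update)]:
        print(name, {k: round(v, 8) for k, v in run(upd).items()})
```

On this convex toy problem all three optimizers converge, so the printed losses shrink toward zero; on the non-convex CNN objective of the paper, the checkpoints instead separate the optimizers, as the table shows for Adam versus NAG.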