Sensors. 2017 Jun 12;17(6):1341. doi: 10.3390/s17061341

Figure 6.

Training losses of competing models with the Adagrad optimizer. The learning rate is decreased by a factor of ten every 500 epochs. Our proposed “Model 2” with “Batch Normalization” achieves the fastest convergence and the lowest training loss.
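As a rough illustration of the training setup the caption describes (not the authors' actual code), the sketch below shows a PyTorch-style loop with an Adagrad optimizer, a step decay that divides the learning rate by ten every 500 epochs, and a Batch Normalization layer. The network architecture, input dimensions, batch size, and initial learning rate are all placeholder assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical small network with Batch Normalization, standing in for "Model 2".
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.BatchNorm1d(128),   # Batch Normalization, as referenced in the caption
    nn.ReLU(),
    nn.Linear(128, 10),
)

optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)  # assumed initial rate
# Decrease the learning rate by a factor of ten every 500 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(2000):
    inputs = torch.randn(32, 64)            # placeholder batch of features
    targets = torch.randint(0, 10, (32,))   # placeholder class labels
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the step-decay schedule once per epoch
```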