
Table 1.

Hyperparameters of the deep learning optimizers.

| Optimizer | Specifications |
|---|---|
| SGD | learning rate = 0.001, weight decay = 0.0005, momentum = 0.9, nesterov = False |
| Adagrad | learning rate = 0.001, epsilon = 1 × 10⁻⁷ |
| RMSProp | learning rate = 0.001, rho = 0.9, epsilon = 1 × 10⁻⁷ |
| Adadelta | learning rate = 1.0, rho = 0.95, epsilon = 1 × 10⁻⁶ |
| Adam | learning rate = 0.001, beta1 = 0.9, beta2 = 0.999, epsilon = 1 × 10⁻⁸, amsgrad = False |
| Adamax | learning rate = 0.002, beta1 = 0.9, beta2 = 0.999, epsilon = 1 × 10⁻⁸ |
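
For reference, the settings in Table 1 correspond directly to optimizer constructor arguments in a framework such as TensorFlow/Keras. The sketch below is an illustration only: the framework choice, the `tf.keras.optimizers` class names, and the placement of weight decay as the `weight_decay` argument of SGD are assumptions for this example, not details taken from the paper.

```python
# Illustrative sketch only: assumes TensorFlow/Keras (>= 2.11 for the
# weight_decay argument of SGD); the paper does not state the framework used.
import tensorflow as tf

# One optimizer instance per row of Table 1, with the listed hyperparameters.
optimizers = {
    "SGD": tf.keras.optimizers.SGD(
        learning_rate=0.001, weight_decay=0.0005, momentum=0.9, nesterov=False
    ),
    "Adagrad": tf.keras.optimizers.Adagrad(learning_rate=0.001, epsilon=1e-7),
    "RMSProp": tf.keras.optimizers.RMSprop(
        learning_rate=0.001, rho=0.9, epsilon=1e-7
    ),
    "Adadelta": tf.keras.optimizers.Adadelta(
        learning_rate=1.0, rho=0.95, epsilon=1e-6
    ),
    "Adam": tf.keras.optimizers.Adam(
        learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8, amsgrad=False
    ),
    "Adamax": tf.keras.optimizers.Adamax(
        learning_rate=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-8
    ),
}

# Each instance would then be supplied to model.compile(optimizer=...) before
# training the network whose results are being compared.
```

In such a setup, only the optimizer passed to `model.compile` changes between runs, so any difference in training behaviour can be attributed to the optimizer and its hyperparameters rather than to the model architecture or data pipeline.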