2019 Apr 8;32(6):899–918. doi: 10.1007/s10278-019-00196-1

Table 5. Hyper-parameters

| Datasets | Batch size | Epochs (pre-training) | Epochs (fine-tuning) | Regularization | Optimization method | Learning rate | Learning rate decay | Beta-1 | Beta-2 | Epsilon |
|---|---|---|---|---|---|---|---|---|---|---|
| ABIDE I | 30 | 22 | 50 | 0.01 | Adam | 0.01 | 0 | 0.9 | 0.999 | 1e-08 |
| | | | | | Adamax | 0.002 | | | | |
| ABIDE II | 20 | | | | Adam | 0.01 | | | | |
| | | | | | Adamax | 0.002 | | | | |
| ABIDE I and ABIDE II | 20 | | | | Adam | 0.01 | | | | |
| | | | | | Adamax | 0.002 | | | | |

Blank cells indicate values shared with the row above, as in the original merged-cell layout.
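To make the listed values concrete, the sketch below shows how the Adam hyper-parameters in the table (learning rate 0.01, beta-1 0.9, beta-2 0.999, epsilon 1e-08, zero decay) enter a single Adam update step. This is a minimal illustration in plain Python, not the authors' implementation; the function name and scalar-parameter setup are assumptions for the example.

```python
def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter theta at step t (t >= 1).

    Defaults match the Table 5 settings: lr 0.01, beta-1 0.9,
    beta-2 0.999, epsilon 1e-08, no learning-rate decay.
    """
    m = beta1 * m + (1 - beta1) * grad        # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Hypothetical usage: first update of a parameter at 1.0 with gradient 0.5.
theta, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
```

For fine-tuning with Adamax, the table swaps in a smaller learning rate (0.002) while keeping the same beta and epsilon values; in Adamax the `v_hat` term is replaced by an infinity-norm running maximum of the gradients.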