Table 4.
Hyperparameters to be optimized during model training.
Index | Hyperparameter | Options |
---|---|---|
1 | Neurons for top layers | 128, 256, 512, 1024 |
2 | Number of top layers | 2 to 8 (integers) |
3 | Neurons for the layer before the output | 32, 64 |
4 | Dropout rate | 0 to 0.5 (continuous) |
5 | Batch size | 8, 16, 32 |
6 | Epochs | 50, 100, 150 |
7 | Activation function in top layers | softplus, relu, tanh, sigmoid, linear, elu, softmax |
8 | Activation function for the output layer | sigmoid, softmax |
9 | Kernel initializer | uniform, normal |
10 | Optimizer | SGD, RMSprop, Adadelta, Adam, Adamax, Nadam |
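The search space in Table 4 can be expressed in code and sampled for a random search. The sketch below is illustrative only (the dictionary keys and the `sample_config` helper are our naming, not from the original work); discrete options are drawn uniformly at random, and the continuous dropout range is sampled with `random.uniform`.

```python
import random

# Search space from Table 4; tuples mark continuous (low, high) ranges.
SEARCH_SPACE = {
    "top_layer_neurons": [128, 256, 512, 1024],
    "num_top_layers": list(range(2, 9)),        # 2 to 8 inclusive
    "pre_output_neurons": [32, 64],
    "dropout_rate": (0.0, 0.5),                 # continuous range
    "batch_size": [8, 16, 32],
    "epochs": [50, 100, 150],
    "top_activation": ["softplus", "relu", "tanh", "sigmoid",
                       "linear", "elu", "softmax"],
    "output_activation": ["sigmoid", "softmax"],
    "kernel_initializer": ["uniform", "normal"],
    "optimizer": ["SGD", "RMSprop", "Adadelta", "Adam", "Adamax", "Nadam"],
}

def sample_config(space, rng=random):
    """Draw one random hyperparameter configuration from the search space."""
    config = {}
    for name, options in space.items():
        if isinstance(options, tuple):          # continuous range
            config[name] = rng.uniform(*options)
        else:                                   # discrete choice list
            config[name] = rng.choice(options)
    return config

config = sample_config(SEARCH_SPACE)
print(config)
```

Each sampled configuration would then be used to build and train one candidate model; dedicated tools such as KerasTuner or Optuna offer more sophisticated search strategies over the same kind of space.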