Table 2. Optimal hyperparameter values.

| Hyper-parameter | Optimal value |
|---|---|
| Activation function | ReLU, sigmoid |
| Learning rate | 0.01 |
| Number of hidden-layer neurons | 128, 64, 32 |
| Optimizer | Adam |
| L1 regularization | 0.001 |
| Dense layers | 3 |
| Dropout rate | 0.5 |
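For concreteness, the sketch below assembles these settings into a Keras model. It rests on several assumptions not stated in the table: a binary-classification head (hence the sigmoid output after the ReLU hidden layers), the L1 penalty of 0.001 applied to each dense layer's kernel, and the 0.5 dropout placed after each hidden layer. The names `build_model` and `input_dim` are illustrative, not from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers


def build_model(input_dim: int) -> tf.keras.Model:
    """Dense network using the Table 2 hyperparameters (sketch)."""
    l1 = regularizers.l1(0.001)  # L1 regularization strength from Table 2
    model = tf.keras.Sequential([
        layers.Input(shape=(input_dim,)),
        # Three dense hidden layers with 128, 64, and 32 neurons,
        # ReLU activation, and L1-regularized kernels
        layers.Dense(128, activation="relu", kernel_regularizer=l1),
        layers.Dropout(0.5),  # dropout placement is an assumption
        layers.Dense(64, activation="relu", kernel_regularizer=l1),
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu", kernel_regularizer=l1),
        layers.Dropout(0.5),
        # Sigmoid output, assuming a binary-classification task
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With this reading, "Dense layers | 3" counts only the hidden layers; the sigmoid unit is the output layer rather than a fourth hidden layer.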