Bioinformatics. 2024 May 6;40(5):btae305. doi: 10.1093/bioinformatics/btae305

Table 2.

Optimal configuration values for the proposed DNN model.

Hyper-parameter                     Optimal value
Activation functions                ReLU, sigmoid
Learning rate                       0.01
Number of hidden-layer neurons      128, 64, 32
Optimizer                           Adam
L1 regularization                   0.001
Dense layers                        3
Dropout rate                        0.5
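A minimal NumPy sketch of a forward pass using the configuration in Table 2: three dense layers of 128, 64, and 32 units with ReLU activations, a sigmoid output, and an L1 penalty term with weight 0.001. The input dimension (20), weight initialization, and single-unit output head are illustrative assumptions not stated in the table; the Adam optimizer and dropout would apply only during training and are recorded here as configuration values.

```python
import numpy as np

# Hyper-parameter values taken from Table 2; n_features and the
# 1-unit output head are assumptions for this sketch.
CONFIG = {
    "hidden_units": [128, 64, 32],   # neurons per hidden (dense) layer
    "learning_rate": 0.01,           # Adam learning rate
    "optimizer": "adam",
    "l1_lambda": 0.001,              # L1 regularization strength
    "dropout_rate": 0.5,             # applied between layers at train time
}

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_params(n_features, hidden_units, seed=0):
    """He-style random weights for each dense layer plus a sigmoid head."""
    rng = np.random.default_rng(seed)
    sizes = [n_features] + hidden_units + [1]
    return [(rng.standard_normal((i, o)) * np.sqrt(2.0 / i), np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    """ReLU through the hidden layers, sigmoid on the output layer."""
    for W, b in params[:-1]:
        x = relu(x @ W + b)
    W, b = params[-1]
    return sigmoid(x @ W + b)

def l1_penalty(params, lam=CONFIG["l1_lambda"]):
    """L1 term added to the training loss (sum of absolute weights)."""
    return lam * sum(np.abs(W).sum() for W, _ in params)

params = init_params(n_features=20, hidden_units=CONFIG["hidden_units"])
x = np.random.default_rng(1).standard_normal((4, 20))
probs = forward(x, params)   # shape (4, 1), values in (0, 1)
```

Dropout and the Adam update are omitted from the forward pass itself, since both act only during training; in a framework implementation they would correspond to dropout layers with rate 0.5 and an Adam optimizer with learning rate 0.01.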