iScience. 2022 Jun 20;25(8):104644. doi: 10.1016/j.isci.2022.104644

Table 1.

All hyperparameters and the candidate values over which they are optimized for the different machine learning models

Hyperparameter | MLP | RNN | CNN
batch size | 16, 32, 64 | 16, 32, 64 | 16, 32, 64
dropout | 0.2, 0.4 | 0.2, 0.4 | 0.2, 0.4
kernel regularizer | 10^-4, 10^-3, 10^-2 | 10^-4, 10^-3, 10^-2 | 10^-4, 10^-3, 10^-2
activation (dense layers) | ELU, ReLU | ELU, ReLU | ELU, ReLU
number of layers | 2, 4, 8 | 2, 4, 8 | 2, 4, 8
learning rate | 10^-4, 10^-3, 10^-2 | 10^-4, 10^-3, 10^-2 | 10^-4, 10^-3, 10^-2
no. of units/filters | 256, 512, 1024 | 32, 64 | 64, 128, 256
cell type | – | GRU, LSTM | –
recurrent dropout | – | 0, 0.2, 0.4 | –
bidirectional | – | true, false | –
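
The table defines a discrete search space per model type, with the RNN adding three architecture-specific hyperparameters. The paper does not state which tuning framework was used, so the sketch below only illustrates, under that assumption, how these value lists could be encoded for a grid or random search; all variable and key names are illustrative and not taken from the source.

```python
import itertools
import random

# Search-space values copied from Table 1; shared across models unless noted.
COMMON_SPACE = {
    "batch_size": [16, 32, 64],
    "dropout": [0.2, 0.4],
    "kernel_regularizer": [1e-4, 1e-3, 1e-2],
    "dense_activation": ["elu", "relu"],
    "num_layers": [2, 4, 8],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

# Model-specific additions (unit/filter counts and the RNN-only options).
MODEL_SPACES = {
    "MLP": {**COMMON_SPACE, "units": [256, 512, 1024]},
    "CNN": {**COMMON_SPACE, "filters": [64, 128, 256]},
    "RNN": {
        **COMMON_SPACE,
        "units": [32, 64],
        "cell_type": ["GRU", "LSTM"],
        "recurrent_dropout": [0, 0.2, 0.4],
        "bidirectional": [True, False],
    },
}

def grid(space):
    """Yield every hyperparameter combination in the given search space."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def sample(space, rng=random):
    """Draw one random configuration, e.g. for random search."""
    return {k: rng.choice(v) for k, v in space.items()}

if __name__ == "__main__":
    print(sample(MODEL_SPACES["RNN"]))                 # one random RNN config
    print(sum(1 for _ in grid(MODEL_SPACES["MLP"])))   # size of the full MLP grid
```

Each generated dictionary would then be passed to the model-building routine (e.g. a Keras model constructor) and evaluated on the validation set; how the source study selected among these combinations is described in its methods section.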