Table 1. Hyper-parameter values explored for the MLP, RNN, and CNN models.
hyper-parameter | MLP | RNN | CNN |
---|---|---|---|
batch size | 16, 32, 64 | 16, 32, 64 | 16, 32, 64 |
dropout | 0.2, 0.4 | 0.2, 0.4 | 0.2, 0.4 |
kernel regularizer | 10⁻⁴, 10⁻³, 10⁻² | 10⁻⁴, 10⁻³, 10⁻² | 10⁻⁴, 10⁻³, 10⁻² |
dense activation | ELU, ReLU | ELU, ReLU | ELU, ReLU |
number of layers | 2, 4, 8 | 2, 4, 8 | 2, 4, 8 |
learning rate | 10⁻⁴, 10⁻³, 10⁻² | 10⁻⁴, 10⁻³, 10⁻² | 10⁻⁴, 10⁻³, 10⁻² |
no. of units/filters | 256, 512, 1024 | 32, 64 | 64, 128, 256 |
cell type | – | GRU, LSTM | – |
recurrent dropout | – | 0, 0.2, 0.4 | – |
bidirectional | – | true, false | – |
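Read as a grid, Table 1 implies a full Cartesian product of the listed values for each model type. Below is a minimal sketch of how that space could be enumerated in plain Python; the dictionary keys and the `grid` helper are illustrative assumptions, not identifiers from the authors' code.

```python
from itertools import product

# Sketch of the Table 1 search space as plain dicts. Parameter names
# (batch_size, num_layers, ...) are assumed, not taken from the paper.
SHARED = {
    "batch_size": [16, 32, 64],
    "dropout": [0.2, 0.4],
    "kernel_regularizer": [1e-4, 1e-3, 1e-2],
    "dense_activation": ["elu", "relu"],
    "num_layers": [2, 4, 8],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

MODEL_SPECIFIC = {
    "MLP": {"units": [256, 512, 1024]},
    "RNN": {
        "units": [32, 64],
        "cell_type": ["GRU", "LSTM"],
        "recurrent_dropout": [0, 0.2, 0.4],
        "bidirectional": [True, False],
    },
    "CNN": {"filters": [64, 128, 256]},
}


def grid(model_name):
    """Yield every hyper-parameter combination for one model type."""
    space = {**SHARED, **MODEL_SPECIFIC[model_name]}
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))


# The shared grid alone has 3*2*3*2*3*3 = 324 combinations; the RNN's
# extra axes multiply this by 2*2*3*2 = 24, giving 7776 configurations.
print(sum(1 for _ in grid("RNN")))  # 7776
```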