Table 2.

| Deep Learning Architecture | Hyperparameters |
|---|---|
| Bi-LSTM | Number of layers: 2; Layer 1 units: 100; Layer 2 units: 50; Activation function: Leaky ReLU |
| DNN | Number of layers: 6; Layer 1 units: 100; Layer 2 units: 500; Layer 3 units: 100; Layer 4 units: 250; Layer 5 units: 12; Layer 6 units: 6; Activation function: ReLU |
| CNN1D | Number of layers: 2; Layer 1 units: 64; Layer 2 units: 64; Activation function: ReLU; Filter size: 3 × Features |
| Bi-GRU | Number of layers: 2; Layer 1 units: 100; Layer 2 units: 50; Activation function: Leaky ReLU |
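The configurations in Table 2 can be encoded as a machine-readable structure, which is useful when instantiating the models programmatically. The sketch below is illustrative only; the dictionary layout and the helper name `total_units` are assumptions, not part of the original work.

```python
# Hypothetical encoding of the Table 2 hyperparameters as a plain Python dict.
# Keys and structure are illustrative assumptions, not taken from the paper.
ARCHITECTURES = {
    "Bi-LSTM": {"layer_units": [100, 50], "activation": "Leaky ReLU"},
    "DNN": {"layer_units": [100, 500, 100, 250, 12, 6], "activation": "ReLU"},
    "CNN1D": {"layer_units": [64, 64], "activation": "ReLU",
              "filter_size": "3 x Features"},
    "Bi-GRU": {"layer_units": [100, 50], "activation": "Leaky ReLU"},
}

def total_units(name: str) -> int:
    """Sum the units across all layers of one architecture."""
    return sum(ARCHITECTURES[name]["layer_units"])
```

For example, `total_units("DNN")` sums the six DNN layer widths from the table, and `len(ARCHITECTURES["Bi-LSTM"]["layer_units"])` recovers the stated number of layers.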