Sensors. 2020 Jan 28;20(3):723. doi: 10.3390/s20030723

Table 2.

Hyperparameters of all models used to test the new loss function presented in Section 3.1.

| Deep Learning Architecture | Hyperparameters |
| --- | --- |
| Bi-LSTM | Number of layers: 2; Layer 1 units: 100; Layer 2 units: 50; Activation function: Leaky ReLU |
| DNN | Number of layers: 6; Layer 1 units: 100; Layer 2 units: 500; Layer 3 units: 100; Layer 4 units: 250; Layer 5 units: 12; Layer 6 units: 6; Activation function: ReLU |
| CNN1D | Number of layers: 2; Layer 1 units: 64; Layer 2 units: 64; Activation function: ReLU; Filter size: 3 × Features |
| Bi-GRU | Number of layers: 2; Layer 1 units: 100; Layer 2 units: 50; Activation function: Leaky ReLU |
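For reference, the configurations in Table 2 can be captured as plain Python dictionaries for programmatic model construction. The dictionary layout and helper function below are illustrative assumptions (the paper does not specify an implementation format); the layer counts, unit sizes, and activation functions are taken directly from the table.

```python
# Hyperparameters from Table 2, one entry per architecture.
# Key names ("units", "activation", "filter_size") are an illustrative
# convention, not taken from the paper; the values match the table.
HYPERPARAMETERS = {
    "Bi-LSTM": {"units": [100, 50], "activation": "leaky_relu"},
    "DNN":     {"units": [100, 500, 100, 250, 12, 6], "activation": "relu"},
    "CNN1D":   {"units": [64, 64], "activation": "relu",
                "filter_size": "3 x features"},
    "Bi-GRU":  {"units": [100, 50], "activation": "leaky_relu"},
}

def num_layers(name: str) -> int:
    """The number of layers is implied by the length of the unit list."""
    return len(HYPERPARAMETERS[name]["units"])
```

For example, `num_layers("DNN")` returns 6, matching the six-layer DNN in the table.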