Sensors. 2022 Dec 12;22(24):9735. doi: 10.3390/s22249735

Table 2. The optimal parameter settings of the baseline techniques.

Model    Parameter            Value
-------  -------------------  ---------
CNN      Convolution Layers   2
         Max-Pooling Layers   1
         Filter Size          (64, 32)
         Kernel Size          3
         Pool Size            3
         Activation           ReLU
         Optimizer            Adam
         Learning Rate        0.0001
MLP      Layers               2
         Neurons              (64, 64)
         Activation           ReLU
         Learning Rate        0.001
         Optimizer            Adam
         Batch Size           256
LSTM     Layers               2
         Neurons              (64, 64)
         Activation           ReLU
         Learning Rate        0.001
         Optimizer            Adam
         Batch Size           128
GRU      Layers               2
         Neurons              (64, 64)
         Activation           ReLU
         Learning Rate        0.001
         Optimizer            Adam
         Batch Size           128
BiLSTM   Layers               2
         Neurons              (64, 64)
         Activation           ReLU
         Learning Rate        0.001
         Optimizer            Adam
         Batch Size           128
BiGRU    Layers               2
         Neurons              (64, 64)
         Activation           ReLU
         Learning Rate        0.001
         Optimizer            Adam
         Batch Size           128
SVR      Kernel               Linear
         C                    1.0
         Maximum Iterations   1000
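As a minimal sketch (not the authors' code), the SVR baseline in the table can be instantiated directly from its listed settings with scikit-learn: a linear kernel, C = 1.0, and a cap of 1000 iterations. The toy data below is an assumption standing in for the paper's actual sensor features.

```python
import numpy as np
from sklearn.svm import SVR

# Toy regression data; a stand-in for the real feature matrix (assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)

# SVR configured per Table 2: Kernel = Linear, C = 1.0, Maximum Iterations = 1000.
model = SVR(kernel="linear", C=1.0, max_iter=1000)
model.fit(X, y)

preds = model.predict(X[:5])
print(preds.shape)
```

The deep-learning baselines (CNN, MLP, LSTM, GRU, BiLSTM, BiGRU) would follow the same pattern in a framework such as Keras, passing the listed layer counts, unit sizes, activations, optimizer, learning rates, and batch sizes to the respective layer and optimizer constructors.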