Table 2. Hyperparameter settings of the evaluated models.
| Model | Parameter | Value |
|---|---|---|
| CNN | Convolution Layers | 2 |
| | Max-Pooling Layers | 1 |
| | Filter Size | (64, 32) |
| | Kernel Size | 3 |
| | Pool Size | 3 |
| | Activation | ReLU |
| | Optimizer | Adam |
| | Learning Rate | 0.0001 |
| MLP | Layers | 2 |
| | Neurons | (64, 64) |
| | Activation | ReLU |
| | Learning Rate | 0.001 |
| | Optimizer | Adam |
| | Batch Size | 256 |
| LSTM | Layers | 2 |
| | Neurons | (64, 64) |
| | Activation | ReLU |
| | Learning Rate | 0.001 |
| | Optimizer | Adam |
| | Batch Size | 128 |
| GRU | Layers | 2 |
| | Neurons | (64, 64) |
| | Activation | ReLU |
| | Learning Rate | 0.001 |
| | Optimizer | Adam |
| | Batch Size | 128 |
| BiLSTM | Layers | 2 |
| | Neurons | (64, 64) |
| | Activation | ReLU |
| | Learning Rate | 0.001 |
| | Optimizer | Adam |
| | Batch Size | 128 |
| BiGRU | Layers | 2 |
| | Neurons | (64, 64) |
| | Activation | ReLU |
| | Learning Rate | 0.001 |
| | Optimizer | Adam |
| | Batch Size | 128 |
| SVR | Kernel | Linear |
| | C | 1.0 |
| | Maximum Iterations | 1000 |
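To make the table concrete, the sketch below instantiates a few representative configurations (CNN, LSTM, and SVR) with the listed hyperparameters. The choice of TensorFlow/Keras and scikit-learn, the input shape, the output dimension, and the mean-squared-error loss are assumptions for illustration only; they are not specified in the table.

```python
# Minimal sketch of the tabulated settings.
# Assumed (not in Table 2): TensorFlow/Keras and scikit-learn as the libraries,
# input shape (30, 1), a single regression output, and MSE loss.
import tensorflow as tf
from sklearn.svm import SVR


def build_cnn(input_shape=(30, 1), output_dim=1):
    """CNN per Table 2: 2 convolution layers with (64, 32) filters,
    kernel size 3, 1 max-pooling layer (pool size 3), ReLU activations,
    Adam optimizer with learning rate 0.0001."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=3),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(output_dim),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="mse")  # loss assumed; not given in the table
    return model


def build_lstm(input_shape=(30, 1), output_dim=1):
    """LSTM per Table 2: 2 layers of 64 units, ReLU activation,
    Adam optimizer with learning rate 0.001; batch size 128 is passed
    to fit() at training time."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.LSTM(64, activation="relu", return_sequences=True),
        tf.keras.layers.LSTM(64, activation="relu"),
        tf.keras.layers.Dense(output_dim),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mse")  # loss assumed; not given in the table
    return model


# SVR per Table 2: linear kernel, C = 1.0, maximum of 1000 iterations.
svr = SVR(kernel="linear", C=1.0, max_iter=1000)
```

The GRU, BiLSTM, and BiGRU variants would follow the same pattern as `build_lstm`, swapping in `tf.keras.layers.GRU` or wrapping the recurrent layers in `tf.keras.layers.Bidirectional`, with the batch size of 128 supplied to `fit()`.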