Table 1. Hyperparameter tuning.
| Parameter | Values tested | Optimal value |
|---|---|---|
| Learning rate | 0.01, 0.001, 0.0005, 0.0001 | 0.001 |
| Batch size | 8, 16, 32 | 16 |
| Activation function | ReLU, Leaky ReLU, Tanh | Leaky ReLU |
| Patch size | 16 × 16, 32 × 32, 64 × 64 | 32 × 32 |
| Dropout rate | 0.1, 0.2, 0.3 | 0.1 |
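The optimal values from Table 1 can be gathered into a single training configuration. The sketch below is illustrative only: the `TrainConfig` class and field names are our own convention, not part of the original work, and the model architecture itself is not specified here.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrainConfig:
    """Optimal hyperparameters selected by the sweep in Table 1."""
    learning_rate: float = 0.001    # tested: 0.01, 0.001, 0.0005, 0.0001
    batch_size: int = 16            # tested: 8, 16, 32
    activation: str = "leaky_relu"  # tested: relu, leaky_relu, tanh
    patch_size: int = 32            # square patches; tested: 16, 32, 64
    dropout_rate: float = 0.1       # tested: 0.1, 0.2, 0.3


cfg = TrainConfig()
print(cfg)
```

A frozen dataclass keeps the selected values immutable during a run, which helps guarantee that logged results correspond to the configuration that was actually trained.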