Table 2.
Hyperparameter settings for training.
| Name | Value | Description |
|---|---|---|
| Epochs | 100 | Number of complete passes over the training set |
| Batch size | 32 | Number of samples processed per gradient update |
| Optimizer | AdamW | Algorithm used to update the network parameters |
| Learning rate | 0.0001 | Step size for parameter updates during optimization |
| Loss function | Cross Entropy | Measures the discrepancy between predicted and true values |
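For reference, the settings in Table 2 can be collected into a single framework-agnostic configuration object. This is an illustrative sketch, not the authors' code; the dataset size `n_samples` and the helper `total_steps` are hypothetical, introduced only to show how epochs and batch size together determine the number of gradient updates.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrainConfig:
    """Hyperparameters from Table 2."""
    epochs: int = 100           # complete passes over the training set
    batch_size: int = 32        # samples per gradient update
    optimizer: str = "AdamW"    # parameter-update algorithm
    learning_rate: float = 1e-4 # step size for parameter updates
    loss: str = "CrossEntropy"  # gap between predicted and true labels


def total_steps(n_samples: int, cfg: TrainConfig) -> int:
    """Total gradient updates: ceil(n_samples / batch_size) * epochs."""
    steps_per_epoch = -(-n_samples // cfg.batch_size)  # ceiling division
    return steps_per_epoch * cfg.epochs


cfg = TrainConfig()
# For a hypothetical dataset of 1000 samples:
# ceil(1000 / 32) = 32 steps per epoch, times 100 epochs = 3200 updates.
print(total_steps(1000, cfg))
```

In a deep-learning framework such as PyTorch, `cfg.learning_rate` would be passed to the AdamW optimizer and `cfg.batch_size` to the data loader; the cross-entropy loss would then be minimized over `cfg.epochs` passes.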