Front Plant Sci. 2023 Sep 26;14:1256773. doi: 10.3389/fpls.2023.1256773

Table 2. Hyperparameter settings for training.

| Name | Value | Description |
|---|---|---|
| Epochs | 100 | Number of complete passes over the training dataset |
| Batch size | 32 | Number of samples processed in one training step |
| Optimizer | AdamW | Algorithm used to update the network parameters |
| Learning rate | 0.0001 | Step size that scales each parameter update |
| Loss function | Cross-entropy | Measures the gap between the predicted values and the true labels |
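For illustration, a minimal PyTorch training loop wired up with these settings might look like the sketch below. The model and dataset here are placeholders, not the paper's architecture or data; only the hyperparameter values (epochs, batch size, optimizer, learning rate, loss) are taken from Table 2.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters from Table 2.
EPOCHS = 100
BATCH_SIZE = 32
LEARNING_RATE = 1e-4

# Placeholder model and data; the paper's network and dataset would go here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.randn(256, 3, 32, 32)           # dummy image batch
labels = torch.randint(0, 10, (256,))          # dummy class labels
loader = DataLoader(TensorDataset(images, labels),
                    batch_size=BATCH_SIZE, shuffle=True)

# AdamW optimizer and cross-entropy loss, as listed in the table.
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)  # gap between prediction and ground truth
        loss.backward()
        optimizer.step()
```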