Plants. 2024 Nov 27;13(23):3325. doi: 10.3390/plants13233325

Table 1.

Hyper-parameters fine-tuned using Optuna for each model.

Algorithm | Fine-tuned hyper-parameters
AdaBoost | Learning rate; loss; number of estimators
SVR | C; γ; ε
Lasso | α; maximum number of iterations
Ridge | α; maximum number of iterations
PLSR | Number of components
RF | Maximum number of features; maximum depth; minimum number of samples required to split an internal node; minimum number of samples required at a leaf node; number of estimators
XGBoost | Learning rate; γ; minimum child weight; column sample by tree; subsample; maximum depth
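
The sketch below illustrates how the hyper-parameters listed for XGBoost could be searched with Optuna. It is an assumption-laden example, not the authors' code: the search ranges, the synthetic data, the fixed number of estimators, the 5-fold cross-validation, and the RMSE objective are all placeholders chosen only to show the mechanics of defining a trial over the parameters in Table 1.

```python
# Illustrative sketch only: ranges, data, CV scheme, and n_estimators are assumptions,
# not values reported in the paper.
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Placeholder data; the study's feature matrix and trait targets would be used here.
X, y = make_regression(n_samples=200, n_features=50, noise=0.1, random_state=0)


def objective(trial):
    # One suggestion per hyper-parameter listed for XGBoost in Table 1.
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "gamma": trial.suggest_float("gamma", 0.0, 5.0),
        "min_child_weight": trial.suggest_int("min_child_weight", 1, 10),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "max_depth": trial.suggest_int("max_depth", 2, 10),
    }
    model = XGBRegressor(**params, n_estimators=200, random_state=0)
    # cross_val_score returns negative RMSE; negate it so the study minimises RMSE.
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    return -scores.mean()


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

The same pattern applies to the other algorithms in the table: only the `params` dictionary and the estimator (e.g., `SVR(C=..., gamma=..., epsilon=...)` or `Lasso(alpha=..., max_iter=...)`) change, while the study setup and cross-validated objective stay the same.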