Table 1. Fine-tuned hyper-parameters for each algorithm.
Algorithm | Fine-Tuned Hyper-Parameters
---|---
AdaBoost | Learning rate; loss; number of estimators |
SVR | C; γ; ε
Lasso | α; maximum number of iterations |
Ridge | α; maximum number of iterations |
PLSR | Number of components |
RF | Maximum number of features; maximum depth; minimum number of samples required to split an internal node; minimum number of samples required at a leaf node; number of estimators
XGBoost | Learning rate; γ; minimum child weight; column sample by tree; subsample; maximum depth |
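The sketch below shows one way the search over the hyper-parameters in Table 1 could be set up, using scikit-learn's `GridSearchCV` (with `XGBRegressor` from the separate `xgboost` package). The parameter names follow the scikit-learn and xgboost APIs; the candidate values are illustrative placeholders, not the ranges used in the study.

```python
# Hypothetical grid search over the hyper-parameters listed in Table 1.
# Candidate values are placeholders chosen for illustration only.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Lasso, Ridge
from sklearn.cross_decomposition import PLSRegression
from xgboost import XGBRegressor  # requires the xgboost package

# One (estimator, grid) pair per row of Table 1.
search_spaces = [
    (AdaBoostRegressor(), {
        "learning_rate": [0.01, 0.1, 1.0],
        "loss": ["linear", "square", "exponential"],
        "n_estimators": [50, 100, 200],
    }),
    (SVR(), {
        "C": [0.1, 1, 10],
        "gamma": ["scale", 0.01, 0.1],
        "epsilon": [0.01, 0.1, 1.0],
    }),
    (Lasso(), {
        "alpha": [0.001, 0.01, 0.1, 1.0],
        "max_iter": [1000, 5000, 10000],
    }),
    (Ridge(), {
        "alpha": [0.001, 0.01, 0.1, 1.0],
        "max_iter": [1000, 5000, 10000],
    }),
    (PLSRegression(), {
        "n_components": [2, 5, 10],
    }),
    (RandomForestRegressor(), {
        "max_features": ["sqrt", "log2", None],
        "max_depth": [None, 10, 30],
        "min_samples_split": [2, 5, 10],
        "min_samples_leaf": [1, 2, 4],
        "n_estimators": [100, 300, 500],
    }),
    (XGBRegressor(), {
        "learning_rate": [0.01, 0.1, 0.3],
        "gamma": [0, 0.1, 1.0],
        "min_child_weight": [1, 3, 5],
        "colsample_bytree": [0.6, 0.8, 1.0],
        "subsample": [0.6, 0.8, 1.0],
        "max_depth": [3, 6, 9],
    }),
]

def tune_all(X, y, cv=5):
    """Grid-search each model and return its best parameters and CV score."""
    best = {}
    for estimator, grid in search_spaces:
        search = GridSearchCV(estimator, grid, cv=cv,
                              scoring="neg_root_mean_squared_error")
        search.fit(X, y)
        best[type(estimator).__name__] = (search.best_params_,
                                          search.best_score_)
    return best
```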