Sci Rep. 2023 Aug 7;13:12775. doi: 10.1038/s41598-023-39724-z

Table 3.

Hyperparameter tuning of the models.

| Parameter | Definition | LightGBM | XGBoost | CatBoost | AdaBoost | SVR | MLP |
|---|---|---|---|---|---|---|---|
| boosting_type | Boosting method | gbdt | – | – | – | – | – |
| learning_rate | Boosting learning rate | 0.01 | 0.1 | 0.01 | 0.001 | – | – |
| max_depth | Maximum tree depth for base learners | 5 | 3 | 7 | – | – | – |
| min_child_samples | Minimum number of data points needed in a child | 100 | – | – | – | – | – |
| num_leaves | Maximum tree leaves for base learners | 30 | – | – | – | – | – |
| subsample | Subsample ratio of the training instances | 0.1 | 0.5 | 0.7 | – | – | – |
| booster | Type of booster | – | gbtree | – | – | – | – |
| gamma | Minimum loss reduction | – |  | – | – | – | – |
| n_estimators | Number of gradient-boosted trees | – | 500 | – | 100 | – | – |
| min_child_weight | Minimum sum of instance weight (Hessian) in a child | – | 5 | – | – | – | – |
| loss | Loss function | – | – | – | linear | – | – |
| kernel | Kernel type | – | – | – | – | rbf | – |
| C | Regularization parameter | – | – | – | – | 10 | – |
| gamma | Kernel coefficient | – | – | – | – | 0.08 | – |
| epsilon | Width of the epsilon-tube | – | – | – | – | 0.01 | – |
| hidden_layer_sizes | Number of neurons in the i-th hidden layer | – | – | – | – | – | 20 |
| max_iter | Maximum number of iterations | – | – | – | – | – | 20 |
| learning_rate_init | Initial learning rate | – | – | – | – | – | 0.08 |
| learning_rate | Learning-rate schedule | – | – | – | – | – | invscaling |
| activation | Activation function | – | – | – | – | – | relu |
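The settings above can be collected as keyword-argument dictionaries and unpacked into the corresponding estimators. The sketch below is illustrative, not the authors' code; in particular, pairing n_estimators = 500 with XGBoost and 100 with AdaBoost is an assumption, since the flattened source row lists only the two values without column positions.

```python
# Hyperparameter settings from Table 3, collected as plain dicts.
# ASSUMPTION: n_estimators=500 (XGBoost) and n_estimators=100 (AdaBoost);
# the source row gives the two values without explicit column assignment.

lightgbm_params = {
    "boosting_type": "gbdt",
    "learning_rate": 0.01,
    "max_depth": 5,
    "min_child_samples": 100,
    "num_leaves": 30,
    "subsample": 0.1,
}

xgboost_params = {
    "booster": "gbtree",
    "learning_rate": 0.1,
    "max_depth": 3,
    "subsample": 0.5,
    "n_estimators": 500,   # assumed pairing (see note above)
    "min_child_weight": 5,
}

catboost_params = {"learning_rate": 0.01, "depth": 7, "subsample": 0.7}
# note: CatBoost calls the tree-depth parameter "depth", not "max_depth"

adaboost_params = {"learning_rate": 0.001, "n_estimators": 100, "loss": "linear"}

svr_params = {"kernel": "rbf", "C": 10, "gamma": 0.08, "epsilon": 0.01}

mlp_params = {
    "hidden_layer_sizes": (20,),   # a single hidden layer of 20 neurons
    "max_iter": 20,
    "learning_rate_init": 0.08,
    "learning_rate": "invscaling",
    "activation": "relu",
}

# Each dict unpacks into the matching estimator constructor, e.g.:
#   lightgbm.LGBMRegressor(**lightgbm_params)
#   xgboost.XGBRegressor(**xgboost_params)
#   catboost.CatBoostRegressor(**catboost_params)
#   sklearn.ensemble.AdaBoostRegressor(**adaboost_params)
#   sklearn.svm.SVR(**svr_params)
#   sklearn.neural_network.MLPRegressor(**mlp_params)
```

Keeping the configurations as plain dicts makes it easy to log them alongside results or to feed them into a search tool such as scikit-learn's GridSearchCV.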