PLoS One. 2024 Nov 27;19(11):e0312531. doi: 10.1371/journal.pone.0312531

Table 2. Hyperparameters for the XGBoost model.

| Hyperparameter | Meaning | Range of values | Optimal (BO-GP) | Optimal (BO-RF) | Optimal (RS) |
| --- | --- | --- | --- | --- | --- |
| 'n_estimators' | Number of trees | 100–1000 | 1000 | 912 | 823 |
| 'max_depth' | Maximum depth of each tree | 3–9 | 3 | 3 | 3 |
| 'learning_rate' | Learning rate applied at each boosting stage | 0.05–0.30 | 0.1399 | 0.1194 | 0.109 |
| 'booster' | Booster method | 'gbtree', 'dart' | 'dart' | 'gbtree' | 'dart' |
| 'gamma' | Minimum loss reduction required to split a tree node | 0.01–0.50 | 0.5 | 0.485 | 0.3077 |
| 'subsample' | Subsampling ratio of the training set | 0.60–0.90 | 0.6 | 0.697 | 0.727 |
| 'colsample_bytree' | Fraction of columns subsampled per tree | 0.60–0.90 | 0.9 | 0.747 | 0.799 |
| 'reg_lambda' | L2 regularization weight | 1–50 | 22 | 3 | 8 |

The optimal results show that all three methods select a fairly large number of trees (823–1000) and the same maximum depth of only 3, while the remaining hyperparameters differ from one algorithm to another.
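
For readers who want to reproduce a search over this space, the sketch below shows one plausible setup: scikit-learn's RandomizedSearchCV for RS, and scikit-optimize's BayesSearchCV with a Gaussian-process or random-forest surrogate for BO-GP and BO-RF. Only the parameter ranges come from Table 2; the estimator type (a classifier is assumed), cross-validation scheme, iteration budgets, and random seed are illustrative assumptions not stated in this excerpt.

```python
# Minimal sketch of the Table 2 hyperparameter search (assumptions noted above).
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from skopt import BayesSearchCV
from skopt.space import Categorical, Integer, Real
from xgboost import XGBClassifier  # assumed; could equally be XGBRegressor

# Random search (RS): distributions matching the "Range of values" column.
# scipy's uniform(loc, scale) samples from [loc, loc + scale].
rs_space = {
    "n_estimators": randint(100, 1001),       # 100-1000
    "max_depth": randint(3, 10),              # 3-9
    "learning_rate": uniform(0.05, 0.25),     # 0.05-0.30
    "booster": ["gbtree", "dart"],
    "gamma": uniform(0.01, 0.49),             # 0.01-0.50
    "subsample": uniform(0.60, 0.30),         # 0.60-0.90
    "colsample_bytree": uniform(0.60, 0.30),  # 0.60-0.90
    "reg_lambda": randint(1, 51),             # 1-50
}
rs = RandomizedSearchCV(
    XGBClassifier(),
    param_distributions=rs_space,
    n_iter=100,        # assumed search budget
    cv=5,              # assumed 5-fold cross-validation
    random_state=42,
)

# Bayesian optimization: same space expressed as skopt dimensions.
bo_space = {
    "n_estimators": Integer(100, 1000),
    "max_depth": Integer(3, 9),
    "learning_rate": Real(0.05, 0.30),
    "booster": Categorical(["gbtree", "dart"]),
    "gamma": Real(0.01, 0.50),
    "subsample": Real(0.60, 0.90),
    "colsample_bytree": Real(0.60, 0.90),
    "reg_lambda": Integer(1, 50),
}
# base_estimator "GP" gives BO-GP; swapping in "RF" gives BO-RF.
bo_gp = BayesSearchCV(
    XGBClassifier(),
    search_spaces=bo_space,
    n_iter=50,         # assumed number of BO evaluations
    cv=5,
    optimizer_kwargs={"base_estimator": "GP"},
    random_state=42,
)

# Usage: rs.fit(X_train, y_train) or bo_gp.fit(X_train, y_train), then inspect
# .best_params_ to obtain tuned values analogous to the columns of Table 2.
```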