2025 Sep 26;7:1657749. doi: 10.3389/fdgth.2025.1657749

Table 2. Hyperparameter search spaces for tuning the XGBoost classifier algorithm.

| Hyperparameter | Description | Search space |
| --- | --- | --- |
| `eta` (learning rate) | Learning rate to scale the contribution of each tree | [0.001, 0.01, 0.1, 0.3, 0.5] |
| `n_estimators` | Number of boosting rounds (trees) to build | [32, 64, 128, 192, 256, 384, 512] |
| `gamma` | Minimum loss reduction required to perform a split | [0, 0.25, 0.5, 1] |
| `max_depth` | Maximum depth of trees, to prevent overfitting | [2, 3, 4, 6, 8, 10, 12, 16, 24] |
| `min_child_weight` | Minimum sum of instance weights needed in a child node | [0.5, 1, 3, 5, 7, 10] |
| `subsample` | Fraction of training data used for building each tree | [0.8, 0.9, 1.0] |
| `colsample_bytree` | Fraction of features randomly sampled for each tree | [0.6, 0.7, 0.8, 0.9] |
| `lambda` (`reg_lambda`) | L2 regularization to penalize large weights | [0.01, 0.1, 1, 5, 10, 50, 100] |
| `alpha` (`reg_alpha`) | L1 regularization to encourage sparsity in feature weights | [0, 0.001, 0.01, 0.1] |
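As a minimal sketch of how this search space could be used in practice, the table can be encoded as a plain dictionary keyed by the parameter names of XGBoost's scikit-learn wrapper (`learning_rate` for `eta`, `reg_lambda` for `lambda`, `reg_alpha` for `alpha`), suitable for passing to a tuner such as scikit-learn's `RandomizedSearchCV`. The helper functions below are illustrative, not taken from the paper:

```python
import random

# Search space from Table 2, keyed by the xgboost scikit-learn API names
# (an assumption; the paper lists the native names eta/lambda/alpha).
search_space = {
    "learning_rate":    [0.001, 0.01, 0.1, 0.3, 0.5],       # eta
    "n_estimators":     [32, 64, 128, 192, 256, 384, 512],
    "gamma":            [0, 0.25, 0.5, 1],
    "max_depth":        [2, 3, 4, 6, 8, 10, 12, 16, 24],
    "min_child_weight": [0.5, 1, 3, 5, 7, 10],
    "subsample":        [0.8, 0.9, 1.0],
    "colsample_bytree": [0.6, 0.7, 0.8, 0.9],
    "reg_lambda":       [0.01, 0.1, 1, 5, 10, 50, 100],     # L2 penalty
    "reg_alpha":        [0, 0.001, 0.01, 0.1],              # L1 penalty
}

def grid_size(space):
    """Number of configurations in an exhaustive grid over the space."""
    n = 1
    for values in space.values():
        n *= len(values)
    return n

def sample_config(space, rng=random):
    """Draw one random configuration (random-search style)."""
    return {name: rng.choice(values) for name, values in space.items()}

print(grid_size(search_space))   # → 2540160
print(sample_config(search_space))
```

The exhaustive grid contains 5×7×4×9×6×3×4×7×4 = 2,540,160 configurations, which is why randomized sampling over such a space is the common choice over a full grid search.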