J Mol Model. 2022;28(12):408. doi: 10.1007/s00894-022-05373-8

Table 1.

Fine-tuned XGBoost Parameters

Name and description                                                Candidate values
------------------------------------------------------------------  ------------------------------
n_estimators: number of gradient-boosted trees                      50, 100, 200, 500, 1000
max_depth: maximum tree depth                                       3, 4, 5, 6, 7
learning_rate: boosting learning rate                               0.01, 0.05, 0.1, 0.2, 0.3
subsample: subsample ratio of training instances                    0.5, 0.6, 0.7, 0.8, 0.9, 1.0
colsample_bytree: subsample ratio of columns per tree               0.5, 0.6, 0.7, 0.8, 0.9, 1.0
reg_alpha: L1 regularization weight                                 0, 0.1, 1, 5, 10
reg_lambda: L2 regularization weight                                0, 0.1, 1, 5, 10
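
The values in Table 1 define a hyperparameter search space. As a minimal sketch of how such a grid could be tuned, the snippet below passes the table's values to scikit-learn's GridSearchCV; note that the source does not state which search strategy, estimator type (regressor vs. classifier), cross-validation scheme, or scoring metric was actually used, so all of those choices here are illustrative assumptions.

```python
# Sketch: exhaustive grid search over the Table 1 hyperparameter space.
# Assumptions (not stated in the source): GridSearchCV as the search
# strategy, XGBRegressor as the estimator, 5-fold CV, MSE scoring.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor  # assumption: swap for XGBClassifier if the task is classification

# Candidate values taken directly from Table 1.
param_grid = {
    "n_estimators": [50, 100, 200, 500, 1000],
    "max_depth": [3, 4, 5, 6, 7],
    "learning_rate": [0.01, 0.05, 0.1, 0.2, 0.3],
    "subsample": [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "colsample_bytree": [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "reg_alpha": [0, 0.1, 1, 5, 10],
    "reg_lambda": [0, 0.1, 1, 5, 10],
}

search = GridSearchCV(
    estimator=XGBRegressor(objective="reg:squarederror", random_state=42),
    param_grid=param_grid,
    cv=5,                              # assumed fold count
    scoring="neg_mean_squared_error",  # assumed metric
    n_jobs=-1,                         # parallelize across available cores
)

# X_train and y_train are hypothetical feature/target arrays.
# search.fit(X_train, y_train)
# print(search.best_params_)
```

Note that the full grid contains 5 x 5 x 5 x 6 x 6 x 5 x 5 = 112,500 combinations, so in practice a sampled search such as scikit-learn's RandomizedSearchCV over the same parameter dictionary is a common, much cheaper alternative to exhausting the grid.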