Author manuscript; available in PMC: 2022 Nov 18.
Published in final edited form as: Inform Med Unlocked. 2022 Oct 6;34:101104. doi: 10.1016/j.imu.2022.101104

Table 2.

Hyperparameter values for the random forest and gradient boosted tree algorithms following tuning.

| Algorithm | Hyperparameter | Value |
| --- | --- | --- |
| Random Forest | `mtry`: number of features to use for each split in individual trees | 20 |
| | `num.trees`: number of individual trees to build | 450 |
| Gradient Boosted Tree | `eta`: learning rate | 0.02 |
| | `max_depth`: complexity (maximum depth) of individual trees | 12 |
| | `nrounds`: number of boosting iterations | 750 |
| | `subsample`: proportion of observations to use in each boosting iteration | 0.85 |
| | `colsample_bytree`: proportion of features to use in each boosting iteration | 0.57 |
| | `colsample_bylevel`: proportion of selected features to be used for each split | 0.67 |
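The hyperparameter names in Table 2 (`mtry`/`num.trees` and `eta`/`nrounds`/`colsample_*`) follow the conventions of the R packages ranger and xgboost, which the authors appear to have used. As a minimal sketch, the tuned values can be collected into parameter dictionaries; the package mapping and the `num_boost_round` correspondence noted in the comments are assumptions, not taken from the paper.

```python
# Tuned values from Table 2, expressed as parameter dicts.
# Key names follow R's ranger (random forest) and xgboost
# (gradient boosted tree) conventions - an assumption, since
# the table itself does not name the software used.

rf_params = {
    "mtry": 20,        # features considered at each split
    "num.trees": 450,  # number of trees in the forest
}

gbt_params = {
    "eta": 0.02,               # learning rate
    "max_depth": 12,           # maximum depth of each tree
    "nrounds": 750,            # boosting iterations
    "subsample": 0.85,         # row fraction per boosting iteration
    "colsample_bytree": 0.57,  # feature fraction sampled per tree
    "colsample_bylevel": 0.67, # fraction of those features used per depth level
}

# In xgboost's Python API, "nrounds" corresponds to the
# num_boost_round argument of xgboost.train, e.g. (assumed usage):
#   booster = xgboost.train(
#       {k: v for k, v in gbt_params.items() if k != "nrounds"},
#       dtrain,
#       num_boost_round=gbt_params["nrounds"],
#   )
print(rf_params["mtry"], gbt_params["eta"])
```

Note that `subsample`, `colsample_bytree`, and `colsample_bylevel` compound: each split effectively sees roughly 0.57 x 0.67 of the features, a common regularization strategy in boosted trees.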