Bioinform Adv. 2023 Mar 22;3(1):vbad034. doi: 10.1093/bioadv/vbad034

Table 1.

Parameter list used to optimize the RF and gradient boosting classifiers

Classifier | List of parameters
RF (scikit-learn)
  • 'bootstrap': [True, False],
  • 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None],
  • 'max_features': ['auto', 'sqrt'],
  • 'min_samples_leaf': [1, 2, 4],
  • 'min_samples_split': [2, 5, 10],
  • 'n_estimators': [100, 150, 200, 250, 500, 750, 1000]
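
As a minimal sketch of how a grid like this could be searched, the snippet below plugs the RF parameter list into scikit-learn's RandomizedSearchCV. The table does not state the search strategy, so RandomizedSearchCV and the n_iter, cv and scoring settings are illustrative assumptions, not values from the article; X_train and y_train are placeholders.

# Illustrative only: search strategy and settings below are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

rf_param_grid = {
    "bootstrap": [True, False],
    "max_depth": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None],
    # 'auto' was removed in scikit-learn >= 1.3 (it equalled 'sqrt' for classifiers);
    # drop it from the grid on newer versions.
    "max_features": ["auto", "sqrt"],
    "min_samples_leaf": [1, 2, 4],
    "min_samples_split": [2, 5, 10],
    "n_estimators": [100, 150, 200, 250, 500, 750, 1000],
}

rf_search = RandomizedSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_distributions=rf_param_grid,
    n_iter=50,           # number of sampled parameter settings (assumed)
    cv=5,                # 5-fold cross-validation (assumed)
    scoring="roc_auc",   # assumed scoring metric
    n_jobs=-1,
    random_state=0,
)
# rf_search.fit(X_train, y_train)    # X_train / y_train are placeholders
# print(rf_search.best_params_)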

Gradient boosting (XGBoost)
  • 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None],
  • 'learning_rate': [0.001, 0.01, 0.1, 0.2, 0.3],
  • 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
  • 'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
  • 'colsample_bylevel': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
  • 'min_child_weight': [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],
  • 'gamma': [0, 0.25, 0.5, 1.0],
  • 'reg_lambda': [0.1, 1.0, 5.0, 10.0, 50.0, 100.0],
  • 'n_estimators': [100, 150, 200, 250, 500, 750, 1000]
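
The XGBoost grid can be handed to the same assumed search pattern; as before, the randomized search and its settings are illustrative assumptions rather than choices stated in the table.

# Same assumed search pattern, applied to the XGBoost grid from the table.
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

xgb_param_grid = {
    "max_depth": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None],
    "learning_rate": [0.001, 0.01, 0.1, 0.2, 0.3],
    "subsample": [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "colsample_bytree": [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "colsample_bylevel": [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "min_child_weight": [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],
    "gamma": [0, 0.25, 0.5, 1.0],
    "reg_lambda": [0.1, 1.0, 5.0, 10.0, 50.0, 100.0],
    "n_estimators": [100, 150, 200, 250, 500, 750, 1000],
}

xgb_search = RandomizedSearchCV(
    estimator=XGBClassifier(random_state=0),
    param_distributions=xgb_param_grid,
    n_iter=50, cv=5, scoring="roc_auc", n_jobs=-1, random_state=0,  # assumed settings
)
# xgb_search.fit(X_train, y_train)    # placeholders as above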

Gradient boosting (LightGBM)
  • 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None],
  • 'learning_rate': [0.001, 0.01, 0.1, 0.2, 0.3],
  • 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
  • 'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
  • 'min_child_weight': [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],
  • 'reg_lambda': [0.1, 1.0, 5.0, 10.0, 50.0, 100.0],
  • 'n_estimators': [100, 150, 200, 250, 500, 750, 1000]
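
The LightGBM grid fits the same assumed pattern. One adaptation in the sketch below: LightGBM encodes "no depth limit" as max_depth = -1 rather than None, so -1 stands in for the table's None; note also that in LightGBM the subsample fraction only takes effect when subsample_freq is set above 0.

# Same assumed search pattern for the LightGBM grid; -1 replaces None for max_depth.
from sklearn.model_selection import RandomizedSearchCV
from lightgbm import LGBMClassifier

lgbm_param_grid = {
    "max_depth": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, -1],
    "learning_rate": [0.001, 0.01, 0.1, 0.2, 0.3],
    "subsample": [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],   # needs subsample_freq > 0 to apply
    "colsample_bytree": [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "min_child_weight": [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],
    "reg_lambda": [0.1, 1.0, 5.0, 10.0, 50.0, 100.0],
    "n_estimators": [100, 150, 200, 250, 500, 750, 1000],
}

lgbm_search = RandomizedSearchCV(
    estimator=LGBMClassifier(random_state=0),
    param_distributions=lgbm_param_grid,
    n_iter=50, cv=5, scoring="roc_auc", n_jobs=-1, random_state=0,  # assumed settings
)
# lgbm_search.fit(X_train, y_train)    # placeholders as above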