
Table 7. Hyperparameter tuning attribution.

| Model | Grid Parameters | Grid Values | Tuned Values |
|---|---|---|---|
| Random Forest Regressor | n_estimators | [10, 25, 50, 75, 100] | 100 |
| | max_depth | [10, 25, 50, 75, 100] | 25 |
| | min_samples_split | [2, 4, 6, 8, 10] | 2 |
| | min_samples_leaf | [1, 2, 3, 4, 5] | 1 |
| CatBoost Regressor | depth | [10, 25, 50] | 10 |
| | learning_rate | [0.1, 0.5, 1] | 0.1 |
| | iterations | [50, 100, 250] | 250 |
| Extreme Gradient Boosting (XGB) Regressor | n_estimators | [50, 100, 150, 200, 250] | 250 |
| | max_depth | [5, 10, 25, 50] | 7 |
| | learning_rate | [0, 0.5, 1] | 0.01 |
| Light Gradient Boosting Machine (LightGBM) Regressor | n_estimators | [50, 75, 100] | 100 |
| | max_depth | [10, 50, 100] | 10 |
| | learning_rate | [0.05, 0.1, 0.5, 1] | 0.5 |
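
The paper does not publish the tuning code, so the following is a minimal sketch of the grid search summarized in Table 7, assuming scikit-learn's GridSearchCV; the cross-validation fold count, the negative-MSE scoring metric, and the `X_train`/`y_train` names are assumptions, not details from the source.

```python
# Sketch of the Table 7 grid search. GridSearchCV, cv=5, and the scoring
# metric are assumptions; the grids themselves are copied from the table.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from catboost import CatBoostRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

# (model, parameter grid) pairs, with grids taken verbatim from Table 7.
search_space = [
    (RandomForestRegressor(), {
        "n_estimators": [10, 25, 50, 75, 100],
        "max_depth": [10, 25, 50, 75, 100],
        "min_samples_split": [2, 4, 6, 8, 10],
        "min_samples_leaf": [1, 2, 3, 4, 5],
    }),
    # Caveat: CatBoost rejects depth > 16 with its default tree-growing
    # policy, so the 25 and 50 entries from the table would need capping
    # to run as-is.
    (CatBoostRegressor(verbose=0), {
        "depth": [10, 25, 50],
        "learning_rate": [0.1, 0.5, 1],
        "iterations": [50, 100, 250],
    }),
    (XGBRegressor(), {
        "n_estimators": [50, 100, 150, 200, 250],
        "max_depth": [5, 10, 25, 50],
        "learning_rate": [0, 0.5, 1],
    }),
    (LGBMRegressor(), {
        "n_estimators": [50, 75, 100],
        "max_depth": [10, 50, 100],
        "learning_rate": [0.05, 0.1, 0.5, 1],
    }),
]

def tune_all(X_train, y_train):
    """Exhaustively search each grid and report the best parameters."""
    for model, grid in search_space:
        search = GridSearchCV(model, grid, cv=5,
                              scoring="neg_mean_squared_error", n_jobs=-1)
        search.fit(X_train, y_train)
        print(type(model).__name__, search.best_params_)
```

A single GridSearchCV loop works for all four models because CatBoost, XGBoost, and LightGBM each expose a scikit-learn-compatible estimator interface (`fit`, `predict`, `get_params`/`set_params`).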