Table 2.
Machine learning model | Parameter | Description | Candidate values |
---|---|---|---|
Neural network | decay | Weight decay rate | [0.2, 0.4, 0.6] |
 | size | Number of hidden-layer units | [5, 10, 15, 20, 25] |
Random forest | ntree | Number of trees | 500 |
 | mtry | Number of randomly selected predictors at each split | [2, 3, 5] |
Extreme gradient boosted tree | max_depth | Maximum tree depth | [0, 1, 2, 3, 4, 5] |
 | gamma | Regularization penalty factor (minimum loss reduction for a split) | 0 |
 | nrounds | Number of boosting iterations | 150 |
 | colsample_bytree | Fraction of columns randomly sampled for each tree | 1 |
 | subsample | Fraction of the training set subsampled to grow each tree | [0.5, 0.75, 1, 1.25] |
 | min_child_weight | Minimum sum of instance weights required in a child node | 0.5 |
 | eta | Learning rate (shrinkage) | [0.1, 0.2, 0.3] |
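The grids above map naturally onto a cross-validated grid search. Below is a minimal sketch assuming a Python setup with scikit-learn and the xgboost package; the classifier classes, the `X_train`/`y_train` placeholders, and the parameter-name mappings (decay→alpha, size→hidden_layer_sizes, ntree/nrounds→n_estimators, mtry→max_features, eta→learning_rate) are illustrative assumptions, not the study's actual implementation.

```python
# Sketch: wiring the Table 2 grids into a cross-validated grid search.
# Assumes scikit-learn and xgboost; the original work may have used other tools.
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Neural network: "size" mapped to hidden_layer_sizes, "decay" to the L2 penalty alpha.
nn_grid = {
    "hidden_layer_sizes": [(5,), (10,), (15,), (20,), (25,)],
    "alpha": [0.2, 0.4, 0.6],
}

# Random forest: ntree mapped to n_estimators (fixed at 500), mtry to max_features.
rf_grid = {
    "n_estimators": [500],
    "max_features": [2, 3, 5],
}

# Extreme gradient boosted tree: eta mapped to learning_rate, nrounds to n_estimators.
# max_depth = 0 (no depth limit) requires a histogram-based tree method in xgboost.
# The subsample value 1.25 listed in the table exceeds xgboost's valid range (0, 1]
# and is therefore omitted from this sketch.
xgb_grid = {
    "max_depth": [0, 1, 2, 3, 4, 5],
    "gamma": [0],
    "n_estimators": [150],
    "colsample_bytree": [1],
    "subsample": [0.5, 0.75, 1.0],
    "min_child_weight": [0.5],
    "learning_rate": [0.1, 0.2, 0.3],
}

# 5-fold cross-validated searches, one per model family.
searches = {
    "Neural network": GridSearchCV(MLPClassifier(max_iter=2000), nn_grid, cv=5),
    "Random forest": GridSearchCV(RandomForestClassifier(), rf_grid, cv=5),
    "XGBoost": GridSearchCV(XGBClassifier(tree_method="hist"), xgb_grid, cv=5),
}

# Usage (X_train, y_train are placeholders for the study's training data):
# for name, search in searches.items():
#     search.fit(X_train, y_train)
#     print(name, search.best_params_)
```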