Sensors. 2022 Nov 12;22(22):8757. doi: 10.3390/s22228757

Table A1.

The results of the randomised hyperparameter tuning processes for the classifiers used in the article (a sketch of such a search is given after the table).

| Model | Hyperparameter | Search space | Selected |
|---|---|---|---|
| LR | Regularisation strength | {0, 0.10, 0.20, …, 1} | 0.40 |
| LR | Class weight | {0, 1, …, 10} | 7 |
| LR | Maximum number of iterations | {1000, 2000, …, 10,000} | 7000 |
| GB | Learning rate | {0.01, 0.02, …, 1} | 0.10 |
| GB | Number of boosting stages | {20, 40, …, 200} | 160 |
| GB | Minimum number of samples required to split an internal node | {1, 2, …, 10} | 2 |
| GB | Minimum number of samples required to be at a leaf node | {1, 2, …, 10} | 6 |
| GB | Maximum depth of the individual estimators | {1, 2, …, 10} | 9 |
| AB | Maximum number of estimators at which boosting is terminated | {10, 20, …, 100} | 90 |
| AB | Learning rate | {0.01, 0.02, …, 1} | 1.58 |
| RF | Number of trees | {50, 100, …, 500} | 20 |
| RF | Maximum depth of the tree | {1, 2, …, 10} | 3 |
| RF | Minimum number of samples required to split an internal node | {1, 2, …, 10} | 4 |
| RF | Minimum number of samples required to be at a leaf node | {1, 2, …, 10} | 6 |
| RF | Maximum number of leaf nodes | {1, 2, …, 10} | 3 |
| RF | Minimum impurity decrease | {0, 0.001, 0.002, …, 0.010} | 0.004 |
| RF | Cost complexity pruning factor | {0.01, 0.02, …, 0.10} | 0.01 |
| RF | Minimum weighted fraction of the sum total of weights required to be at a leaf node | {0.01, 0.02, …, 0.10} | 0.01 |
| SVC | Class weight | {0, 1, …, 10} | 6 |
| SVC | Maximum number of iterations | {100, 200, …, 10,000} | 2400 |

Note. AB: AdaBoost; LR: logistic regression; GB: gradient boosting; RF: random forest; SVC: support vector classifier.
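The sketch below illustrates how a randomised search over one of the spaces in Table A1 (the RF rows) could be set up. The parameter naming in the table matches scikit-learn, so `RandomizedSearchCV` with `RandomForestClassifier` is used here, but the article's exact tooling, data, number of sampled candidates, cross-validation scheme, and scoring metric are not given in the table and are assumptions; the synthetic data is purely illustrative, and the lower bounds of `min_samples_split` and `max_leaf_nodes` are raised to 2, the minimum scikit-learn accepts.

```python
# Minimal sketch of a randomised search over the RF search space in Table A1.
# Assumptions: scikit-learn's RandomizedSearchCV, synthetic data, and the
# n_iter / cv / scoring settings; none of these are specified in the table.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Illustrative data standing in for the study's feature matrix and labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Search space mirroring the RF rows of Table A1 (lower bounds of
# min_samples_split and max_leaf_nodes adjusted to scikit-learn's minimum of 2).
param_distributions = {
    "n_estimators": list(range(50, 501, 50)),              # number of trees
    "max_depth": list(range(1, 11)),                        # maximum depth of the tree
    "min_samples_split": list(range(2, 11)),                # min samples to split a node
    "min_samples_leaf": list(range(1, 11)),                 # min samples at a leaf
    "max_leaf_nodes": list(range(2, 11)),                   # maximum number of leaf nodes
    "min_impurity_decrease": list(np.linspace(0.0, 0.01, 11)),
    "ccp_alpha": list(np.linspace(0.01, 0.10, 10)),         # cost complexity pruning factor
    "min_weight_fraction_leaf": list(np.linspace(0.01, 0.10, 10)),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=50,        # number of sampled candidates (assumption)
    cv=5,             # 5-fold cross-validation (assumption)
    scoring="f1",     # scoring metric (assumption)
    random_state=0,
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the other models in the table by swapping the estimator (e.g., `LogisticRegression`, `GradientBoostingClassifier`, `AdaBoostClassifier`, `SVC`) and the corresponding search-space dictionary.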