2021 Nov 1;2021:2158184. doi: 10.1155/2021/2158184

Table 5.

Grid search ranges used to tune the hyperparameters of each machine learning algorithm in our study.

Algorithm       Hyperparameter search range
SVM + Linear    C: {1 to 1000}, gamma: {0.001 to 0.1}
SVM + RBF       C: {1 to 1000}, gamma: {0.001 to 0.1}
XGBoost         learning_rate: {0.01 to 0.1}, max_depth: {5 to 10}, n_estimators: {120 to 200}
ANN             hidden_layer_sizes: {20 to 60}, learning_rate_init: {0.01 to 1}, max_iter: {10 to 1000}
RF              min_samples_leaf: {3 to 7}, min_samples_split: {2 to 6}, n_estimators: {50 to 200}
LR              C: {1 to 1000}, solver: lbfgs, max_iter: {100 to 1000}
K-NN            leaf_size: {30 to 45}, n_neighbors: {100 to 200}, p: {1 to 3}
Our CNN models  learning_rate: {1e-01, 1e-02, 1e-03, 1e-04, 1e-05}, batch_size: {8, 16, 32, 64}, epochs: {10, 20, 30, 40, 50, 100}, optimizer: {"RMSProp", "Adam", "SGD"}
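As a minimal sketch of how ranges like those in Table 5 are searched, the snippet below runs scikit-learn's GridSearchCV over the SVM + RBF grid (C and gamma). The dataset (load_iris), the discrete grid points chosen within each range, and the 3-fold cross-validation are illustrative assumptions, not details taken from the paper.

```python
# Grid search over the SVM + RBF ranges from Table 5 (sketch).
# Dataset, grid points, and cv folds are assumptions for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "C": [1, 10, 100, 1000],       # C: {1 to 1000}
    "gamma": [0.001, 0.01, 0.1],   # gamma: {0.001 to 0.1}
}

# Exhaustively evaluates every (C, gamma) pair with 3-fold CV
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the other rows of the table: each algorithm's listed hyperparameters become keys in param_grid, with candidate values drawn from the stated ranges.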