2022 Mar 17;145:105405. doi: 10.1016/j.compbiomed.2022.105405

Table 2.

Hyper-parameters search space of classifiers for optimization.

Classifier   Hyper-parameter       Search range
-----------  --------------------  ---------------------------------------
Extra-Trees  Estimators            600, 700, 800
             Criterion             gini, entropy
             Max. features         auto, sqrt, log2
SVM          C                     0.10 to 1.0, step = 0.10
             Kernel                linear, poly, rbf, sigmoid
             Gamma                 auto, scale
RF           Estimators            600, 700, 800
             Max. features         auto, sqrt, log2
AdaBoost     Estimators            600, 700, 800
             Algorithm             SAMME, SAMME.R
MLP          Hidden layer sizes    (64), (64, 64), (128), (128, 128)
             Activation            identity, logistic, tanh, relu
             Solver                lbfgs, sgd, adam
             Learning rate         constant, invscaling, adaptive
XGBoost      Estimators            600, 700, 800
             Max. depth            4, 5, 6
GBoost       Estimators            600, 700, 800
             Criterion             friedman_mse, mse
             Max. features         auto, sqrt, log2
             Loss                  deviance, exponential
LR           Penalty               l1, l2, elasticnet
             Solver                newton-cg, lbfgs, liblinear, sag, saga
k-NN         Number of neighbours  5 to 8, step = 1
             Algorithm             auto, ball_tree, kd_tree, brute
HGBoost      Max. iteration        100 to 600, step = 100
             Loss                  binary_crossentropy
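As a minimal sketch of how one row of this search space could be explored: the snippet below encodes the SVM grid from Table 2 using scikit-learn's GridSearchCV. The paper does not state which search tool was used, and the dataset, cross-validation folds, and scoring here are assumptions for illustration only.

```python
# Hedged sketch: exhaustive search over the SVM row of Table 2.
# The synthetic dataset and 5-fold CV are illustrative assumptions,
# not settings taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy classification data standing in for the study's feature matrix.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# SVM search space exactly as listed in Table 2.
param_grid = {
    "C": np.arange(0.10, 1.01, 0.10).round(2).tolist(),  # 0.10 to 1.0, step 0.10
    "kernel": ["linear", "poly", "rbf", "sigmoid"],
    "gamma": ["auto", "scale"],
}

search = GridSearchCV(SVC(), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```

The same pattern extends to the other rows (e.g. `{"n_estimators": [600, 700, 800]}` for the tree ensembles), swapping in the matching estimator class.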