Sci Rep. 2024 Apr 22;14:9200. doi: 10.1038/s41598-024-59387-8

Table 3.

Optimal hyperparameters and their investigated ranges with cross-validation scores for each method.

| Method   | Hyperparameter      | Range                           | Optimal value | Cross-validation score |
|----------|---------------------|---------------------------------|---------------|------------------------|
| ANN      | No. of layers       | 1–4                             | 3             | 0.883                  |
|          | No. of neurons      | 50, 100, 200                    | 100           |                        |
|          | Activation function | relu, tanh, elu                 | tanh          |                        |
|          | Learning rate       | 0.001, 0.01                     | 0.01          |                        |
|          | Optimizer           | SGD, Adam                       | Adam          |                        |
| CatBoost | Max. depth          | 2, 3, 4                         | 3             | 0.873                  |
|          | Max. leaves         | 10, 31, 50                      | 31            |                        |
|          | No. of estimators   | 10–100                          | 20            |                        |
|          | Leaf regularization | 0, 5, 10                        | 0             |                        |
| KNN      | Algorithm           | auto, ball_tree, kd_tree, brute | auto          | 0.886                  |
|          | No. of neighbors    | 1, 3, 5, 7, 9                   | 5             |                        |
|          | Penalty             | 1, 2                            | 1             |                        |
|          | Weights             | uniform, distance               | distance      |                        |
| SVM      | C                   | 1, 10, 100, 1000                | 100           | 0.870                  |
|          | Class weight        | balanced, none                  | none          |                        |
|          | coef0               | 0, 1, 10                        | 0             |                        |
|          | Degree              | none, 2, 3, 4, 5                | none          |                        |
|          | Kernel              | rbf, poly                       | rbf           |                        |
| RF       | Criterion           | gini, entropy, log_loss         | entropy       | 0.867                  |
|          | Max. depth          | 1, 2, 3                         | 3             |                        |
|          | Max. features       | sqrt, log2, 1, 2                | 1             |                        |
|          | No. of estimators   | 10–100                          | 40            |                        |
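The search procedure behind a table like this can be sketched with scikit-learn's `GridSearchCV`, shown here for the KNN rows. The parameter ranges are copied from the table; the dataset (synthetic `make_classification`), the 5-fold split, and the scoring metric are assumptions for illustration, not details stated in the table itself.

```python
# Hedged sketch of the cross-validated grid search implied by the KNN rows.
# The data and fold count below are placeholders, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in dataset (the paper's data is not in the table).
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Investigated ranges from the KNN rows of Table 3.
param_grid = {
    "algorithm": ["auto", "ball_tree", "kd_tree", "brute"],
    "n_neighbors": [1, 3, 5, 7, 9],
    "p": [1, 2],                      # the table's "Penalty": Minkowski power
    "weights": ["uniform", "distance"],
}

search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The reported "Cross-validation score" column corresponds to `search.best_score_`, and the "Optimal value" column to the entries of `search.best_params_`.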