2025 Jan 7;15:1037. doi: 10.1038/s41598-025-85106-y

Table 2.

Range and tuned values of hyperparameters of all the machine learning methods utilized in this study.

| Method | Parameters and the range | Optimum values and structure |
| --- | --- | --- |
| DT | Max depth (1 to 50); min samples split (2 to 20); min samples leaf (1 to 20) | Max depth: 8; min samples split: 2; min samples leaf: 1 |
| AB | Number of estimators (1 to 50); learning rate (0.01 to 1.0) | Number of estimators: 5; learning rate: 1.0 |
| RF | Number of estimators (1 to 20); max depth (1 to 20); min samples split (2 to 20) | Number of estimators: 19; max depth: 20; min samples split: 2 |
| KNN | Number of neighbors (1 to 50) | Number of neighbors: 2 |
| EL | Constructed from the DT, AB, RF and KNN algorithms | Tuned values of the DT, AB, RF and KNN methods |
| SVM | C hyperparameter (1 to 1000); kernel function (linear, polynomial, RBF, sigmoid); gamma (1e-4 to 1.0) | C hyperparameter: 701; kernel function: RBF; gamma: 0.33 |
| CNN | Number of filters (32 to 512); filter size (3×3, 5×5, 7×7); pooling size (2×2 or 3×3) | Number of filters: 32; filter size: 5×5; pooling size: 2×2 |
| MLP-ANN | Number of hidden layers (2 to 20); neurons per hidden layer (5 to 40); activation function (ReLU, tanh, sigmoid); learning rate (0.001 to 0.1) | Number of hidden layers: 6; neurons per hidden layer: 33; activation function: ReLU; learning rate: 0.001 |
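The table above can be sketched in code. The snippet below is a minimal, hedged illustration using scikit-learn: only the DT search ranges and the reported tuned values come from the table; the synthetic dataset, the choice of `RandomizedSearchCV`, the number of search iterations, the cross-validation setting, and the use of hard voting for the EL model are assumptions, since the paper's exact tuning and aggregation procedure is not given here.

```python
# Illustrative sketch of the Table 2 setup (assumptions noted above).
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; the study's actual dataset is not reproduced here.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# DT search space exactly as listed in the table.
dt_space = {
    "max_depth": range(1, 51),          # 1 to 50
    "min_samples_split": range(2, 21),  # 2 to 20
    "min_samples_leaf": range(1, 21),   # 1 to 20
}
search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    dt_space,
    n_iter=25,   # assumed budget
    cv=5,        # assumed CV setting
    random_state=0,
)
search.fit(X, y)
best = search.best_params_  # sampled from the ranges above

# Estimators instantiated with the tuned values reported in the table.
dt = DecisionTreeClassifier(max_depth=8, min_samples_split=2,
                            min_samples_leaf=1)
ab = AdaBoostClassifier(n_estimators=5, learning_rate=1.0)
rf = RandomForestClassifier(n_estimators=19, max_depth=20,
                            min_samples_split=2)
knn = KNeighborsClassifier(n_neighbors=2)

# EL: the table states the ensemble is built from DT, AB, RF and KNN;
# hard majority voting is an assumed aggregation scheme.
el = VotingClassifier([("dt", dt), ("ab", ab), ("rf", rf), ("knn", knn)])
el.fit(X, y)
```

The same pattern extends to the SVM, CNN and MLP-ANN rows by swapping in their respective search spaces and estimators.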