
Table 2.

The optimal hyperparameters of machine learning classifiers.

| Model | Hyperparameters |
|---|---|
| Decision tree | max_depth: 3, max_leaf_nodes: 4, min_samples_leaf: 5, min_samples_split: 165 |
| K-nearest neighbors | n_neighbors: 30 |
| XGBoost | learning_rate: 0.01, max_depth: 3, n_estimators: 100, subsample: 0.3 |
| Gradient boosting | learning_rate: 0.05, max_depth: 1, n_estimators: 30, subsample: 0.3 |
| Logistic regression | C: 0.1, l1_ratio: 0.01, max_iter: 10000, solver: "liblinear" |
| Support vector classifier | C: 0.5, degree: 1, kernel: "linear" |
| LightGBM | learning_rate: 0.2, max_depth: 3, n_estimators: 15, subsample: 0.3 |
| Random forest | max_depth: 2, max_features: 3, n_estimators: 5 |
| AdaBoost | learning_rate: 0.2, n_estimators: 20 |
| Bernoulli naïve Bayes | default settings |
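
The sketch below shows one plausible way to instantiate these classifiers with the tuned values from Table 2. The mapping to the scikit-learn, XGBoost, and LightGBM libraries is an assumption inferred from the parameter names; the paper's actual training pipeline and dataset are not reproduced here.

```python
# Minimal sketch: instantiating the classifiers of Table 2 with the
# reported optimal hyperparameters. The library mapping (scikit-learn,
# xgboost, lightgbm) is an assumption based on the parameter names.
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (
    AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

models = {
    "Decision tree": DecisionTreeClassifier(
        max_depth=3, max_leaf_nodes=4,
        min_samples_leaf=5, min_samples_split=165),
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=30),
    "XGBoost": XGBClassifier(
        learning_rate=0.01, max_depth=3, n_estimators=100, subsample=0.3),
    "Gradient boosting": GradientBoostingClassifier(
        learning_rate=0.05, max_depth=1, n_estimators=30, subsample=0.3),
    # Note: scikit-learn only consults l1_ratio when penalty="elasticnet"
    # (saga solver); it is passed here only to mirror the table.
    "Logistic regression": LogisticRegression(
        C=0.1, l1_ratio=0.01, max_iter=10000, solver="liblinear"),
    # degree is ignored by the linear kernel; kept to mirror the table.
    "Support vector classifier": SVC(C=0.5, degree=1, kernel="linear"),
    "LightGBM": LGBMClassifier(
        learning_rate=0.2, max_depth=3, n_estimators=15, subsample=0.3),
    "Random forest": RandomForestClassifier(
        max_depth=2, max_features=3, n_estimators=5),
    "AdaBoost": AdaBoostClassifier(learning_rate=0.2, n_estimators=20),
    "Bernoulli naive Bayes": BernoulliNB(),  # default settings
}

if __name__ == "__main__":
    # Placeholder data only; the paper's dataset is not reproduced here.
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    for name, clf in models.items():
        clf.fit(X, y)
        print(f"{name}: training accuracy = {clf.score(X, y):.3f}")
```

The dictionary layout makes it straightforward to fit and compare all ten models in one loop, which matches how such hyperparameter tables are typically consumed.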