2023 Jul 28;6:1179226. doi: 10.3389/frai.2023.1179226

Table 2.

Hyperparameter tuning summary.

| Model classifier | Hyperparameter tuning description |
| --- | --- |
| RF | n_estimators = 200; max_depth = 15 (longest path from root node to leaf node); class_weight = "balanced"; max_features = "sqrt" (maximum number of features considered at each split); min_samples_split = 2; min_samples_leaf = 1; random_state = 42 |
| GB | n_estimators = 200; max_depth = 4; loss = "ls" |
| KNN | n_neighbors = 10; algorithm = "auto"; leaf_size = 1; p = 1; weights = "uniform" |
| AdaBoost | As for RF, define a decision tree (DT) classifier with the same settings, then boost the DT fit with AdaBoostClassifier. |
| SVM | kernel = "linear"; kernel coefficient gamma = 0.01; regularization parameter C = 10 |
| LoR | No critical hyperparameters need to be tuned; defaults were used. |