
Table 3. The choice of hyperparameters for each model [22, 23].

| Machine learning model | Hyperparameters | Optimum values |
| --- | --- | --- |
| Random forest [23] | Depth of the tree (T); number of trees (N) | T = 3, N = 100 |
| Logistic regression [22] | Inverse regularization strength (C); class weight adjustment (class_weight); dual formulation (dual); maximum number of iterations (max_iter) | C = 1.0, class_weight = None, dual = False, max_iter = 100 |
| Decision tree [22] | Confidence factor used for pruning (C); minimum number of instances per leaf (N) | C = 0.25, N = 2 |
| K-nearest neighbors [22] | Number of neighbors (n_neighbors); weight function used in prediction (weights) | n_neighbors = 5, weights = uniform |
| Support vector machine [22] | Regularization parameter (C); kernel type (kernel); maximum number of iterations (max_iter) | C = 1.0, kernel = linear, max_iter = 100 |
| XGBoost [22] | Depth of the tree (T); learning rate; number of estimators; gamma and other tuning parameters | T = 3, learning rate = 0.1, number of estimators = 100 |
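As a rough illustration, the sketch below instantiates each model with the optimum values from Table 3 using scikit-learn and xgboost. The mapping of the table's symbols onto library parameter names (e.g., T onto max_depth, N onto n_estimators or min_samples_leaf) is our assumption, not the authors' code; in particular, the decision tree's confidence factor C = 0.25 is a Weka/J48-style pruning parameter with no direct scikit-learn equivalent, so it is only noted in a comment.

```python
# Minimal sketch: Table 3 hyperparameters mapped onto scikit-learn / xgboost.
# The parameter-name mapping is an assumption; this is not the paper's code.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

models = {
    # Random forest: T = 3, N = 100
    "random_forest": RandomForestClassifier(max_depth=3, n_estimators=100),
    # Logistic regression: C = 1.0, class_weight = None, dual = False,
    # max_iter = 100
    "logistic_regression": LogisticRegression(
        C=1.0, class_weight=None, dual=False, max_iter=100
    ),
    # Decision tree: N = 2 maps to min_samples_leaf; the J48-style pruning
    # confidence factor C = 0.25 has no direct scikit-learn counterpart.
    "decision_tree": DecisionTreeClassifier(min_samples_leaf=2),
    # K-nearest neighbors: n_neighbors = 5, weights = uniform
    "knn": KNeighborsClassifier(n_neighbors=5, weights="uniform"),
    # Support vector machine: C = 1.0, kernel = linear, max_iter = 100
    "svm": SVC(C=1.0, kernel="linear", max_iter=100),
    # XGBoost: T = 3, learning rate = 0.1, number of estimators = 100
    "xgboost": XGBClassifier(max_depth=3, learning_rate=0.1, n_estimators=100),
}

# Example usage with hypothetical training data X_train, y_train:
# for name, model in models.items():
#     model.fit(X_train, y_train)
```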