2022 May 24;22:451. doi: 10.1186/s12877-022-03152-x

Table 4.

The best-tuned hyperparameters for each model

| Classifier model | Hyperparameters |
| --- | --- |
| Gradient Boosting | max_depth = 10, max_features = 'sqrt', min_samples_split = 50, n_estimators = 800, random_state = 8, learning_rate = 0.5, subsample = 0.5 |
| Random Forests | max_depth = 60, max_features = 'sqrt', min_samples_split = 5, min_samples_leaf = 4, n_estimators = 400, random_state = 8 |
| Artificial Neural Network | activation = 'identity', alpha = 0.0001, batch_size = 'auto', hidden_layer_sizes = 7, learning_rate = 'adaptive', learning_rate_init = 0.001, max_iter = 500, solver = 'lbfgs' |
| Logistic Regression | C = 0.4, multi_class = 'multinomial', random_state = 8, solver = 'saga' |
| Naive Bayes | alpha = 1.0, fit_prior = True, class_prior = None |
| Support Vector Machine | C = 0.1, degree = 4, kernel = 'poly', probability = True, random_state = 8 |
| K-Nearest Neighbors | n_neighbors = 3 |
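The parameter names in Table 4 follow scikit-learn's estimator APIs. A minimal sketch of instantiating each classifier with these tuned values is below; note two assumptions not stated in the table: the Naive Bayes parameters (alpha, fit_prior, class_prior) are taken to refer to MultinomialNB (BernoulliNB shares the same signature), and hidden_layer_sizes = 7 is read as a single hidden layer of 7 units.

```python
# Sketch (not the paper's code): building the Table 4 classifiers in scikit-learn.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB  # assumption: NB variant not stated in the table
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

models = {
    "Gradient Boosting": GradientBoostingClassifier(
        max_depth=10, max_features="sqrt", min_samples_split=50,
        n_estimators=800, random_state=8, learning_rate=0.5, subsample=0.5),
    "Random Forests": RandomForestClassifier(
        max_depth=60, max_features="sqrt", min_samples_split=5,
        min_samples_leaf=4, n_estimators=400, random_state=8),
    # hidden_layer_sizes = 7 interpreted as one hidden layer of 7 units
    "Artificial Neural Network": MLPClassifier(
        activation="identity", alpha=0.0001, batch_size="auto",
        hidden_layer_sizes=(7,), learning_rate="adaptive",
        learning_rate_init=0.001, max_iter=500, solver="lbfgs"),
    "Logistic Regression": LogisticRegression(
        C=0.4, multi_class="multinomial", random_state=8, solver="saga"),
    "Naive Bayes": MultinomialNB(alpha=1.0, fit_prior=True, class_prior=None),
    "Support Vector Machine": SVC(
        C=0.1, degree=4, kernel="poly", probability=True, random_state=8),
    "K-Nearest Neighbors": KNeighborsClassifier(n_neighbors=3),
}

# Each estimator exposes fit(X, y) / predict(X); probability=True on the SVC
# additionally enables predict_proba via internal cross-validation.
```

Each dictionary entry is an unfitted estimator, so the same mapping can be iterated to train and compare all seven models on one dataset.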