Sci Rep. 2024 Dec 18;14:30554. doi: 10.1038/s41598-024-81132-4

Table 5. Summary of hyperparameters and hyperparameter optimization (HPO) methods for various machine learning algorithms.

| ML algorithm | Main hyperparameters | Optional hyperparameters | HPO methods | HPO libraries |
|---|---|---|---|---|
| Ridge & lasso | alpha | | BO-GP | Skopt |
| Logistic regression | penalty, C, solver | | BO-TPE, SMAC | Hyperopt, SMAC |
| KNN | n_neighbors | weights, p, algorithm | BOs, Hyperband | Skopt, Hyperopt, SMAC, Hyperband |
| SVM | C, kernel, epsilon (for SVR) | gamma, coef0, degree | BO-TPE, SMAC, BOHB | Hyperopt, SMAC, BOHB |
| NB | alpha | | BO-GP | Skopt |
| DT | criterion, max_depth, min_samples_split, min_samples_leaf, max_features | splitter, min_weight_fraction_leaf, max_leaf_nodes | GA, PSO, BO-TPE, SMAC, BOHB | TPOT, Optunity, SMAC, BOHB |
| RF & ET | n_estimators, max_depth, criterion, min_samples_split, min_samples_leaf, max_features | splitter, min_weight_fraction_leaf, max_leaf_nodes | GA, PSO, BO-TPE, SMAC, BOHB | TPOT, Optunity, SMAC, BOHB |
| XGBoost | n_estimators, max_depth, learning_rate, subsample, colsample_bytree | min_child_weight, gamma, alpha, lambda | GA, PSO, BO-TPE, SMAC, BOHB | TPOT, Optunity, SMAC, BOHB |
| Voting | estimators, voting | weights | GS | Sklearn |
| Bagging | base_estimator, n_estimators | max_samples, max_features | GS, BOs | Sklearn, Skopt, Hyperopt, SMAC |
| AdaBoost | base_estimator, n_estimators, learning_rate | | BO-TPE, SMAC | Hyperopt, SMAC |
| Deep learning | number of hidden layers, units per layer, loss, optimizer, activation, learning_rate, dropout rate, epochs, batch_size, early-stop patience | number of frozen layers (if transfer learning is used) | PSO, BOHB | Optunity, BOHB |
| Hierarchical clustering | n_clusters, distance_threshold | linkage | BOs, Hyperband | Skopt, Hyperopt, SMAC, Hyperband |
| DBSCAN | eps, min_samples | | BO-TPE, SMAC, BOHB | Hyperopt, SMAC, BOHB |
| Gaussian mixture | n_components | covariance_type, max_iter, tol | BO-GP | Skopt |
| PCA | n_components | svd_solver | BOs, Hyperband | Skopt, Hyperopt, SMAC, Hyperband |
| LDA | n_components | solver, shrinkage | BOs, Hyperband | Skopt, Hyperopt, SMAC, Hyperband |
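
To make a few of the method-library pairings above concrete, the sketches below are illustrative only; the datasets, search ranges, and evaluation budgets are assumptions, not values from the paper. First, the BO-GP/Skopt pairing from the ridge row, tuning alpha with skopt's gp_minimize:

```python
# Illustrative BO-GP sketch with Skopt: tune Ridge's alpha.
# Dataset, bounds, and n_calls are assumptions for demonstration.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Real

X, y = load_diabetes(return_X_y=True)

def objective(params):
    (alpha,) = params
    model = Ridge(alpha=alpha)
    # gp_minimize minimizes, so negate the cross-validated R^2.
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

result = gp_minimize(
    objective,
    dimensions=[Real(1e-4, 1e2, prior="log-uniform", name="alpha")],
    n_calls=30,
    random_state=0,
)
print("best alpha:", result.x[0], "| best CV R^2:", -result.fun)
```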
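For the BO-TPE/Hyperopt pairing (logistic regression, SVM, AdaBoost, and DBSCAN rows), a minimal sketch tuning an SVM's C, kernel, and gamma; the search space is an assumption:

```python
# Illustrative BO-TPE sketch with Hyperopt: tune SVM's C, kernel, gamma.
# The search space below is an assumption, not taken from the paper.
from hyperopt import fmin, hp, space_eval, tpe, Trials
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

space = {
    "C": hp.loguniform("C", -3, 3),          # roughly 0.05 .. 20
    "kernel": hp.choice("kernel", ["linear", "rbf", "poly"]),
    "gamma": hp.loguniform("gamma", -4, 1),
}

def objective(params):
    # TPE minimizes, so return 1 - mean CV accuracy.
    return 1.0 - cross_val_score(SVC(**params), X, y, cv=3).mean()

best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=Trials())
# hp.choice returns an index; decode it back into parameter values.
print(space_eval(space, best))
```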
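For the GS/Sklearn pairing in the voting row, a sketch using GridSearchCV over the voting and weights hyperparameters, with an assumed two-member ensemble:

```python
# Illustrative grid-search (GS) sketch with scikit-learn: tune a voting
# ensemble's voting scheme and weights. The ensemble members and grid
# values are assumptions for demonstration.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=0)),
])

param_grid = {
    "voting": ["hard", "soft"],
    "weights": [None, [1, 2], [2, 1]],
}
search = GridSearchCV(ensemble, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```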
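For the PSO/Optunity pairing in the tree-ensemble rows, a sketch assuming Optunity's maximize API with the "particle swarm" solver; Optunity proposes continuous values, so integer-valued hyperparameters are rounded inside the objective, and the box constraints are illustrative:

```python
# Illustrative PSO sketch with Optunity: tune a random forest's
# n_estimators and max_depth. Box constraints and budget are assumptions.
import optunity
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def objective(n_estimators, max_depth):
    # Optunity passes floats; round integer-valued hyperparameters.
    model = RandomForestClassifier(n_estimators=int(n_estimators),
                                   max_depth=int(max_depth),
                                   random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

optimal_pars, details, _ = optunity.maximize(
    objective,
    num_evals=40,
    solver_name="particle swarm",
    n_estimators=[10, 200],
    max_depth=[2, 20],
)
print(optimal_pars, details.optimum)
```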