
Table 4:

ML methods and the hyperparameter spaces used in tuning.

Method Hyperparameters

AdaBoost {'learning_rate': (0.01, 0.1, 1.0, 10.0), 'n_estimators': (10, 100, 1000)}
KernelRidge {'kernel': ('linear', 'poly', 'rbf', 'sigmoid'), 'alpha': (0.0001, 0.01, 0.1, 1), 'gamma': (0.01, 0.1, 1, 10)}
LassoLars {'alpha': (0.0001, 0.001, 0.01, 0.1, 1)}
LGBM {'n_estimators': (10, 50, 100, 250, 500, 1000), 'learning_rate': (0.0001, 0.01, 0.05, 0.1, 0.2), 'subsample': (0.5, 0.75, 1), 'boosting_type': ('gbdt', 'dart', 'goss')}
LinearRegression {'fit_intercept': (True,)}
MLP {'activation': ('logistic', 'tanh', 'relu'), 'solver': ('lbfgs', 'adam', 'sgd'), 'learning_rate': ('constant', 'invscaling', 'adaptive')}
RandomForest {'n_estimators': (10, 100, 1000), 'min_weight_fraction_leaf': (0.0, 0.25, 0.5), 'max_features': ('sqrt', 'log2', None)}
SGD {'alpha': (1e-06, 0.0001, 0.01, 1), 'penalty': ('l2', 'l1', 'elasticnet')}
XGB {'n_estimators': (10, 50, 100, 250, 500, 1000), 'learning_rate': (0.0001, 0.01, 0.05, 0.1, 0.2), 'gamma': (0, 0.1, 0.2, 0.3, 0.4), 'subsample': (0.5, 0.75, 1)}
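
The grids above follow scikit-learn's param-grid convention, so they can be passed directly to GridSearchCV. Below is a minimal sketch showing two of them in use; the toy dataset, the regressor variants (e.g. AdaBoostRegressor), the 5-fold cross-validation, and the R^2 scoring are illustrative assumptions, not the paper's exact tuning protocol.

from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

# Illustrative toy regression problem (assumption; stands in for the benchmark data).
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# Two of the grids from Table 4, verbatim.
searches = [
    (AdaBoostRegressor(random_state=0),
     {'learning_rate': (0.01, 0.1, 1.0, 10.0), 'n_estimators': (10, 100, 1000)}),
    (KernelRidge(),
     {'kernel': ('linear', 'poly', 'rbf', 'sigmoid'),
      'alpha': (0.0001, 0.01, 0.1, 1), 'gamma': (0.01, 0.1, 1, 10)}),
]

for estimator, grid in searches:
    # Exhaustive search over the grid; cv and scoring are illustrative choices.
    cv = GridSearchCV(estimator, grid, cv=5, scoring='r2', n_jobs=-1)
    cv.fit(X, y)
    print(type(estimator).__name__, cv.best_params_, round(cv.best_score_, 3))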