2023 Aug 21;84(4):780–809. doi: 10.1177/00131644231191298

Table 5.

Hyperparameter Spaces of the Base Models and the Optimal Hyperparameters Found by Grid Search.

Base model — Hyperparameter space

Naive Bayes: var_smoothing [1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9]

Linear discriminant analysis: solver ['svd', 'lsqr', 'eigen']

Gaussian process: kernel [1*RBF(), 1*DotProduct(), 1*Matern(), 1*RationalQuadratic(), 1*WhiteKernel()]

Support vector machine: kernel ['rbf', 'poly', 'sigmoid', 'linear']; degree [1, 2, 3, 4, 5, 6]; C [0.001, 0.01, 0.1, 1]

Decision tree: max_features ['auto', 'sqrt', 'log2']; ccp_alpha [0.1, 0.01, 0.001]; max_depth [5, 6, 7, 8, 9]

Random forest: max_features ['auto', 'sqrt', 'log2']; n_estimators [1, 10, 30, 100, 200]; max_depth [5, 6, 7, 8, 9]

XGBoost: learning_rate [0.05, 0.10, 0.15]; max_depth [8, 10, 12, 15]; min_child_weight [5, 7, 9, 11]; gamma [0.0, 0.1, 0.2, 0.3, 0.4]; colsample_bytree [0.4, 0.5, 0.7, 1.0]

AdaBoost: learning_rate [0.1, 1, 10]; n_estimators [10, 100, 200]; algorithm ['SAMME', 'SAMME.R']

Logistic regression: solver ['lbfgs', 'newton-cg', 'liblinear', 'sag', 'saga']; penalty ['l1', 'l2', 'elasticnet', 'none']; max_iter [100, 1000, 2000]; C [0.1, 0.2, 0.3, 0.4, 0.5]

TabNet: N/A

K-nearest neighbors: n_neighbors [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; weights ['uniform', 'distance']

Multilayer perceptron: hidden_layer_sizes [(10, 30, 10), (10,), (10, 30)]; solver ['lbfgs', 'sgd', 'adam']; activation ['tanh', 'relu']; learning_rate ['constant', 'adaptive', 'invscaling']; alpha [0.02, 0.1, 1]
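As a minimal sketch of how a grid such as the support-vector-machine row above can be searched, the snippet below uses scikit-learn's GridSearchCV (assumed here; the paper's exact tooling is not shown in this excerpt). The degree range is reduced from [1..6] to [1..3] and a small synthetic dataset is used purely to keep the illustration fast.

```python
# Sketch: grid search over the SVM hyperparameter space from Table 5,
# assuming scikit-learn. Dataset and cv settings are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Small synthetic binary-classification problem for demonstration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Hyperparameter space mirroring the SVM row (degree truncated for speed).
param_grid = {
    "kernel": ["rbf", "poly", "sigmoid", "linear"],
    "degree": [1, 2, 3],
    "C": [0.001, 0.01, 0.1, 1],
}

# Exhaustive search with 3-fold cross-validation.
search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)
search.fit(X, y)

print(search.best_params_)   # the optimal hyperparameters found by the grid
print(search.best_score_)    # mean cross-validated accuracy of that setting
```

The same pattern applies to every row of the table: each base model's grid becomes a `param_grid` dictionary, and GridSearchCV evaluates every combination by cross-validation.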