Sci Rep. 2022 Feb 23;12:3057. doi: 10.1038/s41598-022-06459-2

Table 4.

Hyperparameter ranges for each machine learning model. Abbreviation definitions (in order of appearance, left to right): ML, machine learning; learn rate, learning rate; subsamp., subsampling per tree; cols/tree, columns per tree; max iter., maximum iterations; SVM, support vector machine; SVM-Gaussian, SVM with Gaussian radial basis function kernel; SVM-Linear, SVM with linear kernel; Log. regression, logistic regression; lasso, lasso regression with L1 penalization; ridge, ridge regression with L2 regularization; DFNN, deep feedforward neural network; LSTM, bidirectional long short-term memory neural network; BrainNet CNN, BrainNet convolutional neural network (ref. 52); ReLU leaky slope, slope of the leaky rectified linear unit for x < 0.

Nonlinear classical ML

- Naïve Bayes: NA
- Random forest: estimators [50, 5e3]; max nodes [5, 50]
- Extremely random trees: estimators [50, 5e3]; max nodes [5, 50]
- Adaptive boosting: estimators [50, 5e3]; learn rate [0.1, 0.9]
- Gradient boosting: estimators [50, 5e3]; learn rate [5, 50]; max depth [1, 10]; subsamp. [0.2, 0.8]; cols/tree [0.2, 1]
- SVM-Gaussian: C [1e−4, 1e5]; max iter. [1e4, 1e5]; gamma [1e−2, 1e2]
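To make the ranges concrete, here is a minimal sketch of how the nonlinear search spaces could be written with scikit-learn; it is an illustration, not the authors' code. The parameter-name mappings (e.g. "estimators" -> n_estimators, "max nodes" -> max_leaf_nodes) are assumptions, and the gradient-boosting row's subsamp. and cols/tree knobs would correspond to subsample and colsample_bytree if an XGBoost-style implementation was used.

```python
# Sketch of the nonlinear-ML search spaces above, assuming scikit-learn
# parameter names (the paper does not show its tuning code).
from scipy.stats import loguniform, randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

search_spaces = {
    # Extremely random trees use the same ranges as the random forest.
    "random_forest": {
        "n_estimators": randint(50, 5000),     # estimators [50, 5e3]
        "max_leaf_nodes": randint(5, 50),      # max nodes [5, 50] (name assumed)
    },
    "adaptive_boosting": {
        "n_estimators": randint(50, 5000),     # estimators [50, 5e3]
        "learning_rate": uniform(0.1, 0.8),    # learn rate [0.1, 0.9]
    },
    "svm_gaussian": {
        "C": loguniform(1e-4, 1e5),            # C [1e-4, 1e5]
        "max_iter": randint(10_000, 100_000),  # max iter. [1e4, 1e5]
        "gamma": loguniform(1e-2, 1e2),        # gamma [1e-2, 1e2]
    },
}

# Usage example: random search over the random-forest space on toy data.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=search_spaces["random_forest"],
    n_iter=10, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Note that scipy.stats.uniform(loc, scale) samples from [loc, loc + scale], which is why the learn-rate range [0.1, 0.9] appears as uniform(0.1, 0.8).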

Linear classical ML

- SVM-Linear: C [1e−4, 1e5]; max iter. [1e4, 1e5]
- Log. regression (lasso): C [1e−4, 1e4]; max iter. [1e4, 1e5]
- Log. regression (ridge): C [1e−4, 1e4]; max iter. [1e4, 1e5]
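The linear ranges map directly onto scikit-learn's LinearSVC and LogisticRegression, assuming those estimators (or equivalents) were used. In this parameterization C is the inverse regularization strength, so the low end of each range is the strongest penalty, and L1-penalized logistic regression requires a compatible solver such as "liblinear" or "saga".

```python
# Sketch of the linear-ML search spaces above (estimator choices assumed).
from scipy.stats import loguniform, randint
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

linear_models = {
    "svm_linear": (
        LinearSVC(),
        {"C": loguniform(1e-4, 1e5),             # C [1e-4, 1e5]
         "max_iter": randint(10_000, 100_000)},  # max iter. [1e4, 1e5]
    ),
    "logreg_lasso": (
        LogisticRegression(penalty="l1", solver="liblinear"),  # L1 needs liblinear/saga
        {"C": loguniform(1e-4, 1e4),             # C [1e-4, 1e4]
         "max_iter": randint(10_000, 100_000)},
    ),
    "logreg_ridge": (
        LogisticRegression(penalty="l2"),        # L2 is scikit-learn's default
        {"C": loguniform(1e-4, 1e4),             # C [1e-4, 1e4]
         "max_iter": randint(10_000, 100_000)},
    ),
}
```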

Deep learning

- DFNN: hidden layers [1, 3]; initial width [16, 256]; dropout fraction [0.1, 0.6]; L2 penalty [1e−4, 2e−2]
- LSTM: hidden layers [1, 3]; initial width [16, 256]; dropout fraction [0.1, 0.6]; L2 penalty [1e−4, 2e−2]
- BrainNet CNN: hidden layers [0, 2]; initial width [16, 64]; dropout fraction [0.1, 0.6]; ReLU leaky slope [0.1, 0.5]
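The deep-learning rows are architecture and regularization knobs rather than search distributions per se. The PyTorch sketch below shows one way a DFNN could be built from them; it is illustrative, not the authors' model. The width-halving rule after the initial layer, the use of a LeakyReLU (borrowing the leaky-slope range listed for BrainNet CNN), and mapping the L2 penalty onto the optimizer's weight_decay are all assumptions.

```python
# Illustrative PyTorch DFNN parameterized by the ranges above
# (layer-width schedule and optimizer choice are assumptions).
import torch
import torch.nn as nn

def build_dfnn(n_features, hidden_layers=2, initial_width=128,
               dropout=0.3, leaky_slope=0.1):
    layers, fan_in, width = [], n_features, initial_width
    for _ in range(hidden_layers):             # hidden layers [1, 3]
        layers += [nn.Linear(fan_in, width),   # initial width [16, 256]
                   nn.LeakyReLU(leaky_slope),  # ReLU leaky slope [0.1, 0.5]
                   nn.Dropout(dropout)]        # dropout fraction [0.1, 0.6]
        fan_in, width = width, max(width // 2, 16)  # assumed halving schedule
    layers.append(nn.Linear(fan_in, 1))        # single logit for binary output
    return nn.Sequential(*layers)

model = build_dfnn(n_features=100, hidden_layers=2,
                   initial_width=64, dropout=0.2)
# The L2 penalty range [1e-4, 2e-2] maps onto the optimizer's weight_decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
```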