Table 4. Hyperparameter search ranges for each model.
| | Naïve Bayes | Random forest | Extremely random trees | Adaptive boosting | Gradient boosting | SVM-Gaussian |
|---|---|---|---|---|---|---|
| Nonlinear classical ML | NA | estimators [50, 5e3]; max nodes [5, 50] | estimators [50, 5e3]; max nodes [5, 50] | estimators [50, 5e3]; learning rate [0.1, 0.9] | estimators [50, 5e3]; learning rate [5, 50]; max depth [1, 10]; subsampling [0.2, 0.8]; columns per tree [0.2, 1] | C [1e−4, 1e5]; max iterations [1e4, 1e5]; gamma [1e−2, 1e2] |
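Because the table lists only the ranges, a short sketch may help make the search concrete. The example below is hypothetical, not the authors' code: it expresses two of the nonlinear classical ML ranges above as scikit-learn search spaces. The estimator classes, argument names (`n_estimators`, `max_leaf_nodes`), and the use of `RandomizedSearchCV` are assumptions, since the table does not name a library.

```python
# Minimal sketch (assumed scikit-learn implementation) of two of the
# nonlinear classical ML search ranges from Table 4.
from scipy.stats import loguniform, randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

search_spaces = {
    # Random forest: estimators in [50, 5e3], max nodes in [5, 50]
    "random_forest": (
        RandomForestClassifier(),
        {"n_estimators": randint(50, 5000), "max_leaf_nodes": randint(5, 50)},
    ),
    # SVM with Gaussian (RBF) kernel: C in [1e-4, 1e5], gamma in [1e-2, 1e2],
    # max iterations in [1e4, 1e5]
    "svm_gaussian": (
        SVC(kernel="rbf"),
        {"C": loguniform(1e-4, 1e5), "gamma": loguniform(1e-2, 1e2),
         "max_iter": randint(10_000, 100_000)},
    ),
}

def tune(name, X, y, n_iter=50, cv=5):
    """Randomized search over one model's range listed in Table 4."""
    estimator, params = search_spaces[name]
    search = RandomizedSearchCV(estimator, params, n_iter=n_iter, cv=cv, n_jobs=-1)
    return search.fit(X, y)
```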
| | SVM-linear | Log. regression (Lasso) | Log. regression (Ridge) |
|---|---|---|---|
| Linear classical ML | C [1e−4, 1e5]; max iterations [1e4, 1e5] | C [1e−4, 1e4]; max iterations [1e4, 1e5] | C [1e−4, 1e4]; max iterations [1e4, 1e5] |
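The linear classical ML ranges follow the same pattern. A minimal sketch, again assuming scikit-learn; the choice of `LinearSVC`, `LogisticRegression`, the solvers, and the argument names are assumptions rather than details stated in the table.

```python
# Sketch (assumed scikit-learn) of the linear classical ML ranges from Table 4.
from scipy.stats import loguniform, randint
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

linear_search_spaces = {
    # Linear SVM: C in [1e-4, 1e5], max iterations in [1e4, 1e5]
    "svm_linear": (
        LinearSVC(),
        {"C": loguniform(1e-4, 1e5), "max_iter": randint(10_000, 100_000)},
    ),
    # Lasso-penalized logistic regression: C in [1e-4, 1e4]
    "logreg_lasso": (
        LogisticRegression(penalty="l1", solver="liblinear"),
        {"C": loguniform(1e-4, 1e4), "max_iter": randint(10_000, 100_000)},
    ),
    # Ridge-penalized logistic regression: C in [1e-4, 1e4]
    "logreg_ridge": (
        LogisticRegression(penalty="l2"),
        {"C": loguniform(1e-4, 1e4), "max_iter": randint(10_000, 100_000)},
    ),
}
```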
| | DFNN | LSTM | BrainNet CNN |
|---|---|---|---|
| Deep learning | hidden layers [1, 3]; initial width [16, 256]; dropout fraction [0.1, 0.6]; L2 penalty [1e−4, 2e−2] | hidden layers [1, 3]; initial width [16, 256]; dropout fraction [0.1, 0.6]; L2 penalty [1e−4, 2e−2] | hidden layers [0, 2]; initial width [16, 64]; dropout fraction [0.1, 0.6]; leaky ReLU slope [0.1, 0.5] |
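For the deep learning rows, the ranges parameterize the architecture itself. The following is a minimal sketch of a DFNN builder, assuming Keras; the halving width schedule after the initial layer, the sigmoid binary output, and the optimizer are assumptions, as the table specifies only the four ranges.

```python
# Sketch (assumed Keras implementation) of a DFNN parameterized by the
# Table 4 ranges: hidden layers, initial width, dropout fraction, L2 penalty.
import tensorflow as tf

def build_dfnn(n_features, hidden_layers=2, initial_width=128,
               dropout_fraction=0.3, l2_penalty=1e-3):
    """Deep feedforward network drawn from the Table 4 ranges (illustrative)."""
    reg = tf.keras.regularizers.l2(l2_penalty)                # L2 penalty [1e-4, 2e-2]
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=(n_features,)))
    width = initial_width                                      # initial width [16, 256]
    for _ in range(hidden_layers):                             # hidden layers [1, 3]
        model.add(tf.keras.layers.Dense(width, activation="relu",
                                        kernel_regularizer=reg))
        model.add(tf.keras.layers.Dropout(dropout_fraction))   # dropout [0.1, 0.6]
        width = max(width // 2, 8)                             # assumed halving schedule
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))  # assumed binary output
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Example usage: a 2-hidden-layer DFNN for 400 input features.
# model = build_dfnn(n_features=400, hidden_layers=2, initial_width=64)
```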