Author manuscript; available in PMC: 2021 Dec 8.
Published in final edited form as: IEEE/ACM Trans Comput Biol Bioinform. 2020 Dec 8;17(6):1846–1857. doi: 10.1109/TCBB.2019.2910061

TABLE X. Optimal Hyperparameters for MGH NeuroBank Corpus Level 4

Model              Hyperparameter            Value
-----------------  ------------------------  ----------
GCNN               regularization            2.65e-2
                   decay_steps               410
                   learning_rate             1.01e-2
                   pool                      mpool1
                   momentum                  8.14e-1
                   num_epochs                350
                   batch_size                25
                   M                         [138, 60]
                   ps                        [[2]]
                   decay_rate                9.98e-1
                   Ks                        [[26]]
                   Fs                        [[31]]
                   dropout                   6.22e-1
FF-ANN             hidden_layer_sizes        [976]
                   alpha                     1.16
                   power_t                   3.21e-1
                   activation                relu
                   learning_rate_init        4.05e-1
                   early_stopping            False
                   momentum                  9.07e-1
                   tol                       1.00e-5
                   nesterovs_momentum        True
                   learning_rate             invscaling
KNN                n_neighbors               6
                   metric                    canberra
                   weights                   distance
Linear Classifier  learning_rate             invscaling
                   tol                       1.00e-5
                   n_jobs                    -1
                   power_t                   1.84e-1
                   penalty                   l1
                   eta0                      3.17e-4
                   loss                      log
                   l1_ratio                  4.06e-1
                   alpha                     1.23e-3
Random Forest      max_depth                 25
                   max_leaf_nodes            500
                   min_weight_fraction_leaf  4.33e-4
                   min_samples_split         2
                   min_samples_leaf          1
                   n_estimators              411
                   criterion                 gini
                   min_impurity_decrease     3.64e-5
Decision Tree      min_samples_split         2
                   max_leaf_nodes            None
                   criterion                 gini
                   min_impurity_decrease     7.73e-5
                   min_weight_fraction_leaf  2.53e-3
                   min_samples_leaf          1
                   max_features              250
                   max_depth                 100
                   splitter                  best
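The parameter names for the non-GCNN models (n_neighbors, nesterovs_momentum, min_impurity_decrease, etc.) match scikit-learn's estimator constructors, so the tabled values can be plugged in directly. Below is a minimal, hedged sketch of that mapping; solver="sgd" for the FF-ANN is an assumption (momentum, nesterovs_momentum, and power_t only take effect with the SGD solver), and the GCNN settings (M, Ks, Fs, ps, pool) appear to belong to a separate graph-CNN implementation and are omitted here.

```python
# Hypothetical reconstruction of the tabled baselines as scikit-learn
# estimators. Values are copied verbatim from Table X; nothing is fitted.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# FF-ANN block; solver="sgd" is assumed (see lead-in).
ff_ann = MLPClassifier(
    hidden_layer_sizes=[976],
    alpha=1.16,
    power_t=3.21e-1,
    activation="relu",
    learning_rate_init=4.05e-1,
    early_stopping=False,
    momentum=9.07e-1,
    tol=1.00e-5,
    nesterovs_momentum=True,
    learning_rate="invscaling",
    solver="sgd",
)

# KNN block.
knn = KNeighborsClassifier(n_neighbors=6, metric="canberra", weights="distance")

# "Linear Classifier" block: loss="log" is the pre-1.1 scikit-learn name
# for logistic loss; on scikit-learn >= 1.1 it must be spelled "log_loss".
linear = SGDClassifier(
    learning_rate="invscaling",
    tol=1.00e-5,
    n_jobs=-1,
    power_t=1.84e-1,
    penalty="l1",
    eta0=3.17e-4,
    loss="log",
    l1_ratio=4.06e-1,
    alpha=1.23e-3,
)

# Random Forest block.
rf = RandomForestClassifier(
    max_depth=25,
    max_leaf_nodes=500,
    min_weight_fraction_leaf=4.33e-4,
    min_samples_split=2,
    min_samples_leaf=1,
    n_estimators=411,
    criterion="gini",
    min_impurity_decrease=3.64e-5,
)

# Decision Tree block.
dt = DecisionTreeClassifier(
    min_samples_split=2,
    max_leaf_nodes=None,
    criterion="gini",
    min_impurity_decrease=7.73e-5,
    min_weight_fraction_leaf=2.53e-3,
    min_samples_leaf=1,
    max_features=250,
    max_depth=100,
    splitter="best",
)
```

Note that several SGDClassifier settings interact: l1_ratio only changes the penalty when penalty="elasticnet", and eta0/power_t govern the "invscaling" learning-rate schedule, so the values are reproduced here exactly as tabled rather than simplified.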