Author manuscript; available in PMC: 2021 Dec 8.
Published in final edited form as: IEEE/ACM Trans Comput Biol Bioinform. 2020 Dec 8;17(6):1846–1857. doi: 10.1109/TCBB.2019.2910061

TABLE XI.

Optimal Hyperparameters for MGH NeuroBank Corpus Level 5

Model               Hyperparameter             Value
GCNN                regularization             5.00e-2
                    decay_steps                400
                    learning_rate              1.00e-3
                    pool                       apool1
                    momentum                   9.00e-1
                    num_epochs                 350
                    batch_size                 20
                    M                          [100, 60]
                    ps                         [[2]]
                    decay_rate                 9.60e-1
                    Ks                         [[7]]
                    Fs                         [[25]]
                    dropout                    5.00e-1
FF-ANN              hidden_layer_sizes         [946, 193]
                    alpha                      1.11
                    power_t                    8.87e-1
                    early_stopping             False
                    learning_rate_init         9.86e-1
                    nesterovs_momentum         True
                    learning_rate              constant
                    momentum                   8.76e-1
                    activation                 relu
KNN                 n_neighbors                7
                    metric                     canberra
                    weights                    distance
Linear Classifier   learning_rate              invscaling
                    tol                        1.00e-5
                    n_jobs                     -1
                    power_t                    1.84e-1
                    penalty                    l1
                    eta0                       3.17e-4
                    loss                       log
                    l1_ratio                   4.06e-1
                    alpha                      1.23e-3
Random Forest       max_depth                  25
                    max_leaf_nodes             500
                    min_weight_fraction_leaf   4.33e-4
                    min_samples_split          2
                    min_samples_leaf           1
                    n_estimators               411
                    criterion                  gini
                    min_impurity_decrease      3.64e-5
Decision Tree       min_samples_split          2
                    max_depth                  10
                    criterion                  entropy
                    min_impurity_decrease      1.23e-3
                    max_leaf_nodes             None
                    min_weight_fraction_leaf   2.08e-3
                    min_samples_leaf           2
                    max_features               None
                    splitter                   best