Front. Neurosci. 2020 Jun 30;14:630. doi: 10.3389/fnins.2020.00630

Table 2. Hyperparameter tuning experiments.

| Model | λ | Dropout | Learning rate | Layers | Hidden units | Accuracy (%) | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|---|---|
| GCN (baseline) | None | 0.5 (layers 2, 4, 5) | 0.005 | 5 | 32/32/64/64/128 | 83.98 ± 3.2 | 84.59 ± 3.1 | 87.78 ± 6.4 |
| GIN + Infomax | 0.05 | 0.5 | 0.005 | 5 | 64 | 84.61 ± 2.9 | 86.19 ± 3.3 | 86.81 ± 4.9 |
| GIN | 0.0 | – | – | – | – | 84.41 ± 2.8 | 85.39 ± 2.6 | 87.60 ± 7.5 |
| – | 0.01 | – | – | – | – | 84.08 ± 2.2 | 86.72 ± 4.4 | 85.31 ± 5.5 |
| – | 0.1 | – | – | – | – | 84.51 ± 2.1 | 86.85 ± 4.5 | 86.06 ± 5.5 |
| – | – | 0.0 | – | – | – | 83.99 ± 3.4 | 85.78 ± 4.4 | 86.26 ± 6.1 |
| – | – | – | 0.01 | – | – | 83.13 ± 3.4 | 85.89 ± 3.4 | 84.01 ± 5.2 |
| – | – | – | 0.001 | – | – | 81.54 ± 3.3 | 85.45 ± 3.4 | 81.37 ± 7.3 |
| – | – | – | – | 4 | – | 83.11 ± 3.2 | 84.62 ± 2.8 | 85.70 ± 4.2 |
| – | – | – | – | – | 32 | 83.13 ± 3.4 | 85.20 ± 4.3 | 85.14 ± 5.5 |

A dash (–) indicates that the value is unchanged from the GIN + Infomax configuration. Bold values indicate the saliency with respect to the input (24).
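The table follows a one-factor-at-a-time design: starting from the GIN + Infomax reference row, each subsequent row varies a single hyperparameter. The sketch below expresses that sweep as a small configuration grid in Python; it is illustrative only, and the names (`base_config`, `ablations`, `expand`) are ours rather than the paper's.

```python
from copy import deepcopy

# Reference GIN + Infomax configuration from Table 2.
base_config = {
    "lambda_infomax": 0.05,   # weight λ of the Infomax regularizer
    "dropout": 0.5,
    "learning_rate": 0.005,
    "layers": 5,
    "hidden_units": 64,
}

# Each ablation row of Table 2 overrides exactly one field;
# a dash in the table means the value is inherited from base_config.
ablations = [
    {"lambda_infomax": 0.0},    # plain GIN (no Infomax term)
    {"lambda_infomax": 0.01},
    {"lambda_infomax": 0.1},
    {"dropout": 0.0},
    {"learning_rate": 0.01},
    {"learning_rate": 0.001},
    {"layers": 4},
    {"hidden_units": 32},
]

def expand(base, overrides):
    """Yield one full configuration per ablation row."""
    for override in overrides:
        config = deepcopy(base)
        config.update(override)
        yield config

for config in expand(base_config, ablations):
    print(config)
```

Enumerating the grid this way makes the dash convention explicit: every field not named in an ablation entry keeps the reference value, which is exactly how the table rows are to be read.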