Sensors. 2024 Jun 3;24(11):3598. doi: 10.3390/s24113598

Table 2.

Optimal hyperparameter values selected for the proposed network architecture.

| Parameter | Values tested | Optimal value |
| --- | --- | --- |
| Batch size in GAN | 4, 6, 8, 10, 12 | 10 |
| Optimizer in GAN | Adam, SGD, Adamax | SGD |
| Number of CNN layers | 3, 4, 5 | 4 |
| Learning rate in GAN | 0.1, 0.01, 0.001, 0.0001 | 0.0001 |
| Number of graph conv layers | 2, 3, 4, 5, 6, 7 | 6 |
| Batch size in GCN | 8, 16, 32 | 16 |
| Batch normalization | ReLU, Leaky-ReLU, TF-2 | TF-2 |
| Learning rate in GCN | 0.1, 0.01, 0.001, 0.0001, 0.00001 | 0.001 |
| Dropout rate | 0.1, 0.2, 0.3 | 0.1 |
| Optimizer weight decay | 4×10⁻³, 4×10⁻⁴, 4×10⁻⁵, 4×10⁻⁶, 4×10⁻⁷ | 4×10⁻⁶ |
| Error function | MSE, Cross-entropy | Cross-entropy |
| Optimizer in GCN | Adam, SGD, Adadelta, Adamax | Adadelta |
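A table of candidate values with a single chosen optimum like this is typically produced by an exhaustive grid search over the listed options. A minimal sketch of that procedure is below; the `evaluate()` scoring function is a hypothetical placeholder (in practice it would train the GAN/GCN and return a validation loss), not the paper's actual training loop, and only the GCN-side parameters are shown.

```python
from itertools import product

# Candidate values taken from Table 2 (GCN side of the architecture).
grid = {
    "batch_size": [8, 16, 32],
    "learning_rate": [0.1, 0.01, 0.001, 0.0001, 0.00001],
    "dropout": [0.1, 0.2, 0.3],
    "optimizer": ["Adam", "SGD", "Adadelta", "Adamax"],
}

def evaluate(config):
    # Hypothetical stand-in for model training: a real implementation
    # would fit the network with `config` and return validation loss.
    # Here we simply score configs by distance from the reported optimum.
    reported = {"batch_size": 16, "learning_rate": 0.001,
                "dropout": 0.1, "optimizer": "Adadelta"}
    return sum(config[k] != reported[k] for k in config)

def grid_search(grid):
    # Try every combination of candidate values and keep the best scorer.
    best_cfg, best_score = None, float("inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = evaluate(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

best = grid_search(grid)
```

Grid search is tractable here because the candidate lists are short (3 × 5 × 3 × 4 = 180 GCN combinations); for larger spaces, random or Bayesian search is the usual substitute.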