Table 2. AUC values of the FCNN model on the validation and test sets for different hyper-parameter settings.

| Hyper-parameter | Value | Validation AUC | Test AUC |
|---|---|---|---|
| Kernel size | 2 | 0.8310 | 0.8321 |
| | 3 | 0.8121 | 0.8172 |
| Stride | 1 | 0.8310 | 0.8321 |
| | 2 | 0.8089 | 0.8156 |
| Number of neurons | 25 | 0.8191 | 0.8232 |
| | 81 | 0.8310 | 0.8321 |
| | 169 | 0.8189 | 0.8236 |
| Learning rate | 0.01 | 0.8250 | 0.8296 |
| | 0.001 | 0.8310 | 0.8321 |
| | 0.0001 | 0.7763 | 0.7802 |
| Dropout probability | 0.1 | 0.8310 | 0.8321 |
| | 0.2 | 0.8196 | 0.8228 |
| | 0.3 | 0.8180 | 0.8227 |
| Batch size | 200 | 0.8166 | 0.8231 |
| | 250 | 0.8310 | 0.8321 |
| | 300 | 0.8135 | 0.8209 |
| Activation function | ReLU_ReLU | 0.8132 | 0.8224 |
| | ReLU_Sigmoid | 0.8127 | 0.8210 |
| | ReLU_Tanh | 0.8127 | 0.8242 |
| | Sigmoid_ReLU | 0.8224 | 0.8296 |
| | Sigmoid_Sigmoid | 0.8245 | 0.8301 |
| | Sigmoid_Tanh | 0.8271 | 0.8308 |
| | Tanh_ReLU | 0.8253 | 0.8297 |
| | Tanh_Sigmoid | 0.8245 | 0.8309 |
| | Tanh_Tanh | 0.8310 | 0.8321 |
With the best value of each hyper-parameter (kernel size 2, stride 1, 81 neurons, learning rate 0.001, dropout probability 0.1, batch size 250, and Tanh_Tanh activations), the FCNN model obtains the optimal AUC of 0.8310 on the validation set and 0.8321 on the test set, as illustrated by the sketch below.
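The best-performing settings in Table 2 can be collected into a single model configuration. The following is a minimal sketch, not the authors' implementation: the exact layer arrangement (one 1-D convolution followed by one fully connected layer), the number of convolution channels (16), the input shape, the number of output classes, and the use of PyTorch with the Adam optimizer are all assumptions; only the hyper-parameter values themselves are taken from Table 2.

```python
# Assumed reconstruction of the best FCNN configuration from Table 2 (not the authors' code).
import torch
import torch.nn as nn


class FCNNSketch(nn.Module):
    def __init__(self, in_channels: int = 1, seq_len: int = 100, n_classes: int = 2):
        super().__init__()
        # Best settings from Table 2: kernel size 2, stride 1, dropout 0.1, Tanh_Tanh.
        self.conv = nn.Conv1d(in_channels, out_channels=16, kernel_size=2, stride=1)
        self.act1 = nn.Tanh()                        # first activation (Tanh_Tanh)
        self.dropout = nn.Dropout(p=0.1)             # dropout probability 0.1
        conv_out_len = seq_len - 1                   # kernel 2, stride 1, no padding
        self.fc = nn.Linear(16 * conv_out_len, 81)   # 81 neurons in the dense layer
        self.act2 = nn.Tanh()                        # second activation (Tanh_Tanh)
        self.out = nn.Linear(81, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.dropout(self.act1(self.conv(x)))
        x = torch.flatten(x, start_dim=1)
        x = self.act2(self.fc(x))
        return self.out(x)


model = FCNNSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # learning rate 0.001
# Training would use mini-batches of size 250, per Table 2.
```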