Table 2. The proposed model with different structures.
| | DNN-1 | DNN-2 | DNN-3 | DNN-4 | DNN-5 | DNN-6 | DNN-7 |
|---|---|---|---|---|---|---|---|
| #Convolutional layers | 2 | 2 | 2 | 2 | 2 | 3 | 2 |
| Convolutional layer dim | 1D | 1D | 1D | 1D | 1D | 1D | 2D |
| #Filters | 32,32 | 64,64 | 32,32 | 32,32 | 32,32 | 32,32,15 | 32,32 |
| Kernel size | 3,3 | 3,3 | 3,3 | 3,3 | 3,3 | 3,3,3 | (3,3), (3,3) |
| #Pooling layer | Without pooling | Without pooling | 1 | Without pooling | Without pooling | Without pooling | Without pooling |
| Size of max-pooling | – | – | 2 | – | – | – | – |
| #Dense layers | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Activation function of fully-connected | ReLU + sigmoid (last dense layer) | ReLU + sigmoid (last dense layer) | ReLU + sigmoid (last dense layer) | ReLU + sigmoid (last dense layer) | ReLU + sigmoid (last dense layer) | ReLU + sigmoid (last dense layer) | ReLU + sigmoid (last dense layer) |
| Loss function | Binary cross entropy | Binary cross entropy | Binary cross entropy | Binary cross entropy | Binary cross entropy | Binary cross entropy | Binary cross entropy |
| Optimizer, learning rate | Adam, 0.01 | Adam, 0.01 | Adam, 0.01 | Adam, 0.01 | Adam, 0.01 | Adam, 0.01 | Adam, 0.01 |
| #Epochs | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Batch size | 256 | 256 | 256 | 256 | 256 | 256 | 256 |
| Dropout rate | 0.2 | 0.2 | 0.2 | Without dropout | 0.2 | 0.2 | 0.2 |
| Class weight in training? | No | No | No | No | Yes | No | No |
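The table above can be encoded programmatically to keep the seven variants reproducible. The sketch below captures the DNN-1 column as a plain Python dictionary; the field names and the `conv_layer_specs` helper are illustrative assumptions, not identifiers from the original work, and a Keras `Sequential` model could be assembled from such a configuration.

```python
# Hypothetical encoding of the DNN-1 configuration from Table 2.
# All key names are illustrative assumptions, not from the original paper.
DNN1_CONFIG = {
    "conv_layers": 2,
    "conv_dim": "1D",
    "filters": [32, 32],
    "kernel_sizes": [3, 3],
    "pooling": None,                            # DNN-1 uses no pooling layer
    "dense_layers": 2,
    "dense_activations": ["relu", "sigmoid"],   # sigmoid on the last dense layer
    "loss": "binary_crossentropy",
    "optimizer": ("adam", 0.01),                # (optimizer name, learning rate)
    "epochs": 100,
    "batch_size": 256,
    "dropout": 0.2,
    "class_weight": False,                      # no class weighting during training
}

def conv_layer_specs(config):
    """Pair each convolutional layer's filter count with its kernel size."""
    return list(zip(config["filters"], config["kernel_sizes"]))
```

Under this encoding, the other columns differ only in a few keys (e.g. DNN-3 would set `"pooling": 2`, DNN-4 would set `"dropout": None`, DNN-5 would set `"class_weight": True`), which makes the ablation structure of Table 2 explicit.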