Table 1. Layer configurations and test performance of the five candidate CNN models.
| Layers | Layer Parameters | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 |
|---|---|---|---|---|---|---|
| Convolutional Layer | Filters | 16 | 16 | 8 | 8 | 16 |
| | Kernel Size | | | | | |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Max Pooling | Kernel Size | | | | | |
| Dropout | Rate | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
| Convolutional Layer | Filters | 16 | 16 | 8 | 8 | 16 |
| | Kernel Size | | | | | |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Max Pooling | Kernel Size | | | | | |
| Dropout | Rate | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| Convolutional Layer | Filters | 16 | 16 | 8 | 8 | 16 |
| | Kernel Size | | | | | |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Max Pooling | Kernel Size | | | | | |
| Dense | Neurons | 64 | 64 | 64 | 64 | 64 |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Dense | Neurons | 128 | 32 | 32 | 128 | 128 |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Dense | Neurons | 64 | 16 | 16 | 64 | 64 |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Dense | Neurons | 4 | 4 | 4 | 4 | 4 |
| | Activation Function | ReLU | ReLU | ReLU | ReLU | ELU |
| Test Loss | | 0.2413 | 0.2459 | 0.1891 | 0.2040 | 0.2145 |
| Test Accuracy | | 0.9440 | 0.9397 | 0.9483 | 0.9586 | 0.9570 |
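For reference, the sketch below instantiates Model 1's layer stack in Keras, following the table row by row. The table does not report the convolution and pooling kernel sizes or the input shape, so the 3×3 kernels, 2×2 pooling windows, 64×64 single-channel input, and the Flatten layer between the convolutional and dense blocks are illustrative assumptions rather than the reported settings.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model_1(input_shape=(64, 64, 1), num_classes=4):
    """Sketch of Model 1 from Table 1. Input shape is an assumption."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        # Block 1: Conv (16 filters, ReLU) -> MaxPool -> Dropout 0.2
        layers.Conv2D(16, kernel_size=(3, 3), activation="relu"),  # kernel size assumed; not given in the table
        layers.MaxPooling2D(pool_size=(2, 2)),                     # pooling size assumed
        layers.Dropout(0.2),
        # Block 2: Conv (16 filters, ReLU) -> MaxPool -> Dropout 0.1
        layers.Conv2D(16, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Dropout(0.1),
        # Block 3: Conv (16 filters, ReLU) -> MaxPool
        layers.Conv2D(16, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),  # implied between the convolutional and dense blocks; not listed in the table
        # Dense head: 64 -> 128 -> 64 -> 4 neurons, all ReLU as listed
        layers.Dense(64, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        # ReLU on the 4-unit output layer follows the table as written
        # (a softmax would be more typical for 4-class classification).
        layers.Dense(num_classes, activation="relu"),
    ])

model = build_model_1()
model.summary()
```

Models 2 through 5 follow the same pattern, varying only the filter counts, dense-layer widths, and activation function (ELU for Model 5) shown in the table.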