Table 2. Layer-wise output dimensions of the convolutional network.
Layer | Output Dimensions (H × W × C) |
---|---|
Input | [128, 128, 3] |
Convolution | [128, 128, 8] |
Convolution | [128, 128, 16] |
Max Pool | [64, 64, 16] |
Convolution | [64, 64, 16] |
Convolution | [64, 64, 32] |
Max Pool | [32, 32, 32] |
Convolution | [32, 32, 32] |
Convolution | [32, 32, 64] |
Max Pool | [16, 16, 64] |
Convolution | [16, 16, 64] |
Convolution | [16, 16, 128] |
Max Pool | [8, 8, 128] |
Convolution | [8, 8, 128] |
Convolution | [8, 8, 256] |
Average Pool | [4, 4, 256] |
Flatten | 4096 |
Dense (ReLU) | 100 |
Dropout (0.3) | 100 |
Dense (softmax) | 43 |
All convolutional layers use 3 × 3 kernels and are followed by both ReLU activation and batch normalization.
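For reference, the following is a minimal Keras sketch that reproduces the output shapes in Table 2. The kernel size (3 × 3), layer widths, dropout rate, and class count come from the table; the "same" padding, 2 × 2 pooling windows, bias-free convolutions, and the batch-norm-before-ReLU ordering are assumptions not stated in the source.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_bn(x, filters):
    # 3 x 3 convolution followed by batch normalization and ReLU.
    # "same" padding and the BN-before-ReLU ordering are assumptions;
    # the table note only states that each convolution is followed by
    # ReLU activation and batch normalization.
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_model(num_classes: int = 43) -> tf.keras.Model:
    inputs = layers.Input(shape=(128, 128, 3))
    x = conv_bn(inputs, 8)
    x = conv_bn(x, 16)
    x = layers.MaxPooling2D()(x)      # -> [64, 64, 16] (2 x 2 window assumed)
    x = conv_bn(x, 16)
    x = conv_bn(x, 32)
    x = layers.MaxPooling2D()(x)      # -> [32, 32, 32]
    x = conv_bn(x, 32)
    x = conv_bn(x, 64)
    x = layers.MaxPooling2D()(x)      # -> [16, 16, 64]
    x = conv_bn(x, 64)
    x = conv_bn(x, 128)
    x = layers.MaxPooling2D()(x)      # -> [8, 8, 128]
    x = conv_bn(x, 128)
    x = conv_bn(x, 256)
    x = layers.AveragePooling2D()(x)  # -> [4, 4, 256]
    x = layers.Flatten()(x)           # -> 4096
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

Calling `build_model().summary()` prints the per-layer output shapes, which can be checked against the table; note that the flattened size of 4096 follows from 4 × 4 × 256.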