Table 5. CNN architecture.
Layer Type | Number of Filters | Size of Feature Map (Width × Height × Channel) | Size of Filter | Stride | Padding |
---|---|---|---|---|---|
Image input layer | | 119 × 183 × 3 | | | |
1st convolutional layer | 96 | 55 × 87 × 96 | 11 × 11 × 3 | 2 × 2 | 0 × 0 |
Rectified linear unit (ReLU) layer | | 55 × 87 × 96 | | | |
Local response normalization layer | | 55 × 87 × 96 | | | |
Max pooling layer | 1 | 27 × 43 × 96 | 3 × 3 | 2 × 2 | 0 × 0 |
2nd convolutional layer | 128 | 27 × 43 × 128 | 5 × 5 × 96 | 1 × 1 | 2 × 2 |
ReLU layer | | 27 × 43 × 128 | | | |
Local response normalization layer | | 27 × 43 × 128 | | | |
Max pooling layer | 1 | 13 × 21 × 128 | 3 × 3 | 2 × 2 | 0 × 0 |
3rd convolutional layer | 256 | 13 × 21 × 256 | 3 × 3 × 128 | 1 × 1 | 1 × 1 |
ReLU layer | | 13 × 21 × 256 | | | |
4th convolutional layer | 256 | 13 × 21 × 256 | 3 × 3 × 256 | 1 × 1 | 1 × 1 |
ReLU layer | | 13 × 21 × 256 | | | |
5th convolutional layer | 128 | 13 × 21 × 128 | 3 × 3 × 256 | 1 × 1 | 1 × 1 |
ReLU layer | | 13 × 21 × 128 | | | |
Max pooling layer | 1 | 6 × 10 × 128 | 3 × 3 | 2 × 2 | 0 × 0 |
1st fully connected layer | | 4096 | | | |
ReLU layer | | 4096 | | | |
2nd fully connected layer | | 1024 | | | |
ReLU layer | | 1024 | | | |
Dropout layer | | 1024 | | | |
3rd fully connected layer | | 2 | | | |
Softmax layer | | 2 | | | |
Classification layer (output layer) | | 2 |
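To make the layer dimensions in Table 5 concrete, the following is a minimal sketch of the same stack written in PyTorch (the framework choice is an assumption; the paper does not specify one). Filter counts, kernel sizes, strides, and padding follow the table; the local response normalization window size and the dropout rate are not stated in the table and are placeholders.

```python
import torch
import torch.nn as nn

# Sketch of the Table 5 architecture. Shapes in comments are channel x height x width,
# corresponding to the table's width x height x channel entries.
model = nn.Sequential(
    # Input: 3 x 183 x 119
    nn.Conv2d(3, 96, kernel_size=11, stride=2, padding=0),    # -> 96 x 87 x 55
    nn.ReLU(inplace=True),
    nn.LocalResponseNorm(size=5),                              # LRN window size assumed
    nn.MaxPool2d(kernel_size=3, stride=2, padding=0),          # -> 96 x 43 x 27
    nn.Conv2d(96, 128, kernel_size=5, stride=1, padding=2),    # -> 128 x 43 x 27
    nn.ReLU(inplace=True),
    nn.LocalResponseNorm(size=5),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=0),          # -> 128 x 21 x 13
    nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),   # -> 256 x 21 x 13
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),   # -> 256 x 21 x 13
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1),   # -> 128 x 21 x 13
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=0),          # -> 128 x 10 x 6
    nn.Flatten(),                                              # 128 * 10 * 6 = 7680
    nn.Linear(128 * 10 * 6, 4096),                             # 1st fully connected layer
    nn.ReLU(inplace=True),
    nn.Linear(4096, 1024),                                     # 2nd fully connected layer
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),                                         # dropout rate assumed
    nn.Linear(1024, 2),                                        # 3rd fully connected layer
    nn.Softmax(dim=1),                                         # 2-class output
)

# Shape check with a dummy batch (width 119, height 183, as in the table):
x = torch.randn(1, 3, 183, 119)
print(model(x).shape)  # torch.Size([1, 2])
```

The dummy forward pass at the end reproduces the feature map sizes listed in the table, which is a quick way to confirm that the stride and padding values are transcribed correctly.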