| Layer / Parameter | Configuration |
| --- | --- |
| Input image size | 224 × 224 × 3; 5120 training images, 1280 images per class |
| Feature extraction | EfficientNet, 1280 features |
| First convolution layer | 32 filters; size = 3 × 3; ReLU; padding = 'same' |
| First max pooling layer | Pooling size: 2 × 2 |
| Second convolution layer | 64 filters; size = 3 × 3; ReLU; padding = 'same' |
| Second max pooling layer | Pooling size: 2 × 2 |
| Third convolution layer | 128 filters; size = 3 × 3; ReLU; padding = 'same' |
| Third max pooling layer | Pooling size: 2 × 2 |
| Fourth convolution layer | 256 filters; size = 3 × 3; ReLU; padding = 'same' |
| Fourth max pooling layer | Pooling size: 2 × 2 |
| Fifth convolution layer | 512 filters; size = 3 × 3; ReLU; padding = 'same' |
| Fifth max pooling layer | Pooling size: 2 × 2 |
| Fully connected layer | 4096 nodes; ReLU |
| Dropout layer | 50% of neurons dropped randomly |
| Dense_1 layer | 8320 nodes; ReLU |
| Dense_2 layer | 516 nodes; ReLU |
| Output layer | 4 nodes; softmax activation |
| Optimization function | Adam |
| Learning rate | 0.001 |
| Loss function | Categorical cross-entropy |
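The sketch below illustrates how the layers in the table could be assembled with tf.keras. It is a minimal illustration, not the authors' implementation: the `build_model` helper is hypothetical, and the point at which the 1280 EfficientNet features are fused with the convolutional branch is not specified in the table, so concatenating them before the Dense_1 layer is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_model(num_classes=4):
    # Convolutional branch on the 224 x 224 x 3 input:
    # five Conv(3x3, ReLU, same padding) + MaxPool(2x2) blocks.
    image_input = layers.Input(shape=(224, 224, 3), name="image")
    x = image_input
    for filters in (32, 64, 128, 256, 512):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=(2, 2))(x)

    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation="relu")(x)   # fully connected layer
    x = layers.Dropout(0.5)(x)                     # 50% of neurons dropped randomly

    # Assumption: the 1280 EfficientNet features are supplied as a second
    # input and concatenated here; the table does not state the fusion point.
    eff_features = layers.Input(shape=(1280,), name="efficientnet_features")
    x = layers.Concatenate()([x, eff_features])

    x = layers.Dense(8320, activation="relu")(x)   # Dense_1
    x = layers.Dense(516, activation="relu")(x)    # Dense_2
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs=[image_input, eff_features], outputs=outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With four output nodes and categorical cross-entropy, the class labels would be one-hot encoded, consistent with the 5120 training images split evenly across the four classes.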