Table 5. Hyperparameter settings of the evaluated models.
Model | Batch Size | Number of Epochs | Hidden Layer Size | Dropout | Learning Rate | Activation | Optimizer | Kernel Size |
---|---|---|---|---|---|---|---|---|
VGG16 | 150 | 200 | 8–96 neurons | 0.1 | 0.00001 | ReLU, Sigmoid | Adam | 2 × 2 |
VGG19 | 150 | 200 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | RMSprop | 2 × 2 |
InceptionV3 | 200 | 300 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | Nadam | 2 × 2 |
ResNet50 | 100 | 200 | 8–96 neurons | 0.1 | 0.001 | ReLU, Sigmoid | Adamax | 2 × 2 |
ResNet101 | 250 | 300 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | Adam | 2 × 2 |
GoogLeNet | 50 | 150 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | SGD | 2 × 2 |
MobileNetV2 | 250 | 300 | 8–96 neurons | 0.1 | 0.01 | ReLU, Sigmoid | RMSprop | 2 × 2 |
AlexNet | 100 | 150 | 8–96 neurons | 0.1 | 0.00001 | ReLU, Sigmoid | Adadelta | 2 × 2 |
EfficientNet B7 | 200 | 300 | 8–96 neurons | 0.1 | 0.000001 | ReLU, Sigmoid | Adamax | 2 × 2 |
DenseNet121 | 200 | 350 | 8–96 neurons | 0.1 | 0.00001 | ReLU, Sigmoid | Adagrad | 2 × 2 |
NFNet | 150 | 250 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | Adadelta | 2 × 2 |
Modified MobileNetV2 (Proposed Method) | 300 | 400 | 8–96 neurons | 0.1 | 0.0000001 | ReLU | RMSprop | 2 × 2 |
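To make the last row of Table 5 concrete, the sketch below shows how the proposed Modified MobileNetV2 configuration could be expressed in TensorFlow/Keras: batch size 300, 400 epochs, a dense head in the 8–96-neuron range, dropout 0.1, learning rate 0.0000001 (1 × 10⁻⁷), ReLU activation, and the RMSprop optimizer. The input shape, number of classes, the choice of 96 neurons from the stated range, and the placement of the 2 × 2 convolution are illustrative assumptions only; they are not specified in the table, and this is not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# Assumed values (not given in Table 5).
INPUT_SHAPE = (224, 224, 3)   # assumed input resolution
NUM_CLASSES = 2               # assumed number of output classes

# MobileNetV2 backbone without its original classifier head.
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)

# Sketch of a modified head using the tabulated hyperparameters:
# a 2 x 2 convolution (kernel size from Table 5, placement assumed),
# a dense layer of 96 neurons (upper end of the 8-96 range),
# dropout 0.1, and ReLU activation.
model = models.Sequential([
    base,
    layers.Conv2D(128, kernel_size=(2, 2), activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(96, activation="relu"),
    layers.Dropout(0.1),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# RMSprop with the tabulated learning rate of 1e-7.
model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-7),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Training with the tabulated batch size and epoch count
# (x_train / y_train are hypothetical arrays, not from the paper):
# model.fit(x_train, y_train, batch_size=300, epochs=400, validation_split=0.1)
```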