Table 5. Execution time and size of the best-performing models.
| Base Model | Approach | Trainable Parameters | Epochs | Training Time (s/epoch) |
|---|---|---|---|---|
| VGG16 | Baseline | 186,667,011 | 28 | 108 |
| EfficientNetB4 | Approach 1 (End-to-end) | 17,786,571 | 32 | 112 |
| Xception | Approach 1 (End-to-end) | 21,077,675 | 20 | 111 |
| ResNet50V2 | Approach 1 (Pretrained base) | 270,723 | 13 | 119 |
| InceptionV3 | Approach 2 (End-to-end) | 28,076,195 | 18 | 123 |
| Xception | Approach 2 (End-to-end) | 27,114,795 | 21 | 118 |
| DenseNet169 | Approach 2 (Pretrained base) | 5,520,643 | 18 | 166 |
Note: Average execution times were measured using TensorFlow and the Keras API on the Google Colab Pro platform (Nvidia Tesla T4 and P100 GPUs, 24 GB RAM). All base models were initialised with weights pretrained on the ImageNet dataset.