TABLE 2.
Basic learning parameters of the designed neural network architecture across the different training phases.
| Parameter | First phase (initial training) | Second phase (fine-tuning) | Third phase (fine-tuning on new data) |
|---|---|---|---|
| Data Used | Jeanray-2015 Selected Training Data | Jeanray-2015 Selected Training Data | Our training data |
| Base Model | ResNet50 (ImageNet weights) | Same as First Phase | Same as First Phase |
| Attention Mechanism | CBAM applied to base model output | Same as First Phase | Same as First Phase |
| Frozen Layers | All base model layers | First 1/3 of base model layers frozen | None; all layers are trainable |
| Unfrozen Layers | Only new layers added on top | Last 2/3 of base model layers and new layers | Entire model is trainable |
| Optimizer | Adam optimizer | Adam optimizer | Adam optimizer |
| Initial Learning Rate | 1e-4 | 1e-5 | 1e-5 |
| Batch Size | 16 | 16 | 16 |
| Epochs | Up to 100 (with early stopping) | Up to 100 (with early stopping) | Up to 100 (with early stopping) |
| Loss Function | Categorical Crossentropy | Categorical Crossentropy | Categorical Crossentropy |
| Metrics | Accuracy | Accuracy | Accuracy |
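The following sketch illustrates how the three-phase schedule in Table 2 could be set up in TensorFlow/Keras. The CBAM block, the classification head (Dense-256), the placeholder data, the number of classes, and the early-stopping patience are illustrative assumptions rather than the authors' exact implementation; only the freezing schedule, optimizer, learning rates, batch size, epoch cap, loss, and metric follow the table.

```python
# Minimal sketch of the three-phase training schedule in Table 2 (TensorFlow/Keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, callbacks

NUM_CLASSES = 5                      # assumption: set to the real class count
IMG_SHAPE = (224, 224, 3)

def build_model():
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
    x = base.output
    # Table 2 applies CBAM to the base model output; a full CBAM layer is
    # omitted here and would be inserted at this point, e.g. x = cbam_block(x).
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)        # illustrative new head
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, outputs), base

def compile_and_fit(model, lr, x, y, x_val, y_val):
    # Adam optimizer, categorical cross-entropy, accuracy metric; up to 100
    # epochs with early stopping and batch size 16, as listed in Table 2.
    model.compile(optimizer=optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    stop = callbacks.EarlyStopping(patience=10, restore_best_weights=True)
    model.fit(x, y, validation_data=(x_val, y_val),
              epochs=100, batch_size=16, callbacks=[stop])

# Placeholder data standing in for the Jeanray-2015 selection and the new dataset.
x_old = np.random.rand(32, *IMG_SHAPE).astype("float32")
y_old = tf.keras.utils.to_categorical(
    np.random.randint(0, NUM_CLASSES, 32), NUM_CLASSES)
x_new, y_new = x_old.copy(), y_old.copy()

model, base = build_model()

# Phase 1: freeze the entire ResNet50 backbone, train only the new layers (lr = 1e-4).
base.trainable = False
compile_and_fit(model, 1e-4, x_old, y_old, x_old, y_old)

# Phase 2: unfreeze the last 2/3 of the backbone, keep the first 1/3 frozen (lr = 1e-5).
base.trainable = True
cutoff = len(base.layers) // 3
for layer in base.layers[:cutoff]:
    layer.trainable = False
compile_and_fit(model, 1e-5, x_old, y_old, x_old, y_old)

# Phase 3: make the whole model trainable and fine-tune on the new data (lr = 1e-5).
for layer in base.layers:
    layer.trainable = True
compile_and_fit(model, 1e-5, x_new, y_new, x_new, y_new)
```

Recompiling after each change to the `trainable` flags (done inside `compile_and_fit`) is required for the new freezing state to take effect, which is why each phase calls compile and fit again.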