Table 2. Summary of the CNN architecture and learning parameters for the machine learning models
| Model | Architecture Details | Learning Parameters | Activation Function |
|---|---|---|---|
| CNN | Input: (224, 224, 1) → Conv2D (32, 3 × 3, ReLU) → MaxPooling (2 × 2) → Dropout (0.3) → Conv2D (64, 3 × 3, ReLU) → MaxPooling (2 × 2) → Dropout (0.3) → Conv2D (128, 3 × 3, ReLU) → MaxPooling (2 × 2) → Dropout (0.3) → Conv2D (256, 3 × 3, ReLU) → MaxPooling (2 × 2) → Dropout (0.3) → Conv2D (512, 3 × 3, ReLU) → MaxPooling (2 × 2) → Dropout (0.3) → Flatten → Dense (256, ReLU) → Dropout (0.3) → Dense (4, Softmax) | Optimizer: Adam, Loss: Sparse Categorical Cross-Entropy, Early Stopping Patience: 10, Batch Size: 16, Epochs: 30, Dropout: 0.3–0.5, Batch Normalization: Yes, Cross-Validation: 5-fold | ReLU (hidden), Softmax (output) |
| SVM | - | Standardize Data: True, Solver: SMO, Kernel: RBF, C: 1.0, Gamma: ‘scale’, Probability: True, Cross-Validation: 5 | - |
| DT | - | Standardize Data: True, Criterion: ‘gini’, Max Depth: None, Min Samples Split: 2, Cross-Validation: 5 | - |
| RF | - | Standardize Data: True, Number of Estimators: 100, Criterion: ‘gini’, Max Features: ‘auto’, Cross-Validation: 5 | - |
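The CNN column can be checked arithmetically by tracing the spatial dimensions through the five Conv2D/MaxPooling stages. The sketch below assumes ‘valid’ (no) padding with stride 1 for the convolutions and stride 2 for the pooling layers; the table does not state these, so they are assumptions, not confirmed settings.

```python
# Trace feature-map shapes through the Table 2 CNN stack.
# Assumes 'valid' padding, stride-1 convolutions, stride-2 pooling.

def conv2d(size, kernel=3):
    """Spatial size after a 3x3 convolution, 'valid' padding, stride 1."""
    return size - kernel + 1

def maxpool(size, pool=2):
    """Spatial size after 2x2 max pooling with stride 2 (floor division)."""
    return size // pool

def trace_shapes(input_size=224, filters=(32, 64, 128, 256, 512)):
    """Return the (H, W, C) shape after each layer plus the Flatten size."""
    shapes = [(input_size, input_size, 1)]   # input image
    size = input_size
    for f in filters:
        size = conv2d(size)                  # Conv2D(f, 3x3, ReLU)
        shapes.append((size, size, f))
        size = maxpool(size)                 # MaxPooling(2x2) + Dropout(0.3)
        shapes.append((size, size, f))
    flat = size * size * filters[-1]         # Flatten -> Dense(256) -> Dense(4)
    return shapes, flat

shapes, flat = trace_shapes()
print(shapes[-1], flat)  # final feature map and flattened vector length
```

Under these assumptions the last pooling stage yields a 5 × 5 × 512 feature map, so the Flatten layer feeds a 12,800-dimensional vector into the Dense(256) head.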
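The classical-model rows map directly onto standard estimator settings. A minimal sketch, assuming scikit-learn (the paper does not name its library): scikit-learn's `SVC` uses the libsvm SMO-style solver internally, and `max_features="sqrt"` is the current spelling of the deprecated `"auto"` for random-forest classifiers.

```python
# Sketch of the SVM, DT, and RF configurations from Table 2 as
# scikit-learn pipelines (library choice is an assumption).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler      # Standardize Data: True
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

models = {
    # SVM: RBF kernel, C=1.0, gamma='scale', probability estimates enabled
    "SVM": make_pipeline(StandardScaler(),
                         SVC(kernel="rbf", C=1.0, gamma="scale",
                             probability=True)),
    # DT: Gini impurity, unlimited depth, min_samples_split=2
    "DT": make_pipeline(StandardScaler(),
                        DecisionTreeClassifier(criterion="gini",
                                               max_depth=None,
                                               min_samples_split=2)),
    # RF: 100 trees, Gini impurity, sqrt(n_features) considered per split
    "RF": make_pipeline(StandardScaler(),
                        RandomForestClassifier(n_estimators=100,
                                               criterion="gini",
                                               max_features="sqrt")),
}

# 5-fold cross-validation usage (X, y are hypothetical features/labels):
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(models["SVM"], X, y, cv=5)
```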