Table 2. Training parameters of the proposed AlexNet-GRU model.
| Layer | Kernel Size | Input Shape | Activation | Output Shape |
|---|---|---|---|---|
| Con_Layer_1 | 3 × 3 | (60, 60, 3) | ReLU | (58, 58, 128) |
| B-Norm_1 | — | (58, 58, 128) | — | (58, 58, 128) |
| Max_pooling_1 | 2 × 2 | (58, 58, 128) | — | (57, 57, 128) |
| Dropout = 0.9 | — | (57, 57, 128) | — | (57, 57, 128) |
| Con_Layer_2 | 3 × 3 | (57, 57, 128) | ReLU | (55, 55, 256) |
| B-Norm_2 | — | (55, 55, 256) | — | (55, 55, 256) |
| Max_pooling_2 | 2 × 2 | (55, 55, 256) | — | (54, 54, 256) |
| Dropout = 0.9 | — | (54, 54, 256) | — | (54, 54, 256) |
| Con_Layer_3 | 3 × 3 | (54, 54, 256) | ReLU | (52, 52, 256) |
| B-Norm_3 | — | (52, 52, 256) | — | (52, 52, 256) |
| Max_pooling_3 | 2 × 2 | (52, 52, 256) | — | (51, 51, 256) |
| Dropout = 0.5 | — | (51, 51, 256) | — | (51, 51, 256) |
| Con_Layer_4 | 3 × 3 | (51, 51, 256) | ReLU | (49, 49, 256) |
| B-Norm_4 | — | (49, 49, 256) | — | (49, 49, 256) |
| Dropout = 0.9 | — | (49, 49, 256) | — | (49, 49, 256) |
| Con_Layer_5 | 3 × 3 | (49, 49, 256) | ReLU | (47, 47, 256) |
| B-Norm_5 | — | (47, 47, 256) | — | (47, 47, 256) |
| Dropout = 0.9 | — | (47, 47, 256) | — | (47, 47, 256) |
| Con_Layer_6 | 3 × 3 | (47, 47, 256) | ReLU | (45, 45, 256) |
| B-Norm_6 | — | (45, 45, 256) | — | (45, 45, 256) |
| Dropout = 0.9 | — | (45, 45, 256) | — | (45, 45, 256) |
| Con_Layer_7 | 3 × 3 | (45, 45, 256) | ReLU | (43, 43, 512) |
| B-Norm_7 | — | (43, 43, 512) | — | (43, 43, 512) |
| Max_pooling_4 | 2 × 2 | (43, 43, 512) | — | (42, 42, 512) |
| Dropout = 0.5 | — | (42, 42, 512) | — | (42, 42, 512) |
| Flatten | — | (42, 42, 512) | — | (903168) |
| Dense1 | — | (903168) | — | (1024) |
| B-Norm_8 | — | (1024) | — | (1024) |
| Dropout = 0.3 | — | (1024) | — | (1024) |
| Dense2 | — | (1024) | — | (2000) |
| B-Norm_9 | — | (2000) | — | (2000) |
| GRU | (1024) | — | — | — |
| Dense3 | — | (2000) | — | (2000) |
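
For concreteness, the stack in Table 2 can be written as a short Keras sketch. This is a minimal reconstruction under stated assumptions, not the authors' released code: 'valid' (no-padding) 3 × 3 convolutions and stride-1 2 × 2 max pooling, which together reproduce the table's one-pixel shape reductions, plus a length-1 sequence reshape before the GRU, whose 1,024 units are read from the (1024) entry in its row. Dropout rates are taken verbatim from the table; the function name `build_alexnet_gru` is ours.

```python
# Minimal sketch of the Table 2 stack, assuming TensorFlow/Keras,
# 'valid' padding, and stride-1 2x2 max pooling (these assumptions
# reproduce the table's shape reductions).
from tensorflow import keras
from tensorflow.keras import layers

def build_alexnet_gru(input_shape=(60, 60, 3)):  # name is ours, not the paper's
    model = keras.Sequential()
    model.add(keras.Input(shape=input_shape))

    # Con_Layer_1 block: 3x3 conv, batch norm, stride-1 pool, dropout
    model.add(layers.Conv2D(128, 3, activation="relu"))  # -> (58, 58, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.MaxPooling2D(2, strides=1))         # -> (57, 57, 128)
    model.add(layers.Dropout(0.9))                       # rate taken verbatim from Table 2

    # Con_Layer_2 .. Con_Layer_6: 256 filters each; pooling and dropout
    # follow the corresponding rows of Table 2.
    for rate, pool in [(0.9, True), (0.5, True), (0.9, False),
                       (0.9, False), (0.9, False)]:
        model.add(layers.Conv2D(256, 3, activation="relu"))
        model.add(layers.BatchNormalization())
        if pool:
            model.add(layers.MaxPooling2D(2, strides=1))
        model.add(layers.Dropout(rate))                  # ends at (45, 45, 256)

    # Con_Layer_7 block
    model.add(layers.Conv2D(512, 3, activation="relu"))  # -> (43, 43, 512)
    model.add(layers.BatchNormalization())
    model.add(layers.MaxPooling2D(2, strides=1))         # -> (42, 42, 512)
    model.add(layers.Dropout(0.5))

    # Classifier head
    model.add(layers.Flatten())                          # -> (903168,)
    model.add(layers.Dense(1024))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.3))
    model.add(layers.Dense(2000))
    model.add(layers.BatchNormalization())

    # The table leaves the GRU interface underspecified; here we assume
    # the 2000-dim vector is treated as a length-1 sequence and that the
    # (1024) entry in the GRU row is its unit count.
    model.add(layers.Reshape((1, 2000)))
    model.add(layers.GRU(1024))
    model.add(layers.Dense(2000))                        # Dense3
    return model

model = build_alexnet_gru()
model.summary()
```

Note that Table 2 leaves the GRU/Dense3 interface underspecified: a 1,024-unit GRU outputs a (1024) vector, while Dense3's listed input is (2000); the sketch simply feeds the GRU output directly to the final dense layer.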