Table 2. The model's architecture.
Layer | Filter Size (FS) and Stride (S) | Output Activations |
---|---|---|
Input layer | – | 224 × 224 × 3 |
C1, B1, R1 | FS = 3 × 3, S = 1 | 224 × 224 × 32 |
C2, B2, R2 | FS = 5 × 5, S = 2 | 112 × 112 × 32 |
C3, B3, R3 | FS = 1 × 1, S = 1 | 112 × 112 × 32 |
C4, B4, R4 | FS = 3 × 3, S = 1 | 112 × 112 × 32 |
C5, B5, R5 | FS = 5 × 5, S = 1 | 112 × 112 × 32 |
C6, B6, R6 | FS = 7 × 7, S = 1 | 112 × 112 × 32 |
CN1 | Concatenation of five inputs (C2–C6) | 112 × 112 × 160 |
B1x | Batch Normalization Layer | 112 × 112 × 160 |
C7, B7, R7 | FS = 1 × 1, S = 2 | 56 × 56 × 64 |
C8, B8, R8 | FS = 3 × 3, S = 2 | 56 × 56 × 64 |
C9, B9, R9 | FS = 5 × 5, S = 2 | 56 × 56 × 64 |
C10, B10, R10 | FS = 7 × 7, S = 2 | 56 × 56 × 64 |
C11, B11, R11 | FS = 3 × 3, S = 2 | 56 × 56 × 32 |
CN2 | Concatenation of five inputs (C7–C11) | 56 × 56 × 288 |
B2x | Batch Normalization Layer | 56 × 56 × 288 |
G1 | – | 1 × 1 × 288 |
F1 | Fully connected, 200 units | 1 × 1 × 200 |
D1 | Dropout layer, rate 0.5 | 1 × 1 × 200 |
F2 | Fully connected, 2 units | 1 × 1 × 2 |
O (Softmax function) | Normal, Abnormal | 1 × 1 × 2 |
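The activation sizes in the table can be verified with a short shape calculation. This is a sketch under one assumption not stated in the table: all convolutions use "same" padding, so the spatial output size is ceil(input / stride). The `conv_out` helper is ours, not from the paper. Note that the CN2 concatenation depth works out to 4 × 64 + 32 = 288 from the five inputs C7–C11.

```python
import math

def conv_out(size, stride):
    # "Same" padding assumed: spatial output = ceil(input / stride)
    return math.ceil(size / stride)

# Spatial size through the first block
s = 224
s = conv_out(s, 1)            # C1: 224 -> 224
s = conv_out(s, 2)            # C2: 224 -> 112 (C3-C6 have stride 1, so size stays 112)

# CN1 concatenates five 32-channel inputs (C2-C6)
cn1_channels = 5 * 32         # -> 160

# C7-C11 all use stride 2
s2 = conv_out(s, 2)           # 112 -> 56

# CN2 concatenates four 64-channel inputs (C7-C10) and one 32-channel input (C11)
cn2_channels = 4 * 64 + 32    # -> 288

print(s, cn1_channels, s2, cn2_channels)
```

Running this reproduces the table's sizes: 112 × 112 × 160 after CN1 and 56 × 56 × 288 after CN2, after which global average pooling (G1) reduces the activations to 1 × 1 × 288.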