| Layer | Groups / filters | Output size | Filter size | Stride |
| --- | --- | --- | --- | --- |
| 1. Grouped convolution 1 | 3 groups with 4 filters each | 222 × 222 × 12 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| 2. Grouped convolution 2 | 12 groups with 4 filters each | 220 × 220 × 48 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| Max pooling 1 | – | 108 × 108 × 48 | [5 5] | [2 2] |
| 3. Grouped convolution 3 | 48 groups with 4 filters each | 106 × 106 × 192 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| 4. Grouped convolution 4 | 192 groups with 4 filters each | 104 × 104 × 768 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| Max pooling 2 | – | 51 × 51 × 768 | [3 3] | [2 2] |
| 5. Grouped convolution 5 | 768 groups with 2 filters each | 49 × 49 × 1,536 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| 6. Grouped convolution 6 | 1,536 groups with 2 filters each | 47 × 47 × 3,072 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| Max pooling 3 | – | 23 × 23 × 3,072 | [3 3] | [2 2] |
| 7. Grouped convolution 7 | 3,072 groups with 1 filter each | 21 × 21 × 3,072 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| 8. Grouped convolution 8 | 3,072 groups with 1 filter each | 19 × 19 × 3,072 | [3 3] | [1 1] |
| ReLU + Batch normalization | – | – | – | – |
| Max pooling 4 | – | 9 × 9 × 3,072 | [3 3] | [2 2] |
| 9. Fully connected layer | – | 1 × 400 | – | – |
| Dropout (0.4) + Batch normalization | – | – | – | – |
| 10. GMM fully connected layer | – | 1 × 400 | – | – |
| Batch normalization | – | – | – | – |
| 11. GMM fully connected layer | – | 1 × 250 | – | – |
| Batch normalization | – | – | – | – |
| 12. GMM fully connected layer | – | 1 × 2 | – | – |
| SoftMax layer | – | – | – | – |
| Classification layer | – | – | – | – |
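The spatial and channel sizes in the table can be checked mechanically. The sketch below assumes a 224 × 224 × 3 input image (not stated in the table) and "valid" (unpadded) convolutions; under those assumptions it reproduces every listed output size, including 768 groups × 2 filters = 1,536 channels at layer 5.

```python
def conv_out(size, kernel, stride=1):
    """Side length after an unpadded convolution or pooling window."""
    return (size - kernel) // stride + 1

# Assumed input: 224 x 224 x 3 (not given in the table).
side, channels = 224, 3

# (kind, kernel, stride, groups, filters_per_group); pooling layers
# keep the channel count and only shrink the spatial dimensions.
layers = [
    ("conv", 3, 1, 3, 4),        # grouped convolution 1 -> 222 x 222 x 12
    ("conv", 3, 1, 12, 4),       # grouped convolution 2 -> 220 x 220 x 48
    ("pool", 5, 2, None, None),  # max pooling 1         -> 108 x 108 x 48
    ("conv", 3, 1, 48, 4),       # grouped convolution 3 -> 106 x 106 x 192
    ("conv", 3, 1, 192, 4),      # grouped convolution 4 -> 104 x 104 x 768
    ("pool", 3, 2, None, None),  # max pooling 2         -> 51 x 51 x 768
    ("conv", 3, 1, 768, 2),      # grouped convolution 5 -> 49 x 49 x 1536
    ("conv", 3, 1, 1536, 2),     # grouped convolution 6 -> 47 x 47 x 3072
    ("pool", 3, 2, None, None),  # max pooling 3         -> 23 x 23 x 3072
    ("conv", 3, 1, 3072, 1),     # grouped convolution 7 -> 21 x 21 x 3072
    ("conv", 3, 1, 3072, 1),     # grouped convolution 8 -> 19 x 19 x 3072
    ("pool", 3, 2, None, None),  # max pooling 4         -> 9 x 9 x 3072
]

for kind, k, s, groups, per_group in layers:
    side = conv_out(side, k, s)
    if kind == "conv":
        # Each group produces `per_group` output maps.
        channels = groups * per_group
    print(f"{kind}: {side} x {side} x {channels}")
```

The final feature map is 9 × 9 × 3,072, which is then flattened into the 1 × 400 fully connected layer in row 9 of the table.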