Table 2.
Differences between the three Inception versions.
| Inception-v1 | Inception-v2 | Inception-v3 |
|---|---|---|
| Increases the number of units at each stage while 1×1 convolutions shield the next layers from the large number of input filters of the previous stage (see the sketch after the table) | Introduces batch normalization, which allows a higher learning rate, removing dropout and local response normalization, shuffling the training examples more thoroughly, and reducing the L2 weight regularization and the photometric distortions | Factorizes large convolutions into smaller ones, so it trains much faster than the earlier Inception versions |
| Top-5 error rate = 6.67% | Top-5 error rate = 4.82% | Top-5 error rate = 3.5% |
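
To illustrate the dimension-reduction idea behind Inception-v1, the following is a minimal sketch of a single Inception module in PyTorch. It is not the authors' implementation; the class name and the example channel sizes (taken from GoogLeNet's "inception (3a)" stage) are used here only for illustration.

```python
# Minimal sketch of an Inception-v1 module, assuming PyTorch.
# The 1x1 convolutions reduce ("shield") the channel count before
# the expensive 3x3 and 5x5 convolutions of the next stage.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1x1, c3x3_red, c3x3, c5x5_red, c5x5, pool_proj):
        super().__init__()
        # Branch 1: plain 1x1 convolution
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1x1, 1), nn.ReLU(inplace=True))
        # Branch 2: 1x1 reduction shields the 3x3 convolution from all input filters
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3x3_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3x3_red, c3x3, 3, padding=1), nn.ReLU(inplace=True))
        # Branch 3: 1x1 reduction before the 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5x5_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5x5_red, c5x5, 5, padding=2), nn.ReLU(inplace=True))
        # Branch 4: max pooling followed by a 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the branch outputs along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# Example with the channel sizes of GoogLeNet's inception (3a) module
module = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = module(torch.randn(1, 192, 28, 28))
print(out.shape)  # torch.Size([1, 256, 28, 28])
```

Because the 3×3 and 5×5 branches only see 96 and 16 reduced channels instead of all 192 inputs, the number of units per stage can grow without a corresponding blow-up in computation at later stages.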