Table 1. Training hyperparameters and total parameter counts for the nine evaluated CNN architectures.
Parameters | AlexNet | GoogLeNet | ResNet-50 | SE-ResNet-50 | DenseNet-121 | Inception-V4 | Inception-ResNet-V2 | ResNeXt-50 | SE-ResNeXt-50 |
---|---|---|---|---|---|---|---|---|---|
Optimizer | Adam | Adam | Adam | Adam | Adam | Adam | Adam | Adam | Adam |
Base learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
Learning rate decay | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
Momentum β1 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
RMSProp β2 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
Dropout rate | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
Number of epochs | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
Training batch size | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 32 |
Test batch size | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 |
Total number of parameters | 60 M | 4 M | 25 M | 27.5 M | 7.97 M | 43 M | 56 M | 25 M | 27.56 M |
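Since all nine networks share the same Adam settings, the training configuration can be defined once and reused across backbones. The following is a minimal sketch, assuming a TensorFlow/Keras pipeline; the framework, input size, number of classes, and the trigger for the 0.1 learning-rate decay are not stated in the source and appear here only as labeled placeholders, with ResNet-50 used as one example backbone.

```python
import tensorflow as tf

# Hyperparameters from Table 1 (shared by all nine architectures).
BASE_LR = 1e-5           # base learning rate
LR_DECAY_FACTOR = 0.1    # learning-rate decay factor
BETA_1 = 0.9             # Adam momentum term
BETA_2 = 0.999           # Adam second-moment (RMSProp) term
DROPOUT_RATE = 0.5
EPOCHS = 30
TRAIN_BATCH_SIZE = 32
TEST_BATCH_SIZE = 8
NUM_CLASSES = 2          # assumption: set to the task's class count

# Adam optimizer configured with the table's settings.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=BASE_LR, beta_1=BETA_1, beta_2=BETA_2)

# Assumed decay schedule: multiply the learning rate by 0.1 when the
# validation loss plateaus (the exact decay trigger is not given in the source).
lr_callback = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=LR_DECAY_FACTOR, patience=5)

# Example backbone: ResNet-50 (~25 M parameters) with a dropout layer
# before the classification head; the 224x224x3 input is an assumption.
backbone = tf.keras.applications.ResNet50(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
x = tf.keras.layers.Dropout(DROPOUT_RATE)(backbone.output)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(backbone.input, outputs)

model.compile(optimizer=optimizer,
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then use the table's epoch and batch-size settings, e.g.:
# model.fit(train_ds.batch(TRAIN_BATCH_SIZE), epochs=EPOCHS,
#           validation_data=val_ds.batch(TEST_BATCH_SIZE),
#           callbacks=[lr_callback])
```

Swapping in any of the other eight architectures only requires replacing the backbone constructor; the optimizer, dropout, and batch settings remain as listed in Table 1.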