. 2021 Jun 14;84(11):2504–2516. doi: 10.1002/jemt.23713

TABLE 1.

Training options for different pre‐trained models

Training options: random initialization of weights, batch size = 32, learning rate = 0.00001, number of epochs = 10.

Model                                      Input size   No. of layers
AlexNet (Simonyan & Zisserman, 2014)       227 × 227    8
GoogLeNet (Ballester & Araujo, 2016)       224 × 224    22
InceptionV3 (Parente & Ferreira, 2018)     299 × 299    48
InceptionResNetV2 (Alom et al., 2017)      299 × 299    164
SqueezeNet (Pradeep et al., 2018)          227 × 227    18
DenseNet201 (Haupt et al., 2018)           224 × 224    201
ResNet18 (Wu et al., 2019)                 224 × 224    18
ResNet50 (Alom et al., 2018)               224 × 224    50
ResNet101 (Ghosal et al., 2019)            224 × 224    101
VGG16 (Zu et al., 2020)                    224 × 224    16
VGG19 (Qassim et al., 2018)                224 × 224    19