2021 Mar 16;18(4):1099–1114. doi: 10.1007/s11554-021-01086-y

Table 2.

Configurations of convolutional neural networks used in this work

| Architecture | Highlights | Configuration | Number of features extracted |
|---|---|---|---|
| VGG [29] | Factorized Convolution, a regularization strategy to avoid overfitting | VGG16 | 512 |
| | | VGG19 | 512 |
| Inception [30] | Inception Module, a building block that reduces the number of extracted parameters | InceptionV3 | 2048 |
| | | InceptionResNetV2 | 1536 |
| ResNet [13] | Residual Block, a building block designed to mitigate the vanishing-gradient problem | ResNet50 | 2048 |
| NASNet [35] | NASNet search space, a new architecture built from the dataset of interest | NASNetLarge | 4032 |
| | | NASNetMobile | 1056 |
| Xception [3] | Depthwise Separable Convolution layers, in which spatial and cross-channel correlations are handled separately | Xception | 2048 |
| MobileNet [14] | Two new hyper-parameters added to the Xception model: width multiplier and resolution multiplier | MobileNet | 1024 |
| | | MobileNetV2 | 1280 |
| DenseNet [15] | Dense Block, a block that interconnects all layers | DenseNet121 | 1024 |
| | | DenseNet169 | 1664 |
| | | DenseNet201 | 1920 |
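
The "Number of features extracted" column corresponds to the length of the feature vector each backbone produces when its classification head is removed and global average pooling is applied. The sketch below is not taken from the paper; it assumes the Keras Applications implementations of these networks (the input sizes and `weights=None` setting are illustrative choices) and shows how those feature dimensions can be reproduced.

```python
# Minimal sketch (assumption: Keras Applications backbones, arbitrary input
# sizes chosen per each model's common default) reproducing the feature
# dimensions listed in Table 2.
import tensorflow as tf

# Backbone constructors and the input shapes assumed here (224x224 is the
# common default; InceptionV3 typically uses 299x299).
backbones = {
    "VGG16": (tf.keras.applications.VGG16, (224, 224, 3)),
    "InceptionV3": (tf.keras.applications.InceptionV3, (299, 299, 3)),
    "ResNet50": (tf.keras.applications.ResNet50, (224, 224, 3)),
    "MobileNetV2": (tf.keras.applications.MobileNetV2, (224, 224, 3)),
    "DenseNet201": (tf.keras.applications.DenseNet201, (224, 224, 3)),
}

for name, (ctor, shape) in backbones.items():
    # include_top=False drops the classifier; pooling="avg" collapses the
    # final feature maps into a single vector per image, whose length is
    # the value reported in the table.
    model = ctor(include_top=False, weights=None, pooling="avg", input_shape=shape)
    print(f"{name}: {model.output_shape[-1]} features")
# Expected output: VGG16 512, InceptionV3 2048, ResNet50 2048,
# MobileNetV2 1280, DenseNet201 1920
```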