PLoS ONE. 2020 Nov 25;15(11):e0242858. doi: 10.1371/journal.pone.0242858

Table 1. Convolutional neural network architectures used in this paper.

For ResNet34, only layers that are not “skipped over” contribute features; for Inception_v3, only elementary layers with non-negative outputs (ReLU, max/avg-pooling) that are not on the auxiliary branch are considered.

| Network | Number of features | Number of trainable parameters | Reference |
|---|---|---|---|
| AlexNet | 2,816 | 57,163,623 | Krizhevsky, 2012 [7] |
| VGG11 | 6,976 | 128,926,119 | Simonyan, 2014 [28] |
| VGG13 | 7,360 | 129,110,631 | Simonyan, 2014 [28] |
| VGG16 | 9,920 | 134,420,327 | Simonyan, 2014 [28] |
| VGG19 | 12,480 | 139,730,023 | Simonyan, 2014 [28] |
| VGG11_bn | 9,728 | 128,931,623 | Simonyan, 2014 [28]; Ioffe, 2015 [36] |
| VGG13_bn | 10,304 | 129,116,519 | Simonyan, 2014 [28]; Ioffe, 2015 [36] |
| VGG16_bn | 14,144 | 134,428,775 | Simonyan, 2014 [28]; Ioffe, 2015 [36] |
| VGG19_bn | 17,984 | 139,741,031 | Simonyan, 2014 [28]; Ioffe, 2015 [36] |
| ResNet34 | 28,992 | 21,304,679 | He, 2016 [29] |
| Inception_v3 | 27,712 | 24,453,166 | Szegedy, 2016 [30] |
| VGG16_1FC | 9,920 | 15,693,159 | This paper |
| VGG16_avg1FC | 10,432 | 14,734,695 | This paper |

VGG16 layer sequence: (0)Conv2d(3,64) (1)ReLU (2)Conv2d(64,64) (3)ReLU (4)MaxPool2d (5)Conv2d(64,128) (6)ReLU (7)Conv2d(128,128) (8)ReLU (9)MaxPool2d (10)Conv2d(128,256) (11)ReLU (12)Conv2d(256,256) (13)ReLU (14)Conv2d(256,256) (15)ReLU (16)MaxPool2d (17)Conv2d(256,512) (18)ReLU (19)Conv2d(512,512) (20)ReLU (21)Conv2d(512,512) (22)ReLU (23)MaxPool2d (24)Conv2d(512,512) (25)ReLU (26)Conv2d(512,512) (27)ReLU (28)Conv2d(512,512) (29)ReLU (30)MaxPool2d (31)Linear(25088,4096) (32)ReLU (33)Dropout(0.5) (34)Linear(4096,4096) (35)ReLU (36)Dropout(0.5) (37)Linear(4096,39)

VGG16_1FC: same convolutional part as VGG16, but with a single fully connected layer: Conv(VGG16); Dropout(0.5); Linear(25088,39).

VGG16_avg1FC: same convolutional part as VGG16, followed by an average pooling layer and a single fully connected layer: Conv(VGG16); AvgPool2d(7,7,512;1,1,512); Dropout(0.5); Linear(512,39).
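The trainable-parameter counts for the three VGG16 variants can be checked directly from the layer listing above. A minimal sketch (assuming 3×3 convolutions with bias throughout and biased linear layers, as in torchvision's VGG16; pooling, ReLU, and Dropout layers contribute no parameters):

```python
# Reproduce the Table 1 parameter counts for VGG16, VGG16_1FC, and VGG16_avg1FC.

def conv2d(c_in, c_out, k=3):
    # 3x3 convolution with bias: weights (c_out x c_in x k x k) plus c_out biases
    return c_in * c_out * k * k + c_out

def linear(n_in, n_out):
    # fully connected layer with bias
    return n_in * n_out + n_out

# Convolutional part of VGG16 (layers (0)-(30) in the listing above);
# ReLU and MaxPool2d layers have no trainable parameters.
conv_cfg = [(3, 64), (64, 64), (64, 128), (128, 128),
            (128, 256), (256, 256), (256, 256),
            (256, 512), (512, 512), (512, 512),
            (512, 512), (512, 512), (512, 512)]
conv_params = sum(conv2d(a, b) for a, b in conv_cfg)

# VGG16: two 4096-unit FC layers, then a 39-class output layer
vgg16 = conv_params + linear(25088, 4096) + linear(4096, 4096) + linear(4096, 39)

# VGG16_1FC: one FC layer straight from the flattened 7x7x512 feature map
vgg16_1fc = conv_params + linear(25088, 39)

# VGG16_avg1FC: average pooling to 512 features, then one FC layer
vgg16_avg1fc = conv_params + linear(512, 39)

print(vgg16, vgg16_1fc, vgg16_avg1fc)
# -> 134420327 15693159 14734695
```

The three totals match the table: dropping the two 4096-unit hidden layers (VGG16_1FC) removes roughly 119 million parameters, and the additional average pooling in VGG16_avg1FC shrinks the final classifier from 25088 to 512 inputs.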