EBioMedicine. 2019 Jul 1;45:606–614. doi: 10.1016/j.ebiom.2019.06.050

Table 1.

Performance of the trained transfer models.

Transferred model    Accuracy (%)                            GPU time    Parameters   Number of
                     Full    Full-H25   Quarter   Half       (seconds)   (millions)   layers
SqueezeNet           85.55   85.5       73.5      82.8       4137        1.24         68
AlexNet              87.2    83.6       73.7      82.6       3805        61           25
ResNet-18            90.65   90.2       83.4      86         4256        11.7         72
MobileNet-v2         90.75   89.8       79.9      84.9       7032        3.5          155
GoogLeNet            90.9    88.7       68.2      85.5       5104        7            144
ResNet-50            91.2    91.4       81.3      86.3       7302        25.6         177
ResNet-101           91.55   91.7       83.6      86.1       12,215      44.6         347
Inception-v3         92      92.1       84.1      89.5       11,938      23.9         316
Inception-ResNet-v2  92.1    91.9       82.2      86.9       33,283      55.9         825

The “Quarter” set used about 2000 images for training and validation.

The “Half” set used about 5000 images for training and validation.

“Full” represents the average accuracy over two evaluations on the full dataset (80% training, 20% validation).

“Full-H25” represents the “Full” model with an additional 25-unit fully connected hidden layer (sketched below).

GPU time indicates the time required to train each model on the GPU, reflecting the processing power the model demands.
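For context on the “Full” and “Full-H25” settings, the sketch below shows one way a pretrained backbone could be adapted for transfer learning. The framework, class count, and exact head design used in the study are not given in this excerpt; NUM_CLASSES is a placeholder, ResNet-18 stands in for any of the listed backbones, and the 25-unit hidden layer follows the “Full-H25” footnote. This is a minimal PyTorch/torchvision illustration, not the authors' implementation.

```python
# Illustrative sketch only: the study's framework, class count, and head design
# are assumptions, not taken from the excerpt above.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2        # assumption: substitute the study's number of classes
HIDDEN_UNITS = 25      # "H25": size of the extra fully connected hidden layer

# ImageNet-pretrained backbone (ResNet-18 shown as one of the listed models).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
in_features = backbone.fc.in_features

# "Full" setting: replace the final classifier with a new output layer.
full_head = nn.Linear(in_features, NUM_CLASSES)

# "Full-H25" setting: insert a 25-unit fully connected hidden layer
# before the output layer.
full_h25_head = nn.Sequential(
    nn.Linear(in_features, HIDDEN_UNITS),
    nn.ReLU(),
    nn.Linear(HIDDEN_UNITS, NUM_CLASSES),
)

backbone.fc = full_h25_head   # or full_head for the plain "Full" variant
```

The “Full”, “Half”, and “Quarter” splits described in the footnotes (80%/20% training/validation on the full set, and roughly 5000- and 2000-image subsets) could then be produced with torch.utils.data.random_split before fine-tuning.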