Comput Biol Med. 2021 Oct 11;139:104927. doi: 10.1016/j.compbiomed.2021.104927

Table 3.

Comparison of deep CNNs used as an end-to-end network and as a feature extractor combined with the proposed LMPL classifier (CNN+) in Experiment 2: COVID-19, SARS, and MERS pneumonia.

Performance metrics (%); the per-metric rank is given in parentheses.

| Feature Extractor | Layer | Sensitivity (Recall) | Precision (PPV) | F1-score | Accuracy | Average Rank |
|---|---|---|---|---|---|---|
| ResNet18 |  | 87.47 (5) | 84.52 (7) | 85.86 (5) | 88.02 (4) | 5.3 |
| ResNet18+ |  | 91.97 (8) | 90.66 (7) | 91.22 (7) | 92.72 (7) | 7.3 |
| ResNet50 | Avg_pool | 88.11 (4) | 84.90 (6) | 86.35 (4) | 87.87 (5) | 4.8 |
| ResNet50+ |  | 94.25 (6) | 91.26 (5) | 92.64 (5) | 93.58 (5) | 5.3 |
| ResNetV2 | Avg_pool | 73.10 (11) | 76.76 (9) | 74.56 (11) | 78.74 (10) | 10.3 |
| ResNetV2+ |  | 91.45 (10) | 88.99 (9) | 90.12 (8) | 91.58 (9) | 9.0 |
| Inception | pool5 | 83.76 (8) | 85.38 (5) | 84.38 (8) | 86.59 (8) | 7.3 |
| Inception+ |  | 95.68 (3) | 94.35 (3) | 94.95 (3) | 95.72 (3) | 3.0 |
| InceptionV3 | Avg_pool | 80.70 (9) | 76.38 (10) | 77.95 (9) | 80.46 (9) | 9.3 |
| InceptionV3+ |  | 92.60 (7) | 90.90 (6) | 91.70 (6) | 93.15 (6) | 6.3 |
| Xception | Avg_pool | 75.04 (10) | 74.49 (11) | 74.75 (10) | 78.60 (11) | 10.5 |
| Xception+ |  | 95.46 (4) | 89.00 (8) | 90.10 (9) | 91.73 (8) | 7.3 |
| DenseNet201 | Avg_pool | 91.83 (3) | 91.60 (2) | 91.57 (2) | 92.87 (2) | 2.3 |
| DenseNet201+ |  | 96.59 (2) | 96.20 (2) | 96.39 (2) | 96.72 (2) | 2.0 |
| SqueezeNet | Pool10 | 85.64 (6) | 84.51 (8) | 85.02 (7) | 87.30 (7) | 7.0 |
| SqueezeNet+ |  | 85.39 (11) | 86.12 (11) | 85.45 (11) | 87.87 (11) | 11.0 |
| ShuffleNet | Node_200 | 84.68 (7) | 85.95 (4) | 85.27 (6) | 87.73 (6) | 5.8 |
| ShuffleNet+ |  | 91.80 (9) | 88.71 (10) | 90.10 (9) | 91.16 (10) | 9.5 |
| AlexNet | fc7 | 92.08 (2) | 91.07 (3) | 91.39 (3) | 92.72 (3) | 2.8 |
| AlexNet+ |  | 94.87 (5) | 91.99 (4) | 93.33 (4) | 94.15 (4) | 4.3 |
| VGGNet19 | fc7 | **92.32 (1)** | **92.58 (1)** | **92.45 (1)** | **93.30 (1)** | **1.0** |
| VGGNet19+ |  | **96.64 (1)** | **96.29 (1)** | **96.46 (1)** | **96.86 (1)** | **1.0** |
| Avg. on CNN |  | 85.88 | 84.38 | 84.50 | 86.75 |  |
| Avg. on CNN+ |  | 93.34 | 91.32 | 92.04 | 93.20 |  |

*Bold numbers indicate the best performance.
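The "Average Rank" column is the mean of each method's four per-metric ranks, where methods are ranked within their group (plain CNNs vs. CNN+) and rank 1 is the best (highest) score. A minimal sketch that reproduces this from the sensitivity values transcribed from the plain-CNN rows above; the helper `descending_ranks` is illustrative, not from the paper:

```python
# Reproduce Table 3's per-metric ranks and the "Average Rank" column.
# Sensitivity values are transcribed from the plain-CNN rows; rank 1 = best
# (highest) within the group. Values here are unique, so ties are not handled.

def descending_ranks(scores):
    """Map each method name to its rank, where rank 1 is the highest score."""
    ordered = sorted(scores.values(), reverse=True)
    return {name: ordered.index(value) + 1 for name, value in scores.items()}

sensitivity = {
    "ResNet18": 87.47, "ResNet50": 88.11, "ResNetV2": 73.10,
    "Inception": 83.76, "InceptionV3": 80.70, "Xception": 75.04,
    "DenseNet201": 91.83, "SqueezeNet": 85.64, "ShuffleNet": 84.68,
    "AlexNet": 92.08, "VGGNet19": 92.32,
}

ranks = descending_ranks(sensitivity)
print(ranks["VGGNet19"], ranks["ResNet18"])  # 1 5 -- matches the table

# Average Rank = mean of a method's per-metric ranks; e.g. ResNet18 has
# ranks (5, 7, 5, 4) for sensitivity, precision, F1-score, and accuracy:
resnet18_ranks = (5, 7, 5, 4)
print(sum(resnet18_ranks) / len(resnet18_ranks))  # 5.25, shown as 5.3
```

Note that the table reports the mean rounded half-up to one decimal (5.25 appears as 5.3).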