Comput Biol Med. 2021 Oct 11;139:104927. doi: 10.1016/j.compbiomed.2021.104927

Table 2.

Comparison of deep CNNs used as an end-to-end network and as a feature extractor combined with the proposed LMPL classifier (CNN+) in Experiment 1: Normal, COVID-19, and typical viral pneumonia.
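The two settings compared below can be sketched as follows. This is a minimal illustration (assumed PyTorch, with a toy backbone standing in for ResNet/DenseNet/etc., and a plain linear layer standing in for the paper's LMPL classifier, which is not reproduced here): (a) the CNN trained end-to-end with its own classification head, versus (b) the same CNN used only up to its global-average-pool layer (the "Layer" column: Avg_pool, pool5, fc7, ...), with the pooled features passed to a separate classifier.

```python
import torch
import torch.nn as nn

# Toy backbone; stands in for ResNet18/DenseNet201/etc. (not the paper's networks)
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 64-d "Avg_pool" features
)

x = torch.randn(4, 3, 224, 224)  # dummy batch of chest-X-ray-sized inputs

# (a) end-to-end: the classification head is trained jointly with the backbone
end_to_end = nn.Sequential(backbone, nn.Linear(64, 3))
logits_a = end_to_end(x)

# (b) feature extractor + separate classifier (the "CNN+" rows):
# features are read out frozen, then classified separately
with torch.no_grad():
    feats = backbone(x)          # shape (4, 64)
clf = nn.Linear(64, 3)           # placeholder for the LMPL classifier
logits_b = clf(feats)
print(feats.shape, logits_a.shape, logits_b.shape)
```

Both heads map to the 3 classes of Experiment 1 (normal, COVID-19, viral pneumonia); only the training regime of the backbone differs between the two rows of each model pair.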

Performance metrics (%); values in parentheses are per-metric ranks.

| Feature extractor | Layer | Sensitivity (Recall) | Precision (PPV) | F1-score | Accuracy | Average rank |
|---|---|---|---|---|---|---|
| ResNet18 | | 96.40 (4) | 96.35 (5) | 96.37 (4) | 96.17 (4) | 4.3 |
| ResNet18+ | | 97.60 (7) | 98.06 (6) | 97.82 (6) | 97.52 (6) | 6.3 |
| ResNet50 | Avg_pool | 96.20 (7) | 96.16 (6) | 96.17 (6) | 95.91 (6) | 6.3 |
| ResNet50+ | | 98.43 (3) | 98.49 (4) | 98.45 (4) | 98.10 (4) | 3.8 |
| ResNetV2 | Avg_pool | 94.04 (10) | 92.61 (10) | 93.28 (11) | 93.18 (10) | 10.3 |
| ResNetV2+ | | 97.48 (8) | 97.07 (10) | 97.26 (10) | 97.01 (9) | 9.3 |
| Inception | pool5 | 96.40 (4) | 96.40 (4) | 96.35 (5) | 96.01 (5) | 4.5 |
| Inception+ | | 98.41 (4) | 98.69 (3) | 98.54 (3) | 98.36 (3) | 3.3 |
| InceptionV3 | Avg_pool | 95.80 (8) | 94.95 (9) | 95.35 (8) | 95.11 (8) | 8.3 |
| InceptionV3+ | | 98.26 (5) | 98.10 (5) | 98.18 (5) | 97.75 (5) | 5.0 |
| Xception | Avg_pool | 92.00 (11) | 92.11 (11) | 92.05 (10) | 91.73 (11) | 10.8 |
| Xception+ | | 97.24 (9) | 97.34 (9) | 97.29 (9) | 96.98 (10) | 9.3 |
| DenseNet201 | Avg_pool | 97.48 (2) | 97.66 (2) | 97.56 (2) | 97.30 (3) | 2.3 |
| DenseNet201+ | | 98.85 (2) | 98.86 (2) | 98.85 (2) | 98.58 (2) | 2.0 |
| SqueezeNet | Pool10 | 95.14 (9) | 95.13 (8) | 95.06 (9) | 94.76 (9) | 8.8 |
| SqueezeNet+ | | 95.39 (11) | 94.75 (11) | 95.03 (11) | 95.08 (11) | 11.0 |
| ShuffleNet | Node_200 | 96.34 (6) | 95.68 (7) | 96.00 (7) | 95.82 (7) | 6.8 |
| ShuffleNet+ | | 97.20 (10) | 97.56 (8) | 97.36 (8) | 97.07 (8) | 8.5 |
| AlexNet | fc7 | 97.24 (3) | 97.57 (3) | 97.40 (3) | 97.33 (2) | 2.8 |
| AlexNet+ | | 97.67 (6) | 97.84 (7) | 97.75 (7) | 97.39 (7) | 6.8 |
| VGGNet19 | fc7 | **98.21** (1) | **98.43** (1) | **98.32** (1) | **98.10** (1) | **1.0** |
| VGGNet19+ | | **98.87** (1) | **99.08** (1) | **98.97** (1) | **98.81** (1) | **1.0** |
| Avg. on CNN | | 95.93 | 95.73 | 95.81 | 95.58 | |
| Avg. on CNN+ | | 97.76 | 97.80 | 97.77 | 97.51 | |

*Bold numbers indicate the best performance; values in parentheses are per-metric ranks (rank 1 = best).
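The "Average rank" column is simply the mean of the four per-metric ranks shown in parentheses (sensitivity, precision, F1-score, accuracy), which the table reports rounded to one decimal. A quick check against three rows:

```python
def average_rank(ranks):
    """Mean of the per-metric ranks (sensitivity, precision, F1, accuracy)."""
    return sum(ranks) / len(ranks)

# Per-metric ranks copied from the table (the numbers in parentheses)
resnet18 = average_rank([4, 5, 4, 4])       # table reports 4.3 (rounded)
densenet201p = average_rank([2, 2, 2, 2])   # table reports 2.0
vggnet19p = average_rank([1, 1, 1, 1])      # table reports 1.0
print(resnet18, densenet201p, vggnet19p)    # 4.25 2.0 1.0
```

This makes the ranking directly comparable across models even when individual metrics disagree about the ordering.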