J Med Internet Res. 2023 Jul 3;25:e43154. doi: 10.2196/43154

Table 2. General characteristics of included studies.

Study | Data set | Reference standard | Machine learning/deep learning | Best result
Mizan et al [34] | Shenzhen, Montgomery County | Radiologist’s reading | CNNs^a: DenseNet-169, MobileNet, Xception, and Inception-V3 | DenseNet-169 (precision 92%, recall 92%, F1-score 92%, validation accuracy 91.67%, and AUC^b 0.915)
Hwang et al [35] | Korean Institute of Tuberculosis, Montgomery County, Shenzhen | Unclear (Korean Institute of Tuberculosis); radiologist’s reading | Customized CNN based on AlexNet + transfer learning | Customized CNN (AUC 96.7% [Shenzhen] and accuracy 90.5% [Montgomery County])
Hooda et al [36] | Montgomery County, Shenzhen, Belarus, Japanese Society of Radiological Technology | Unclear (Belarus); radiologist’s reading | Proposed (blocks), AlexNet, ResNet, ensemble (proposed + AlexNet + ResNet) | Ensemble (accuracy 90.0%, AUC 0.96, sensitivity 88.42%, and specificity 92.0%)
Melendez et al [37] | Zambia, Tanzania, Gambia | Radiologist’s reading | kNN^c, multiple-instance learning–based system: miSVM^d, miSVM + probability estimation and data discarding, single iteration-maximum pattern margin support vector machine + probability estimation and data discarding | Single iteration-maximum pattern margin support vector machine + probability estimation and data discarding (0.86 [Zambia], 0.86 [Tanzania], and 0.91 [Gambia])
Rajaraman et al [38] | Shenzhen, Montgomery County, Kenya, India | Radiologist’s reading | SVM with GIST, histogram of oriented gradients, speeded up robust features (feature engineering); SVM with AlexNet, VGG-16, GoogLeNet, ResNet-50; and ensemble approach | Ensemble (Shenzhen [accuracy 93.4%, AUC 0.991], Montgomery County [accuracy 87.5%, AUC 0.962], Kenya [accuracy 77.6%, AUC 0.826], and India [accuracy 96.0%, AUC 0.965])
Zhang et al [39] | Jilin, Guangzhou, Shanghai | Unclear | Proposed: feed-forward CNN model with integrated convolutional block attention module and 4 other CNNs (AlexNet, GoogLeNet, DenseNet, and ResNet-50) | Proposed network (recall/sensitivity 89.7%, specificity 85.9%, accuracy 87.7%, and AUC 0.943)
Melendez et al [40] | Cape Town | Culture | Feature engineering: minimum redundancy maximum relevance; multiple learner fusion: RF^e and extremely randomized trees | Multiple learner fusion: RF and extremely randomized trees (AUC 0.84, sensitivity 95%, specificity 49%, and negative predictive value 98%)
Ghanshala et al [13] | Montgomery County, Shenzhen, Japanese Society of Radiological Technology | Radiologist’s reading | SVM, RF, kNN, neural network | Neural network (AUC 0.894, accuracy 81.1%, F1-score 81.1%, precision 81.1%, recall 81.1%, and average accuracy 80.45%)
Ahsan et al [41] | Montgomery County, Shenzhen | Radiologist’s reading | CNN: VGG-16 | VGG-16 + data augmentation (AUC 0.94 and accuracy 81.25%)
Sharma et al [42] | Custom data set | Unclear | A total of 29 different custom artificial intelligence models | Custom deep artificial intelligence model (100% normal, 100% COVID-19, 66.67% new COVID-19, 100% non–COVID-19, 93.75% pneumonia, 80% tuberculosis)
Hooda et al [18] | Montgomery County, Shenzhen, Belarus, Japanese Society of Radiological Technology | Unclear (Belarus); radiologist’s reading | Ensemble of AlexNet, GoogLeNet, and ResNet | Ensemble (accuracy 88.24%, AUC 0.93, sensitivity 88.42%, and specificity 88%)
van Ginneken et al [43] | Netherlands, Interstitial Disease database | Radiologist’s reading | Active shape model segmentation, kNN classifier, weighted multiplier | Proposed scheme with kNN (sensitivity 86%, specificity 50%, and AUC 0.82)
Chandra et al [14] | Montgomery County, Shenzhen | Radiologist’s reading | SVM with hierarchical feature extraction | SVM with hierarchical feature extraction (Montgomery County [accuracy 95.6%, AUC 0.95] and Shenzhen [accuracy 99.4%, AUC 0.99])
Karnkawinpong and Limpiyakorn [44] | Montgomery County, Shenzhen, Thailand | Radiologist’s reading | AlexNet, VGG-16, and CapsNet | CapsNet (accuracy 80.06%, sensitivity 92.72%, and specificity 69.44%)
Stirenko et al [45] | Shenzhen | Radiologist’s reading | Customized CNN | Customized CNN (64% [lossy data augmentation] and 70% [lossless data augmentation])
Rajpurkar et al [46] | Africa | Culture | Customized CNN based on DenseNet-121 | CheXaid (accuracy 79%, sensitivity 67%, and specificity 87%)
Sivaramakrishnan et al [47] | Shenzhen, Montgomery County, Kenya, India | Radiologist’s reading | Customized CNN, AlexNet, VGG-16, VGG-19, Xception, and ResNet-50 | Proposed pretrained CNNs (accuracy 85.5% [Shenzhen], 75.8% [Montgomery County], 69.5% [Kenya], and 87.6% [India]; AUC 0.926 [Shenzhen], 0.833 [Montgomery County], 0.775 [Kenya], and 0.956 [India])
Owais et al [48] | Shenzhen, Montgomery County | Radiologist’s reading | Ensemble-shallow–deep CNN + multilevel similarity measure algorithm | Ensemble on Montgomery County (F1-score 0.929, average precision 0.937, average recall 0.921, accuracy 92.8%, and AUC 0.965)
Xie et al [49] | Japanese Society of Radiological Technology, Shenzhen, Montgomery County, and a local data set from the First Affiliated Hospital of Xi’an Jiao Tong University | Radiologist’s reading | Segmentation: U-Net; classification: proposed method based on Faster region-based convolutional network + feature pyramid network | Faster region-based convolutional network + feature pyramid network (Shenzhen [AUC 0.941, accuracy 90.2%, sensitivity 85.4%, and specificity 95.1%], Montgomery County [AUC 0.977, accuracy 92.6%, sensitivity 93.1%, and specificity 92.3%], and local First Affiliated Hospital of Xi’an Jiao Tong University data set [AUC 0.993, accuracy 97.4%, sensitivity 98.3%, and specificity 96.2%])
Andika et al [50] | Shenzhen | Radiologist’s reading | Customized CNN | Customized CNN: normal (precision 83% and recall 83%); pulmonary tuberculosis (precision 84% and recall 84%); overall accuracy 84%
Das et al [51] | Shenzhen, Montgomery County | Radiologist’s reading | InceptionNet V3 and modified (truncated) InceptionNet V3 | Modified InceptionNet V3: trained on Shenzhen and tested on Montgomery County (accuracy 76.05%, AUC 0.84, sensitivity 63%, specificity 81%, and precision 89%); trained on Montgomery County and tested on Shenzhen (accuracy 71.47%, AUC 0.79, sensitivity 59%, specificity 73%, and precision 84%); and combined (accuracy 89.96%, AUC 0.95, sensitivity 87%, specificity 93%, and precision 92%)
Gozes and Greenspan [52] | ChestX-ray14, Montgomery County, Shenzhen | Radiologist’s reading | MetaChexNet based on DenseNet-121 | MetaChexNet (Shenzhen AUC 0.965, Montgomery County AUC 0.928, and combined AUC 0.937)
Hooda et al [53] | Shenzhen, Montgomery County | Radiologist’s reading | Proposed CNN | Proposed CNN (accuracy 82.09% and loss 0.4013)
Heo et al [19] | Yonsei | Radiologist’s reading | VGG19, InceptionV3, ResNet50, DenseNet121, InceptionResNetV2, and CNN with demographic variables (VGG19 + demographic variables) | CNN with demographic variables (VGG19 AUC 0.9213) and CNN with image-only information (VGG19 AUC 0.9075)
Lakhani and Sundaram [17] | Shenzhen, Montgomery County, Belarus, Thomas Jefferson University Hospital | Culture (Belarus and Thomas Jefferson); radiologist’s reading (all data sets) | Ensemble of AlexNet and GoogLeNet | Ensemble (AUC 0.99); ensemble + radiologist augmented (sensitivity 97.3%, specificity 100%, and accuracy 98.7%)
Sathitratanacheewin et al [20] | Shenzhen, ChestX-ray8 | Radiologist’s reading | Proposed CNN based on Inception V3 | Proposed CNN (Shenzhen AUC 0.8502 and ChestX-ray8 AUC 0.7054)
Dasanayaka and Dissanayake [54] | Shenzhen, Montgomery County, Medical Information Mart for Intensive Care, and Synthesis | Unclear (Medical Information Mart for Intensive Care and Synthesis); radiologist’s reading | Proposed CNN based on generative adversarial network, UNET, and ensemble of VGG16 + InceptionV3 | Ensemble (Youden’s index 0.941, sensitivity 97.9%, specificity 96.2%, and accuracy 97.1%)
Nguyen et al [55] | Shenzhen, Montgomery County, National Institutes of Health-14 | Radiologist’s reading | ResNet-50, VGG16, VGG19, DenseNet-121, and Inception ResNet | DenseNet (Shenzhen AUC 0.99 and Montgomery County AUC 0.80)
Meraj et al [56] | Shenzhen, Montgomery County | Radiologist’s reading | VGG-16, VGG-19, ResNet50, and GoogLeNet | VGG-16: Shenzhen (accuracy 86.74% and AUC 0.92) and Montgomery County (accuracy 77.14% and AUC 0.75); VGG-19 (AUC 0.90)
Becker et al [57] | Uganda | Unclear | ViDi, industrial-grade deep learning image analysis software (suite version 2.0, ViDi Systems) | ViDi software (overall AUC 0.98)
Hwang et al [58] | Seoul National University Hospital, Boramae, Kyunghee, Daejeon Eulji, Montgomery County, Shenzhen | Culture (Seoul National University Hospital, Boramae, Kyunghee, Daejeon); radiologist’s reading | Proposed CNN | Proposed CNN (AUC 0.977-1.000, area under the alternative free-response receiver operating characteristic curve 0.973-1.000, sensitivity 94.3%-100%, specificity 91.1%-100%, and true detection rate 94.5%-100%)
Pasa et al [59] | Montgomery County, Shenzhen, Belarus | Unclear (Belarus); radiologist’s reading | Proposed CNN | Proposed CNN: Montgomery County (accuracy 79.0% and AUC 0.811), Shenzhen (accuracy 84.4% and AUC 0.900), and combined 3 data sets (accuracy 86.2% and AUC 0.925)
Ahmad Hijazi et al [60] | Shenzhen, Montgomery County | Radiologist’s reading | Ensemble of InceptionV3, VGG-16, and a custom-built architecture | Ensemble (accuracy 91.0%, sensitivity 89.6%, and specificity 90.7%)
Hwa et al [61] | Shenzhen, Montgomery County | Radiologist’s reading | Ensemble of InceptionV3 and VGG-16 | Ensemble + Canny edge detection (accuracy 89.77%, sensitivity 90.91%, and specificity 88.64%)
Ayaz et al [62] | Shenzhen, Montgomery County | Radiologist’s reading | Ensemble (pretrained CNNs: InceptionV3, InceptionResnetv2, VGG16, VGG19, MobileNet, ResNet50, and Xception) with Gabor filter | Ensemble with Gabor filter: Montgomery County (accuracy 93.47% and AUC 0.97) and Shenzhen (accuracy 97.59% and AUC 0.99)
Govindarajan and Swaminathan [63] | Montgomery County | Radiologist’s reading | ELM^f and online sequential ELM | ELM (accuracy 99.2%, sensitivity 99.3%, specificity 99.3%, precision 99.0%, F1-score 99.2%, and Matthews correlation coefficient 98.6%) and online sequential ELM (accuracy 98.6%, sensitivity 98.7%, specificity 98.7%, precision 97.9%, F1-score 98.6%, and Matthews correlation coefficient 97.0%)
Rashid et al [64] | Shenzhen | Radiologist’s reading | Ensemble of ResNet-152, Inception-ResNet-v2, and DenseNet-161 + SVM | Ensemble with SVM (accuracy 90.5%, sensitivity 89.4%, specificity 91.9%, and AUC 0.95)
Munadi et al [65] | Shenzhen | Radiologist’s reading | Image enhancement (unsharp masking, high-frequency emphasis filtering, and contrast-limited adaptive histogram equalization) + deep learning (ResNet-50, EfficientNet-B4, and ResNet-18) | Proposed EfficientNet-B4 + unsharp masking (accuracy 89.92% and AUC 0.948)
Abbas and Abdelsamea [66] | Montgomery County | Radiologist’s reading | AlexNet | AlexNet (AUC 0.998, sensitivity 99.7%, and specificity 99.9%)
Melendez et al [67] | Zambia | Radiologist’s reading | Multiple-instance learning + active learning | Multiple-instance learning + active learning (pixel-level AUC 0.870)
Khatibi et al [68] | Montgomery County, Shenzhen | Radiologist’s reading | Logistic regression, SVM with linear and radial basis function kernels, decision tree, RF, and AdaBoost; CNNs (VGG-16, VGG-19, ResNet-101, ResNet-150, DenseNet, and Xception) | Proposed stacked ensemble: Montgomery County (accuracy 99.26%, AUC 0.99, sensitivity 99.42%, and specificity 99.15%) and Shenzhen (accuracy 99.22%, AUC 0.98, sensitivity 99.39%, and specificity 99.47%)
Kim et al [69] | ChestX-ray14, Montgomery County, Shenzhen, Johns Hopkins Hospital | Culture (Johns Hopkins Hospital); radiologist’s reading | ResNet-50 and TBNet | TBNet on Johns Hopkins Hospital (AUC 0.87, sensitivity 85%, specificity 76%, positive predictive value 0.64, and negative predictive value 0.9) and majority vote of TBNet and 2 radiologists (sensitivity 94%, specificity 85%, positive predictive value 0.76, and negative predictive value 0.96)
Rahman et al [70] | Kaggle, National Library of Medicine, Belarus, National Institute of Allergy and Infectious Diseases TB data set, Radiological Society of North America CXR data set | Unclear (Kaggle, Belarus, National Institute of Allergy and Infectious Diseases, and Radiological Society of North America); radiologist’s reading | Lung segmentation: U-Net; classification: MobileNetv2, SqueezeNet, ResNet18, Inceptionv3, ResNet50, ResNet101, CheXNet, VGG19, and DenseNet201 | Without segmentation: CheXNet (accuracy 96.47%, precision 96.62%, sensitivity 96.47%, F1-score 96.47%, and specificity 96.51%); with segmentation: DenseNet201 (accuracy 98.6%, precision 98.57%, sensitivity 98.56%, F1-score 98.56%, and specificity 98.54%)
Yoo et al [71] | ChestX-ray14, Shenzhen, East Asian Hospital | Unclear (East Asian Hospital); radiologist’s reading | ResNet18 | ResNet18: AXIR1 (accuracy 98%, sensitivity 99%, specificity 97%, precision 97%, and AUC 0.98) and AXIR2 (accuracy 80%, sensitivity 72%, specificity 89%, precision 87%, and AUC 0.80)
Oloko-Oba and Viriri [72] | Shenzhen | Radiologist’s reading | Proposed ConvNet | Proposed ConvNet (accuracy 87.8%)
Guo et al [73] | Shenzhen, National Institutes of Health | Radiologist’s reading | Artificial bee colony (VGG16, VGG19, Inception V3, ResNet34, and ResNet50) and ResNet101 (proposed ensemble CNN) | Ensemble: Shenzhen (accuracy 94.59%-98.46%, specificity 95.57%-100%, recall 93.66%-98.67%, F1-score 94.7%-98.6%, and AUC 0.986-0.999) and National Institutes of Health (accuracy 89.56%-95.49%, specificity 96.69%-98.50%, recall 78.52%-90.91%, F1-score 85.5%-94.0%, and AUC 0.934-0.976)
Ul Abideen et al [74] | Shenzhen, Montgomery County | Radiologist’s reading | Proposed Bayesian convolutional neural network | Bayesian convolutional neural network: Montgomery County (accuracy 96.42%) and Shenzhen (accuracy 86.46%)

^a CNN: convolutional neural network.

^b AUC: area under the curve.

^c kNN: k-nearest neighbor.

^d miSVM: multiple-instance support vector machine / maximum pattern margin support vector machine.

^e RF: random forest.

^f ELM: extreme learning machine.
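Most of the deep learning entries in Table 2 follow the same broad recipe: fine-tune an ImageNet-pretrained CNN (eg, DenseNet, VGG, ResNet, or Inception) on a labeled chest X-ray set such as Shenzhen or Montgomery County and report AUC, accuracy, sensitivity, and specificity on a held-out test set. The sketch below illustrates that generic pipeline in PyTorch; it is not taken from any of the included studies, and the folder paths, hyperparameters, and 0.5 decision threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader
from sklearn.metrics import roc_auc_score, confusion_matrix

# Load an ImageNet-pretrained DenseNet-121 and replace its classifier
# head with a single logit for the binary normal-vs-tuberculosis decision.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 1)

# Standard ImageNet preprocessing; chest X-rays are grayscale, so they are
# replicated to 3 channels to match the pretrained input format.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: cxr/train/{normal,tb} and cxr/test/{normal,tb}.
# ImageFolder sorts class names alphabetically, so "normal" -> 0 and "tb" -> 1.
train_ds = datasets.ImageFolder("cxr/train", transform=preprocess)
test_ds = datasets.ImageFolder("cxr/test", transform=preprocess)
train_loader = DataLoader(train_ds, batch_size=16, shuffle=True)
test_loader = DataLoader(test_ds, batch_size=16)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fine-tune the whole network for a few epochs (illustrative settings).
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.float().to(device)
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels)
        loss.backward()
        optimizer.step()

# Evaluate with the metrics most studies in Table 2 report:
# AUC, accuracy, sensitivity, and specificity.
model.eval()
scores, truths = [], []
with torch.no_grad():
    for images, labels in test_loader:
        probs = torch.sigmoid(model(images.to(device)).squeeze(1))
        scores.extend(probs.cpu().tolist())
        truths.extend(labels.tolist())

preds = [int(s >= 0.5) for s in scores]
tn, fp, fn, tp = confusion_matrix(truths, preds).ravel()
print(f"AUC {roc_auc_score(truths, scores):.3f}  "
      f"accuracy {(tp + tn) / (tp + tn + fp + fn):.3f}  "
      f"sensitivity {tp / (tp + fn):.3f}  specificity {tn / (tn + fp):.3f}")
```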
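Several of the best-performing entries (eg, Hooda et al [18,36], Rajaraman et al [38], Lakhani and Sundaram [17], Ayaz et al [62], and Rashid et al [64]) are ensembles of multiple CNNs. One simple and common fusion rule, shown below with made-up probabilities, is to average each member model's per-image probability and threshold the mean; this is only one possible scheme and is not the specific method used in those studies.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# scores_per_model: rows = member models (eg, AlexNet, GoogLeNet, ResNet),
# columns = test images; each entry is that model's predicted probability
# of tuberculosis. y_true holds the reference-standard labels (1 = TB).
scores_per_model = np.array([
    [0.91, 0.12, 0.78, 0.40],   # model 1 (illustrative values)
    [0.88, 0.20, 0.65, 0.55],   # model 2
    [0.95, 0.08, 0.70, 0.35],   # model 3
])
y_true = np.array([1, 0, 1, 0])

# Averaging ensemble: the mean probability across models is the ensemble
# score; a 0.5 threshold yields the final class decision.
ensemble_scores = scores_per_model.mean(axis=0)
ensemble_preds = (ensemble_scores >= 0.5).astype(int)
print("ensemble AUC:", roc_auc_score(y_true, ensemble_scores))
print("ensemble predictions:", ensemble_preds)
```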