Table 1.
Authors | Methodology | Imaging Modality | Dataset | Limits | Classification Problem | Classification Model | Accuracy [%] | Specificity [%] | Precision [%] | Area Under ROC Curve |
---|---|---|---|---|---|---|---|---|---|---|
Sethy et al. | deep learning | CXR | First dataset: 25 COVID-19-positive and 25 COVID-19-negative X-ray images. Second dataset: 133 X-ray images of COVID-19-positive cases, including MERS, SARS, and ARDS, and 133 COVID-19-negative chest X-ray images | Small number of patients. Moreover, the methodology cannot be applied if the patient is in a critical condition and unable to attend a CXR scan. The model is limited to classifying the input chest X-ray image into only two classes, either normal or COVID-19 | COVID-19 detection | ResNet50 plus support vector machine (SVM); the pipeline is sketched after the table | 95.38 | | | |
Jiao et al. | deep learning | CXR | 1834 patients were identified and assigned to the model training (n = 1285), validation (n = 183), or testing (n = 366) sets. A further 475 patients were identified for external testing of the model. | The artificial intelligence model showed decreased performance on the external testing set relative to the internal testing set, indicating that generalization might not be possible. This finding could be due to several factors, including heterogeneous data and image acquisition between the different hospital systems. The model is limited to classifying the input chest X-ray image into only two classes, either critical or non-critical | prediction of the binary outcome of COVID-19 disease severity (critical or non-critical) | EfficientNet deep neural network and clinical data | | | | 0.85 on internal testing and 0.80 on external testing |
Al-Waisy et al. | deep learning | CXR | 200 X-ray images with confirmed COVID-19 infection obtained from Cohen's GitHub database [58]; 200 COVID-19 CXR gathered from three different repositories: the Radiopaedia dataset [59], the Italian Society of Medical and Interventional Radiology (SIRM) [60], and the Radiological Society of North America (RSNA) [61]; 400 normal CXR from Kaggle's CXR dataset [62] | Cases used in this study come from different databases. The model is limited to classifying the input chest X-ray image into only two classes, either normal or COVID-19 | COVID-19 detection | COVID-CheXNet system made by combining the results generated from two different deep learning models | 99.99 | 100 | 100 | |
Ozcan et al. | deep learning | CXR | 127 X-ray images diagnosed with COVID-19, 500 in the no-findings class, and 500 in the pneumonia class | No external testing of the model, and the patient number is low for multi-class classification | COVID-19 versus no-findings classification / multi-class classification (COVID-19 versus no findings versus pneumonia) | single layer-based (SLB) and feature fusion-based (FFB) composite systems using deep features | 99.52/87.64 | 98.03/99.7 | | |
Ozturk et al. | deep learning | CXR | 127 X-ray images diagnosed with COVID-19, 500 in the no-findings class, and 500 in the pneumonia class | No external testing of the model, and the patient number is low for multi-class classification | COVID-19 versus no-findings classification / multi-class classification (COVID-19 versus no findings versus pneumonia) | DarkNet model implementing 17 convolutional layers and introducing different filtering on each layer | 98.08/87.02 | 98.03/89.96 | | |
Du et al. | machine learning | CXR | 447 cases with COVID-19; 405 with other viral PNA, 1515 with bacterial PNA, 1862 with clinical PNA, 256 with other infections, and 663 with other diseases | The model has a moderate specificity | COVID-19 detection | | 68.4 | | | |
Dey et al. | classifier ensemble technique | CXR | A total of 506 viral lung infection cases, including 468 cases with COVID-19; 46 bacterial lung infection and 26 fungal lung infection cases from https://github.com/ieee8023/covid-chestxray-dataset; 1583 normal CXR and 4273 pneumonia CXR from https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia/version/2 | No external testing of the model and small patient number | classification of normal, COVID-19, and pneumonia cases | Choquet fuzzy integral using two dense layers and one softmax layer | 99.02 | 99 | | |
Alruwaili et al. | deep learning | CXR | 2905 CXR images, distributed into 219 COVID-19 images, 1345 viral pneumonia images, and 1341 normal images | No external testing of the model and small patient number | COVID-19 vs. normal vs. viral pneumonia classification | Inception-ResNetV2 deep learning model | 99.83 | 98.11 | | |
Bukhari et al. | deep learning | CXR | 93 CXR with no radiological abnormality; 96 CXR with radiological features of pneumonia other than COVID-19 infection; 89 digital chest X-ray images of patients diagnosed with COVID-19 infection | The model requires a relatively long training and testing run time compared to other models due to the complex structure of its internal modules | healthy normal, bacterial pneumonia, viral pneumonia, and COVID-19 classification | ResNet50 | 98.18 | 98.14 | | |
Khan et al. | deep learning | CXR | 1203 normal, 660 bacterial pneumonia, and 931 viral pneumonia cases | Small prepared dataset; given more data, the proposed model could achieve better results with minimal pre-processing of the data | viral pneumonia, COVID-19, bacterial pneumonia, and normal classification / normal, COVID-19, and pneumonia classification | CoroNet: pretrained Xception convolutional network | 89.6/95.0 | | | |
Hemdan et al. | deep learning | CXR | 25 normal cases and 25 positive COVID-19 images | The model is limited to classifying the input chest X-ray image into only two classes, either normal or COVID-19. Another limitation is the small number of patients | COVID-19 detection | InceptionV3, MobileNetV2, VGG19, DenseNet201, Inception-ResNetV2, ResNetV2, and Xception models | 90 | 83 | | |
Sethy and Behera | deep learning and machine learning | CXR | 25 normal cases and 25 positive COVID-19 images | The model is limited to classifying the input chest X-ray image into only two classes, either normal or COVID-19 | detecting COVID-19 (ignoring SARS, MERS and ARDS) | deep learning for feature extraction and support vector machine (SVM) for classification | 95.38 | |||
Ouchicha et al. | deep learning | CXR | 219 COVID-19, 1341 normal, and 1345 viral pneumonia images | The model has been trained on a small dataset of COVID-19, viral pneumonia, and normal cases from a publicly available database | classification of normal, COVID-19, and pneumonia cases | model capturing local and global features of CXR using two parallel layers with various kernel sizes | 97.2 | | | |
Gozes et al. | deep learning | CT | 106 COVID-19 chest CT scans and 99 normal ones | Patient number is low | COVID-19 versus no COVID-19 | robust 2D and 3D deep learning models | | | | 0.948 |
Wang et al. [63] | deep learning | CT | 740 COVID-19-negative and 325 COVID-19-positive images | Sample size was relatively small | COVID-19 versus no COVID-19 | GoogLeNet Inception v3 convolutional neural network | 89.5 | 88 | | |
Li et al. | deep learning | CT | 1292 with COVID-19, 1735 with community-acquired pneumonia, and 1325 with non-pneumonia abnormalities | The model is limited to classifying the input chest CT image into only two classes, either COVID-19 or non-COVID-19 | COVID-19 versus no COVID-19 | ResNet50 | 90 | | | 0.96 |
Ko et al. | deep learning | CT | 1194 chest CT COVID-19 images and 1357 chest CT images with non-COVID-19 pneumonia | The model is limited to classifying the input chest CT image into only two classes, either COVID-19 or non-COVID-19 pneumonia | classification of COVID-19 patients | FCONet, developed by transfer learning using one of four state-of-the-art pretrained deep learning models (VGG16, ResNet-50, Inception-v3, or Xception); the approach is sketched after the table | 99.87 | 100 | | |
Nguyen et al. | deep learning | CT | 101 with COVID-19, 118 with common pneumonia, and 118 with non-pneumonia abnormalities | Patient number is low for three-class classification | normal, COVID-19, and pneumonia classification | convolutional neural network | 87 | | | 0.83 |
Nguyen et al. | deep learning | CT | 1544 with COVID-19, 1556 with common pneumonia, and 118 with non-pneumonia abnormalities | | | | 97 | | | 0.99 |
Nguyen et al. | deep learning | CT | 281 with COVID-19 and 1068 with non-pneumonia abnormalities | | | | 86 | | | 0.87 |
Zhang et al. [64] | deep learning | CT | 406 clear COVID-19-positive lung CT images. The marked areas in the mask images are 0 = "ground glass opacity", 1 = "consolidations", 2 = "lungs other", and 3 = "background" | The complexity of the model and the number of patients | segmentation of ground-glass opacity lesions in COVID-19 lung CT images | COVSeg-NET model, based on a fully convolutional neural network structure that mainly includes convolutional layers, nonlinear activation functions, max-pooling layers, batch normalization layers, merge layers, a flattening layer, and a sigmoid layer | 100 | | | |
Song et al. | deep learning | CT | A total of 88 patients diagnosed with COVID-19, 101 patients infected with bacterial pneumonia, and 86 healthy persons | Patient number is low for a three-class classification | normal versus COVID-19 classification / discriminating COVID-19 patients from others | ResNet50 | 96/86 | | | 0.99/0.95 |
Wang et al. [65] | machine learning | CT | A total of 1051 patients with RT-PCR-confirmed COVID-19 and chest CT were included in this study | Patient selection bias; retrospective and multi-institutional nature of the study | prediction of COVID-19 progression using CT imaging and clinical data | | 78 | 80 | | |
Xu et al. | deep learning | CT | A total of 618 CT samples were collected: 219 samples from 110 patients with COVID-19, 224 samples from 224 patients with influenza-A viral pneumonia (IAVP), and 175 samples from healthy people | Patient selection bias; patient number is low | early screening model to distinguish COVID-19 from IAVP and healthy cases using pulmonary CT images | 3D deep learning network consisting of several stages, including pre-processing, candidate region segmentation, and classification | 86.7 | | | |
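
Several of the CXR entries above (Sethy et al.; Sethy and Behera) report a two-stage pipeline in which a pretrained ResNet50 extracts deep features that are then classified by a support vector machine. The sketch below illustrates that general approach under stated assumptions: the image size, linear kernel, train/test split, and the `paths`/`labels` inputs are illustrative choices, not the authors' exact settings.

```python
# Minimal sketch of a deep-feature + SVM pipeline (as reported by Sethy et al.):
# a pretrained ResNet50 turns each chest X-ray into a feature vector, and an
# SVM classifies the vectors as COVID-19 vs. normal. Inputs are assumptions.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# ResNet50 without its classification head; global average pooling yields a
# 2048-dimensional feature vector per image.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))  # resize CXR to ResNet50 input size
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(backbone.predict(x, verbose=0)[0])
    return np.array(feats)

# `paths` and `labels` (1 = COVID-19, 0 = normal) are assumed to be provided.
def train_and_evaluate(paths, labels):
    X = extract_features(paths)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = SVC(kernel="linear")  # linear SVM on the deep features
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```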
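
Other entries in the table, such as Ko et al.'s FCONet, instead fine-tune an entire pretrained backbone (VGG16, ResNet-50, Inception-v3, or Xception) for a binary COVID-19 decision. The following sketch shows that transfer-learning pattern in outline; the ResNet50 choice, head layers, input size, and optimizer settings are assumptions for illustration rather than the published configuration.

```python
# Minimal transfer-learning sketch in the spirit of FCONet (Ko et al.):
# a pretrained backbone is reused and a small head is trained for binary
# COVID-19 vs. non-COVID-19 classification. Settings are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_transfer_model(input_shape=(224, 224, 3)):
    base = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze pretrained weights initially
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # binary output: COVID-19 vs. non-COVID-19
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Typical usage (datasets of preprocessed CT slices are assumed to exist):
# model = build_transfer_model()
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```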