Expert Systems with Applications. 2021 Mar 16;176:114883. doi: 10.1016/j.eswa.2021.114883

Table 1.

Comparison of various deep learning COVID-19 diagnostic systems.

Each entry lists the authors, dataset(s) used, techniques used, performance measures, and remarks.

Apostolopoulos and Mpesiana (2020)
Dataset(s) used: (a) GitHub repository of Dr. Joseph Cohen, (b) Radiopaedia, (c) Italian Society of Medical and Interventional Radiology (SIRM), (d) Radiological Society of North America (RSNA), (e) Kermany
Techniques used: Transfer learning using CNNs
Performance measures: Sensitivity = 92.85%, Specificity = 98.75%, Accuracy (2-class) = 98.75%, Accuracy (3-class) = 93.48%
Remarks: A multi-class classification with high performance is achieved using VGGNet (a minimal transfer-learning sketch in this spirit is given after this table). The work has a few drawbacks: (1) the number of images from COVID-19 patients is small; (2) some of the pneumonia cases are taken from older records, and no match was found between those records and the images collected from COVID-19 patients.

Wang and Wong (2020)
Dataset(s) used: (a) COVID-19 Image Data Collection, (b) Chest X-ray Dataset Initiative, (c) ActualMed COVID-19 Chest X-ray Dataset Initiative, (d) RSNA Pneumonia Detection Challenge dataset, (e) COVID-19 radiography database
Techniques used: COVID-Net
Performance measures: Accuracy = 93.3%
Remarks: The proposed architecture uses combinations of 1 × 1 convolution blocks, which makes the network lighter, with fewer parameters, and thus reduces its computational complexity. The model provides good accuracy, but there is still scope for improving its sensitivity and Positive Predictive Value (PPV).

Narin et al. (2020)
Dataset(s) used: Dr. Joseph Cohen (GitHub repository)
Techniques used: Deep CNN and ResNet50
Performance measures: Accuracy = 98%, Specificity = 100%, Recall = 96%
Remarks: In this work, deep architectures such as deep CNN, Inception, and Inception-ResNet are used. The main drawback is the small number of images used to build and test the model: deep learning architectures work best with large datasets, yet only 50 images per class were considered. Such a limited number of images may not capture the different variations of virus spread or occurrence.

Sethy and Behera (2020)
Dataset(s) used: GitHub (Dr. Joseph Cohen) and Kaggle (X-ray images of pneumonia)
Techniques used: ResNet50 + SVM
Performance measures: Accuracy = 95.38%, FPR = 95.52%, F1-score = 91.41%, Kappa = 90.76%
Remarks: The model combines ResNet50 features with a Support Vector Machine (SVM) classifier (see the feature-extraction sketch after this table). The obtained accuracy is commendable, but the model was built on very few samples.

Talaat et al. (2020)
Dataset(s) used: (a) GitHub repository of Dr. Joseph Cohen, (b) a dataset collected by researchers from Qatar University (Qatar) and the University of Dhaka (Bangladesh), together with collaborating medical doctors from Pakistan and Malaysia
Techniques used: Deep features and fractional-order marine predators algorithm
Performance measures: Accuracy = 98.7%, F-score = 99.6%
Remarks: The method exhibited very promising results using deep features extracted from an Inception model, with the final decision provided by a tree-based classifier. However, it uses different environments for feature extraction and classification, and it has been tested on very few images; the results may vary when a larger dataset is fed to the model.

Xu et al. (2020)
Dataset(s) used: Private dataset of COVID-19 and Influenza-A pneumonia cases
Techniques used: ResNet and location-based attention
Performance measures: Sensitivity = 98.2%, Specificity = 92.2%, AUC = 0.996
Remarks: The deep learning models provided sufficient performance even with few data samples.

Oh et al. (2020)
Dataset(s) used: (a) Japanese Society of Radiological Technology (JSRT), (b) chest posteroanterior (PA) radiographs collected from 14 institutions, including normal and lung-nodule cases, (c) SCR database, (d) Montgomery County (MC) dataset collected by the U.S. National Library of Medicine (USNLM)
Techniques used: Patch-based CNN
Performance measures: Accuracy = 93.3%
Remarks: The authors proposed a solution for training deep neural networks on limited training data, with thoracic lung and chest radiographs collected from multiple sources. The limitation of this method lies in the performance of the proposed system in terms of precision, recall, and accuracy.

Abbas et al. (2020)
Dataset(s) used: (a) Japanese Society of Radiological Technology (JSRT), (b) Dr. Joseph Cohen GitHub repository (COVID-19 and SARS images)
Techniques used: Decompose, Transfer, and Compose (DeTraC)
Performance measures: Accuracy = 95.12%, Specificity = 91.87%, Sensitivity = 97.91%
Remarks: The model provided good performance results. The limited-data issue is handled by a data augmentation step; however, augmenting X-ray images may not be a proper solution for scarce data, as the location of the virus spread may not be identified correctly. To overcome this problem, only frontal chest X-ray images are selected for further processing in our work.

Li et al. (2020)
Dataset(s) used: (a) Radiological Society of North America (RSNA) pneumonia detection challenge, (b) GitHub (Dr. Joseph Cohen) and Kaggle (X-ray images of pneumonia)
Techniques used: COVID-MobileXpert
Performance measures: Accuracy = 93.5%
Remarks: The model takes a noisy X-ray snapshot as input so that quick screening can be performed to identify the presence of COVID-19. A DenseNet-121 architecture is employed to pre-train and fine-tune the network, and lightweight CNNs such as MobileNetV2, SqueezeNet, and ShuffleNetV2 are used for on-device COVID-19 screening.

Ahuja et al. (2020)
Dataset(s) used: A database containing 349 COVID-19-positive CT images from 216 patients and 397 CT images of non-COVID patients
Techniques used: Data augmentation using stationary wavelets, pre-trained CNN models, abnormality localization
Performance measures: Testing accuracy = 99.4%
Remarks: In this method, abnormality localization is implemented along with COVID-19 detection. The results obtained are promising, and the use of CT images provided better visibility than X-ray images.
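
Several of the works summarized above (e.g., Apostolopoulos and Mpesiana, 2020; Narin et al., 2020) rely on transfer learning from an ImageNet-pretrained CNN. The sketch below illustrates that general recipe only, not any specific author's pipeline: a frozen VGG16 backbone with a small classification head for a three-class (COVID-19 / pneumonia / normal) chest X-ray problem. The directory name "xray_data", the image size, and the training settings are illustrative assumptions.

```python
# Minimal transfer-learning sketch (illustrative only, not the authors' exact pipeline).
# Assumes chest X-rays are arranged as xray_data/{covid,pneumonia,normal}/*.png.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 3  # COVID-19, other pneumonia, normal

train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_data", image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained VGG16 backbone, frozen so that only the new head is trained.
backbone = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
backbone.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),          # simple pixel normalization for the sketch
    backbone,                              # fixed convolutional feature extractor
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Freezing the backbone is one common way to cope with the small COVID-19 image counts noted in the remarks above, since only the few thousand parameters of the new head are fitted to the limited data.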
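
Sethy and Behera (2020) pair deep features with a classical classifier. A minimal sketch of that general idea is shown below, using an ImageNet-pretrained ResNet50 as a fixed feature extractor and a linear SVM on top; the file names, array shapes, and train/test split are hypothetical and used only for illustration.

```python
# Deep-feature extraction + SVM sketch (illustrative; data files and split are assumed).
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical preprocessed arrays: `images` of shape (N, 224, 224, 3),
# `labels` of shape (N,) with 0 = non-COVID, 1 = COVID-19.
images = np.load("xray_images.npy")
labels = np.load("xray_labels.npy")

# ResNet50 without its classification head acts as a fixed feature extractor;
# global average pooling yields one 2048-dimensional vector per image.
extractor = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
features = extractor.predict(
    tf.keras.applications.resnet50.preprocess_input(images.copy()),
    batch_size=32)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

# Linear SVM trained on the extracted deep features.
svm = SVC(kernel="linear")
svm.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```

Because the CNN weights stay fixed, only the SVM is trained, which is why this style of pipeline is often reported on relatively small X-ray collections such as those listed in the table.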