Abstract
COVID-19, or novel coronavirus disease, which has been declared a worldwide pandemic, first broke out in Wuhan, a large city in China. More than two hundred countries around the world have already been affected by this severe virus, which spreads through human contact. Moreover, the symptoms of the novel coronavirus are quite similar to those of the general seasonal flu. Screening of infected patients is considered a critical step in the fight against COVID-19. As no distinctive COVID-19 detection tools are available, the need for supporting diagnostic tools has increased. It is therefore highly relevant to recognize positive cases as early as possible to avoid further spreading of this epidemic. Several methods exist to detect COVID-19 positive patients, typically performed on respiratory samples; among them, a critical approach for treatment is radiologic imaging, or X-Ray imaging. Recent findings from X-Ray imaging techniques suggest that such images contain relevant information about the SARS-CoV-2 virus. Application of Deep Neural Network (DNN) techniques coupled with radiological imaging can be helpful in the accurate identification of this disease and can also support overcoming the shortage of trained physicians in remote communities. In this article, we introduce a VGG-16 (Visual Geometry Group, also called OxfordNet) based Faster Regions with Convolutional Neural Networks (Faster R–CNN) framework to detect COVID-19 patients from chest X-Ray images using an available open-source dataset. Our proposed approach provides a classification accuracy of 97.36%, a sensitivity of 97.65%, and a precision of 99.28%. We therefore believe this proposed method may assist health professionals in validating their initial assessment of suspected COVID-19 patients.
1. Introduction
The outbreak of the novel coronavirus, known as COVID-19, has created an alarming situation all over the world. Coronaviruses are enveloped, non-segmented, positive-sense RNA viruses belonging to the family Coronaviridae and the order Nidovirales and are broadly distributed in humans and other mammals [1]. The viruses responsible for Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS) also belong to the coronavirus family [2,3]. The outbreak of COVID-19 started in Wuhan, a large city in Central China, in December 2019. The virus causes Pneumonia with symptoms such as fever, dry cough, and fatigue; in severe cases, the patient has difficulty breathing, and some patients also experience headaches, nausea, or vomiting. It spreads from person to person through droplets from the cough or sneeze of an infected person [4]. An uninfected person who touches these droplets and then touches their face, especially the eyes, nose, or mouth, without washing their hands can also be infected by the novel coronavirus. As of May 8, 2020, according to the situation report of the World Health Organization (WHO), 210 countries have been affected by the novel coronavirus, and on March 11, 2020, it was declared a pandemic by the WHO. Reverse Transcription Polymerase Chain Reaction (RT-PCR) is one of the foremost methods of testing for coronavirus. This test is performed on respiratory samples, and the results are generated within a few hours to two days. Antibody tests are also used to detect COVID-19, where blood samples are used to identify the virus. Health professionals also use chest X-Ray scans occasionally to assess lung pathology. In Wuhan, a study performed on computerized tomography (CT) image reports found that the sensitivity of CT images for COVID-19 infection was about 98%, compared with an RT-PCR sensitivity of 71% [5]. At the early stage of this global pandemic, Chinese clinical centers had insufficient test kits, so doctors recommended diagnosis based only on clinical chest CT results [6,7]. Countries such as Turkey have also used CT images because of an insufficient number of test kits. Some studies state that lab reports combined with clinical image features are even better for early detection of COVID-19 [[8], [9], [10], [11]].
Moreover, health experts have also noticed changes in X-Ray images before symptoms become visible [12]. Deep Neural Network techniques have been applied successfully to many problems in recent times, such as skin cancer classification [13,14], breast cancer detection [15,16], brain disease classification [17], pneumonia detection in chest X-Rays [18], and lung segmentation [[19], [20], [21]]. Therefore, precise, accurate, and fast intelligent detection models may help to cope with the rapid rise of the COVID-19 epidemic. In this article, we propose a novel framework to detect COVID-19 infection from X-Ray images using the Faster Region-based Convolutional Neural Network (Faster R–CNN) deep learning approach. Based on the available benchmark dataset COVIDx, we examined X-Ray images of COVID-19 patients along with those of patients with other diseases and of healthy persons. For feature extraction, we used the VGG-16 network as the backbone of our model.
2. Relevant work
Deep learning is a popular area of research in the field of artificial intelligence. It enables end-to-end modeling that delivers promising results from input data without the need for manual feature extraction. The use of machine learning methods for diagnostics in the medical field has recently gained popularity as a complementary tool for doctors. A molecular diagnosis method for the novel coronavirus was proposed in [22] by developing two one-step quantitative real-time reverse-transcription PCR assays for detecting regions of the viral genome. In another exploration, the authors of [23] analyzed the epidemiological and clinical features of the novel coronavirus, including the records of all COVID-19 infected patients reported by the Chinese Center for Disease Control and Prevention until January 26, 2020. In recent times, radiological images have been used extensively to detect COVID-19 confirmed cases. Wang et al. [24] suggested a deep model named COVID-Net, an open-source deep neural network specially created for COVID-19 patient recognition, which achieved an accuracy of 92.4% in classifying the normal, non-COVID Pneumonia, and COVID-19 Pneumonia classes; the same authors also curated the COVIDx dataset to detect COVID-19 patients from radiography images. Sethy et al. [25] classified the features obtained from different CNN models with a Support Vector Machine (SVM) classifier using X-Ray images. In another study, a ResNet50-based model [26] was proposed by Narin et al., and it achieved a COVID-19 detection accuracy of 98%. Their dataset was composed of images of fifty COVID-19 patients taken from the open-source GitHub repository shared by Dr. Joseph Cohen and another fifty images of healthy patients from the Kaggle repository of Chest X-Ray Images (Pneumonia).
The results were obtained using a five-fold cross-validation technique, with 98% accuracy for the ResNet50 model, 97% for the Inception-V3 model, and 87% for Inception-ResNetV2. In terms of COVID-19 patient detection using X-Ray images, the deep model of Apostolopoulos et al. [27] reached a success rate of 98.75% for two classes and 93.48% for three classes. The authors experimented with a collection of 1427 X-Ray images, including 224 images with confirmed COVID-19 disease, 700 images with confirmed common bacterial Pneumonia, and 504 images of normal conditions. Their experimental results suggested that the Deep Learning approach with X-Ray imaging may extract significant biomarkers related to the COVID-19 disease. By combining multiple CNN models, Hemdan et al. [28] proposed a COVIDX-Net model capable of detecting confirmed COVID-19 cases. Their study was validated on 50 chest X-Ray images with 25 confirmed COVID-19 cases and included seven different deep convolutional neural network architectures, such as a modified Visual Geometry Group Network (VGG-19) and the second version of Google MobileNet. Each deep neural network model analyzed the normalized intensities of the X-Ray image to classify the patient status as either a negative or a positive COVID-19 case. A transfer learning-based framework was advised by Kermany et al. [29] to identify medical diagnoses and treatable diseases using image-based deep learning; it can effectively classify images for macular degeneration and diabetic retinopathy and accurately distinguish bacterial from viral Pneumonia on chest X-Ray images. Pereira et al. [30] proposed a classification schema considering two perspectives: a) multiclass classification and b) hierarchical classification, since Pneumonia can be structured as a hierarchy. The authors composed a database named RYDLS-20 containing CXR images of Pneumonia caused by different pathogens as well as CXR images of healthy lungs. The proposed approach, tested on RYDLS-20, achieved a macro-average F1-score of 0.65 using the multiclass approach and an F1-score of 0.89 for COVID-19 identification in the hierarchical classification scenario. In another study of 10 convolutional neural networks on CT images, Ardakani et al. [31] claimed a rapid method for COVID-19 diagnosis using an artificial-intelligence-based technique. A total of 1020 computed tomography (CT) slices from 108 patients with laboratory-proven COVID-19 (the COVID-19 group) and 86 patients with other atypical and viral Pneumonia diseases (the non-COVID-19 group) were included. AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception were used to distinguish between COVID-19 and non-COVID-19 CT images, and ResNet-101 achieved the highest accuracy of 99.51%. In a recent study, Hall et al. [32] worked with 135 chest X-Rays of patients diagnosed with COVID-19 and a set of 320 chest X-Rays of patients diagnosed with viral Pneumonia, achieving an overall accuracy of 91.24%. Through this experiment, the authors examined whether chest X-Ray images can be considered a method for diagnosing COVID-19 cases. Pre-trained ResNet50 and VGG-16, along with their own CNN, were trained on a balanced set of COVID-19 and Pneumonia chest X-Rays.
3. Methodology
A Convolutional Neural Network (CNN) is a deep neural network-based learning architecture for processing massive amounts of data, and nowadays it is widely applied to medical imaging analysis. CNNs are used extensively over other machine learning methods because they do not need any manual feature extraction and do not require specific segmentation. Fig. 1(A) shows the CNN architecture design, which consists of several blocks: an input layer, hidden layers (convolutional and pooling layers), a fully connected layer, and an output layer. For anchor selection, the anchors are divided into two categories (positive and negative) using the intersection-over-union (IoU) overlap ratio between the ground-truth box and the anchor as the classification index. The IoU overlap ratio is defined in Equation (1):

$$\mathrm{IoU} = \frac{\mathrm{area}(A \cap G)}{\mathrm{area}(A \cup G)} \tag{1}$$

where $A$ denotes the anchor box and $G$ denotes the ground-truth box.
Fig. 1. (A) Schematic diagram of a convolutional neural network architecture for X-Ray image classification; (B) pipeline structure of the Faster Region-based Convolutional Neural Network (Faster R–CNN); (C) basic structure of the Convolution, Pooling, and Dense layers of the VGG-16 network architecture.
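As a concrete illustration of Equation (1), the following minimal Python sketch computes the IoU overlap between an anchor and a ground-truth box given as (x1, y1, x2, y2) corner coordinates; the box format and the function name are illustrative assumptions, not part of the published implementation.

```python
def iou(anchor, gt_box):
    """Intersection-over-Union between two boxes given as (x1, y1, x2, y2).

    Illustrative helper assuming corner-coordinate boxes; result lies in [0, 1].
    """
    # Coordinates of the intersection rectangle
    ix1, iy1 = max(anchor[0], gt_box[0]), max(anchor[1], gt_box[1])
    ix2, iy2 = min(anchor[2], gt_box[2]), min(anchor[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (anchor[2] - anchor[0]) * (anchor[3] - anchor[1])
    area_g = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    union = area_a + area_g - inter
    return inter / union if union > 0 else 0.0

# Anchors whose IoU with a ground-truth box exceeds (or falls below) chosen
# thresholds would be labelled positive (or negative) for training.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14
```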
Using VGG-16, we minimize an objective function following the multi-task loss in Faster R–CNN, and the loss function is derived in Equation (2).
$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^{*}) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^{*}\, L_{reg}(t_i, t_i^{*}) \tag{2}$$
Here, $p_i$ is the predicted probability that anchor $i$ contains an object and $t_i$ is the vector of its predicted bounding-box coordinates, while $p_i^{*}$ and $t_i^{*}$ denote the corresponding ground-truth label and box. $L_{cls}$ is the classification loss function and $L_{reg}$ is the regression loss function. $N_{cls}$ and $N_{reg}$ are the normalization coefficients of the classification loss $L_{cls}$ and the regression loss $L_{reg}$, respectively, and $\lambda$ is the weight parameter balancing $L_{cls}$ and $L_{reg}$. The classification loss $L_{cls}$ is the logarithmic loss over the two categories (COVID-19 and non-COVID), and it is defined in Equation (3):
$$L_{cls}(p_i, p_i^{*}) = -\left[\, p_i^{*}\log p_i + (1 - p_i^{*})\log(1 - p_i) \,\right] \tag{3}$$
The regression loss function is defined as in Equation (4):

$$L_{reg}(t_i, t_i^{*}) = R(t_i - t_i^{*}) \tag{4}$$
Here, $R$ is the robust loss function (smooth $L_1$, following the Faster R–CNN formulation) defined in Equation (5):

$$R(x) = \begin{cases} 0.5\,x^{2} & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases} \tag{5}$$
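To make Equations (2)–(5) concrete, the following minimal NumPy sketch evaluates the classification, regression, and combined multi-task losses for a small batch of anchors. The variable names and the choice of λ = 10 follow common Faster R–CNN usage and are illustrative assumptions rather than the exact values of our implementation.

```python
import numpy as np

def smooth_l1(x):
    # Robust loss R of Equation (5)
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def multitask_loss(p, p_star, t, t_star, lam=10.0):
    """Faster R-CNN style multi-task loss (Equations (2)-(4)).

    p      : predicted object probabilities, shape (N,)
    p_star : ground-truth labels (1 = positive anchor, 0 = negative), shape (N,)
    t      : predicted box offsets, shape (N, 4)
    t_star : ground-truth box offsets, shape (N, 4)
    lam    : balancing weight between the two terms (illustrative value)
    """
    eps = 1e-7  # numerical safety for the logarithm
    # Equation (3): binary log loss over the two categories
    l_cls = -(p_star * np.log(p + eps) + (1 - p_star) * np.log(1 - p + eps))
    # Equation (4): smooth-L1 regression loss, counted only for positive anchors
    l_reg = smooth_l1(t - t_star).sum(axis=1) * p_star
    n_cls, n_reg = len(p), max(p_star.sum(), 1.0)
    return l_cls.sum() / n_cls + lam * l_reg.sum() / n_reg

# Toy batch of three anchors (two positive, one negative)
p = np.array([0.9, 0.2, 0.7])
p_star = np.array([1.0, 0.0, 1.0])
t = np.random.randn(3, 4) * 0.1
t_star = np.zeros((3, 4))
print(multitask_loss(p, p_star, t, t_star))
```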
Regarding model development, rather than creating a model from scratch, we built it according to our sample input requirements. We used layers and filters similar to those of the original Faster R–CNN architecture and gradually increased the number of filters. In addition, the Faster R–CNN baseline is essential to consider when analyzing our proposed model and algorithm. Our proposed framework consists of 24 convolutional layers, followed by two fully connected layers and six pooling layers. These are typical CNN layers with different filter numbers, sizes, and stride values.
Besides, several modified versions of CNN are available for model development, such as R–CNN, Fast R–CNN, and Faster R–CNN. In this exploration, we used the Faster Region-based Convolutional Neural Network (Faster R–CNN); Fig. 1(B) shows the architecture of Faster R–CNN. Furthermore, we used the VGG-16 network as the foundation for model classification and combined it with Faster R–CNN so that the model can be used in a real-time system. Finally, a deep model with a large number of layers is essential for extracting the properties of an image in a real-time detection system; for that reason, the model classification structure is capable of grasping and learning small differences. An illustration of the proposed model used in this study is shown in Fig. 2.
Fig. 2. Workflow representation of the proposed framework.
3.1. Data preparation
For dataset design, we followed a two-step procedure for data preparation. Initially, we used the X-Ray images of COVID-19 patients that are available as open-source data. We then developed a custom dataset for training and evaluation, comprising a total of 5450 chest radiography images across 2500 patient cases. To prepare the custom dataset for our experiment, we combined and modified two different publicly available datasets: the COVID chest X-Ray dataset curated by Dr. Joseph Cohen, a postdoctoral fellow at the University of Montreal [33], and the RSNA Pneumonia Detection Challenge dataset from Kaggle [34]. In the later phase of this exploration, we moved to a newly available dataset, named COVIDx [24], developed specifically for COVID-19 positive case detection using chest X-Ray images. The number of image samples used for both the custom and the COVIDx datasets is shown in Table 1. For the custom dataset, we used 5450 sample images, whereas for the COVIDx dataset we used 13,800 images.
Table 1.
Different data representations of the multiple datasets used in this research.
The COVIDx dataset is updated continuously with images shared by researchers from different regions. As of May 7, 2020, it contained 183 X-Ray images diagnosed with COVID-19, 8066 normal cases, and 5551 cases identified as non-COVID Pneumonia. By merging the 'Normal' and 'non-COVID Pneumonia' categories into a single 'non-COVID' class, we designed it as a binary-class dataset.
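As a minimal sketch of this binary re-labelling step, the snippet below maps the three original categories onto the two classes used in this work; the record format and the label strings are illustrative assumptions, not the actual COVIDx metadata schema.

```python
# Merge the three original categories into the binary scheme used here.
BINARY_LABEL = {
    "COVID-19": "COVID-19",
    "Normal": "non-COVID",
    "non-COVID Pneumonia": "non-COVID",
}

def to_binary(samples):
    """Map each hypothetical (image_path, original_label) record to two classes."""
    return [(path, BINARY_LABEL[label]) for path, label in samples]

print(to_binary([("img1.png", "Normal"), ("img2.png", "COVID-19")]))
```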
3.2. Model building and training
A noticeable fact is that very few COVID-19 patients have associated X-Ray images, which leads to a scarcity of available X-Ray data. For model building and training, we used Google's TensorFlow library and VGG-16 for high-performance numerical computation. Regarding cross-validation, our study used the K-fold cross-validation method (K = 10) with the support of leave-one-out cross-validation. Algorithm 1 provides insight into the working procedure of K-fold cross-validation.
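Algorithm 1 itself is not reproduced here; the following minimal sketch illustrates the K-fold procedure it describes using scikit-learn's KFold, with a hypothetical build_model() factory standing in for the network used in this work. The factory, the fixed random seed, and the assumption that the model is compiled with an accuracy metric are illustrative only.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(images, labels, build_model, k=10):
    """Train and evaluate a freshly built model on each of k folds.

    images, labels : NumPy arrays indexed by sample
    build_model    : placeholder returning a compiled Keras model (accuracy metric)
    """
    scores = []
    splitter = KFold(n_splits=k, shuffle=True, random_state=42)
    for train_idx, test_idx in splitter.split(images):
        model = build_model()
        model.fit(images[train_idx], labels[train_idx],
                  epochs=100, batch_size=8, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        scores.append(acc)
    return np.mean(scores), np.std(scores)
```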
Our proposed framework for Faster R–CNN based COVID-19 positive case detection was developed and trained with the support of Google's Colaboratory. The detection experiment was executed on a Google cloud server GPU with 12 GB of RAM, using TensorFlow version 2.1.0. The proposed model was trained with a learning rate of 2e-5, a batch size of 8, and 100 epochs. The complete framework was built and evaluated using the Keras deep learning library with the TensorFlow backend. All experiments and data analysis were carried out in the Machine Intelligence Lab (MINTEL) of Dhaka International University.
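A minimal Keras sketch of this training configuration is given below, applied to a simple VGG-16 classification head for the binary task. The head size, the momentum value of 0.9, and the variable names are illustrative assumptions, and the region-proposal and detection branches of the full Faster R–CNN pipeline are omitted for brevity.

```python
import tensorflow as tf

# Reported settings: learning rate 2e-5, momentum optimizer, cross-entropy loss,
# batch size 8, 100 epochs. The dense head below is an illustrative stand-in.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
x = tf.keras.layers.Flatten()(base.output)
x = tf.keras.layers.Dense(256, activation="relu")(x)     # assumed head size
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # COVID-19 vs non-COVID
model = tf.keras.Model(base.input, out)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=2e-5, momentum=0.9),
    loss="binary_crossentropy",
    metrics=["accuracy"])

# model.fit(train_images, train_labels, batch_size=8, epochs=100,
#           validation_data=(val_images, val_labels))
```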
3.3. Proposed architecture
According to the proposed architecture (Fig. 3), we used the output of the conv5_3 layer of VGG-16 as the feature map for the subsequent network. Therefore, we combined the automated features extracted by the conv3_3, conv4_3, and conv5_3 layers with additional elements. In general, the feature maps generated by each convolution layer differ in size; we kept the size of the conv4_3 feature map unchanged and changed the sizes of the conv3_3 and conv5_3 feature maps. Pooling by subsampling is applied to the conv3_3 feature map, and upsampling is used to increase the resolution of conv5_3 so that both become consistent with conv4_3. Lastly, the merged map is obtained by adding the subsampled output of conv3_3, the upsampled output of conv5_3, and conv4_3. Before fusing the three convolutional feature maps, we first applied local response normalization so that the activation values of the feature maps remain on the same scale.
Fig. 3. Working procedure of the proposed architecture (Faster R–CNN).
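A hedged sketch of this three-layer feature fusion is shown below, using the layer names of the Keras VGG-16 implementation (block3_conv3, block4_conv3, block5_conv3). The 1 × 1 convolution that matches the channel count of conv3_3 to the other maps is an added assumption required for element-wise addition, and the normalization call merely stands in for the local response normalization step described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_shape=(224, 224, 3))
c3 = vgg.get_layer("block3_conv3").output  # 56 x 56 x 256
c4 = vgg.get_layer("block4_conv3").output  # 28 x 28 x 512
c5 = vgg.get_layer("block5_conv3").output  # 14 x 14 x 512

lrn = layers.Lambda(tf.nn.local_response_normalization)  # normalize activations

# Subsample conv3_3 to the conv4_3 resolution and match its channel count
# (the 1x1 convolution is an added assumption, needed for element-wise addition).
c3_down = layers.MaxPooling2D(pool_size=2)(lrn(c3))
c3_down = layers.Conv2D(512, kernel_size=1, padding="same")(c3_down)

# Upsample conv5_3 to the conv4_3 resolution.
c5_up = layers.UpSampling2D(size=2)(lrn(c5))

# Merged feature map that would feed the region proposal stage of Faster R-CNN.
merged = layers.Add()([c3_down, lrn(c4), c5_up])
feature_extractor = tf.keras.Model(vgg.input, merged)
feature_extractor.summary()
```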
The input size (224 × 224 × 3) for VGG-16 classification has to remain fixed because the final block of the network uses Fully-Connected (FC) layers, which require a fixed-length input. This is achieved by flattening the output of the last convolutional layer into a rank-1 tensor before the FC layers. The details and parameters (type, shape, and input size) of each layer of our proposed model are shown in Table 2. For the experiment, we used a learning rate of 2e-5, a momentum optimizer for weight updates, and the cross-entropy loss function.
Table 2.
A standard convolution is represented by “Conv”. “t1” and “t2” represent convolution strides of 1 × 1 and 2 × 2, respectively. “dw” stands for depthwise separable convolution, which is made up of two layers: (i) depthwise convolutions apply a single filter to every input channel, and (ii) a 1 × 1 pointwise convolution creates a linear combination of the outputs of the depthwise layer.
4. Results
In this section, we discuss the loss observations, followed by the results of model validation. We performed experiments to detect and classify COVID-19 confirmed cases using X-Ray images and trained the model on two classes: non-COVID and COVID-19. The model was evaluated using a 10-fold cross-validation technique: 90% of the X-Ray images were used for training and the remaining 10% for testing or validation. Moreover, the loss function is essential for understanding the quality of the predictions. From Fig. 4, we observe that the training loss and validation loss decrease gradually over the 100 epochs.
Fig. 4. Loss/accuracy curves against the number of epochs. Training loss decreases in every epoch and converges to a minimum value after 100 epochs; validation loss also decreases, while the accuracy of COVID-19 detection increases at every epoch stage up to 100 epochs.
It is also noticeable that both the training and validation loss values are significantly higher in the initial epochs because of the number of COVID-19 samples in that specific class (Fig. 4). The effectiveness of the model is assessed through testing, cross-validation, and direct image-input testing. In order to evaluate the performance of the model, the complete trained model is validated on the same dataset using K-fold cross-validation.
The time comparison is obtained by measuring the time the model takes to complete detection and classification. Based on the four parameters of the confusion matrix (True Positive, False Positive, True Negative, and False Negative), our proposed architecture predicted the different samples of chest X-Ray images (Fig. 5). The model, however, made incorrect predictions mostly on poor-quality images, and it often predicted patients with Pneumonia as COVID-19 because the two share similar image features.
Fig. 5. Illustration of different sample images. Panels (A, B, D, E, and F) show cases in which the model predicts the sample as non-COVID and the actual class is also non-COVID; only panel (C) predicts the sample as non-COVID while the actual class is COVID-19. Panel (G) depicts the confusion matrix generated with the 10-fold cross-validation method.
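Since the reported metrics all follow from the four confusion-matrix counts mentioned above, the short sketch below shows how accuracy, sensitivity, specificity, precision, and F1-score are derived from them. The counts in the example call are illustrative only and are not the values of Fig. 5(G).

```python
def metrics_from_confusion(tp, fp, tn, fn):
    """Derive the reported metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall for the COVID-19 class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Illustrative counts only (not the values reported in this study).
print(metrics_from_confusion(tp=166, fp=1, tn=1200, fn=4))
```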
The performance metrics of the proposed model and the 10-fold cross-validation results for the different metrics, together with their averages, are tabulated in Table 3. The developed model provides an average accuracy of 97.36%, with average sensitivity, specificity, and F1-score values of 97.65%, 95.48%, and 98.46%, respectively. It also achieves a precision of more than 99.00%.
Table 3.
10-fold cross-validation approach and results of the different performance metrics.
5. Discussion
In this section, a review of deep learning approaches for detecting COVID-19 positive cases is provided for the articles listed in Table 4. Moreover, we compare our proposed method with other similar deep learning approaches, and a discussion table (Table 4) has been designed to understand the impact of our framework relative to other studies. From Table 4, it is evident that the deep CNN ResNet-50 based approach [26] produces slightly higher (~0.64%) detection accuracy than our proposed method.
Table 4.
Overview of papers using deep learning approaches with their working procedure and performance metrics for COVID-19 case detection.
COVID-Net [24] (chest X-Ray): A deep learning-based model trained on a total of 16,756 X-Ray images for multiclass (three-class) classification; the work also proposed a dedicated dataset of COVID-19 X-Ray images named COVIDx. Accuracy: COVID-Net achieved 92.40% for the classification of COVID-19 positive cases. Sensitivity: 91.0% for COVID-19 cases. Positive predictive value: 98.9%.

CNN features with SVM classifier [25] (chest X-Ray): The proposed model classified the features obtained from various CNN (Convolutional Neural Network) models with an SVM (Support Vector Machine) classifier using X-Ray images (25 COVID-19 positive and 25 normal). The study claims that ResNet50 with the SVM classifier produces the best results. Accuracy: 95.38% for COVID-19 case detection. Sensitivity: 97.29%.

ResNet50, InceptionV3, and InceptionResNetV2 [26] (chest X-Ray): This study used three different CNN models with 50 open-access COVID-19 X-Ray images from Joseph Cohen's repository and 50 normal images from a Kaggle repository; the non-COVID images used were of children aged between 1 and 5 years. Accuracy: 98% from the proposed model. Sensitivity (recall): 96%. Specificity: 100% in detecting COVID-19 patients.

Transfer learning with VGG-19 [27] (chest X-Ray): This study used 224 confirmed COVID-19, 700 Pneumonia, and 504 normal radiology images and performed both binary and 3-class classification using a transfer learning method. Accuracy: 98% for the binary problem and 93% for the 3-class problem. Sensitivity: 92%. Specificity: 98% for the VGG-19 based approach.

COVIDX-Net [28] (chest X-Ray): This study deployed deep learning models to diagnose COVID-19 patients using chest X-Rays, proposing a COVIDX-Net model that includes seven CNN models evaluated on 50 chest X-Ray images (25 COVID-19 positive, 25 normal). Accuracy: the highest accuracy obtained among the seven CNN models is 90%. Precision: the highest precision achieved is 100%. Sensitivity: the highest sensitivity obtained is also 100%.

Modified Inception (M-Inception) (CT): The authors used the modified Inception deep model on CT images containing 195 COVID-19 positive and 258 COVID-19 negative images. Accuracy: 82.90%. Sensitivity: 81%. Specificity: 84%.

Three-dimensional deep CNN (CT): This method proposed a three-dimensional deep CNN model to detect COVID-19 from CT images; the dataset contains 313 COVID-19 positive and 229 non-COVID images. Accuracy: 90.80%. Sensitivity: 90.70%. Specificity: 91.10% in detecting COVID-19 positive cases from CT images.

ResNet (CT): This study detected COVID-19 positive cases using ResNet coupled with CT images; the dataset contains 224 viral Pneumonia and 175 healthy images. Accuracy: an average of 86.7% over the CT cases as a whole. Sensitivity: 86.7% in detecting COVID-19 positive cases.

DarkNet-based model (chest X-Ray): This model is based on the DarkNet method and is completely automated, with an end-to-end structure that requires no manual feature extraction. A total of 1125 images (125 COVID-19 positive, 500 Pneumonia, and 500 No-Findings images) were used to evaluate the model. Accuracy: 98.08% and 87.02% for the binary and three-class problems, respectively. Sensitivity: 85.35% and 95.13% for the binary and three-class problems, respectively. Specificity: 92.18% and 95.3% for the binary and three-class problems, respectively.

Our Proposed Framework (chest X-Ray, Faster R–CNN): A deep learning model to detect COVID-19 cases from chest X-Ray images using a Faster R–CNN model with a 10-fold cross-validation technique, intended as a real-time assessment tool for COVID-19 positive case detection. The dataset contains 183 COVID-19 positive X-Ray images and 13,617 non-COVID X-Ray images. Accuracy: the proposed framework performs binary classification with a mean accuracy of 97.36%. Sensitivity: the mean sensitivity achieved is 97.65%. Specificity: the mean specificity obtained under the 10-fold cross-validation method is 95.48%.
From the observations based on Table 4, most of the datasets contained a small amount of data (limited images for training and testing) for designing and developing the models. In our study, however, we used 183 COVID-19 positive images out of a total of 13,800 images. Another noticeable fact is that the most common techniques used by the authors for model building were based on VGG and ResNet. In our case, we used Faster R–CNN to make our model quicker and more reliable so that it can be utilized as a real-time assessment tool. The primary contribution of this research is that our proposed approach is capable of classifying chest X-Ray images without manual feature extraction. As such, it can assist doctors and radiologists in detecting COVID-19 positive cases and can be used as a real-time diagnosis method in crowded areas such as airports, terminals, shopping malls, supermarkets, and so on.
On the other hand, X-Ray images are highly preferred because of how promptly they are accessible for disease diagnosis, whereas CT images are not as readily available. From the studies, it is observable that CT image-based detection models produce relatively lower accuracy than X-Ray image-based detection models. A comparison table (Table 5) has been further designed, using the same dataset (COVIDx), to compare prominent state-of-the-art deep learning approaches with our proposed framework. We compared our proposed approach with DenseNet, ResNet50, InceptionV3, and AlexNet. After deploying the same dataset on these available deep network approaches, our proposed Faster R–CNN method still exhibits better results, with an accuracy of 97.36% and a precision of 99.29%.
Table 5.
Comparison table of available deep learning approaches with our proposed framework (Faster R–CNN).
6. Conclusion
In this study, we have proposed a deep learning model to detect COVID-19 cases from chest X-Ray images. This automated system can perform binary classification without manual feature extraction, with an accuracy of 97.36%. Moreover, the model is capable of being tested on larger datasets and of working with real-time systems, and it can be helpful in areas where test kits are insufficient. To date, deep-learning-based detection of COVID-19 positive cases from radiology images has not been recognized by the community of medical experts. The proposed framework may nevertheless be employed as a supplementary tool for screening COVID-19 patients in emergency medical support services alongside RT-PCR testing. Therefore, for the initial assessment of COVID-19 patients, this tool can act as an effective medium of diagnosis under the supervision of radiologists and doctors. At this point, we are developing a larger custom dataset with more images of COVID-19 infected cases and extending our current framework to make the model more robust, so that it can be used for detection on both CT and X-Ray images.
Author contribution
SKD and KHS had the idea for and designed the study, had full access to all the data in the study, and take responsibility for the data and the accuracy of the model generation. SKD, KHS, and MR contributed to the writing of the article. MR and TUI contributed to the critical revision of the report. All data preparation and model development were carried out by SKD and KHS. All authors contributed to data acquisition, data analysis, and result validation, and reviewed and approved the final version.
Funding
None.
Ethical approval
Not required.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
None. No funding to declare.
References
1. Huang C., Wang Y., Li X., Ren L., Zhao J., Hu Y., Zhang L., Fan G., Xu J., Gu X., Cheng Z. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506. doi: 10.1016/S0140-6736(20)30183-5.
2. Sohrabi C., Alsafi Z., O'Neill N., Khan M., Kerwan A., Al-Jabir A. World Health Organization declares global emergency: a review of the 2019 novel coronavirus (COVID-19). Int J Surg. 2020 Apr;76:71–76. doi: 10.1016/j.ijsu.2020.02.034.
6. Pan F., Ye T. Time course of lung changes on chest CT during recovery from 2019 novel coronavirus (COVID-19) pneumonia. Radiology. 2020:200370. doi: 10.1148/radiol.2020200370.
7. Bernheim A., Mei X. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020:200463. doi: 10.1148/radiol.2020200463.
8. Long C., Xu H. Diagnosis of the Coronavirus disease (COVID-19): rRT-PCR or CT? Eur J Radiol. 2020 Apr;20(4):384–385. doi: 10.1016/j.ejrad.2020.108961.
9. Lee E.Y., Ng M.Y., Khong P.L. COVID-19 Pneumonia: what has CT taught us? Lancet Infect Dis. 2020. doi: 10.1016/S1473-3099(20)30134-1.
10. Shi H., Han X. Radiological findings from 81 patients with COVID-19 Pneumonia in Wuhan, China: a descriptive study. Lancet Infect Dis. 2020;20(4):425–434. doi: 10.1016/S1473-3099(20)30086-4.
11. Zhao W., Zhong Z., Xie X., Yu Q., Liu J. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: a multicenter study. Am J Roentgenol. 2020:1–6. doi: 10.2214/AJR.20.22976.
12. Li Y., Xia L. Coronavirus Disease 2019 (COVID-19): role of chest CT in diagnosis and management. Am J Roentgenol. 2020:1–7. doi: 10.2214/AJR.20.22954.
13. Chan J.F.W., Yuan S. A familial cluster of Pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet. 2020;395(10223):514–523. doi: 10.1016/S0140-6736(20)30154-9.
14. Esteva A., Kuprel B., Novoa R.A. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–118. doi: 10.1038/nature21056.
15. Codella N.C., Nguyen Q.B., Pankanti S., Gutman D.A., Helba B., Halpern A.C., Smith J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J Res Dev. 2017;61(4/5):5-1.
16. Celik Y., Talo M., Yildirim O., Karabatak M., Acharya U.R. Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recogn Lett. 2020 May;133:232–239.
17. Cruz-Roa A., Basavanhally A. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In: Medical Imaging 2014: Digital Pathology, vol. 9041. International Society for Optics and Photonics; 2014. p. 904103.
18. Talo M., Yildirim O., Baloglu U.B., Aydin G., Acharya U.R. Convolutional neural networks for multiclass brain disease detection using MRI images. Comput Med Imag Graph. 2019;78:101673. doi: 10.1016/j.compmedimag.2019.101673.
19. Rajpurkar P., Irvin J. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225; 2017.
20. Gaál G., Maga B., Lukács A. Attention U-Net based adversarial architectures for chest X-ray lung segmentation. arXiv preprint arXiv:2003.10304; 2020.
21. Souza J.C., Diniz J.O.B., Ferreira J.L., da Silva G.L.F., Silva A.C., de Paiva A.C. An automatic method for lung segmentation and reconstruction in chest X-ray using deep neural networks. Comput Methods Progr Biomed. 2019;177:285–296. doi: 10.1016/j.cmpb.2019.06.005.
23. Zhang N., Wang L., Dang X. Recent advances in the detection of respiratory virus infection in humans. J Med Virol. 2020 Jan. doi: 10.1002/jmv.25674.
24. Wang L., Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images. arXiv preprint arXiv:2003.09871; 2020.
25. Sethy P.K., Behera S.K. Detection of coronavirus disease (COVID-19) based on deep features. 2020.
26. Narin A., Kaya C., Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849; 2020.
27. Apostolopoulos I.D., Bessiana T. COVID-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. arXiv preprint arXiv:2003.11617; 2020.
28. Hemdan E.E.D., Shouman M.A., Karar M.E. COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv preprint arXiv:2003.11055; 2020.
29. Kermany D.S., Goldbaum M. Identifying medical diagnoses and treatable diseases by image-based deep learning. 2018.
30. Pereira R.M., Bertolini D., Teixeira L.O., Silla C.N. Jr., Costa Y.M. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput Methods Progr Biomed. 2020:105532. doi: 10.1016/j.cmpb.2020.105532.
31. Ardakani A.A., Kanafi A.R., Acharya U.R., Khadem N., Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks. Comput Biol Med. 2020:103795. doi: 10.1016/j.compbiomed.2020.103795.
32. Hall L.O., Paul R., Goldgof D.B., Goldgof G.M. Finding COVID-19 from chest X-rays using deep learning on a small dataset. arXiv preprint arXiv:2004.02060; 2020.
33. Cohen J.P., Morrison P., Dao L. COVID-19 image data collection. arXiv preprint arXiv:2003.11597; 2020.
35. Song Y., Zheng S., Li L., Zhang X., Zhang X., Huang Z., Chong Y., et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. medRxiv; 2020.
36. Wang S., Kang B., Ma J., Zeng X., Xiao M., Guo J., Xu B., et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). medRxiv; 2020.
37. Zheng C., Deng X., Fu Q., Zhou Q., Feng J., Ma H., Wang X., et al. Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv; 2020.
38. Xu X., Jiang X., Ma C., Du P., Li X., Lv S. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprint arXiv:2002.09334; 2020.
39. Ozturk T., Talo M., Yildirim E.A., Baloglu U.B., Yildirim O., Acharya U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med. 2020:103792. doi: 10.1016/j.compbiomed.2020.103792.