IEEE Access. 2020 Sep 29;8:179437–179456. doi: 10.1109/ACCESS.2020.3027685

COVID-19 Control by Computer Vision Approaches: A Survey

Anwaar Ulhaq 1, Jannis Born 2, Asim Khan 3, Douglas Pinto Sampaio Gomes 3, Subrata Chakraborty 4, Manoranjan Paul 1
PMCID: PMC8545281  PMID: 34812357

Abstract

The COVID-19 pandemic has triggered an urgent call to contribute to the fight against an immense threat to the human population. Computer vision, as a subfield of artificial intelligence, has enjoyed recent success in solving various complex problems in health care and has the potential to contribute to the fight against COVID-19. In response to this call, computer vision researchers are putting their knowledge base to the test to devise effective ways to counter the COVID-19 challenge and serve the global community. New contributions are being shared with every passing day. This motivated us to review the recent work, collect information about available research resources, and indicate future research directions, so that computer vision researchers can find existing and future research directions. This survey article presents a preliminary review of the literature on research community efforts against the COVID-19 pandemic.

Keywords: Artificial intelligence, COVID-19, computer vision, review, survey

I. Introduction

COVID-19 is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]; the virus is named coronavirus due to its visual resemblance (under an electron microscope) to the solar corona (similar to a crown) [2]. The fight against COVID-19 has motivated researchers worldwide to explore, understand, and devise new diagnostic and treatment techniques to eliminate this threat to our generation. In this article, we discuss how the computer vision community is fighting this menace by proposing new types of approaches and by improving the efficiency and speed of existing efforts.

The scientific response to combat COVID-19 has been far quicker and more widespread than for previous outbreaks. A keyword search on PubMed and the major open-access preprint repositories (arXiv, bioRxiv and medRxiv) revealed that in 2019, 735 published papers included the word “coronavirus”. FIGURE 1 illustrates our findings. During the first half of 2020, this number increased thirty-fold, rising to an astounding 21,806 articles. For comparison, the SARS pandemic, with fewer than 10,000 confirmed infections and <1,000 deaths, led roughly to a four-fold increase over two years (2002: 221 and 2004: 822). After the occurrence of MERS in 2012 (fewer than 3,000 confirmed infections and 1,000 deaths to date), a doubling in coronavirus-related papers over four years (2011 to 2015) was observed.

FIGURE 1.

A portrayal of the current increase in research articles on coronavirus-related research. Since their discovery in the early 1960s, coronavirus research has increased substantially, especially after the SARS outbreak in 2002 made their pandemic potential clear. Previously, the most productive full year was 2004, with 822 coronavirus papers. The SARS-CoV-2 pandemic has caused a leap, with 21,806 articles in the first half of 2020 alone (reference date for the analysis was 30 June 2020). Note that the y-axis is displayed in log-scale for visual clarity and that the height of the coloured bars shows their relative contribution.

The Economist has dubbed the current Herculean task “the science of the times”, with the hope that such efforts will help speed up the development of a COVID-19 vaccine [3].

Numerous approaches in computer vision have been proposed so far, dealing with different aspects of combating the COVID-19 pandemic. These approaches vary in how they address the following fundamental questions:

  • How can medical imaging facilitate faster and reliable diagnosis of COVID-19?

  • Which image features correctly classify conditions as bacterial pneumonia, viral pneumonia, or COVID-19?

  • What can we learn from imaging data acquired from disease survivors to screen critically and non-critically ill patients?

  • How can computer vision be used to enforce social distancing and early screening of infected people?

  • How can 3D computer vision help to maintain healthcare equipment supply and guide the development of a COVID-19 vaccine?

The answers to these questions are being explored, and preliminary work has been done.

The contribution of this review article is as follows: it classifies COVID-related computer vision methods into broad categories and provides salient descriptions of representative methods in each group. We aspire to give readers the ability to understand the baseline efforts and to kickstart their work where others have left off. Furthermore, we aim to highlight new trends and innovative ideas to build a more robust and well-planned strategy during this war of our times.

Our survey also includes research articles in pre-print format due to the time urgency imposed by this disease. However, one limitation of this review is the risk of including lower-quality work without due validation; many of the works have not undergone clinical trials, as these are time-consuming. Nevertheless, our intention here is to share ideas from a single platform while highlighting the computer vision community's efforts. We hope that our readers are aware of these contemporary challenges. This article is an extended and revised version of an earlier preprint survey [4]. We follow a top-down approach to describe the research problems that require urgent attention: we start with disease diagnosis and discuss disease prevention and control, followed by treatment-related computer vision research work.

We have organised the paper as follows: Section II gives a brief historical development of the disease. Section III describes the overall taxonomy of computer vision research areas by classifying these efforts into three classes and provides a detailed description of each research area, relevant papers, and a brief description of representative work. Section IV describes available resources, including research datasets, their links, deep learning models, and codes. Section V provides the discussion and future work directions, followed by concluding remarks and references.

II. Historical Development

The novel coronavirus SARS-CoV-2 is the seventh member of the Coronaviridae family of viruses, which are enveloped, non-segmented, positive-sense RNA viruses [5]. The mortality rate of COVID-19 is lower than that of the severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) coronavirus diseases (10% for SARS-CoV and 37% for MERS-CoV). However, it is highly infectious, and the number of cases is on a continuous rise [6].

The disease outbreak was first reported in Wuhan, in the Hubei province of China, after several cases of pneumonia with unknown causes were reported on 31 December 2019. A novel coronavirus was identified as the causative organism through in-depth sequencing analysis of samples from patients' respiratory tracts at Chinese facilities on 7 January 2020 [6]. The outbreak was announced as a Public Health Emergency of International Concern on 30 January 2020. On 11 February 2020, the World Health Organization (WHO) announced a name for the new coronavirus disease: COVID-19. It was officially declared a pandemic after the 11 March announcement by the WHO [7].

III. Taxonomy of Key Areas of Research

In this section, we describe the classification of computer vision techniques that try to counter the menace of COVID-19. For better comprehensibility, we have classified them into three key areas of research: (i) diagnosis and prognosis, (ii) disease prevention and control, and (iii) disease treatment and management. FIGURE 2 shows this taxonomy. In the following subsections, we discuss the research fields, the relevant papers, and present a brief representative description of related works.

FIGURE 2.

Classification of computer vision approaches for COVID-19 Control. Our survey classifies COVID-19 related computer vision methods into three broad categories.

A. Diagnosis and Prognosis

An essential step in this fight is a reliable, faster, and affordable diagnostic process that is readily accessible and available to the global community. According to the Cambridge Dictionary [8], diagnosis is: “the making of a judgment about the exact character of a disease or other problem, especially after an examination, or such a judgment” and prognosis is “a doctor's judgment of the likely or expected development of a disease or of the chances of getting better”.

Currently, reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) tests are considered the gold standard for diagnosing COVID-19 [9]. During such a test, small amounts of viral RNA are extracted from a nasal swab, amplified, and quantified. Virus detection is then performed using a fluorescent dye. Although accurate, the test is time-consuming and manual, and it requires biomolecular testing facilities, which limits its availability at large scales and in developing countries. Care has to be taken in interpreting negative test results. A meta-study estimated the sensitivity over the course of the disease and found a maximal sensitivity of 80%, eight days after infection [10]. Some studies have also shown false-positive PCR testing [11].

1). Computed Tomography (CT) Scan

An alternative approach is a radiology examination using computed tomography (CT) imaging [12]. A chest CT scan is a non-invasive test conducted to obtain a precise image of a patient's chest. It uses an enhanced form of X-Ray technology, providing more detailed images of the chest than a standard X-Ray. It produces images that include bones, fats, muscles, and organs, giving physicians a better view, which is crucial when making accurate diagnoses.

A chest CT scan is of two types: high-resolution and spiral [13]. The high-resolution chest CT scan provides more than one slice (or image) in a single rotation of the X-Ray tube. The spiral chest CT scan involves a table that continuously moves through a tunnel-like hole while the X-Ray tube follows a spiral path. The advantage of the spiral CT is that it is capable of producing a three-dimensional image of the lungs.

Important CT features include ground-glass opacity, consolidation, reticulation/thickened interlobular septa, nodules, and lesion distribution (left, right or bilateral lungs) [14]–[17]. The most observable CT features discovered in COVID-19 pneumonia include bilateral and subpleural areas of ground-glass opacification and consolidation affecting the lower lobes. In the intermediate stage (4-14 days from symptom onset), a crazy-paving pattern and a possibly observable halo sign become important features as well [6], [11], [12], [14]–[18]. One case of CT images is shown in FIGURE 3, illustrating ground-glass opacity and ground-glass halo features. As the identification of disease features is time-consuming, even for expert radiologists, computer vision can help by automating this process.

FIGURE 3.

CT images adapted from [6], [18] portray CT features related to COVID-19. Ground glass opacities (top) and ground glass halo (bottom).

2). Representative Work, Evaluation and Discussion

To date, various CT-scanning automated approaches have been proposed [8], [12]–[16], [18]–[27]. To discuss the approach and performance of computer vision CT-based disease diagnosis, we have selected some recent representative works that provide an overview of their effectiveness. It is worth noting that they present different performance metrics and use a diverse number of images and datasets; these practices make their comparison very challenging. Some of the metrics include accuracy, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), area under the curve (AUC), and F1 score. A quick elucidation of their definitions can be useful: accuracy measures how often predictions are correct; precision measures the reproducibility of the measurement; recall indicates how many of the correct results are discovered; and the F1-score combines precision and recall into a balanced average.
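
As an illustration of how these metrics relate (a generic sketch with synthetic labels, not code from any of the surveyed papers), they can be computed from binary predictions with scikit-learn as follows:

    # Generic metric computation for a binary COVID/non-COVID classifier.
    import numpy as np
    from sklearn.metrics import (accuracy_score, f1_score,
                                 roc_auc_score, confusion_matrix)

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = COVID-19 positive
    y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6])
    y_pred = (y_prob >= 0.5).astype(int)                  # threshold probabilities

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                          # recall on positives
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                                  # positive predictive value
    npv = tn / (tn + fn)                                  # negative predictive value

    print(accuracy_score(y_true, y_pred), f1_score(y_true, y_pred),
          roc_auc_score(y_true, y_prob), sensitivity, specificity, ppv, npv)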

The first class of work discussed here approaches diagnosis as a segmentation problem. Chen et al. [22] proposed a CT image dataset of 46,096 images of both healthy and infected patients, labelled by expert radiologists. It was collected from 106 admitted patients: 51 with confirmed COVID-19 pneumonia and 55 controls. The work used deep learning models for segmentation only, identifying the infected areas in CT images of healthy and infected patients. It was based on the UNet++ semantic segmentation model [23], used to extract valid areas in the images. The model was trained on 289 randomly selected CT images and tested on 600 other randomly selected CT images. It achieved a per-patient sensitivity of 100%, specificity of 93.55%, accuracy of 95.24%, PPV (positive predictive value) of 84.62%, and NPV (negative predictive value) of 100%. On the retrospective dataset, it achieved a per-image sensitivity of 94.34%, specificity of 99.16%, accuracy of 98.85%, PPV of 88.37%, and NPV of 99.61%. The trained model from this study was deployed at the Renmin Hospital of Wuhan University (Wuhan, Hubei province, China) to accelerate the diagnosis of new COVID-19 cases, and it was open-sourced on the Internet to enable a rapid review of new cases in other locations. A cloud-based open-access artificial intelligence platform was constructed to provide support for detecting COVID-19 pneumonia worldwide; a website provides free access to the model at http://121.40.75.149/znyx-ncov/index. Table 1 presents a description of the representative techniques for CT-based COVID-19 diagnosis.
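
A minimal sketch of such a UNet++ segmentation setup is given below, using the segmentation_models_pytorch library. It is not the code released by Chen et al.; the encoder, loss, and input sizes are our own illustrative assumptions:

    # Illustrative UNet++ setup for CT lesion segmentation (not the authors' code).
    import torch
    import segmentation_models_pytorch as smp

    model = smp.UnetPlusPlus(
        encoder_name="resnet34",       # backbone choice is an assumption
        encoder_weights="imagenet",
        in_channels=1,                 # single-channel CT slice
        classes=1,                     # binary mask: infected vs. background
    )
    loss_fn = smp.losses.DiceLoss(mode="binary")

    ct_slices = torch.randn(4, 1, 512, 512)               # synthetic batch
    masks = (torch.rand(4, 1, 512, 512) > 0.5).float()    # synthetic labels
    logits = model(ct_slices)
    loss = loss_fn(logits, masks)
    loss.backward()                                       # one training step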

TABLE 1. Representative Works for CT Based COVID-19 Diagnosis.
Study Classification Model and Availability Segmentation Model Dataset No. of Participants Deployment Performance
Jun Chen et al. [22] (http://121.40.75.149/znyx-ncov/index). UNet++ to extract valid areas in CT images, trained on 289 randomly selected CT images. 46,096 CT images. 106 patients, including 51 with confirmed COVID-19 pneumonia. Deployed at Renmin Hospital of Wuhan University (Wuhan, Hubei province, China). Sensitivity of 100%, specificity of 93.55%, accuracy of 95.24%.
Shuai Wang et al. [28] Modified Inception [29] with transfer learning, available at https://ainsce-tj.cn/thai/deploy/public/pneumonia_ct. 453 CT images of pathogen-confirmed COVID-19. 99 patients from Xi'an Jiaotong University First Affiliated Hospital, Nanchang University First Hospital and Xi'An No. 8 Hospital of Xi'An Medical College. Internal validation achieved accuracy of 82.9%, specificity of 80.5% and sensitivity of 84%; the external testing dataset showed accuracy of 73.1% with specificity of 67%.
Xiaowei Xu et al. [30] Combination of two three-dimensional CNN classification models (ResNet-18 [31] + location-attention-oriented model); V-Net based segmentation model [32]. A total of 618 CT samples: 219 from 110 patients with COVID-19, 224 from patients with Influenza-A viral pneumonia, and 175 from healthy people. Model accuracy of 86.7%.
Ying Song et al. [33] Details Relation Extraction neural network (DRE-Net): ResNet50 [34] with Feature Pyramid Network (FPN) and an attention module; an online server for diagnoses with CT images is available at http://biomed.nsccgz.cn/server/Ncov2019. 777 CT images. 88 patients diagnosed with COVID-19, 101 patients infected with bacterial pneumonia, and 86 healthy persons. AUC of 0.99, recall (sensitivity) of 0.93, accuracy of 0.86 and F1-score of 0.87.
Ophir Gozes et al. [35] 2D deep convolutional neural network architecture based on ResNet-50 [34]; U-Net architecture for image segmentation [36]. 56 patients with confirmed COVID-19 diagnosis. 0.996 AUC (95% CI: 0.989-1.00) on Chinese control and infected patients; 98.2% sensitivity, 92.2% specificity.
Fei Shan et al. [37] VB-Net to segment COVID-19 infection regions in CT scans. 249 CT images from 249 COVID-19 patients, validated on new COVID-19 patients. Dice similarity coefficient of 91.6% ± 10.0% between automatic and manual segmentations.
Cheng Jin et al. [17] 2D CNN based AI system (model name not specified); model available at www.github.com/ChenWWWeixiang/diagnosis_covid19. 970 CT volumes from 496 patients with confirmed COVID-19. Accuracy of 94.98% and area under the receiver operating characteristic curve (AUC) of 97.91%.
Mucahid Barstugan et al. [38] Grey Level Co-occurrence Matrix, Local Directional Pattern, Grey Level Run Length Matrix, Grey Level Size Zone Matrix and Discrete Wavelet Transform features + SVM. 150 CT images. 99.68% classification accuracy.
Lin Li et al. [24] COVNet, developed to extract visual features from volumetric chest CT with a ResNet50 backbone (model available at https://github.com/bkong999/COVNet); U-Net for segmentation. 4356 chest CT images collected from 6 hospitals and 3,322 patients. Sensitivity and specificity for COVID-19 of 90% and 96%, respectively.
Chuansheng Zheng et al. [39] 3D deep convolutional neural network to detect COVID-19 (DeCoVNet) from CT volumes; software available at https://github.com/sydney0zq/covid-19-detection. Segmented using a pre-trained UNet. Union Hospital, Tongji Medical College, Huazhong University of Science and Technology; 540 patients. Obtained 0.959 ROC AUC and 0.976 PR AUC.
Shuo Jin et al. [40] Transfer learning on ResNet-50; 3D U-Net++ segmentation model. 1,136 training cases (723 COVID-19 positive) from five hospitals. Deployed in 16 hospitals in China. AUC of 0.991, sensitivity of 0.974 and specificity of 0.922.
Mei et al. [27] Slice-selection CNN with Inception-ResNet-v2 backbone, followed by a disease-diagnosis CNN with ResNet-18 backbone; code available at https://github.com/howchihlee/COVID19_CT. Unpublished dataset of 905 patients, of which 419 were COVID-positive; 279 patients were used for testing. AUC of 0.92, sensitivity of 0.84 and specificity of 0.83.

The second type of work treats COVID-19 detection as a binary classification problem. Li et al. [24] proposed COVNet to extract visual features from volumetric chest CT using transfer learning on ResNet50. Lung segmentation was performed as a pre-processing step using the U-Net model. The study used 4356 chest CT exams from 3,322 patients, collected from 6 hospitals between August 2016 and February 2020. The sensitivity and specificity for COVID-19 were 90% (114 of 127; p-value<0.001) with a 95% confidence interval (CI) of [83%, 94%] and 96% (294 of 307; p-value<0.001) with [95% CI: 93%, 98%], respectively. The model was also made available online for public use at https://github.com/bkong999/COVNet.
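
The transfer-learning recipe underlying this kind of binary classifier can be sketched as follows; note that COVNet additionally aggregates features across CT slices, which this simplified 2D sketch omits, and the layer and hyperparameter choices here are assumptions:

    # Hedged sketch: ResNet50 transfer learning for COVID vs. non-COVID slices.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Pretrained backbone (torchvision >= 0.13 weights API).
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new 2-class head

    # Freeze everything except the new head for a first fine-tuning stage.
    for name, p in backbone.named_parameters():
        p.requires_grad = name.startswith("fc.")

    optimizer = torch.optim.Adam(
        [p for p in backbone.parameters() if p.requires_grad], lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(8, 3, 224, 224)        # CT slices replicated to 3 channels
    y = torch.randint(0, 2, (8,))          # synthetic labels
    loss = criterion(backbone(x), y)
    loss.backward()
    optimizer.step()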

The diagnosis problem was also approached as a 3-category classification task: distinguishing healthy patients from those with other types of pneumonia and those with COVID-19. Song et al. [33] used data from 88 patients diagnosed with COVID-19, 101 patients infected with bacterial pneumonia, and 86 healthy individuals. They proposed the DRE-Net (Details Relation Extraction neural network) based on ResNet50, into which a Feature Pyramid Network (FPN) [25] and an attention module were integrated to represent more fine-grained aspects of the images. An online server is available for online diagnoses with CT images at http://biomed.nsccgz.cn/server/Ncov2019.

A recent landmark study was published by Mei et al. [27] in Nature Medicine. In a cohort of 905 RT-PCR-tested patients (419 COVID-positive), a two-stage CNN was combined with an MLP on clinical features (age, sex, exposure history, symptoms), and the diagnostic performance was compared to that of senior radiologists. A “slice selection CNN” was used to select abnormal CT scans, which were subsequently classified by the “disease diagnosis CNN”. Interestingly, fusing a 512-dimensional feature vector of the CT scans with clinical features yielded a joint model that significantly outperformed the CNN-only model in ROC-AUC and specificity. On a test set of 279 patients, the joint model surpassed senior radiologists in ROC-AUC (0.92 vs. 0.84), while showing worse specificity (83% vs. 94%) and statistically insignificant better sensitivity (84% vs. 75%). The model also correctly identified 68% of positive patients who exhibited normal CT scans according to the radiologists. This hints at the potential of deep learning to pick up complex, disease-relevant patterns that may remain indiscernible to radiologists.
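
The joint-fusion idea can be sketched as below: a 512-dimensional CT feature vector is concatenated with an MLP embedding of the clinical features before a final classification head. The layer sizes and clinical feature count are assumptions, not the published architecture:

    # Hedged sketch of CT-feature / clinical-feature fusion (not Mei et al.'s code).
    import torch
    import torch.nn as nn

    class JointModel(nn.Module):
        def __init__(self, n_clinical=10):
            super().__init__()
            self.clinical_mlp = nn.Sequential(      # embeds age, sex, symptoms, ...
                nn.Linear(n_clinical, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
            )
            self.head = nn.Sequential(              # joint classification head
                nn.Linear(512 + 64, 128), nn.ReLU(),
                nn.Linear(128, 1),                  # logit for COVID-positive
            )

        def forward(self, ct_features, clinical):
            z = torch.cat([ct_features, self.clinical_mlp(clinical)], dim=1)
            return self.head(z)

    model = JointModel()
    logit = model(torch.randn(4, 512), torch.randn(4, 10))   # synthetic inputs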

Due to the limited time available for annotation and labelling, weakly-supervised deep learning-based approaches have also been developed using 3D CT volumes to detect COVID-19. Zheng et al. [26] proposed a 3D deep convolutional neural network (DeCoVNet) to detect COVID-19 from CT volumes. The weakly supervised deep learning model could accurately predict the COVID-19 infection probability in chest CT volumes without the lesions being annotated for training. The CT images were segmented using a pre-trained UNet. The study used 499 CT volumes for training, collected from 13 December 2019 to 23 January 2020, and 131 CT volumes for testing, collected from 24 January 2020 to 6 February 2020. The authors chose a probability threshold of 0.5 to classify COVID-positive and COVID-negative cases. The algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840, and a high negative predictive value of 0.982. The developed deep learning model is available at https://github.com/sydney0zq/covid-19-detection.
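
The weakly supervised setting can be illustrated with a minimal 3D CNN trained from volume-level labels only, with no lesion annotations; the architecture below is illustrative and far smaller than DeCoVNet:

    # Minimal 3D CNN sketch for volume-level (weakly supervised) classification.
    import torch
    import torch.nn as nn

    class Tiny3DNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),         # global pooling over the volume
            )
            self.classifier = nn.Linear(32, 1)

        def forward(self, volume):               # (B, 1, D, H, W) lung-masked CT
            f = self.features(volume).flatten(1)
            return torch.sigmoid(self.classifier(f))

    prob = Tiny3DNet()(torch.randn(2, 1, 64, 128, 128))
    prediction = prob >= 0.5                     # the 0.5 threshold used in [26]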

3). X-Ray Imagery

One drawback of CT imaging is the high patient radiation dose and increased cost [43]. Low availability imposes additional challenges for CT in remote areas, as do the need for patient relocation and the exhaustive disinfection of scanner rooms (several hours per day), which risk contagion for staff and other patients [44]. These disadvantages call into play chest X-Ray radiography (CXR) as a preferred first-line imaging modality with lower cost and wider availability for detecting chest pathology. Computer-aided diagnosis on digital X-Ray imagery is used for different diseases, including osteoporosis [45], cancer [46] and cardiac disease [39]. However, as soft tissue with poor contrast is hard to distinguish in X-Ray imagery, contrast enhancement is used as a pre-processing step [47], [48]. Lung segmentation of chest X-Rays is a crucial step in identifying lung nodules, and various segmentation approaches have been proposed in the literature [49]–[52].
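
Such contrast enhancement is commonly implemented with CLAHE; a generic OpenCV sketch (the file path is illustrative) looks as follows:

    # Contrast Limited Adaptive Histogram Equalization (CLAHE) on a chest X-Ray.
    import cv2

    xray = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)   # illustrative path
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) # typical settings
    enhanced = clahe.apply(xray)                                # equalized image
    cv2.imwrite("chest_xray_clahe.png", enhanced)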

CXR examinations show consolidation in COVID-19 infected patients. In one study in Hong Kong [41], three different patients had daily CXR; two of them showed progression of lung consolidation over 3–4 days, with further CXR examinations showing improvement over the subsequent two days. The third patient showed no significant variations over eight days. However, a similar study showed that ground-glass opacities visible in the right lower lobe periphery on CT were not visible on a chest radiograph taken 1 hour apart from the first study. FIGURE 4 illustrates a scenario with three chest X-Rays chosen from the daily CXRs of one patient; the consolidation can be observed in the CXR images. In a large-scale study of 636 ambulatory COVID-19 patients, Weinstock et al. found that 58% of CXRs were normal and 89% were normal or mildly abnormal [53]. Interstitial changes (24%) and GGOs (19%) were the most prominent findings, and abnormalities were most prevalent in the lower lobe (34%). While the sensitivity of CXR is significantly lower than that of CT, the American College of Radiology (ACR) recommends conducting CXR with portable devices and only if “medically necessary” for better radiological analysis. It moreover firmly advises against using any imaging technique for COVID-19 diagnosis and instead suggests biomolecular tests [54].

FIGURE 4.

Chest CXR of an elderly male patient (Wuhan, China, who travelled to Hong Kong, China). Provided are three chest X-Rays chosen from the daily CXRs of this patient. The consolidation observed in the right lower zone on day 0 persists into day 4, followed by novel consolidative changes in the right mid-zone periphery and perihilar region. This mid-zone change improved in the day 7 film. Image adapted from [41].

In the realm of AI, various CXR-related automated approaches have been proposed. The following section discusses the most salient work, while Table 2 presents a more systematic overview of such methods.

TABLE 2. Representative work for X-Ray based COVID-19 diagnosis.
Study Model Dataset Performance
Gusztáv Gaál et al. [53] Attention U-Net with adversarial training and Contrast Limited Adaptive Histogram Equalization (CLAHE) [75]. 247 images from the Japanese Society of Radiological Technology (JSRT) dataset + the Shenzhen dataset with a total of 662 chest X-Rays. DSC of 97.5% on the JSRT dataset.
Asmaa Abbas et al. [64] CNN features of pre-trained models on ImageNet and ResNet + Decompose, Transfer, and Compose (DeTraC) for the classification of COVID-19 chest X-Ray images; code available at https://github.com/asmaa4may/DeTraCCOVId19. 80 samples of normal CXRs (4020 × 4892 pixels) from the Japanese Society of Radiological Technology (JSRT) + the COVID-19 image data collection (https://github.com/ieee8023/covid-chestxray-dataset). High accuracy of 95.12% (with a sensitivity of 97.91%, a specificity of 91.87%, and a precision of 93.36%).
Ali Narin et al. [76] Pre-trained ResNet50 model with transfer learning. The open-source GitHub repository shared by Dr. Joseph Cohen + Chest X-Ray Images (Pneumonia): https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia. Accuracy of 97% for InceptionV3 and 87% for Inception-ResNetV2.
Linda Wang et al. [42] COVID-Net: lightweight residual projection-expansion-projection-extension (PEPX) design pattern; model publicly available at https://github.com/lindawangg/COVID-Net. COVIDx dataset: 16,756 chest radiography images across 13,645 patient cases from two open-access data repositories. Accuracy of 92.4% on the COVIDx dataset.
Ezz El-Din Hemdan et al. [60] COVIDX-Net: based on seven different DCNN architectures, namely VGG19, DenseNet201, InceptionV3, ResNetV2, InceptionResNetV2, Xception, and MobileNetV2. COVID-19 cases provided by Dr. Joseph Cohen and Dr. Adrian Rosebrock [63]. F1-scores of 89% and 91% for normal and COVID-19, respectively.
Khalid El Asnaoui et al. [77] Fine-tuned versions of VGG16, VGG19, DenseNet201, Inception-ResNet-V2, Inception-V3, ResNet50, MobileNet-V2 and Xception. 5856 images (4273 pneumonia and 1583 normal). ResNet50, MobileNet-V2 and Inception-ResNet-V2 show highly satisfactory performance with accuracy above 96%.
Prabira Kumar Sethy et al. [78] Deep features from ResNet50 + SVM classification. Data available in the GitHub, Kaggle and Open-i repositories as per their validated X-Ray images. ResNet50 plus SVM achieved accuracy, FPR, F1 score, MCC and Kappa of 95.38%, 95.52%, 91.41% and 90.76%, respectively.
Ioannis D. Apostolopoulos et al. [79] Various fine-tuned models: VGG19, MobileNet, Inception, Inception-ResNet-V2, Xception. 1427 X-Ray images: 224 with confirmed COVID-19, 700 with confirmed common pneumonia, and 504 of normal conditions. Highest accuracy of 95.57% with Xception, with a sensitivity of 8% and a specificity of 99.99%.
Biraja Ghoshal et al. [58] Dropweights-based Bayesian Convolutional Neural Networks (BCNN). 68 posterior-anterior (PA) X-Ray images of lungs with COVID-19 from Dr. Joseph Cohen's GitHub repository, augmented with Kaggle's Chest X-Ray Images (Pneumonia) from healthy patients; a total of 5941 PA chest radiography images across 4 classes (Normal: 1583, Bacterial Pneumonia: 2786, non-COVID-19 Viral Pneumonia: 1504, COVID-19: 68). Accuracy of 89.82% with BCNN at a dropweights rate of 3%.
Muhammad Farooq and Abdul Hafeez [59] 3-step technique to fine-tune a pre-trained ResNet-50 architecture to improve model performance. COVIDx dataset. Accuracy of 96.23% (on all classes) on the COVIDx dataset.
Yu-Huan Wu et al. [74] 3-class classifier (healthy, COVID-19, non-COVID pneumonia) with Res2Net backbone; segmentation model with VGG-16 backbone. COVID-CS dataset (144,167 images from 750 patients, of which 400 are COVID-19 positive). 95% sensitivity and 93% specificity.

4). Representative Work, Evaluation and Discussion

To date, many deep learning-based computer vision models for X-Ray-based COVID-19 diagnosis have been proposed. One of the most significant developments is the COVID-Net model [42] proposed by Darwin AI, Canada. In this work, human-driven principled network design prototyping is combined with machine-driven design exploration to produce a network architecture for the detection of COVID-19 cases from chest X-Rays. The first stage of the human-machine collaborative design strategy is based on residual architecture design principles. The dataset used to train and evaluate COVID-Net is referred to as COVIDx [42] and comprises a total of 16,756 chest radiography images across 13,645 patient cases. The proposed model achieved 92.4% accuracy and 80% sensitivity for COVID-19 diagnosis.

The initial network design prototype predicts one of three classes: a) no infection (normal), b) non-COVID-19 infection (viral or bacterial), and c) COVID-19 viral infection. The goal is to aid clinicians in deciding which treatment strategy to employ depending on the cause of infection, since COVID-19 and non-COVID-19 infections require different treatment plans. In the second stage, data, along with human-specified design requirements, guide a design exploration strategy to learn and identify the optimal macro- and micro-architecture designs with which to construct the final tailor-made deep neural network architecture. The proposed COVID-Net network diagram is shown in FIGURE 5, and the model is available publicly at https://github.com/lindawangg/COVID-Net.

FIGURE 5.

Architectural diagram of COVID-Net [42]. We can observe high architectural diversity and selective long-range connectivity.

Hemdan et al. [59] proposed COVIDX-Net, based on seven different DCNN architectures, namely VGG19, DenseNet201 [60], InceptionV3, ResNetV2, InceptionResNetV2, Xception, and MobileNetV2 [61]. These models were trained on COVID-19 cases provided by Dr Joseph Cohen and Dr Adrian Rosebrock, available at https://github.com/ieee8023/covid-chestxray-dataset [62]. The best model combination resulted in F1-scores of 0.89 and 0.91 for normal and COVID-19 cases, respectively. Similarly, Abbas et al. [63] proposed a Decompose, Transfer, and Compose (DeTraC) approach for the classification of COVID-19 chest X-Ray images. The authors applied CNN features of models pre-trained on ImageNet and ResNet to perform the diagnoses. The dataset consisted of 80 samples of normal CXRs (with 4020 × 4892 pixels) from the Japanese Society of Radiological Technology (JSRT) and the COVID-19 image data collection by Cohen, available at https://github.com/ieee8023/covid-chestxray-dataset [62]. This model achieved an accuracy of 95.12% (with a sensitivity of 97.91%, a specificity of 91.87%, and a precision of 93.36%). The code is available at https://github.com/asmaa4may/DeTraCCOVId19.

Ghoshal and Tucker [57] introduced an uncertainty-aware COVID-19 classification and referral model using the proposed dropweights-based Bayesian Convolutional Neural Networks (BCNN). For COVID-19 detection to be meaningful, two types of predictive uncertainty in deep learning were used in subsequent work [64]. The first is epistemic (model) uncertainty, which accounts for uncertainty in the model parameters arising from a lack of training data or from the model not capturing all aspects of the data. The other is aleatoric uncertainty, which accounts for noise inherent in the observations due to class overlap, label noise, and homoscedastic and heteroscedastic noise, and which cannot be reduced even if more data were collected. Bayesian Active Learning by Disagreement (BALD) [65] is based on mutual information: it maximizes the information between the model posterior and the predictive density functions, approximated as the difference between the entropy of the predictive distribution and the mean entropy of predictions across samples.

A BCNN model was trained on 68 posterior-anterior (PA) X-Ray images of lungs with COVID-19 cases from Dr Joseph Cohen's GitHub repository [62], augmented with Kaggle's Chest X-Ray Images (Pneumonia) from healthy patients. It achieved 88.39% accuracy on the available dataset. This work additionally recommended the visualisation of distinct features as an additional insight beyond the point prediction for a more informed decision-making process. It used the saliency maps produced by various state-of-the-art methods, e.g. Class Activation Maps (CAM) [66], Guided Backpropagation, Guided Gradients, and Gradients, to show more distinct features in the CXR images.
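
The core mechanism of such uncertainty-aware models, keeping dropout active at test time and aggregating several stochastic forward passes, can be sketched as follows; the small network here is illustrative, not the published BCNN:

    # Hedged Monte Carlo dropout sketch: predictive mean and entropy over T passes.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def mc_dropout_predict(model, x, T=20):
        model.train()                     # keep dropout active at inference
        with torch.no_grad():
            probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
        mean = probs.mean(0)                                    # predictive dist.
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(1)  # total uncertainty
        return mean, entropy

    # Illustrative classifier with dropout (4 classes, as in the dataset above).
    model = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 128), nn.ReLU(),
                          nn.Dropout(0.3), nn.Linear(128, 4))
    mean, entropy = mc_dropout_predict(model, torch.randn(2, 1, 224, 224))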

A Capsule Network-based framework called COVID-CAPS [67] has been proposed for the identification of COVID-19 cases from X-Ray images. A lightweight deep neural network (DNN) based mobile app is proposed in [68] that can process noisy chest X-Ray (CXR) images for point-of-care COVID-19 screening and is available at https://github.com/xinli0928/COVID-Xray. A 3-step approach to fine-tune a pre-trained ResNet-50 architecture to improve model performance is proposed by Farooq and Hafeez [58]. Similar works have been proposed recently [69]–[71].

To the best of our knowledge, [72] reported the largest dataset, including 144,167 images from 750 patients (400 COVID-19 patients). As deep-learning models are data-hungry and most other projects perform transfer learning on extremely small datasets (often < 1000 images), this is a remarkable project and a first step towards more realistic and clinically relevant performance estimates. The classifier achieves a sensitivity of 95% and a specificity of 93%. Besides, a segmentation model is trained with a deep supervision strategy and shown to identify lesion areas of the positive predictions. One drawback of the work is that the models operate autonomously, and the lesions identified by the segmentation model may by no means have been relevant for the positive prediction of the classifier.

5). Ultrasound Imaging

Lung ultrasound (LUS) has evolved over the last few years in both its theoretical and operative aspects. One of the characteristic features of LUS is its ability to define the alterations affecting the ratio between tissue and air in the superficial lung [55], [78].

The practical advantages of LUS are numerous: US devices are portable, bringing the salient benefit of performing point-of-care LUS at the patient's bedside or even at home, which can easily be repeated for monitoring purposes. LUS minimizes the requirement for transferring the patient, thereby controlling the potential risk of further infection and of spreading it among health care personnel.

In contrast to CT and X-Ray, US is non-irradiating, and the instruments are cheap and thus highly available even outside developed countries [79]. However, ultrasound is operator-dependent, and to follow standardized LUS protocols like the BLUE protocol [80], experienced technicians are desirable. This is boon and bane: while conducting a full LUS can take a few minutes and produce significantly more data than other modalities, the auto-correlation is exceptionally high and diagnostic patterns are visible in only a few frames. LUS has repeatedly been shown to be superior to CXR for diagnosing pulmonary diseases (for a review see [81]), especially in resource-limited settings [82]. For COVID-19, LUS patterns correlate with disease stage, comorbidities and severity of pulmonary injury [83], and most dominantly include B-lines, vertical artifacts that reach from the pleura deep into the lung [84]. Importantly, LUS was lately reported to have higher sensitivity and equal specificity compared to CXR in diagnosing COVID-19 [85]. In a comparison of LUS to CT, it was shown that for all typical LUS features in COVID-19 patients, analogs to known patterns in CT scans could be found [86]. FIGURE 6 illustrates the detection of COVID-19 from ultrasound images. While LUS is commonly used as a first-line examination method in European countries like Italy [87], it is not mentioned in the ACR recommendations as clinical practice for COVID-19 [54]. Besides, some articles have argued that LUS can assist early diagnosis and assessment of COVID-19 and even found better sensitivity of LUS in detecting certain features [88]. This has caused a vivid debate on the role of LUS in the COVID-19 pandemic [89]–[92].

FIGURE 6.

Detection of COVID-19 from ultrasound images: Ultrasound imagery is widely available and accessible throughout the world and therefore, can be a valuable tool for monitoring disease progression. Adapted from [55].

6). Representative Work, Evaluation and Discussion

Since LUS is a less established practice for examining COVID-19 patients, less clinical data is recorded and publicly available. This is presumably a primary reason why fewer computer vision projects focus on it, despite the advocacy of recent trends in medicine (see above). TABLE 3 presents a more categorical presentation of such methods.

TABLE 3. Representative Works for Infected Disease Prevention and Control.
Study Prevention/Control Methodology Implementation/Dataset Performance
Zhongyuan Wang et al. [102] Masked face recognition based on deep learning. Dataset available at https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset. The multi-granularity masked face recognition model achieves 95% accuracy.
Joshua M. Pearce [103] RepRap-class 3-D printers and open-source microcontrollers for mass distributed manufacturing of ventilators. 3D printing can facilitate the supply chain.
W. Chiu et al. [104] Infrared thermography: mass fever screening. 72,327 patients or visitors passed through the only entrance where a thermography station was in operation. Over a period of one month, 105 patients or visitors were detected with fever by thermography.
Edouard A. Hay [105] Convolutional neural networks for identification of bacteria; GitHub repository: https://github.com/rplab/Bacterial-Identification. Light sheet microscopy image data. Over 90% accuracy.

Preliminary investigations clarifying the diagnostic and prognostic role of LUS in COVID-19 are underway. Computer vision on ultrasound imaging has become increasingly popular in recent years [93], but comparably little work has been done on LUS.

The first work to apply computer vision to ultrasound data of COVID-19 patients was POCOVID-Net, a deep convolutional neural network with a VGG backbone [94]. POCOVID-Net introduced an LUS dataset that initially consisted of 1103 images (654 COVID-19, 277 bacterial pneumonia, and 172 healthy controls), sampled from 64 videos. As of July 2020, the dataset contains ~150 videos and ~50 images, making it the largest publicly available LUS dataset: https://github.com/jannisborn/covid19_pocus_ultrasound. Besides, the trained models were deployed and can be freely used at https://pocovidscreen.org. On the initial dataset, POCOVID-Net reported a video accuracy of 92% and a sensitivity and specificity of 96% and 79% for COVID-19, respectively. This accounts for a preliminary proof-of-concept that COVID-19 can be automatically distinguished from other pulmonary conditions through LUS, and it opens a branch of follow-up work on the granularity of the differentiation.

On the updated POCOVID-Net dataset, performance could be improved to an accuracy of 94% and a sensitivity and specificity of 98% and 91%, respectively, in a 5-fold cross-validation on LUS videos [95]. This work utilizes Bayesian deep learning to compute uncertainty estimates that are deemed crucial for medical imaging [96]. [95] then demonstrated how epistemic uncertainty estimates (measured by Monte Carlo dropout) could let the model self-recognize low-confidence situations. Additionally, the authors computed and validated CAMs with the help of medical experts and found that the model learns, in a completely unsupervised fashion, to highlight lung consolidations (94% sensitivity) and, to a lesser extent, A-lines (62%).

The CAMs were overall found helpful for diagnosis by the experts, though they leave room for improvement in B-line detection. Interestingly, the performance could be mildly improved when the classifier was coupled with the segmentation model of [97].
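
For reference, a gradient-weighted variant of the CAMs discussed above (Grad-CAM) can be computed with forward and backward hooks; the backbone, target layer, and random weights below are illustrative assumptions, not the setup of any specific surveyed paper:

    # Hedged Grad-CAM sketch with PyTorch hooks on a torchvision ResNet.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=None).eval()   # random weights, sketch only
    feats, grads = {}, {}
    layer = model.layer4                           # assumed target layer
    layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    score = model(x)[0].max()                      # top-class score
    score.backward()

    weights = grads["v"].mean(dim=(2, 3), keepdim=True)    # GAP over gradients
    cam = F.relu((weights * feats["v"]).sum(1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]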

The aforementioned work by [97] introduced a rich stack of CNN models for segmentation and severity assessment of COVID-19 patients. Based on ~1000 images from convex probes of 33 patients, an ensemble of 3 segmentation models (UNet, UNet++ and DeepLabv3+) is shown to reliably extract both A-lines and COVID-19 biomarkers (accuracy 96%, binary Dice score 0.75). Besides, they classify COVID-19 severity on four levels (0 to 3). They introduce a so-called regularised spatial transformer network that performs weak localization by extracting two transformed image sections that, ideally, should contain pathological artifacts. Their model achieves a precision of 70% and a recall of 60% on the four-class classification. However, although the authors claim to release a dataset of 277 LUS videos from 35 patients with a total of almost 60,000 frames, to date only 60 videos can be accessed (after the account request is manually approved). No annotations are available for those videos, rendering a validation of the results effectively impossible.

As B-lines are perhaps the most critical LUS feature in COVID-19 patients, [98] presented a specialized approach for line artifact quantification that utilizes a non-convex regularization technique dubbed Cauchy proximal splitting. This technique outperforms state-of-the-art B-line identification [99] and detects 87% of the B-lines in 9 COVID-19 patients, reducing the error margin by 40% compared to [99].

Since ultrasound equipment is small and portable options are available (POCUS devices), the impact of web-independent, on-device analysis is high, especially since LUS belongs to the standard repertoire even in remote medical facilities.

Future projects could, for example, improve the mediocre results found in an ablation study with mobile-friendly CNNs [95] to facilitate on-device processing.

B. Prevention and Control

The WHO has provided guidelines on infection prevention and control (IPC) strategies for use when infection with a novel coronavirus is suspected [104]. IPC primarily aims to control transmission in health care settings, which includes early recognition and source control and applying standard precautions for all patients. It also includes the implementation of additional empiric precautions, such as airborne precautions for suspected cases of COVID-19, the implementation of administrative controls, and the use of environmental and engineering controls. Computer vision applications are providing valuable support for the implementation of IPC strategies.

1). Representative Work, Evaluation and Discussion

Protective techniques to control virus spread in the early stage of disease progression, such as the wearing of masks, were considered very early. Some countries, like China, implemented them as a control strategy at the start of the epidemic. Computer vision-based systems greatly facilitated such implementation.

Wang et al. [100] proposed a masked face recognition approach using a multi-granularity masked face recognition model, resulting in 95% accuracy on a masked face image dataset. The data was made public for research and provides three types of masked face datasets: the Masked Face Detection Dataset (MFDD) [105], the Real-world Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD) [106].

A similar strategy is the use of infrared thermography, which can be used as an early detection strategy for infected people, especially in crowds such as passengers at an airport. Various medical applications of infrared thermography, including fever screening, are summarised by Lahiri et al. [56]. Somboonkaew et al. [107] introduced a mobile platform for an automatic fever screening system based on forehead temperature. Ghassemi et al. [108] discussed best practices for the standardized performance and testing of infrared thermographs. An infection screening system based on thermography and a CCD camera was proposed by Negishi et al. [109], offering good stability and swiftness for non-contact vital-signs measurement through feature matching and the MUSIC algorithm. Earlier, for SARS spread control, a computer vision system to help with fever screening by Chiu et al. [102] was used. From 13 April to 12 May 2003, 72,327 patients and visitors passed through the only entrance allowed at TMU-WFH, where a thermography station was in operation. FIGURE 7 illustrates the use of thermal imagery for temperature screening.

FIGURE 7.

Temperature screening in process with thermal imagery of a subject who is talking on a mobile phone: (a) after 1 min of talking and (b) after 15 min of talking. The temperature of the encircled region increased from 30.56°C to 35.15°C after 15 min of talking, and the temperature of the region around the ear (indicated by an arrow) rose from 33.35°C to 34.82°C. A similar system can be used for COVID-19 related fever screening. Adapted from [56].
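
The screening logic itself is simple once a radiometric temperature map is available; the sketch below assumes the camera-specific conversion to degrees Celsius has already been applied and that face regions have been detected separately, and the threshold is an illustrative policy choice:

    # Illustrative fever-screening check on a radiometric thermal frame.
    import numpy as np

    FEVER_THRESHOLD_C = 37.5          # screening cut-off; policy-dependent

    def screen_frame(temp_c: np.ndarray, face_boxes):
        """temp_c: (H, W) temperatures in degrees C; face_boxes: (x, y, w, h)."""
        alerts = []
        for (x, y, w, h) in face_boxes:
            t_max = float(temp_c[y:y + h, x:x + w].max())  # hottest face pixel
            if t_max >= FEVER_THRESHOLD_C:
                alerts.append(((x, y, w, h), t_max))
        return alerts

    frame = 30 + 8 * np.random.rand(240, 320)     # synthetic thermal frame
    print(screen_frame(frame, [(100, 80, 40, 40)]))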

Additional miscellaneous approaches for prevention and control are also worth noting. An example is pandemic drones using remote sensing and digital imagery, which have been recommended for identifying infected people; Al-Naji et al. [110] used such a system for remote life-sign monitoring in disaster management in the past. A similar application is vision-guided robot control for 3D object recognition and manipulation. Moreover, 3D modelling and printing are helping to maintain the supply of healthcare equipment in this troubled time. Pearce [101] discusses RepRap-class 3-D printers and open-source microcontrollers; the applications are relevant since mass distributed manufacturing of ventilators has the potential to overcome medical supply shortages. Lastly, germ scanning is an essential step in combating COVID-19. Hay and Parthasarathy [103] proposed convolutional neural networks for germ scanning, such as the identification of bacteria in light-sheet microscopy image data, with more than 90% accuracy.

C. Treatment and Clinical Management

Although various attempts at and claims of vaccine development have been announced in the media, there is no agreed and widely used treatment for the disease caused by the virus at the moment. However, many of the COVID-19 symptoms can be treated, depending on the clinical condition of the patient. An improvement in clinical management practices is possible by automating various practices with the help of computer vision. One example is the classification of patients based on the severity of the disease, so that appropriate medical care can be advised. FIGURE 8 shows a scenario of progression and severity monitoring using different saliency maps that provide additional insights into diagnosis; these maps help to identify the areas of activation that can support disease progression monitoring and severity detection. FIGURE 9 illustrates the corona score calculation on a 3D model of a patient's CT images for monitoring disease progression. It is one of the ways infected areas can be visualised and disease severity predicted for better disease management and patient care. TABLE 4 presents a more categorical presentation of such methods.

FIGURE 8.

Visualizations using different saliency maps that provide additional insights into diagnosis. These maps help to identify the areas of activation that can support disease progression monitoring and severity detection. Adapted from [57].

FIGURE 9.

Method of corona score calculation for patient disease progression monitoring. It is one of the ways infected areas can be visualised and disease severity can be predicted for better disease management and patient care. Adapted from [35].

TABLE 4. Representative Works for Infected Disease Treatment and Progression Monitoring.

Study Treatment or Management Methodology Implications
Daniel Wrapp et al. [115] Using biophysical assays, it is shown that this protein binds to the common host cell receptor at least 10 times more tightly than the corresponding spike protein of severe acute respiratory syndrome (SARS)-CoV. The virus connects to host cells through its trimeric spike glycoprotein; a 3.5-angstrom-resolution cryo-electron microscopy structure of the 2019-nCoV S trimer in the prefusion conformation was studied. These studies provide valuable information to guide the development of medical countermeasures for 2019-nCoV.
Ophir Gozes et al. [35] Corona score for patient disease progression monitoring and screening. Based on a CT image dataset provided by ChainZ (www.ChainZ.cn); the corona score was used to screen critically ill patients. For instance, a corona score of 191.5 cm3 was observed at the time of admission; after recovery it dropped to 0, indicating no opacities.
Yunlu Wang et al. [114] Depth camera and deep learning used to classify 6 clinically significant respiratory patterns; GRU neural network with bidirectional and attentional mechanisms (BI-AT-GRU). An abnormal respiratory pattern classifier can enable large-scale screening of people infected with COVID-19.
Yoshihiro Uesawa et al. [118] Quantitative structure-activity relationship analysis with the help of deep learning. Potential use in drug discovery.

1). Representative Work, Evaluation and Discussion

An essential part of the fight against the virus is clinical management, which can be done by identifying patients who are critically ill so that they get immediate medical attention or ventilator support. A disease progression score, called the “corona score”, is recommended in [35] to classify different types of infected patients. It is calculated from measurements of the infected areas and the severity of disease in CT images. The corona score measures the progression of a patient over time and is computed by a volumetric summation of the network-activation maps.
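
In the spirit of this definition (the exact published computation may differ), a corona score can be sketched as a volumetric summation of per-voxel activations scaled by the physical voxel volume:

    # Hedged sketch of a corona-score-style severity measure.
    import torch

    def corona_score(activation_maps: torch.Tensor,
                     voxel_volume_cm3: float) -> float:
        """activation_maps: (D, H, W) per-voxel infection activations in [0, 1]."""
        return float(activation_maps.sum() * voxel_volume_cm3)

    maps = torch.rand(64, 128, 128) * 0.01     # synthetic, mostly healthy volume
    print(corona_score(maps, voxel_volume_cm3=0.001))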

MacLaren et al. [111] support the view that radiological evidence can also be an essential tool to distinguish critically ill patients. Wang et al. [112] used a depth camera and deep learning to classify abnormal respiratory patterns, which may contribute to the large-scale screening of people infected with the virus accurately and unobtrusively. A Respiratory Simulation Model (RSM) was developed to bridge the gap between scarce real-world data and the large amount of training data needed. They proposed a GRU neural network with bidirectional and attentional mechanisms (BI-AT-GRU) to classify six clinically significant respiratory patterns (Eupnea, Tachypnea, Bradypnea, Biots, Cheyne-Stokes, and Central-Apnea) to identify critically ill patients. The proposed model classifies the respiratory patterns with accuracy, precision, recall, and F1 of 94.5%, 94.4%, 95.1%, and 94.8%, respectively. Demo videos of this method working with one subject and two subjects can be accessed online (https://doi.org/10.6084/m9.figshare.11493666.v1).
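
A bidirectional GRU with additive attention in the style of BI-AT-GRU can be sketched as follows; the hidden sizes, sequence length, and attention form are assumptions, not the published configuration:

    # Hedged sketch of a BI-AT-GRU-style respiratory pattern classifier.
    import torch
    import torch.nn as nn

    class BiAtGRU(nn.Module):
        def __init__(self, in_dim=1, hidden=64, n_classes=6):
            super().__init__()
            self.gru = nn.GRU(in_dim, hidden, batch_first=True,
                              bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)     # additive attention score
            self.fc = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                 # x: (B, T, in_dim) breathing signal
            h, _ = self.gru(x)                # (B, T, 2*hidden)
            w = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # attention weights
            context = (w.unsqueeze(-1) * h).sum(1)              # weighted pooling
            return self.fc(context)           # logits for the 6 patterns

    logits = BiAtGRU()(torch.randn(4, 300, 1))   # 300-step synthetic waveform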

The CoV spike (S) glycoprotein is the main target for vaccines, therapeutic antibodies, and diagnostics that can guide future decisions. The virus connects to host cells through its trimeric spike glycoprotein. Using biophysical assays, Wrapp et al. [113] illustrated that this protein binds to their common host cell receptor at least ten times more tightly than the corresponding spike protein of severe acute respiratory syndrome (SARS)-CoV. Protein X-ray crystallography can discover the atomic structure of molecules and their functions. It can further facilitate scientists to design new drugs targeted to that function. MAchine Recognition of Crystallization Outcomes (MARCO) [114] initiative has introduced deep convolutional networks to achieve an accuracy of more than 94% on the visual recognition task of identifying protein crystals. It uncovers the potential of computer vision and deep learning for drug discovery.

Quantitative structure-activity relationship (QSAR) analysis has applications in drug discovery and toxicology [115]. It employs structural, quantum-chemical and physicochemical features calculated from molecular geometry as explanatory variables for predicting physiological activity. Deep feature representation learning can be used for QSAR analysis by incorporating 360° images of molecular conformations. Uesawa [116] proposed QSAR analysis using deep learning with a novel molecular image input technique. Similar techniques can be used for drug discovery and pave the way for vaccine development for COVID-19.

IV. Dataset and Resources

A. CT Images

B. Chest X-Ray (CXR) Images

  • COVID-19 Radiography database [118] - A team of researchers from Qatar University, Doha, and the University of Dhaka, Bangladesh, along with collaborators from Pakistan and Malaysia and medical doctors, have created a database of chest X-Ray images for COVID-19 positive cases along with normal and viral pneumonia images. In the current release, there are 219 COVID-19 positive images, 1341 normal images and 1345 viral pneumonia images. The authors stated that they would continue to update this database as new X-Ray images of COVID-19 pneumonia patients become available. The project can be found on GitHub with MATLAB code and trained models: https://github.com/tawsifur/COVID-19-Chest-X-Ray-Detection. The research team managed to classify COVID-19, viral pneumonia and normal chest X-Ray images with an accuracy of 98.3%. This scholarly work was submitted to Scientific Reports (Nature), and the manuscript was uploaded to arXiv. Please make sure to give credit when using the dataset, code and trained models.

  • COVID-19 Image Data Collection [62] - An initial COVID-19 open image data collection is provided by Joseph Paul Cohen. All images and data are released at https://github.com/ieee8023/covid-chestxray-dataset.

  • COVIDx Dataset [42] - This is the release of the brand-new COVIDx dataset with 16,756 chest radiography images across 13,645 patient cases. The current COVIDx dataset is constructed from the open-source chest radiography datasets at https://github.com/ieee8023/covid-chestxray-dataset and https://www.kaggle.com/c/rsna-pneumonia-detection-challenge. It combines data provided by many parties: the Radiological Society of North America (RSNA), others involved in the RSNA Pneumonia Detection Challenge, Dr Joseph Paul Cohen, and the team at MILA involved in the COVID-19 image data collection project, for making data available to the global community.

  • Chest X-Ray8 [119] - The chest X-Ray is one of the most commonly accessible radiological examinations for screening and diagnosing many lung diseases. A tremendous number of X-Ray imaging studies accompanied by radiological reports are accumulated and stored in many modern hospitals' Picture Archiving and Communication Systems (PACS). The dataset is available at https://nihcc.app.box.com/v/ChestXray-NIHCC.

C. Other Images

  • Lung ultrasound dataset [94] - An open data collection initiative for LUS, similar to the one by Cohen et al. for CT and CXR. The growing database is continuously updated; while it partially collects data from dispersed public sources, it also releases unpublished clinical data. The dataset is intended to facilitate differential diagnosis from LUS and provides 4 classes (healthy, bacterial pneumonia, COVID-19 and non-COVID viral pneumonia). As of July 2020, the dataset contains ~150 videos and ~50 images, making it the largest publicly available LUS dataset: https://github.com/jannisborn/covid19_pocus_ultrasound.

  • Masked Face Recognition Datasets [100] - Three types of masked face datasets were introduced, including Masked Face Detection Dataset (MFDD), Real-world Masked Face Recognition Dataset (RMFRD) and Simulated Masked Face Recognition Dataset (SMFRD). MFDD dataset can be used to train an accurate masked face detection model, which serves for the subsequent masked face recognition task. RMFRD dataset includes 5,000 pictures of 525 people wearing masks and 90,000 images of the same 525 subjects without masks. To the best of our knowledge, this is currently the world’s largest real-world masked face dataset. SMFRD is a simulated masked face data set covering 500,000 face images of 10,000 subjects. These datasets are available at https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset.

  • Thermal Image Datasets - To our knowledge, there is no public dataset of thermal images for fever screening. However, a fully annotated thermal face database, together with its application to thermal facial expression recognition, was proposed by Kopaczka et al. [120]. Further information on the kind of data such screening systems can produce is available at http://www.flir.com.au/discover/public-safety/thermal-imaging-for-detecting-elevated-body-temperature/.
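
Because the lung ultrasound collection above consists largely of short video clips, classifiers typically operate on individual frames. The sketch below, referenced in the entry for [94], shows one plausible way to sample frames with OpenCV; the directory layout and sampling interval are illustrative assumptions rather than part of the dataset's official tooling.

```python
# Minimal sketch: sample frames from lung ultrasound videos so they can be
# fed to an image classifier. Paths, file extensions, and the sampling
# interval are illustrative assumptions, not the dataset's official tooling.
from pathlib import Path

import cv2

VIDEO_DIR = Path("covid19_pocus_ultrasound/data/videos")  # hypothetical layout
FRAME_DIR = Path("frames")
FRAME_DIR.mkdir(exist_ok=True)

def extract_frames(video_path, every_n=10):
    """Save every n-th frame of a video as a PNG and return the count."""
    cap = cv2.VideoCapture(str(video_path))
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if index % every_n == 0:
            out = FRAME_DIR / f"{video_path.stem}_{index:05d}.png"
            cv2.imwrite(str(out), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

for video in sorted(VIDEO_DIR.glob("*.mp4")):
    n = extract_frames(video)
    print(f"{video.name}: saved {n} frames")
```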

V. Discussion and Future Work

Overall, it is encouraging that the computer vision research community responded so strongly to the call to fight the COVID-19 pandemic. Data was collected and shared in a short time, and researchers proposed various approaches to address different challenges related to disease control. This was made possible by recent successes in deep learning and artificial intelligence, and web repositories such as GitHub and arXiv have contributed significantly to the rapid sharing of information. However, the impact of this research remains limited by the lack of clinical testing, fair evaluation, and appropriate imaging datasets.

We note that the COVID-19 research landscape is broad: it covers far more than imaging, and much of it lies beyond the scope of computer vision research. Accordingly, we did not include machine learning or signal processing work that involves no imaging modality. Most of the surveyed work addresses the disease diagnosis problem, but it reports heterogeneous performance metrics and lacks clinical trials, which makes it hard to compare approaches fairly.

Similarly, various research datasets have been released for research purposes since the outbreak began. However, these datasets cover only a limited scope and set of problem domains. For instance, studying disease progression typically requires multiple images of the same patient along a timeline. Likewise, evaluating different imaging modalities requires multimodal imaging data of the same patient, which is not yet available for research purposes. Future work includes the fair performance comparison of different approaches and the collection of a large universal dataset and benchmark; a sketch of the standard metrics such a comparison would report follows below. We hope that collective efforts of the computer vision community, in the spirit of the ImageNet challenge, can fill this gap.
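
To make the notion of fair comparison concrete, the sketch below computes the metrics most commonly reported by the diagnosis papers surveyed here, namely sensitivity, specificity, and ROC-AUC, on a held-out test set. The label and score arrays are hypothetical placeholders standing in for a model's predictions.

```python
# Minimal sketch: the metrics a fair benchmark would report for a binary
# COVID-19 vs. non-COVID classifier. The labels and scores below are
# hypothetical placeholders for a model's held-out predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # ground truth
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])  # model scores
y_pred = (y_score >= 0.5).astype(int)                          # fixed threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # a.k.a. recall; critical for screening
specificity = tn / (tn + fp)          # controls the false-alarm rate
auc = roc_auc_score(y_true, y_score)  # threshold-independent summary

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```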

VI. Concluding Remarks

In this article, we presented an extensive survey of computer vision efforts and methods to combat the COVID-19 pandemic, together with a brief review of representative work to date. We divided the described methods into four categories based on their role in disease control: computed tomography (CT) scans, X-ray imagery, ultrasound imaging, and prevention and control. We provided detailed summaries of preliminary representative work, including available resources, to facilitate further research and development. We hope that this first survey of computer vision methods for COVID-19 control, with its extensive bibliography, offers valuable insight into the domain and encourages new research. However, this work should be considered an early review, since many computer vision approaches are still being proposed and tested against the COVID-19 pandemic. We believe that such efforts will have a far-reaching positive impact, both during the outbreak and after the pandemic.

Biographies


Anwaar Ulhaq (Member, IEEE) received the Ph.D. degree in artificial intelligence from Monash University, Australia. He has worked as a Research Fellow with the Institute for Sustainable Industries and Liveable Cities, Victoria University, Australia. He is currently serving as a Lecturer with the School of Computing and Mathematics and as the Deputy Leader of the Machine Vision and Digital Health (MaViDH) Research Group, Charles Sturt University. He has published more than 50 peer-reviewed publications in the field of artificial intelligence. His research interests include signal and image processing, deep learning, data analytics, and computer vision. He holds the Professional Certificate in data analytics from the Harvard Business School, Harvard University, USA, and completed the Oxford Executive Leadership Programme at the University of Oxford.


Jannis Born (Member, IEEE) received the B.Sc. degree in cognitive science and the M.Sc. degree in neural systems and computation jointly from ETH Zurich and the University of Zurich. He is currently pursuing the Ph.D. degree with the Computational Systems Biology Group, IBM Research Zurich, and the Machine Learning and Computational Biology Group, Department of Biosystems Science and Engineering (D-BSSE), ETH Zurich. His research interests include machine learning for healthcare applications, spanning deep learning, drug discovery, and medical imaging.


Asim Khan (Member, IEEE) received the M.Sc. degree in software engineering from Iqra University and the Master of Information Systems Management degree from Swinburne University, Melbourne, Australia, in 2010. He is currently pursuing the Ph.D. degree with Victoria University, Melbourne. His research interests include machine learning, deep learning, computer vision, and pattern recognition.


Douglas Pinto Sampaio Gomes (Member, IEEE) received the B.E. degree in electrical engineering from the Federal University of Mato Grosso, Brazil, in 2013, the M.Sc. degree in power systems from the University of Sao Paulo, Brazil, in 2016, and the Ph.D. degree from Victoria University, Melbourne, Australia. His research interests include power systems, protection, power quality, and artificial intelligent systems.


Subrata Chakraborty (Senior Member, IEEE) received the Ph.D. degree in decision support systems from Monash University, Australia. He has worked as an Academician with the University of Southern Queensland, Charles Sturt University, the Queensland University of Technology, and Monash University. He is currently a Senior Lecturer with the Faculty of Engineering and Information Technology, School of Information, Systems and Modelling, University of Technology Sydney (UTS), Australia. He is also a core member of the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), UTS. His current research interests include optimization models, data analytics, machine learning, and image processing with decision support applications in diverse domains, including business, agriculture, transport, health, and education. He is a Certified Professional Senior Member of ACS.


Manoranjan Paul (Senior Member, IEEE) received the Ph.D. degree from Monash University, Australia, in 2005. He was a Postdoctoral Research Fellow with the University of New South Wales, Monash University, and Nanyang Technological University. He is currently a Full Professor, the Director of the Computer Vision Laboratory, and the Leader of the Machine Vision and Digital Health (MaViDH) Research Group, Charles Sturt University, Australia. He has published around 200 peer-reviewed publications, including 72 journal articles, and has supervised 15 Ph.D. students to completion. He was an Invited Keynote Speaker at IEEE DICTA 2017 and 2013, CWCN 2017, WoWMoM 2014, and ICCIT 2010. His major research interests include video coding, image processing, digital health, wine technology, machine learning, EEG signal processing, eye tracking, and computer vision. He was awarded ICT Researcher of the Year 2017 by the Australian Computer Society. He has obtained more than $3.6 million in competitive external grants, including Australian Research Council (ARC) Discovery grants and an Australia-China grant. He was the General Chair of PSIVT 2019 and the Program Chair of PSIVT 2017 and DICTA 2018. He is currently an Associate Editor of three top-ranked journals: the IEEE Transactions on Multimedia, the IEEE Transactions on Circuits and Systems for Video Technology, and the EURASIP Journal on Advances in Signal Processing.

Funding Statement

This work was supported by Charles Sturt University, COVID-19 Fund.

References

  • [1].Paules C. I., Marston H. D., and Fauci A. S., “Coronavirus infections–More than just the common cold,” JAMA, vol. 323, no. 8, pp. 707–708, 2020. [DOI] [PubMed] [Google Scholar]
  • [2].Chen Y., Liu Q., and Guo D., “Emerging coronaviruses: Genome structure, replication, and pathogenesis,” J. Med. Virol., vol. 92, no. 4, pp. 418–423, Apr. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Coronavirus Research is Being Published at a Furious Pace. Accessed: Mar. 31, 2020. [Online]. Available: https://www.economist.com/graphic-detail/2020/03/20/coronavirus-research-is-being-published-at-a-furious-pace
  • [4].Ulhaq A., Khan A., Gomes D., and Paul M., “Computer vision for COVID-19 control: A survey,” 2020, arXiv:2004.09420. [Online]. Available: http://arxiv.org/abs/2004.09420 [DOI] [PMC free article] [PubMed]
  • [5].Hui D. S., Azhar E. I., Madani T. A., Ntoumi F., Kock R., Dar O., Ippolito G., Mchugh T. D., Memish Z. A., Drosten C., and Zumla A., “The continuing 2019-nCoV epidemic threat of novel coronaviruses to global health—The latest 2019 novel coronavirus outbreak in Wuhan, China,” Int. J. Infectious Diseases, vol. 91, pp. 264–266, Feb. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [6].Shi H., Han X., Jiang N., Cao Y., Alwalid O., Gu J., Fan Y., and Zheng C., “Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: A descriptive study,” Lancet Infectious Diseases, vol. 20, no. 4, pp. 425–434, 2020, doi: 10.1016/S1473-3099(20)30086-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].WHO Director-General’s Opening Remarks at the Media Briefing on COVID-19-11 March 2020, World Health Org., Geneva, Switzerland, 2020. [Google Scholar]
  • [8].(2020). Explore the Cambridge Dictionary. Accessed: Mar. 31, 2020. [Online]. Available: https://dictionary.cambridge.org/
  • [9].Wang W., Xu Y., Gao R., Lu R., Han K., Wu G., and Tan W., “Detection of SARS-CoV-2 in different types of clinical specimens,” JAMA, vol. 323, pp. 1843–1844, Mar. 2020, doi: 10.1001/jama.2020.3786. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [10].Kucirka L. M., Lauer S. A., Laeyendecker O., Boon D., and Lessler J., “Variation in false-negative rate of reverse transcriptase polymerase chain reaction–based SARS-CoV-2 tests by time since exposure,” Ann. Internal Med., vol. 173, no. 4, pp. 262–267, Aug. 2020, doi: 10.7326/M20-1495. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Chen C., Gao G., Xu Y., Pu L., Wang Q., Wang L., Wang W., Song Y., Chen M., Wang L., Yu F., Yang S., Tang Y., Zhao L., Wang H., Wang Y., Zeng H., and Zhang F., “SARS-CoV-2–positive sputum and feces after conversion of pharyngeal samples in patients with COVID-19,” Ann. Internal Med., vol. 172, no. 12, pp. 832–834, Jun. 2020, doi: 10.7326/M20-0991. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [12].Ai T., Yang Z., Hou H., Zhan C., Chen C., Lv W., Tao Q., Sun Z., and Xia L., “Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases,” Radiology, vol. 296, no. 2, 2020, Art. no. 200642. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [13].Computed Tomography (CT)—Chest. Accessed: Mar. 31, 2020. [Online]. Available: https://www.radiologyinfo.org/en/info.cfm?pg=chestct/
  • [14].Li Y. and Xia L., “Coronavirus disease 2019 (COVID-19): Role of chest CT in diagnosis and management,” Amer. J. Roentgenol., vol. 214, no. 6, pp. 1–7, 2020. [DOI] [PubMed] [Google Scholar]
  • [15].Liu T., Huang P., Liu H., Huang L., Lei M., Xu W., Hu X., Chen J., and Liu B., “Spectrum of chest CT findings in a familial cluster of COVID-19 infection,” Radiol., Cardiothoracic Imag., vol. 2, no. 1, Feb. 2020, Art. no. e200025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [16].Chen R., Chen J., and Meng Q.-T., “Chest computed tomography images of early coronavirus disease (COVID-19),” Can. J. Anesthesia/J. Canadien d’Anesthésie, vol. 67, no. 6, pp. 754–755, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [17].Jin C., Chen W., Cao Y., Xu Z., Zhang X., Deng L., Zheng C., Zhou J., Shi H., and Feng J., “Development and evaluation of an AI system for COVID-19 diagnosis,” MedRxiv, Jun. 2020, doi: 10.1101/2020.03.20.20039834. [DOI] [PMC free article] [PubMed]
  • [18].Li X., Zeng X., Liu B., and Yu Y., “COVID-19 infection presenting with CT halo sign,” Radiol., Cardiothoracic Imag., vol. 2, no. 1, Jan. 2020, Art. no. e200026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Xie X., Zhong Z., Zhao W., Zheng C., Wang F., and Liu J., “Chest CT for typical 2019-nCoV pneumonia: Relationship to negative RT-PCR testing,” Radiology, Aug. 2020, Art. no. 200343, doi: 10.1148/radiol.2020200343. [DOI] [PMC free article] [PubMed]
  • [20].Pan F., Ye T., Sun P., Gui S., Liang B., Li L., Zheng D., Wang J., Hesketh R. L., Yang L., and Zheng C., “Time course of lung changes on chest CT during recovery from 2019 novel coronavirus (COVID-19) pneumonia,” Radiology, Jun. 2020, Art. no. 200370, doi: 10.1148/radiol.2020200370. [DOI] [PMC free article] [PubMed]
  • [21].Fang Y., Zhang H., Xie J., Lin M., Ying L., Pang P., and Ji W., “Sensitivity of chest CT for COVID-19: Comparison to RT-PCR,” Radiology, vol. 296, no. 2, pp. 1–3, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Chen J., Wu L., Zhang J., Zhang L., Gong D., Zhao Y., Hu S., Wang Y., Hu X., Zheng B., and Zhang K., “Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: A prospective study,” MedRxiv, Mar. 2020, doi: 10.1101/2020.02.25.20021568. [DOI] [PMC free article] [PubMed]
  • [23].Zhou Z., Siddiquee M. M. R., Tajbakhsh N., and Liang J., “UNet++: A nested u-net architecture for medical image segmentation,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, 2018, pp. 3–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Li L., Qin L., Xu Z., Yin Y., Wang X., Kong B., Bai J., Lu Y., Fang Z., Song Q., and Cao K., “Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT,” Radiology, Mar. 2020, Art. no. 200905, doi: 10.1148/radiol.2020200905. [DOI] [PMC free article] [PubMed]
  • [25].Lin T.-Y., Dollar P., Girshick R., He K., Hariharan B., and Belongie S., “Feature pyramid networks for object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 2117–2125. [Google Scholar]
  • [26].Zheng C., Deng X., Fu Q., Zhou Q., Feng J., Ma H., Liu W., and Wang X., “Deep learning-based detection for COVID-19 from chest CT using weak label,” MedRxiv, pp. 1–13, Mar. 2020, doi: 10.1101/2020.03.12.20027185. [DOI]
  • [27].Mei X. et al., “Artificial intelligence–enabled rapid diagnosis of patients with COVID-19,” Nature Med., vol. 26, pp. 1224–1228, May 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Wang S., Kang B., Ma J., Zeng X., Xiao M., Guo J., Cai M., Yang J., Li Y., Meng X., and Xu B., “A deep learning algorithm using CT images to screen for corona virus disease (COVID-19),” MedRxiv, Feb. 2020, doi: 10.1101/2020.02.14.20023028. [DOI] [PMC free article] [PubMed]
  • [29].Szegedy C., Vanhoucke V., Ioffe S., Shlens J., and Wojna Z., “Rethinking the inception architecture for computer vision,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 2818–2826. [Google Scholar]
  • [30].Xu X., Jiang X., Ma C., Du P., Li X., Lv S., Yu L., Chen Y., Su J., Lang G., Li Y., Zhao H., Xu K., Ruan L., and Wu W., “Deep learning system to screen coronavirus disease 2019 pneumonia,” 2020, arXiv:2002.09334. [Online]. Available: http://arxiv.org/abs/2002.09334 [DOI] [PMC free article] [PubMed]
  • [31].Chen P.-H. and Bak P. R., “Medical imaging 2019: Imaging informatics for healthcare, research, and applications,” Proc. SPIE, vol. 10954, Jun. 2019, Art. no. 1095401. [Google Scholar]
  • [32].Gibson E., Giganti F., Hu Y., Bonmati E., Bandula S., Gurusamy K., Davidson B., Pereira S. P., Clarkson M. J., and Barratt D. C., “Automatic multi-organ segmentation on abdominal CT with dense V-networks,” IEEE Trans. Med. Imag., vol. 37, no. 8, pp. 1822–1834, Aug. 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [33].Song Y., Zheng S., Li L., Zhang X., Zhang X., Huang Z., Chen J., Zhao H., Jie Y., Wang R., and Chong Y., “Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images,” MedRxiv, Feb. 2020, doi: 10.1101/2020.02.23.20026930. [DOI] [PMC free article] [PubMed]
  • [34].Yamazaki M., Kasagi A., Tabuchi A., Honda T., Miwa M., Fukumoto N., Tabaru T., Ike A., and Nakashima K., “Yet another accelerated SGD: ResNet-50 training on ImageNet in 74.7 seconds,” 2019, arXiv:1903.12650. [Online]. Available: http://arxiv.org/abs/1903.12650
  • [35].Gozes O., Frid-Adar M., Greenspan H., Browning P. D., Zhang H., Ji W., Bernheim A., and Siegel E., “Rapid AI development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection & patient monitoring using deep learning CT image analysis,” 2020, arXiv:2003.05037. [Online]. Available: http://arxiv.org/abs/2003.05037
  • [36].Ronneberger O., Fischer P., and Brox T., “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. Springer, 2015, pp. 234–241. [Google Scholar]
  • [37].Shan F., Gao Y., Wang J., Shi W., Shi N., Han M., Xue Z., Shen D., and Shi Y., “Lung infection quantification of COVID-19 in CT images with deep learning,” 2020, arXiv:2003.04655. [Online]. Available: http://arxiv.org/abs/2003.04655
  • [38].Barstugan M., Ozkaya U., and Ozturk S., “Coronavirus (COVID-19) classification using CT images by machine learning methods,” 2020, arXiv:2003.09424. [Online]. Available: http://arxiv.org/abs/2003.09424
  • [39].Speidel M. A., Wilfley B. P., Star-Lack J. M., Heanue J. A., and Lysel M. S. V., “Scanning-beam digital X-ray (SBDX) technology for interventional and diagnostic cardiac angiography,” Med. Phys., vol. 33, no. 8, pp. 2714–2727, Jul. 2006, doi: 10.1118/1.2208736. [DOI] [PubMed] [Google Scholar]
  • [40].Jin S. et al., “AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system in four weeks,” MedRxiv, Mar. 2020, doi: 10.1101/2020.03.19.20039354. [DOI] [PMC free article] [PubMed]
  • [41].Ng M.-Y., Lee E. Y., Yang J., Yang F., Li X., Wang H., Lui M. M.-S., Lo C. S.-Y., Leung B., Khong P.-L., Hui C. K.-M., Yuen K.-Y., and Kuo M. D., “Imaging profile of the COVID-19 infection: Radiologic findings and literature review,” Radiol., Cardiothoracic Imag., vol. 2, no. 1, Feb. 2020, Art. no. e200034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [42].Wang L. and Wong A., “COVID-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images,” 2020, arXiv:2003.09871. [Online]. Available: http://arxiv.org/abs/2003.09871 [DOI] [PMC free article] [PubMed]
  • [43].Kroft L. J., van der Velden L., Girón I. H., Roelofs J. J., de Roos A., and Geleijns J., “Added value of ultra–low-dose computed tomography, dose equivalent to chest X-ray radiography, for diagnosing chest pathology,” J. Thoracic Imag., vol. 34, no. 3, p. 179, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44].Qu J., Yang W., Yang Y., Qin L., and Yan F., “Infection control for CT equipment and radiographers’ personal protection during the coronavirus disease (COVID-19) outbreak in China,” Amer. J. Roentgenol., vol. 215, no. 4, Oct. 2020. [DOI] [PubMed] [Google Scholar]
  • [45].Pisani P., Renna M. D., Conversano F., Casciaro E., Muratore M., Quarta E., Di Paola M., and Casciaro S., “Screening and early diagnosis of osteoporosis through X-ray and ultrasound based techniques,” World J. Radiol., vol. 5, no. 11, p. 398, 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [46].Al-antari M. A., Al-masni M. A., Choi M.-T., Han S.-M., and Kim T.-S., “A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification,” Int. J. Med. Informat., vol. 117, pp. 44–54, Sep. 2018. [DOI] [PubMed] [Google Scholar]
  • [47].Kanwal N., Girdhar A., and Gupta S., “Region based adaptive contrast enhancement of medical X-ray images,” in Proc. 5th Int. Conf. Bioinf. Biomed. Eng., May 2011, pp. 1–5. [Google Scholar]
  • [48].Eberhard J. W., Koegl R., and Keaveney J. P., “Adaptive enhancement of X-ray images,” U.S. Patent 4 942 596, Jul. 17, 1990.
  • [49].Pietka E., “Lung segmentation in digital radiographs,” J. Digit. Imag., vol. 7, no. 2, pp. 79–84, May 1994. [DOI] [PubMed] [Google Scholar]
  • [50].Candemir S., Jaeger S., Palaniappan K., Musco J. P., Singh R. K., Xue Z., Karargyris A., Antani S., Thoma G., and McDonald C. J., “Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration,” IEEE Trans. Med. Imag., vol. 33, no. 2, pp. 577–590, Feb. 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Dai B. W., Dong N., Wang Z., Liang X., Zhang H., and Xing E. P., “SCAN: Structure correcting adversarial network for organ segmentation in chest X-rays,” in Proc. 4th Int. Workshop, Deep Learn. Med. Image Anal. (DLMIA), 8th Int. Workshop, Multimodal Learn. Clin. Decis. Support (ML-CDS), Held Conjunct. MICCAI, Granada, Spain, vol. 11045. Springer, Sep. 2018, p. 263. [Google Scholar]
  • [52].Gaál G., Maga B., and Lukács A., “Attention U-Net based adversarial architectures for chest X-ray lung segmentation,” 2020, arXiv:2003.10304. [Online]. Available: http://arxiv.org/abs/2003.10304
  • [53].Weinstock M. B., Echenique A., DABR J. W. R., Leib A., and Illuzzi F. A., “Chest X-ray findings in 636 ambulatory patients with COVID-19 presenting to an urgent care center: A normal chest X-ray is no guarantee,” J. Urgent Care Med., vol. 14, no. 7, pp. 8–13, 2020. [Google Scholar]
  • [54].American College of Radiology. (Mar. 22, 2020). ACR Recommendations for the use of Chest Radiography and Computed Tomography (CT) for Suspected COVID-19 Infection. [Online]. Available: https://Advocacy-andEconomics/ACR-Position-Statements/Recommendations-for-Chest-Radiography-and-CTfor-Suspected-COVID19-Infection
  • [55].Soldati G., Smargiassi A., Inchingolo R., Buonsenso D., Perrone T., Briganti D. F., Perlini S., Torri E., Mariani A., Mossolani E. E., and Tursi F., “Is there a role for lung ultrasound during the COVID-19 pandemic?” J. Ultrasound Med., 2020. [DOI] [PMC free article] [PubMed]
  • [56].Lahiri B. B., Bagavathiappan S., Jayakumar T., and Philip J., “Medical applications of infrared thermography: A review,” Infr. Phys. Technol., vol. 55, no. 4, pp. 221–235, Jul. 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [57].Ghoshal B. and Tucker A., “Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection,” 2020, arXiv:2003.10769. [Online]. Available: http://arxiv.org/abs/2003.10769
  • [58].Farooq M. and Hafeez A., “COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs,” 2020, arXiv:2003.14395. [Online]. Available: http://arxiv.org/abs/2003.14395
  • [59].El-Din Hemdan E., Shouman M. A., and Karar M. E., “COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images,” 2020, arXiv:2003.11055. [Online]. Available: http://arxiv.org/abs/2003.11055
  • [60].Yu X., Zeng N., Liu S., and Zhang Y.-D., “Utilization of DenseNet201 for diagnosis of breast abnormality,” Mach. Vis. Appl., vol. 30, nos. 7–8, pp. 1135–1144, Oct. 2019. [Google Scholar]
  • [61].Sandler M., Howard A., Zhu M., Zhmoginov A., and Chen L.-C., “MobileNetV2: Inverted residuals and linear bottlenecks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 4510–4520. [Google Scholar]
  • [62].Cohen J. P., Morrison P., and Dao L., “COVID-19 image data collection,” 2020, arXiv:2003.11597. [Online]. Available: http://arxiv.org/abs/2003.11597
  • [63].Abbas A., Abdelsamea M. M., and Gaber M. M., “Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network,” 2020, arXiv:2003.13815. [Online]. Available: http://arxiv.org/abs/2003.13815 [DOI] [PMC free article] [PubMed]
  • [64].Depeweg S., Hernández-Lobato J. M., Doshi-Velez F., and Udluft S., “Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning,” 2017, arXiv:1710.07283. [Online]. Available: http://arxiv.org/abs/1710.07283
  • [65].Houlsby N., “Efficient Bayesian active learning and matrix modelling,” Ph.D. dissertation, Dept. Eng., Univ. Cambridge, Cambridge, U.K., 2014. [Google Scholar]
  • [66].Selvaraju R. R., Cogswell M., Das A., Vedantam R., Parikh D., and Batra D., “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Oct. 2017, pp. 618–626. [Google Scholar]
  • [67].Afshar P., Heidarian S., Naderkhani F., Oikonomou A., Plataniotis K. N., and Mohammadi A., “COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images,” 2020, arXiv:2004.02696. [Online]. Available: http://arxiv.org/abs/2004.02696 [DOI] [PMC free article] [PubMed]
  • [68].Li X., Li C., and Zhu D., “COVID-MobileXpert: On-device COVID-19 patient triage and follow-up using chest X-rays,” 2020, arXiv:2004.03042. [Online]. Available: http://arxiv.org/abs/2004.03042
  • [69].Luz E., Silva P. L., Silva R., Silva L., Moreira G., and Menotti D., “Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images,” 2020, arXiv:2004.05717. [Online]. Available: http://arxiv.org/abs/2004.05717
  • [70].Horry M. J., Chakraborty S., Paul M., Ulhaq A., Pradhan B., Saha M., and Shukla N., “X-ray image based COVID-19 detection using pre-trained deep learning models,” engrXiv, 2020, doi: 10.31224/osf.io/wx89s. [DOI]
  • [71].Horry M. J., Chakraborty S., Paul M., Ulhaq A., Pradhan B., Saha M., and Shukla N., “COVID-19 detection through transfer learning using multimodal imaging data,” IEEE Access, vol. 8, pp. 149808–149824, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Wu Y.-H., Gao S.-H., Mei J., Xu J., Fan D.-P., Zhao C.-W., and Cheng M.-M., “JCS: An explainable COVID-19 diagnosis system by joint classification and segmentation,” 2020, arXiv:2004.07054. [Online]. Available: http://arxiv.org/abs/2004.07054 [DOI] [PubMed]
  • [73].Reza A. M., “Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement,” J. VLSI Signal Process.-Syst. Signal, Image, Video Technol., vol. 38, no. 1, pp. 35–44, Aug. 2004. [Google Scholar]
  • [74].Narin A., Kaya C., and Pamuk Z., “Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks,” 2020, arXiv:2003.10849. [Online]. Available: http://arxiv.org/abs/2003.10849 [DOI] [PMC free article] [PubMed]
  • [75].El Asnaoui K., Chawki Y., and Idri A., “Automated methods for detection and classification pneumonia based on X-ray images using deep learning,” 2020, arXiv:2003.14363. [Online]. Available: http://arxiv.org/abs/2003.14363
  • [76].Sethy P. K. and Behera S. K., “Detection of coronavirus disease (COVID-19) based on deep features,” Preprints.org, Tech. Rep., 2020, doi: 10.20944/preprints202003.0300.v1. [DOI]
  • [77].Apostolopoulos I. D. and Mpesiana T. A., “Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks,” Phys. Eng. Sci. Med., vol. 43, no. 2, pp. 635–640, Jun. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [78].Buonsenso D., Pata D., and Chiaretti A., “COVID-19 outbreak: Less stethoscope, more ultrasound,” The Lancet Respiratory Med., vol. 8, no. 5, p. e27, 2020, doi: 10.1016/S2213-2600(20)30120-X. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [79].Sippel S., Muruganandan K., Levine A., and Shah S., “Use of ultrasound in the developing world,” Int. J. Emergency Med., vol. 4, no. 1, pp. 1–11, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [80].Lichtenstein D. A. and Mezière G. A., “Relevance of lung ultrasound in the diagnosis of acute respiratory failure*: The BLUE protocol,” Chest, vol. 134, no. 1, pp. 117–125, Jul. 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [81].Balk D. S., Lee C., Schafer J., Welwarth J., Hardin J., Novack V., Yarza S., and Hoffmann B., “Lung ultrasound compared to chest X-ray for diagnosis of pediatric pneumonia: A meta-analysis,” Pediatric Pulmonol., vol. 53, no. 8, pp. 1130–1139, Aug. 2018. [DOI] [PubMed] [Google Scholar]
  • [82].Amatya Y., Rupp J., Russell F. M., Saunders J., Bales B., and House D. R., “Diagnostic use of lung ultrasound compared to chest radiograph for suspected pneumonia in a resource-limited setting,” Int. J. Emergency Med., vol. 11, no. 1, p. 8, Dec. 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [83].Smith M., Hayward S., Innes S., and Miller A., “Point-of-care lung ultrasound in patients with COVID-19–a narrative review,” Anaesthesia, vol. 75, pp. 1096–1104, Apr. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [84].Mohamed M. F. H., Al-Shokri S., Yousaf Z., Danjuma M., Parambil J., Mohamed S., Mubasher M., Dauleh M. M., Hasanain B., AlKahlout M. A., and Abubeker I. Y., “Frequency of abnormalities detected by point-of-Care lung ultrasound in symptomatic COVID-19 patients: Systematic review and meta-analysis,” Amer. J. Tropical Med. Hygiene, vol. 103, no. 2, pp. 815–821, Aug. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [85].Pare J., Camelo I., Mayo K., Leo M., Dugas J., Nelson K., Baker W., Shareef F., Mitchell P., and Schechter-Perkins E., “Point-of-care lung ultrasound is more sensitive than chest radiograph for evaluation of COVID-19,” Western J. Emergency Med., vol. 21, no. 4, p. 771, Jun. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [86].Peng Q.-Y., Wang X.-T., and Zhang L.-N., “Findings of lung ultrasonography of novel corona virus pneumonia during the 2019–2020 epidemic,” Intensive Care Med., vol. 46, no. 5, pp. 849–850, May 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [87].Vetrugno L., Bove T., Orso D., Barbariol F., Bassi F., Boero E., Ferrari G., and Kong R., “Our Italian experience using lung ultrasound for identification, grading and serial follow-up of severity of lung involvement for management of patients with COVID-19,” Echocardiography, vol. 37, no. 4, pp. 625–627, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [88].Yang Y., Huang Y., Gao F., Yuan L., and Wang Z., “Lung ultrasonography versus chest CT in COVID-19 pneumonia: A two-centered retrospective comparison study from China,” Intensive Care Med., vol. 46, no. 9, pp. 1761–1763, Sep. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [89].Soldati G., Smargiassi A., Inchingolo R., Buonsenso D., Perrone T., Briganti D. F., Perlini S., Torri E., Mariani A., Mossolani E. E., and Tursi F., “Is there a role for lung ultrasound during the COVID-19 pandemic?” J. Ultrasound Med., vol. 39, no. 7, pp. 1247–1467, Jul. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [90].Buonsenso D., Pata D., and Chiaretti A., “COVID-19 outbreak: Less stethoscope, more ultrasound,” Lancet Respiratory Med., vol. 8, no. 5, p. e27, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [91].Vetrugno L., Orso D., Deana C., Bassi F., and Bove T., “Covid-19 diagnostic imaging: Caution need before the end of the game,” Academic Radiol., vol. 27, no. 9, p. 1331, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [92].Haseli S. and Iranpour P., “Lung ultrasound in COVID-19 pneumonia: Prospects and limitations,” Academic Radiol., vol. 27, no. 7, pp. 1044–1045, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [93].Liu S., Wang Y., Yang X., Lei B., Liu L., Li S. X., Ni D., and Wang T., “Deep learning in medical ultrasound analysis: A review,” Engineering, vol. 5, no. 2, pp. 261–275, Apr. 2019. [Google Scholar]
  • [94].Born J., Brändle G., Cossio M., Disdier M., Goulet J., Roulin J., and Wiedemann N., “POCOVID-Net: Automatic detection of COVID-19 from a new lung ultrasound imaging dataset (POCUS),” 2020, arXiv:2004.12084. [Online]. Available: http://arxiv.org/abs/2004.12084
  • [95].Born J., Wiedemann N., Brändle G., Buhre C., Rieck B., and Borgwardt K., “Accelerating COVID-19 differential diagnosis with explainable ultrasound image analysis,” 2020, arXiv:2009.06116. [Online]. Available: https://arxiv.org/abs/2009.06116
  • [96].Leibig C., Allken V., Ayhan M. S., Berens P., and Wahl S., “Leveraging uncertainty information from deep neural networks for disease detection,” Sci. Rep., vol. 7, no. 1, pp. 1–14, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [97].Roy S. et al., “Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound,” IEEE Trans. Med. Imag., vol. 39, no. 8, pp. 2676–2687, Aug. 2020. [DOI] [PubMed] [Google Scholar]
  • [98].Karakuş O., Anantrasirichai N., Aguersif A., Silva S., Basarab A., and Achim A., “Detection of line artefacts in lung ultrasound images of COVID-19 patients via non-convex regularization,” 2020, arXiv:2005.03080. [Online]. Available: http://arxiv.org/abs/2005.03080 [DOI] [PMC free article] [PubMed]
  • [99].Anantrasirichai N., Hayes W., Allinovi M., Bull D., and Achim A., “Line detection as an inverse problem: Application to lung ultrasound imaging,” IEEE Trans. Med. Imag., vol. 36, no. 10, pp. 2045–2056, Oct. 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [100].Wang Z., Wang G., Huang B., Xiong Z., Hong Q., Wu H., Yi P., Jiang K., Wang N., Pei Y., Chen H., Miao Y., Huang Z., and Liang J., “Masked face recognition dataset and application,” 2020, arXiv:2003.09093. [Online]. Available: http://arxiv.org/abs/2003.09093
  • [101].Pearce J. M., “A review of open source ventilators for COVID-19 and future pandemics,” F1000Research, vol. 9, no. 218, p. 218, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [102].Chiu W., Lin P., Chiou H., Lee W., Lee C., Yang Y., Lee H., Hsieh M., Hu C., Ho Y., Deng W., and Hsu C., “Infrared thermography to mass-screen suspected SARS patients with fever,” Asia Pacific J. Public Health, vol. 17, no. 1, pp. 26–28, 2005, doi: 10.1177/101053950501700107. [DOI] [PubMed] [Google Scholar]
  • [103].Hay E. A. and Parthasarathy R., “Performance of convolutional neural networks for identification of bacteria in 3d microscopy datasets,” PLoS Comput. Biol., vol. 14, no. 12, 2018, Art. no. e1006628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [104].Rational use of Personal Protective Equipment for Coronavirus Disease (COVID-19) and Considerations During Severe Shortages: Interim Guidance, 6 April 2020, World Health Org., Geneva, Switzerland, 2020. [Google Scholar]
  • [105].Feng Y. (2020). Open Source Face Mask Detection Data + Model + Code + Online Web Experience, All Open Source. Accessed: Mar. 31, 2020. [Online]. Available: https://zhuanlan.zhihu.com/p/107719641?utm_source=com.yinxiang [Google Scholar]
  • [106].(2019). Dlib C++ Library. Accessed: Mar. 31, 2020. [Online]. Available: http://dlib.net/
  • [107].Somboonkaew A., Prempree P., Vuttivong S., Wetcharungsri J., Porntheeraphat S., Chanhorm S., Pongsoon P., Amarit R., Intaravanne Y., Chaitavon K., and Sumriddetchkajorn S., “Mobile-platform for automatic fever screening system based on infrared forehead temperature,” in Proc. Opto-Electron. Commun. Conf. (OECC) Photon. Global Conf. (PGC), Jul./Aug. 2017, pp. 1–4. [Google Scholar]
  • [108].Ghassemi P., Pfefer T. J., Casamento J. P., Simpson R., and Wang Q., “Best practices for standardized performance testing of infrared thermographs intended for fever screening,” PLoS ONE, vol. 13, no. 9, pp. 1–24, 2018, doi: 10.1371/journal.pone.0203302. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [109].Negishi T., Sun G., Sato S., Liu H., Matsui T., Abe S., Nishimura H., and Kirimoto T., “Infection screening system using thermography and CCD camera with good stability and swiftness for non-contact vital-signs measurement by feature matching and MUSIC algorithm,” in Proc. 41st Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Jul. 2019, pp. 3183–3186. [DOI] [PubMed] [Google Scholar]
  • [110].Al-Naji A., Perera A. G., Mohammed S. L., and Chahl J., “Life signs detector using a drone in disaster zones,” Remote Sens., vol. 11, no. 20, p. 2441, 2019. [Google Scholar]
  • [111].Maclaren G., Fisher D., and Brodie D., “Preparing for the most critically ill patients with COVID-19: The potential role of extracorporeal membrane oxygenation,” JAMA, vol. 323, no. 13, pp. 1245–1246, 2020. [DOI] [PubMed] [Google Scholar]
  • [112].Wang Y., Hu M., Li Q.-L., Zhang X.-P., Zhai G., and Yao N., “Abnormal respiratory patterns classifier may contribute to large-scale screening of people infected with COVID-19 in an accurate and unobtrusive manner,” 2020, arXiv:2002.05534. [Online]. Available: https://arxiv.org/abs/2002.05534
  • [113].Wrapp D., Wang N., Corbett K. S., Goldsmith J. A., Hsieh C.-L., Abiona O., Graham B. S., and McLellan J. S., “Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation,” Science, vol. 367, no. 6483, pp. 1260–1263, 2020. [Online]. Available: https://science.sciencemag.org/content/367/6483/1260 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [114].Bruno A. E., Charbonneau P., Newman J., Snell E. H., So D. R., Vanhoucke V., Watkins C. J., Williams S., and Wilson J., “Classification of crystallization outcomes using deep convolutional neural networks,” PLoS ONE, vol. 13, no. 6, 2018, Art. no. e0198883. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [115].Perkins R., Fang H., Tong W., and Welsh W. J., “Quantitative structure-activity relationship methods: Perspectives on drug discovery and toxicology,” Environ. Toxicol. Chem., Int. J., vol. 22, no. 8, pp. 1666–1679, 2003. [DOI] [PubMed] [Google Scholar]
  • [116].Uesawa Y., “Quantitative structure–activity relationship analysis using deep learning based on a novel molecular image input technique,” Bioorganic Medicinal Chem. Lett., vol. 28, no. 20, pp. 3400–3403, 2018. [DOI] [PubMed] [Google Scholar]
  • [117].Yang X., He X., Zhao J., Zhang Y., Zhang S., and Xie P., “COVID-CT-dataset: A CT scan dataset about COVID-19,” 2020, arXiv:2003.13865. [Online]. Available: http://arxiv.org/abs/2003.13865
  • [118].Chowdhury M. E. H., Rahman T., Khandakar A., Mazhar R., Kadir M. A., Mahbub Z. B., Reajul Islam K., Khan M. S., Iqbal A., Al-Emadi N., Reaz M. B. I., and Islam T. I., “Can AI help in screening viral and COVID-19 pneumonia?” 2020, arXiv:2003.13145. [Online]. Available: http://arxiv.org/abs/2003.13145
  • [119].Wang X., Peng Y., Lu L., Lu Z., Bagheri M., and Summers R. M., “ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jul. 2017, pp. 2097–2106. [Google Scholar]
  • [120].Kopaczka M., Kolk R., and Merhof D., “A fully annotated thermal face database and its application for thermal facial expression recognition,” in Proc. IEEE Int. Instrum. Meas. Technol. Conf. (I2MTC), May 2018, pp. 1–6. [Google Scholar]
