Sensors (Basel, Switzerland). 2021 Dec 1;21(23):8045. doi: 10.3390/s21238045

Role of Artificial Intelligence in COVID-19 Detection

Anjan Gudigar 1, U Raghavendra 1,*, Sneha Nayak 1, Chui Ping Ooi 2, Wai Yee Chan 3, Mokshagna Rohit Gangavarapu 1, Chinmay Dharmik 1, Jyothi Samanth 4, Nahrizul Adib Kadri 5, Khairunnisa Hasikin 5, Prabal Datta Barua 6,7,8, Subrata Chakraborty 8,9, Edward J Ciaccio 10, U Rajendra Acharya 11,12,13
Editor: Alessandro Bevilacqua
PMCID: PMC8659534  PMID: 34884045

Abstract

The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihoods of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. The proven performance of artificial intelligence (AI) in various fields of science has encouraged researchers to apply it to this problem. AI techniques applied to medical imaging modalities, including X-ray, computed tomography (CT), and ultrasound (US), have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors, the significance of these research efforts, the potential challenges, and future trends related to the implementation of AI systems for disease detection during the COVID-19 pandemic.

Keywords: artificial intelligence, computer-aided diagnostic tool, deep neural networks, hand-crafted feature learning, supervised learning

1. Introduction

COVID-19 was first reported by the Wuhan Municipal Health Commission, China, in December 2019. It is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and is considered one of the deadliest global pandemics in history [1]. The World Health Organization (WHO) declared the COVID-19 outbreak a pandemic in March 2020, and there have been 203,944,144 cases and 4,312,902 deaths globally according to the WHO statistics of 12 August 2021 (available online: https://covid19.who.int/table (accessed on 12 August 2021)). The pandemic has caused worldwide distress by affecting people socially, medically, and economically. In its severe form, this infectious disease often leads to acute respiratory distress syndrome and pneumonia. The outbreak was thought to have been initiated via zoonotic spread from the seafood markets in Wuhan, China. Later, it was believed that transmission between humans was responsible for community spread of the infection throughout the world, and approximately 200 countries have been affected by this pandemic [2,3,4,5]. Although individuals of all ages are at risk of being infected, severe COVID-19 symptoms are more likely in people aged 60 and above and in individuals with comorbidities.

Once the SARS-CoV-2 virus enters the body via respiratory aerosol, it acts on the respiratory system and affects patients with varying degrees of clinical severity. During the initial days of infection, the clinical presentation remains asymptomatic, although an immune response is mounted in the body. Affected persons are infectious during this phase, and the disease can be diagnosed by nasal swab [6,7,8]. Further migration of the virus from nasal epithelial cells into the upper respiratory tract results in symptoms of fever, dry cough, malaise, etc. The majority of infected patients do not progress beyond this phase, as the immune response from the host is sufficient to prevent the disease from spreading to the lower respiratory tract and lungs [9] (refer to Figure 1).

Figure 1. Pictorial representation of normal and COVID-19 affected lungs.

Approximately one-fifth of infected cases develop lower respiratory tract infection, and these patients present with acute respiratory distress syndrome (ARDS). Histologically, this stage reveals lung sequestration along with host cell apoptosis. Persistent inflammation and diffuse alveolar damage are common histopathologic patterns observed among the infected patients exhibiting ARDS [5,10].

COVID-19 affects people in different ways. Asymptomatic patients will have positive nasal swab results and normal chest X-ray images. Patients with mild illness exhibit different commonly known symptoms such as fever, sore throat, dry cough, malaise and body aches or nausea, vomiting, abdominal pain, and loose stools. Patients with moderate illness show symptoms of pneumonia with no significant hypoxemia (persistent fever and cough). This group of infected patients also shows abnormal lesions on high-resolution chest computed tomography (CT). Severe illness is defined as patients who present with pneumonia and significant systemic hypoxemia (SpO2 < 92%). In cases of critical infection, the patients show life-threatening complications such as ARDS, along with shock, coagulation defects, encephalopathy, heart failure, and acute kidney injury [11,12,13,14,15,16,17].

Disease confirmation and the severity of the disease can be determined by nasal/throat swab, several serological tests, and imaging modalities. Reverse transcription polymerase chain reaction (RT-PCR) remains the best molecular method in the diagnosis of the disease [18]. However, as with other diagnostic methods, RT-PCR is not error-free. It is time consuming, expensive, and requires manual effort. In order to judge the diagnostic test results accurately, expert clinicians are required to read them. The correct interpretation of the test results requires a high level of clinical expertise, as the data may vary significantly from person to person [19]. Moreover, the availability of test kits is limited, especially in rural geographical regions, and, if available, the quality of the kits may not be guaranteed. In addition, persons can experience discomfort (slight pain and irritation) during the nasal swab test. Using imaging modalities such as X-rays or CT scans, it is possible to obtain a quick result in critical situations, even before receiving RT-PCR test results.

In order to overcome these existing shortcomings, many computer-aided diagnostic tools (CADTs) using artificial intelligence (AI) and machine learning techniques have been utilized to support clinical findings from the imaging modalities [20]. These prediction techniques can precisely detect the disease, thereby aiding in the prevention and detection of epidemiologic risk [21]. These automated tools using cost-effective imaging modalities help address COVID-19 by reducing false negative reports, and can be used in the case of scarcity or non-availability of RT-PCR test kits in rural areas. Researchers have reported exhaustive studies using imaging modalities for the detection of COVID-19 [19,21,22,23,24,25,26,27,28,29]. Although these review papers have shown the significance of deep learning and machine learning algorithms for automated detection, this paper explores the following key points in association with detection:

  1. The state-of-the-art AI techniques (deep neural network (DNN) and hand-crafted feature learning (HCFL) based models) used to detect COVID-19.

  2. Analysis of the results of AI techniques with various imaging modalities.

  3. The key challenges and future direction in the detection of COVID-19.

The structure of this paper is as follows. Section 2 describes the search criteria applied to accumulate and filter research articles from various sources. In Section 3, a consolidated review of extensively employed AI techniques using different medical imagery for COVID-19 detection is presented. The results using various datasets and methods are analyzed in Section 4. The key challenges, future scope, and recommendations are discussed in Section 5. Finally, the conclusions of our systematic review are presented in Section 6.

2. Search Criteria and Selection Process

The methodology from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [30] was adopted in this study. The systematic search process was carried out using search queries on Scopus, Google Scholar, PubMed, and Springer. The following search items were used: “COVID-19”, “Automated detection of COVID-19”, “Computer-aided diagnosis for COVID-19”, “Deep learning techniques for COVID-19” (using “Chest X-ray” OR “Chest computed tomography” OR “Lung Ultrasound”), and “Database for COVID-19”. In order to widen the search process, we also included AI algorithms with specific techniques, their subfields, and their applications (such as “CAD tools”, “Convolutional neural networks”, “Machine Learning”, “Classification”, and “Identification”). Related articles written in English from 2020 to 2021 were downloaded, not limited to the country or region of the author. The search process took 22 days to complete. A total of 1277 articles were found. The relevance of the downloaded articles to the main aims of this study was verified using a search string strategy. Articles related to detection of COVID-19 using clinical data, statistical analysis, and case studies with no data mining and deep learning techniques were excluded from the selection.

The relevance of a paper was assessed based on its title, abstract, and materials and methods. An article was included based on a voting scheme among the authors of the current study. The authors are well-versed in the field of deep learning and machine learning techniques using various imaging modalities. Low-quality and conference papers were removed from the database. A final total of 202 papers (184 articles with 18 review papers) were compiled and analyzed. The selection process is shown in Figure 2. To the best of our knowledge, we have considered the data mining and deep learning research publications reported to date on the identification of COVID-19 using various image modalities.

Figure 2. Overview of the selection process for relevant articles.

3. AI Techniques for COVID-19 Detection

Based on the state-of-the-art AI techniques to automatically detect COVID-19 using medical imagery, we categorized the methodologies as: (i) the DNN-based approach, (ii) the HCFL-based approach, and (iii) the hybrid approach. The input data consisted mainly of X-ray, CT, and US medical images of patients. In the DNN-based approach, convolutional neural networks (CNNs) are employed to automatically characterize the COVID-19 imagery. The DNN approach groups the feature extraction and classification components into an integrated neural network. In the HCFL-based approach, knowledge of feature extraction techniques is required, followed by feature selection/ranking and classification stages. The hybrid approach fuses the methodologies of the DNN- and HCFL-based approaches to obtain promising results. Figure 3 illustrates the key components used in the COVID-19 detection system.

Figure 3. The complete framework to detect COVID-19 using various approaches.
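To make the distinction between these approaches concrete, the following is a minimal, self-contained Python sketch contrasting an HCFL-style pipeline (explicit features, feature selection, classical classifier) with an end-to-end DNN. All array shapes, class counts, layer choices, and the toy histogram feature extractor are illustrative assumptions and are not taken from any specific reviewed paper.

```python
# Minimal sketch contrasting the two families of approaches in Figure 3.
import numpy as np

def toy_handcrafted_features(image):
    # Stand-in for GLCM/LBP/HOG descriptors: a 32-bin intensity histogram.
    hist, _ = np.histogram(image, bins=32, range=(0.0, 1.0), density=True)
    return hist

def hcfl_pipeline(images, labels):
    """HCFL: explicit features -> feature selection -> classical classifier."""
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    feats = np.stack([toy_handcrafted_features(im) for im in images])
    model = make_pipeline(SelectKBest(mutual_info_classif, k=16), SVC(kernel="rbf"))
    return model.fit(feats, labels)

def dnn_pipeline(images, labels):
    """DNN: feature learning and classification inside one integrated network."""
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),  # e.g., COVID-19 vs. non-COVID
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(images[..., np.newaxis], labels, epochs=3, verbose=0)
    return model

# Usage with synthetic grayscale "chest images" in [0, 1]:
images = np.random.rand(40, 224, 224).astype("float32")
labels = np.random.randint(0, 2, size=40)
hcfl_model = hcfl_pipeline(images, labels)
dnn_model = dnn_pipeline(images, labels)
```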

3.1. COVID-19 Dataset: Medical Image

RT-PCR is the gold standard to diagnose COVID-19 using a nasal/throat swab. However, the test results may not always be available immediately, and the test may yield a false negative result due to the quality of the sample [31]. In such situations, various chest imaging modalities such as X-ray, CT, and ultrasound (US) help to confirm suspected COVID-19 cases [32]. The combination of AI techniques with various imaging modalities can help to increase the efficiency of COVID-19 detection worldwide [32].

The development of an automated COVID-19 detection system based on chest X-ray imagery requires labeled images of normal and COVID-19 cases so as to train the system to differentiate healthy persons from COVID-19 patients. To test the system with an independent test dataset and to enhance its efficacy, it is necessary for these datasets to be made available publicly. With large datasets, it is possible for researchers to cross-verify existing AI models before installation in hospitals or testing centers. Hence, medical images such as chest X-ray, CT, and lung US images are essential for the development of an automated COVID-19 detection system. Many researchers have, on their own initiative or in collaboration with hospitals, aggregated COVID-19 datasets with various imaging modalities and released them publicly to assist research communities. Figure 4 shows examples of several chest images from publicly available datasets.

Figure 4. Sample images using various medical image modalities.

The majority of the state-of-the-art AI techniques depend on publicly available datasets (refer to Table 1). The first dataset uses X-ray as the imaging modality and is very popular because it aggregates a large number of images from nine different sources into a single collection (refer to the given source in Table 1). It is noted that there are only a few public sources available for US images, compared to X-ray and CT images. In addition to the public datasets mentioned in Table 1, there are other, less widely utilized sources: for X-ray images, https://public.roboflow.ai/classification/covid-19-and-pneumoniascans (accessed on 19 July 2021); for CT images [33], https://www.kaggle.com/andrewmvd/covid19-ct-scans (accessed on 19 July 2021); and for US images [34], https://github.com/jannisborn/covid19_ultrasound (accessed on 19 July 2021).

Table 1.

Summary of frequently used publicly available datasets for the detection of COVID-19.

S.No. Paper/Source Imaging Modality Total Number of Images
1 Available in: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 21 August 2021) X-ray Normal: 10,192; COVID: 3616; Viral pneumonia: 1345; Lung opacity: 6012
2 Available in: https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia (accessed on 21 August 2021) X-ray Normal: 1583; COVID: 576; Pneumonia: 4273
3 [35]/Available in: https://github.com/UCSD-AI4H/COVID-CT (accessed on 21 August 2021) CT COVID: 349; Non-COVID: 397
4 [36]/Available in: https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset (accessed on 21 August 2021) CT COVID: 1252; Non-COVID: 1230
5 [37]/Available in: https://mosmed.ai/datasets/covid19_1110 (accessed on 21 August 2021) CT 1110 patients with severity grading (CT-0 to CT-4)
6 [38]/Available in: https://zenodo.org/record/3757476#.YPUTnugzbIU (accessed on 21 August 2021) CT 20 labeled COVID-19 CT scans (1800+ annotated slices)
7 [39]/Available in: https://github.com/BorgwardtLab/covid19_ultrasound (accessed on 21 August 2021) US Videos and images; Healthy: 90; COVID-19: 92; Bacterial pneumonia: 73; Viral pneumonia: 6

The X-ray images, collected from various researchers in different parts of the world, are available in portable network graphics format with a size of 299 × 299 pixels (https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 21 August 2021)). In [35], COVID-19 CT images of various sizes were collected from medRxiv (https://www.medrxiv.org/, latest accessed on 29 November 2021) and bioRxiv (https://www.biorxiv.org/, latest accessed on 29 November 2021) preprints posted from 19 January to 25 March 2020. The minimum, average, and maximum widths are 124, 383, and 1485 pixels, respectively, and the minimum, average, and maximum heights are 153, 491, and 1853 pixels, respectively [35]. In [36], CT scans were collected from real patients in Sao Paulo Hospital, Brazil. In [37], the CT images were collected from municipal hospitals in Moscow, Russia, and are segregated based on the severity of COVID-19-related findings (CT0–CT4). The number of cases for each category is: CT0: 254; CT1: 684; CT2: 125; CT3: 45; and CT4: 2 [37]. The largest publicly available lung US dataset was released in [39]: in total, 261 recordings (202 videos and 59 images) were gathered from 216 patients using either convex or linear probes. In addition, the British Society of Thoracic Imaging has also released a COVID-19 image database for teaching purposes (available in: https://www.bsti.org.uk/training-and-education/covid-19-bsti-imaging-database/ (accessed 19 July 2021)). Authors can use these underutilized datasets to enhance the heterogeneity of their own datasets. In addition, using the freely available datasets, researchers can initiate community-oriented research efforts to develop various models using AI techniques. Hence, it is also possible for researchers to generalize their systems using the various medical images.
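As a practical starting point, the following hedged sketch loads a directory-organized chest X-ray dataset such as the Kaggle COVID-19 Radiography Database listed in Table 1 with TensorFlow/Keras. The class-per-subfolder layout and 299 × 299 image size follow the dataset description above; the local path, split ratio, and batch size are assumptions.

```python
import tensorflow as tf

DATA_DIR = "covid19-radiography-database"  # assumed local extraction path (one sub-folder per class)

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,      # hold out 20% of the images for validation
    subset="training",
    seed=42,                   # same seed in both calls keeps the split consistent
    image_size=(299, 299),     # the PNG images are distributed at 299 x 299 pixels
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(299, 299),
    batch_size=32,
)
print(train_ds.class_names)    # class labels inferred from the sub-folder names
```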

3.2. Methodology

This section discusses the key processing stages covered by the different authors in the development of state-of-the-art COVID-19 detection systems.

3.2.1. Preprocessing/Segmentation

Preprocessing is the initial stage, used to enhance image quality by improving contrast and standardizing image pixel intensity levels. This stage plays a major role in obtaining accurate results. Usually, image quality is greatly improved by employing the contrast limited adaptive histogram equalization (CLAHE) technique [40]. Denoising techniques such as the Kirsch filter [41], the Wiener filter [42], and pixel intensity normalization are also implemented. Other preprocessing techniques such as edge detection using the Prewitt filter (PF) [42], histogram equalization (HE), and gamma correction (GC) [43] may be useful. The aforementioned techniques are used in several works and can significantly increase the accuracy of the results.
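A hedged example of the CLAHE enhancement step mentioned above, using OpenCV; the clip limit, tile size, target resolution, and file name are illustrative choices rather than values reported in the cited studies.

```python
import cv2

def preprocess_xray(path, size=(299, 299)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)             # chest X-rays are single-channel
    img = cv2.resize(img, size)                              # standardize the spatial resolution
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                                   # contrast limited adaptive histogram equalization
    return img.astype("float32") / 255.0                     # normalize pixel intensities to [0, 1]

enhanced = preprocess_xray("example_xray.png")               # hypothetical file name
```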

For the CNN-based method, a common set of preprocessing techniques is employed, including resizing and shuffling. Furthermore, images are converted to RGB before being input to a CNN. In order to visualize the image more distinctly, the image boundaries are smoothed by normalization using morphological filters and by applying different filtering and enhancement techniques. In addition, lung regions are extracted using segmentation techniques such as region growing [44], watershed [45], UNet [46], LinkNet [47] (a variant of UNet), and the variational data imputation method (VDI) [48].

In the process of training a deep learning model, there may sometimes be a shortage of data. In such situations, data augmentation techniques may be used to create additional data by slightly altering the existing data, thereby creating different versions of the original data. This acts as a regularizer and reduces overfitting while training the model. Data augmentation techniques such as rotation, cropping, flipping, and translation [49], Gaussian blur, and contrast adjustment have been used [50]. To address class imbalance, SMOTE [51] has been employed by several authors. Synthetic images can also be created using a generative adversarial network (GAN) [52], conditional GAN [53], and auxiliary classifier generative adversarial network (ACGAN) [54], while standard augmentations can be applied with Keras’ ImageDataGenerator (https://keras.io/api/preprocessing/image/ (accessed on 16 September 2021)).
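The augmentation strategy described above can be sketched with Keras’ ImageDataGenerator, which is documented for on-the-fly image transformations; the parameter values and directory layout below are illustrative assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,         # small random rotations
    width_shift_range=0.1,     # horizontal translation
    height_shift_range=0.1,    # vertical translation
    zoom_range=0.1,            # mild random zoom as a stand-in for cropping
    horizontal_flip=True,      # left-right flipping
    rescale=1.0 / 255,         # intensity normalization
)

# Flow augmented batches from an assumed class-per-folder directory.
train_gen = augmenter.flow_from_directory(
    "covid19-radiography-database",
    target_size=(299, 299),
    color_mode="grayscale",
    batch_size=32,
    class_mode="categorical",
)
```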

3.2.2. Feature Extraction

Feature extraction is mainly used to identify nonlinearities in the lung, thereby identifying lung abnormalities, if any. Several feature extraction techniques have been applied to detect COVID-19 more accurately. Handcrafted feature extraction methods such as the discrete wavelet transform (DWT) [55], the gray-level co-occurrence matrix (GLCM), and Haralick texture features [56] are the most commonly used. In addition, features are also extracted with the two-dimensional (2D) curvelet transform (CTf) [57], residual exemplar local binary pattern (ResExLBP) [58], first order statistical features (FOSF) [50], histogram of oriented gradients (HOG) [59], dual-tree complex contourlet transform (DTCT) [60], local directional number pattern (LDN) [61], the Pillow library [62], fractional multichannel exponent moments (FrMEMs) [63], local binary pattern (LBP) [64], and multichannel fractional order Legendre Fourier moments (MFrLFM) [65] to characterize textural information.
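As a hedged illustration, the sketch below builds a single handcrafted feature vector combining GLCM statistics (a subset of the Haralick-style measures), an LBP histogram, and HOG descriptors with scikit-image; the distances, angles, and histogram parameters are illustrative, not the settings used in the cited works.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern, hog

def handcrafted_features(img_uint8):
    """img_uint8: 2-D grayscale image with values in [0, 255]."""
    # GLCM-based texture statistics
    glcm = graycomatrix(img_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

    # Local binary pattern histogram (uniform patterns, P=8, R=1)
    lbp = local_binary_pattern(img_uint8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Histogram of oriented gradients
    hog_feats = hog(img_uint8, orientations=9, pixels_per_cell=(32, 32),
                    cells_per_block=(2, 2), feature_vector=True)

    return np.hstack([glcm_feats, lbp_hist, hog_feats])

# Usage on a synthetic 299 x 299 image:
features = handcrafted_features(np.random.randint(0, 256, (299, 299), dtype=np.uint8))
```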

Similarly, features have also been extracted using a CNN-based approach. In this approach, base architectures such as ResNet101 [66], AlexNet [67], DenseNet-201 [68], VGG16 [69], GoogLeNet [70], MobileNetv2 [71], Inceptionv3 [72], SqueezeNet [73], VGG19 [74], and Xception [75] have been adapted for feature learning and extraction. Transfer learning (TL) has been applied to cope with the limitations that arise from the lack of freely accessible labeled medical images. In addition to TL, methods such as the multilayer perceptron convolutional neural network (MLP-CNN) have been used to handle mixed data types consisting of numerical/categorical and image data [76]. Similarly, a high-resolution network (HRNet) has been used for extracting detailed features [77]. In addition, authors have also designed customized CNN models to improve system performance.
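A minimal sketch of transfer-learning-based feature extraction, assuming a ResNet50 backbone pre-trained on ImageNet is frozen and topped with a small classification head; the specific backbone, input size, and three-class head are illustrative choices, since the cited works use many different architectures.

```python
import tensorflow as tf

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False  # freeze the pre-trained convolutional weights

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = backbone(x, training=False)                              # use the backbone as a feature extractor
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # e.g., normal / COVID-19 / pneumonia (illustrative)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```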

3.2.3. Feature Selection/Optimization

Feature selection is employed to reduce redundant content while preserving significant information. The sequential feature selector (SFS) algorithm [78], chaotic salp swarm algorithm (CSSA) [79], advanced squirrel search optimization algorithm (ASSOA) [80], and harmony search (HS) [81] algorithm are extensively utilized to reduce redundant information in feature representations. Similarly, ReliefF and neighborhood component analysis (NCA) are used together to select optimal features, i.e., RFINCA [82]. In addition, methods such as binary gray wolf optimization (GWO) [83] and hybrid social group optimization (HSGO) [84] have proven their efficacy in providing the best optimized features. Researchers have also applied the fractional-order marine predators algorithm (FO-MPA) [85], minimum redundancy and maximum relevance (mRMR) [86], and manta ray foraging optimization (MRFO) [63] in order to select the most significant features. Feature dimensionality reduction has been undertaken using the t-distributed stochastic neighbor embedding (t-SNE) technique [87] and principal component analysis (PCA) [88]. Apart from these methods, feature selection using mutual information (MI) [89], Relief-F [90], the dragonfly algorithm (DA) [91], and the guided whale optimization algorithm (Guided WOA) [92] has also been employed. In addition, feature selection has been performed using maximum entropy and the ANOVA test [93].
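As a hedged example, the sketch below applies two of the selection/reduction steps listed above (mutual-information ranking and PCA) with scikit-learn; the feature matrix, labels, and retained dimensionality are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA

X = np.random.rand(200, 512)             # e.g., 512 deep or handcrafted features per image
y = np.random.randint(0, 3, size=200)    # three illustrative classes: normal / COVID-19 / pneumonia

# Keep the 64 features with the highest mutual information with the label
X_mi = SelectKBest(mutual_info_classif, k=64).fit_transform(X, y)

# Alternatively, project onto principal components retaining 95% of the variance
X_pca = PCA(n_components=0.95).fit_transform(X)
print(X_mi.shape, X_pca.shape)
```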

Because optimizers are a crucial part of neural network training, the algorithms most commonly used in DNN approaches to update the network weights are stochastic gradient descent, the adaptive learning rate optimization algorithm [94], and root mean square propagation [95]. CNN with GWO and whale optimization with the BAT algorithm have been employed to tune the hyperparameters [96,97]. Furthermore, biogeography-based optimization [98] and the multi-objective differential evolution (MODE) parameter tuning method have been used to optimize the parameters [99].
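A minimal sketch of typical optimizer configurations in Keras, covering stochastic gradient descent, an adaptive learning rate method (Adam), and RMSprop; the learning rates and momentum value are illustrative defaults, not settings reported by the cited studies.

```python
import tensorflow as tf

optimizers = {
    "sgd": tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),  # stochastic gradient descent
    "adam": tf.keras.optimizers.Adam(learning_rate=1e-3),              # adaptive learning rate method
    "rmsprop": tf.keras.optimizers.RMSprop(learning_rate=1e-3),        # root mean square propagation
}
# A compiled model would then use one of these to update its weights, e.g.:
# model.compile(optimizer=optimizers["adam"],
#               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```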

3.2.4. Classification

In the classification stage, a decision is made on test images by predicting their labels. In order to categorize COVID-19 infections, highly accurate classifier techniques play an important role. Classifiers such as random forest (RF) [100], the support vector machine (SVM) [101], and the bagging tree classifier [102] have proven their efficacy in multiclass classification. In addition to these techniques, k-nearest neighbor (k-NN) [103], decision tree (DT) [104], Naïve Bayes (NB) [105], artificial neural network (ANN) [106], generalized regression neural network (GRNN) [107], MLP neural network [108], probabilistic neural network (PNN) [109], and extreme learning machine (ELM) [110] classifiers are also used by the research community. Moreover, adaptive boosting (AdaBoost) [111], eXtreme Gradient Boosting (XGBoost) [112], and logistic regression (LR) [113] have also been incorporated by various investigators. In general, the authors selected classifiers based on the best results achieved for the extracted features. Table 2, Table 3, Table 4 and Table 5 summarize the state-of-the-art techniques used in the automated detection of COVID-19 with various image modalities.
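Before the summary tables, a hedged sketch of this final stage: several of the classical classifiers listed above are compared on a synthetic feature matrix with 5-fold cross-validation using scikit-learn; the data, hyperparameters, and accuracy scoring are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(300, 64)              # feature vectors (e.g., after feature selection)
y = np.random.randint(0, 2, size=300)    # binary labels: COVID-19 vs. non-COVID (synthetic)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```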

Table 2.

State-of-the-art AI techniques to detect COVID-19 using chest X-ray imagery.

Paper Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification or CNN + Classification Result Obtained Dataset Used
(Most Are Public)
No. of Classes
[114] Image enhancement + WS +deep CNN (ResNet50) and DWT and GLCM+ mRMR+ RF Cvd.Acc: 99.45, Cvd.Sen:.99.17, Cvd.Pre: 97.51,F1-Score: 0.9833 N:1500,C-19: 790,BP: 1304,VP: 1215
(after data augmentation)
2 (C-19, NC)
Cvd.Acc: 98.48, Cvd.Sen: 98.72, Cvd.Pre: 97.89,F1-Score: 0.9829 4
[115] Color layout descriptor + k-NN Cvd.Sen: 96.5, Cvd.Pre: 96.5 Total:86
[116] CNN model + Long short-term memory (LSTM) Cvd.Acc: 99.4, Cvd.Sen: 99.3, Cvd.Spe: 99.2, F1-Score: 98.9, AUC: 99.9 N: 1525, C-19: 1525,P: 1525 3
[117] Concatenation of the Xception and ResNet50V2 Cvd.Acc (avg.): 91.4 N: 8851,C-19: 180,P: 6054 3
[118] CNN model Cvd.Acc: 95, Cvd.Sen: 96.9, Cvd.Spe: 97.5, Cvd.Pre: 95, F-measure: 95.6 N: 310,C-19: 284,BP: 330,VP: 327 3(N, C-19, P)
Cvd.Acc: 89.6, Cvd.Sen: 89.92, Cvd.Spe: 96.4, Cvd.Pre: 90,F-measure: 96.4 4
[119] CNN model AUROC: 0.96 Pvt. + Public Dataset 3
[120] DarkNet based CNN model Cvd.Acc(avg.): 98.08, Cvd.Sen(avg.): 95.13, Cvd.Spe(avg.): 95.3, Cvd.Pre (avg.): 98.03,F1-Score (avg.): 96.51 N: 500,C-19: 127,P: 500 2 (N, C-19)
Cvd.Acc(avg.): 87.02, Cvd.Sen(avg.): 85.35, Cvd.Spe(avg.): 92.18, Cvd.Pre (avg.): 89.96,F1-Score (avg.): 87.37 3
[121] 2D-CTf + CSSA+ EfficientNet-B0 Cvd.Acc: 99.69, Cvd.Sen: 99.44, Cvd.Spe: 99.81, Cvd.Pre: 99.62, F-measure: 99.53 N: 1281,C-19: 159,VP: 1285 3
[122] VGG-16 model Cvd.Acc(avg.): 97 N: 3520,C-19: 250,P: 2753 3
[123] ResNet50 + ResNet101 Cvd.Acc: 97.77, Cvd.Sen: 97.14, Cvd.Pre: 97.14 N: 315,C-19: 250, BP: 300,VP: 350 2(C-19,O)
[58] ResExLBP + Relief-F+ SVM Cvd.Acc: 99.69, Cvd.Sen: 98.85, Cvd.Spe: 100 N: 234, C-19: 87 2
[124] VGG16 model Cvd.Acc: 98.1 N: 2880, C-19: 415, P: 5179 2(C-19,NC)
Cvd.Acc: 94.5 3
[125] ResNet18, ResNet50, SqueezeNet,& DenseNet121 Cvd.Sen: 98, Cvd.Spe(avg.): 90 C-19: 200, NC:5000 2
[126] Capsule Network-based architecture Cvd.Acc: 95.7, Cvd.Sen: 90, Cvd.Spe: 95.8, AUC: 0.97 2(C-19,O)
[127] VGG16 model Cvd.Sen: 97.62, Cvd.Spe: 78.57 N:142, C-19: 142 2
[128] ResNet101 Cvd.Acc: 71.9, Cvd.Sen: 77.3, Cvd.Spe: 71.8 C-19: 154, NC: 5828 (test data) 2
[129] Deep learning model Cvd. Acc C-19: 100,P: 93.75,N: 100 N: 66, C-19: 51,NC: 21,P: 160,TB: 54 5
[130] Sequential CNN model Cvd.Acc: 98.3, Cvd.Sen: 100, Cvd.Pre: 96.72, F1-Score: 98.3,ROC area: 0.983 N: 659, C-19: 295 2
[131] HE +VGG16-based model Cvd.Acc (avg.): 86, Cvd.Sen (avg.): 86, Cvd.Spe(avg.): 93, Cvd.Pre(avg.):86,F1-Score: 86 N: 132, C-19: 132,P: 132 3
[132] Histogram matching and autoencoder and CLAHE + Custom CNN model Cvd.Acc (avg.):94.43, Cvd.Sen (avg.): 92.53, Cvd.Spe: 96.33, Cvd.Pre(avg.): 93.76,
F1-Score (avg.): 93.14,AUC (avg): 0.988
N: 4337,C-19: 2589 2
[133] Ensemble of ResNet-18 Model Cvd.Acc: 95.5, Cvd.Sen: 100, Cvd.Pre: 94 N: 1579,C-19: 184,P: 4245 3
[134] HE+ lung segmentation using UNet + Various deep model are analyzed.
[135] 4 models analyzed (Best: VGG16 and VGG19) Cvd.Acc: 99.38, Cvd.Sen: 100, Cvd.Spe: 99.33 N: 802, C-19: 790 2
[136] CLAHE+VGG16 and VGG19 used (Best: VGG16) Cvd.Acc: 95.9, Cvd.Sen: 92.5, Cvd.Spe: 97.5,AUC: 0.950 (max. only for C-19) N: 607,C-19: 607,P: 607 3
[137] CNN model to separate COVID-19 and pneumonia
[138] Alexnet, Googlenet, and Restnet18 is used
(Googlenet best for 4 classes)
Cvd.Acc: 80.56, Cvd.Sen: 80.56, Cvd.Pre: 84.17, F1-Score: 82.32 N: 79,C-19: 69, BP: 79, VP: 79 4
[76] MLP-CNN Cvd.Acc: 95.4, Cvd.Sen: 95, Cvd.Pre: 92.5, F1-Score: 93.6 C-19: 112, NC: 30 2
[139] LightCovidNet Cvd.Acc (avg.): 96.97 N: 1341,C-19: 446,P: 1345 3
[140] MobileNet v2 Cvd.Acc: 96.78, Cvd.Sen: 98.66, Cvd.Spe: 96.46 N: 504, C-19: 224, P: 714 2(C-19,O)
Cvd.Acc: 94.72 3(N,C-19,P)
[141] Truncated InceptionNet Cvd.Acc (avg.): 98.77, Cvd.Sen(avg.): 95, Cvd.Spe(avg.): 99, Cvd. Pre(avg.): 99
F1 score(avg.): 0.97, AUC (avg.):0.99
N:2003, C-19:162,P: 4280, TB:400
4
[142] CNN model Cvd. Prec (avg.), Cvd. Sen (avg.), F1-score (avg.): 100 C-19: 500, P: 500 2
[143] CNN model Cvd.Acc (testing): 94.4 N:8066, C-19:183,P: 5551 3
[144] COVID-Net model Cvd.Acc: 93.3 Total: 13,975 from 13,870 patients 3(N,C-19,P)
[85] CNN model (Inception) + FO-MPA + k-NN Cvd.Acc: 98.7, F-score: 98.2 DS1: C-19 +ve: 200, C-19 -ve: 1675 2
Cvd.Acc: 99.6, F-score: 99 DS2: C-19 +ve: 219, C-19 -ve: 1341
[63] FrMEMs + MRFO + k-NN Cvd.Acc: 96.09, Cvd.Sen: 98.75, Cvd.Pre: 98.75 DS1: C-19 +ve: 216,C-19 -ve: 1675 2
Cvd.Acc: 98.09, Cvd.Sen: 98.91, Cvd.Pre: 98.91 DS2: C-19 +ve: 219,C-19 -ve: 1341
[145] Xception model + SVM Cvd.Acc: 99.33, Cvd.Sen: 99.27, Cvd.Spe: 99.38, Cvd.Pre: 99.27, F1-score:99.27,AUC: 99.32 N: 565,C-19: 537 2
[146] Discriminative cost sensitive learning approach Cvd.Acc: 97.01, Cvd.Pre: 97, Cvd.Sen: 97.09,F1-score: 96.98 N: 1000,C-19: 239,P: 1000 3
[147] CNN model Cvd.Sen (avg.): 91.05, Cvd.Spe(avg.): 99.61, Cvd.Acc(avg.): 98.34,ROC-AUC(avg.): 95.33 N: 1583,C-19: 225 2
Cvd.Sen (avg.): 92.88, Cvd.Spe(avg.): 99.79, Cvd.Acc(avg.): 99.44,ROC-AUC(avg.): 96.33 C-19: 225, P: 4292 2
F1 score (avg.): 94.10 N: 1583,C-19: 225,P: 4292 3
[148] HE and GC + DenseNet103 + ResNet18
Cvd.Acc: 91.9 N: 191, C-19: 180,BP: 54, VP: 20,TB: 57 4(N,BP,VP,TB)
[149] VGG16 model Cvd.Acc, Cvd.Sen, Cvd. Prec, F-score: 80 C-19: 70, NC: 70 2
[54] ACGAN based model (CovidGAN) Cvd.Acc: 95.00 N: 403, C-19: 721 2(N, C-19)
[150] CNN model Cvd.Acc: 99.70, Cvd.Pre: 99.70, Cvd.Sen: 99.70, Cvd.Spe: 99.55 N: 1579, C-19: 423,VP:1485 2(N,C-19VP)
[151] Deep learning model Cvd.Acc: 97.25, Cvd.Pre: 97.24,F1-score: 97.21 N: 27,228, C-19: 209, P: 5794 3
[152] CNN + gated recurrent unit (GRU) Cvd.Sen: 96, Cvd.Pre: 96, F1-score: 95 N: 141, C-19: 142, P: 141 3
[153] Ensemble of deep CNN model (InceptionResNetV2 + ResNet152V2 + VGG16+ DenseNet201) Cvd.Acc: 99.2, Cvd.Sen: 99.12, Cvd.Spe: 99.07, F-score: 99.17,AUC: 99.21 N:2039, C-19:1663,P: 401,TB:394 4
[154] MCFF-Net66-Conv1-GAP Cvd.Acc: 94.66 N:1500,C-19:942, BP:1802,VP:1797 4
[155] ResNet50V2 + t-SNE Cvd.Acc: 95.49, Cvd.Sen: 99.19, Cvd.Pre:96.19, F1-score: 98.0, AUC: 95.49 N: 616, C-19: 616,P: 616 3
[156] CNN model Cvd.Acc:100, Cvd.Sen:100, Cvd.Spe:100, Cvd.Prec:100, F1-score:100, AUC:100 N:42, C-19:136 2
[157] Enhanced Inception-ResNetV2 model Cvd.Acc(avg.): 98.80, Cvd.Sen(avg.): 99.11, Cvd.Prec(avg.): 98.61,F1 score(avg.): 98.86 N:1341,C-19:219,VP: 1345 3
[158] CNN model and GoogLeNet Cvd.Acc: 97.62, Cvd.Sen: 98.29, Cvd.Spe: 97.64, F-score: 98.30,AUC: 97.96 N: 1421,C-19: 1332 2
[159] VGG16 Model Cvd.Acc: 98.72, Cvd.Sen: 98.78, Cvd.Spe: 98.70, Cvd.Prec: 96.43, F1-score: 97.59 N:1341,C-19:1200,VP:1345 3
[160] AlexNet Cvd.Acc: 99.13, Cvd.Sen: 99.4, Cvd.Spe: 99.15,F-score: 99.49,AUC: 99.31 Consists: N,C-19,P,TB 4
[161] Ensemble of MobileNet and InceptionV3 Cvd.Acc: 96.49, Cvd.Prec: 93.01, Cvd.Sen: 92.97,F-score: 92.97 N:1050,C-19:1050,BP:1050,VP:1050 4
[162] VGG16 model Cvd.Acc(avg.): 91.69, Cvd.Sen(avg): 95.92, Cvd.Spe(avg.): 100 Total: 7720 3(N, C-19,P)
[163] CLAHE + InceptionV3 + ANN Cvd.Acc: 97.19 N: 1583,P: 4273 2
[97] CNN with various optimization algorithm Cvd.Acc:96, Cvd.Sen:100, Cvd.Spe:99, Cvd.Pre:96, F1-Score:0.98 N: 1583, C-19: 576, VP:4273 3
[164] VGG16 model Cvd.Acc: 96, Cvd.Sen: 92.64, Cvd.Spe: 97.27 N: 504, C-19: 224 2
Cvd.Acc: 92.53, Cvd.Sen: 86.7, Cvd.Spe: 95.1 N:504, C-19: 224, P: 700 3
[50] FOSF and GLCM and HOG + GWO + Ensemble of classifiers Cvd.Acc: 98.06, Cvd.Sen: 98.83, Cvd.Spe: 96.51, Cvd.Pre: 98.26,F-measure: 98.55 AUC:0.97 N: 782, C-19: 782, P: 782 2 (N,AB)
Cvd.Acc: 91.32, Cvd.Sen: 96.51, Cvd.Spe: 86.2, Cvd.Pre:87.36,F-measure: 91.71,AUC: 0.91 2(C-19,P)
[165] Ensemble of deep CNN model (VGG19 + DenseNet121) + SVM Cvd.Acc: 99.71 N:2341, C-19: 798,P: 2345 2 (C-19,NC)
Cvd.Acc: 98.28, Cvd.Sen (avg), Cvd.Pre(avg.),F1-Score (avg.): 98.33 3
[166] CNN model + Ensemble of classifiers Cvd.Acc: 98.91, Cvd.Sen: 97.82, Cvd.Pre: 100,F1-Score: 98.89 N: 2300,C-19: 2300 2
[167] Deep learning model (Inception architecture) Cvd.Acc: 96, Cvd.Sen: 93, Cvd.Spe: 97, Cvd.Pre: 97, F1-Score: 0.96 C-19: 435,NC: 505 2
[168] UNet with ResNet + CNN model Cvd.Acc (avg.): 96.32 N:1840,C-19:433,BP:2780,VP:1345,TB: 394 5
[169] Two separate CNN models for binary and ternary classification Cvd.Acc: 98.7, Cvd.Sen: 100, Cvd.Spe: 98.3 N:145,C-19: 145, BP: 145 2(N, C-19)
Cvd.Acc: 98.3, Cvd.Sen: 99.3, Cvd.Spe: 98.1 3
[170] VGG16 and Xception model (Best: Xception) Cvd.Sen: 100, Cvd.Spe: 97.6, F1-Score: 97.7 N: 400, C-19: 402,P:200,I: 35 2
[171] Various DNN + Majority voting scheme Cvd.Acc: 99.31 N: 1338, C-19: 237, VP: 1336 3
[172] Customized CNN Model Cvd.Acc: 92.95, Cvd.Sen (avg.): 90.72, Cvd.Pre(avg.): 94.04,F1-Score(avg.): 0.9204 N: 1341, C-19: 744 (Independent set) 2
[173] NanoChest-net model Analyzed with various datasets.
[174] VGG16+ HS + k-NN Cvd.Acc, Cvd.Sen, Cvd.Pre,F1-Score, AUC:100 N: 480,C-19: 280 2
[175] OptiDCNN model Cvd.Acc: 99.11 N: 5000, C-19: 184 2
[176] HOG and CNN(VGG19) + ME + CNN classifier + WS Cvd.Acc: 99.49, Cvd.Sen: 93.65, Cvd.Spe: 95.7 C-19 +ve: 1979, C-19 -ve: 3111 2
[177] Ensemble-CNNs (based on ResNeXt-50, Inception-v3, and DenseNet-161) Cvd.Acc: 75.23 ± 3.40, Cvd.Sen: 75.20, Cvd.Spe: 87.60, Cvd.Pre: 78.28, F1-Score: 73.43
AUC: 0.8140
N: 711, C-19: 711,P:711,BP:711,VP:711
Lung Opacity not Pneumonia:711 (public+Pvt.)
3(N,C-19,P)
Cvd.Acc: 81.00 ± 2.39, Cvd.Sen: 82.96, Cvd.Spe: 85.24, Cvd.Pre: 82.99,F1-Score: 81.49,
AUC: 0.8810
5
[178] Showed that a system with 2-class model are not valid for the diseases with similar symptoms, by conducting various experiments
[179] Exemplar COVID-19FclNet9 + SVM Cvd.Acc: 99.64 N: 150,C-19:127 2
Cvd.Acc: 98.84 N: 4000,C-19: 3616, P: 1345 3
Cvd.Acc: 97.60 N: 234,C-19:125,BP:242,VP:148 4
[180] Decompose, Transfer, and Compose (DeTraC)+PCA Cvd.Acc: 93.1, Cvd.Sen:100 N: 80, C-19:105,SARS: 11 3
[77] UNet + HRNet Cvd.Acc: 99.26, Cvd.Sen:98.53, Cvd.Spe: 98.82 Total: 272 2
[181] Various CNN model used (Best:EfficientNetB0) Cvd.Acc:92.93, Cvd.Sen: 90, Cvd.Spe: 95, Cvd. Prec: 88.3,F1- score: 0.88 N: 1341, C-19: 420, P: 1345 3
[182] EfficientNet B3-X Cvd.Acc: 93.9, Cvd.Sen: 96.8, Cvd.PPV: 100 N:7966+100, C-19: 152+31 P: 5421+100 3
[183] Various pre-trained CNN models (Best: ResNet50) Cvd.Acc: 96.1 (N,C-19), Cvd.Acc: 99.5(C-19,VP), Cvd.Acc: 99.7(C-19,BP) N: 2800, C-19: 341, BP: 2772, VP: 1493 2
[184] CNN model + SVM Cvd.Acc (avg.): 95.81, Cvd. Prec(avg.): 95.27, F1 score(avg.): 94.94 N:1266 +317, C-19:460 + 116 P:3418 + 855 (Pvt.) 3
[185] ResNet50+ SVM Cvd.Sen:80, Cvd.Spe: 81, AUC: 0.81 Training and validation
C-19:250, NC:250
Testing independent set
C-19:74,NC:36
(Pvt.)
2
[186] VisionPro Deep Learning™ + COGNEX’s F-score: 95.3 (for segmented lung) N: 7966+100,C-19: 258+100
P: 5451+100
3
[84] Pillow library + HSGO + SVM Cvd.Acc:99.65 C-19: 371, NC: 1341 2
[187] CNN model Cvd.Acc (avg.): 98.03, Cvd.Sen(avg.): 98.83, Cvd.Spe(avg.): 97 DS1:C-19: 217, NC: 1126
DS2:C-19: 2025, NC: 2025
2
[188] AlexNet + Relief + SVM Cvd.Acc: 99.18 N:1583, C-19: 219, P:4290 3
[189] RGB to YUV and YUV to RGB + CNN Cvd.Acc: 84.76, Cvd.Sen: 98.99, Cvd.Spe: 92.19, F-score: 0.9389,AUC: 0.5948 N:28,C-19:78,P: 79(each for BP and VP) 4
[190] CNN model Cvd.Acc: 98.44 Total: 392, C-19: 196 2
[191] Deep CNN model Cvd.Acc(avg.): 91.62, AUC:91.71 C-19 +ve: 538, C-19 –ve: 468 2
[192] Deep CNN model Cvd.Acc(avg.):99.2, Cvd.Sen(avg.):99.2,F1- score: 0.992 N, C-19: 2484 (each)
N, C-19,P: 3829 (each)
2
Cvd.Acc(avg.):95.2, Cvd.Sen(avg.):95.2,F1-score: 0.952 3
[193] MobileNetV2 Cvd.Acc: 92.91, Cvd.Pre: 92 N: 234, C-19: 390 2
[49] DenseNet201 model+ Quadratic SVM Cvd.Acc: 98.16, Cvd.Sen: 98.93, Cvd.Spe: 98.77 N: 2924, C-19: 683,P: 4272 3
[194] Cluster-based learning + Ensemble of classifiers Cvd.Acc (avg.):100 N:79,C-19: 69, BP:79, VP:79 2(N,C-19)
Cvd.Acc(avg.): 85.23 3(N,C-19,BP)
Cvd.Acc(avg.): 74.05 4
[195] Various deep CNN models are compared
(Best: XCeptionNet)
F1-score: 0.97 N: 1345+238, C-19:490+ 86,P:3632+ 641
(Train + Test)
3
[196] CNN model Cvd.Acc: 98.19 N: 10,456, C-19: 573, P: 11,673 (Pvt.) 2(C-19,P)
Cvd.Acc: 91.21 3
[197] Federated learning model Cvd.Acc: 98.72 N: 1266, C-19: 460,P: 3418 (Pvt.) 2(C-19,P)
Cvd.Acc: 95.96 3
[80] ResNet50 + ASSOA + MLP Cvd.Acc: 99.70 Total: 5863 2(C-19+ve, C-19-ve)
[198] Several CNN models are analyzed (Best: VGG16) Cvd.Acc: 91 N:1341, C-19:219,P:1345 3
[199] Semi-supervised open set domain adversarial network (SODA) Avg. AUC-ROC Score: 0.9006(C-19), 0.9082(P) With different domain target dataset
[200] VGG16 model Cvd.Acc: 97, Cvd.Sen: 99, Cvd.Spe: 99, Cvd.Pre: 97, F-score: 98 N:1400, C-19: 210, P: 1400 3
[201] CovFrameNet (deep learning architecture) Cvd.Acc: 100, Cvd.Sen: 85, Cvd.Spe: 100, Cvd.Pre: 85, F-score: 90, AUC: 50 Using two different dataset
[202] Self-supervised super sample decomposition for transfer learning (4S-DT) model Cvd.Acc: 97.54, Cvd.Sen: 97.88, Cvd.Spe: 97.15 DS1: N: 296, C-19: 388, SARS: 41 3(N, C-19, SARS)
Cvd.Acc: 99.80, Cvd.Sen: 99.70, Cvd.Spe: 100 DS2: N: 1583,C-19: 576,P: 4273 3 (N,C-19,P)
[203] VDI + Residual encoder + SVM Cvd.Acc: 93.60, Cvd.Sen: 88, Cvd.Pre: 100, F1-score: 93.60 C-19: 315, NC: 357 2
[204] RCoNetks Cvd.Acc (avg.):97.89, Cvd.Sen(avg.):97.76, Cvd.Spe(avg.):98.24, Cvd.PPV(avg.):97.93, F1-score(avg.):97.63 N: 8851, C-19: 238, P: 6045 3

Cvd.Acc (%): COVID accuracy, Cvd.Sen(%): COVID sensitivity, Cvd.Spe(%): COVID specificity, Cvd.Pre(%): COVID precision, Normal: N, COVID-19: C-19, Pneumonia: P, Bacterial pneumonia: BP, Viral pneumonia: VP, Tuberculosis: TB, Non-COVID: NC, Others: O, Abnormal: AB, Private: Pvt., DS: dataset, Severe: S, Non-severe: NS, Mild: M, Moderate: mod, Critical: cr, Infected/Infection: I, Not infected: NI, Community acquired pneumonia (CAP): P, Lung cancer: LC.

Table 3.

State-of-the-art AI techniques to detect COVID-19 using CT scans.

Paper Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification or CNN + Classification Result Obtained Dataset
(Most Are Public)
No. of Classes
[205] Various deep models are analyzed (Best: ResNet101) Cvd.Acc: 99.51, Cvd.Sen: 100, Cvd.Spe: 99.02, AUC: 0.994 C-19: 108,NC: 86,Total: 1020 slice, (Pvt.) 2
[206] EfficientNet family based architecture Cvd.Acc: 98.99, Cvd.Sen: 98.80, Cvd.PPV:99.20 DS 1- NC: 1230, C-19: 1252 2
Cvd.Acc: 56.16, Cvd.Sen: 53.06, Cvd.PPV: 54.74 (Train DS 1 & Test DS2) DS 2: NC: 463,C-19: 349
[207] LinkNet + DenseNet + DT Cvd.Acc(avg.): 94.4, Cvd.Pre(avg.): 96.7, Cvd.Rec(avg.): 95.2, F1-score(avg.): 96.0 C-19:445,NC:233 2
[208] novel conditional generative model, called CoSinGAN Independent testing is done using 50 CT cases (for lung segmentation and infection learning)
[93] Intensity normalization and segmentation + Q-deformed entropy + ANOVA+ LSTM Cvd.Acc: 99.68 N: 107,C-19: 118,P: 96 3
[209] Modified Alexnet model Cvd.Acc: 94.75, Cvd.Sen: 93.22, Cvd.Spe: 96.69, Cvd.PPV:97.27 C-19:3482,NC:2751 (Pvt.) 2
[210] Ensemble various models using majority voting scheme Cvd.Acc: 85.2, Cvd.Sen: 85.4, Cvd.Pre: 85.7,F-score: 0.852,AUC: 0.91 C-19 + ve: 349,C-19 -ve: 397 2
[211] ResNet50 Cvd.Acc: 82.91, Cvd.Sen: 77.66, Cvd.Spe: 87.62 C-19:345,NC:397 2
[99] CNN model with MODE Cvd.Acc: outperforms competitive models by 1.9789% 2
[212] Ensemble is built using ResNet152V2, DenseNet201, and VGG16 Cvd.Acc: 98.83, Cvd.Sen: 98.83, Cvd.Spe: 98.82,F-measure: 98.30,AUC: 98.28 N:3038,C-19:2373,P: 2890
TB: 3193
4
[36] eXplainable Deep Learning approach (xDNN) F1-score: 97.31 SARS-CoV-2: 1252
Non SARS-CoV-2: 1230
2
[35] Multi-task and self-supervised learning Cvd.Acc: 89, F1- score: 0.90, AUC: 0.98 C-19:349,NC: 463 2
[213] Semi-Inf-Net Cvd.Sen: 0.725, Cvd.Spe: 0.960, Dice: 0.739 100 images from 19 patients (Pvt) C-19 lung Seg.
[214] 3D CNN model Cvd.Acc: 87.50, Cvd.Sen: 86.90, Cvd.Spe: 90.10,F1-score: 82,AUC: 94.40 Train: 2186, Test: 2796 (Pvt.) 2 (CAP,C-19)
[215] CNN model Cvd.Acc (avg): 94.03, Cvd.Sen(avg.): 94.44, Cvd.Spe (avg.): 93.63 N: 320, C-19: 320 (Pvt.) 2
[92] AlexNet + Guided WOA Cvd.Acc: 87.50, AUC: 99.50 C-19: 334, NC-19: 794 2
[216] Multi-task multi-slice deep learning system Cvd.Acc: 95.21 N: 251,C-19: 245,H1N1: 105
CAP: 123 (Pvt.)
4
[217] LBP and statistical features + ReliefF and NCA + DNN Cvd.Acc: 95.84 N: 397,C-19: 349 2
[218] Region growing + deep CNN model (ResNet101 as its backbone) Cvd.Acc: 94.9 Total: 1110 patients with 5 classes 5
[219] Radiomic features + mRMR + XGBoost AUC: 0.95 ± 0.02 Total: 152 Patients
[220] Segmentation of infectious lung as ResNet50 backbone
[221] DTCT and GLCM + RF Cvd.Acc (avg.): 72.2, Cvd.Sen(avg.): 77, Cvd.Spe(avg.): 68,AUROC (avg.): 0.8 C-19: 291, P: 279 (Pvt.) 2
[222] ResGNet (Graphs are generated using ResNet101-C features) Cvd.Acc (avg.): 96.62, Cvd.Sen(avg.): 97.33, Cvd.Spe(avg.): 95.91, Cvd.Pre(avg.): 96.21,F1-Score(avg.): 0.9665 N:148,C-19: 148 (Pvt.) 2
[223] CNN model (DenseNet201) + ELM Cvd.Acc: 98.36, Cvd.Sen: 98.28, Cvd.Spe: 98.44, Cvd.Pre: 98.22,F1-Score: 98.25,
AUC: 98.36
C-19: 349,NC: 397 2
[224] M 2 UNet (Multi-task multi-instance deep network) Cvd.Acc (avg.): 98.5, Cvd.Sen(avg.): 95.2, Cvd.Pre(avg.): 97.5,F1-Score(avg.): 0.963
AUC(avg.): 0.991
S:51,NS: 191(Pvt.) 2
[225] Dual-branch combination network (using UNet + ResNet50) Cvd.Acc: 96.74, Cvd.Sen: 97.91, Cvd.Spe: 96.00,AUC: 0.9864 N: 75 scans, C-19: 48 scans (Pvt.) 2
[226] Majority voting scheme with ResNet50 Cvd.Acc: 96, Cvd.Sen:100, Cvd.Spe: 96,AUC: 0.90 Two public datasets are used 2
[227] HE + WF + AlexNet + SVM Cvd.Acc: 96.69, Cvd.Sen: 96, Cvd.Spe: 98 N:500,C-19:488, P:500 3
[228] DenseNet-201 Cvd.Acc: 97.8, Cvd.Sen: 98.1, Cvd.Spe: 97.3, Cvd.Pre: 98.4, F1-score: 98.25 C-19: 1500, NC: 1500 2
[229] CLAHE + VGG-19 model Cvd.Acc: 95.75, Cvd.Sen: 97.13,F1- score: 95.75, ROC-AUC: 99.30 C-19 +ve: 1252, C-19 -ve: 1230 2
[230] VGG16 model and ensemble learning Cvd.Acc: 93.57, Cvd.Sen: 94.21, Cvd.Spe: 93.93, Cvd.Pre: 89.4,F1-score: 91.74 N: 243,C-19: 790,P: 384 3
[61] Z-score normalization and KF+CNN + fuzzy c-means + LDN Cvd.Pre: 96, Cvd.Sen: 97, F-score: 97 and volume overlap error (VOE) of 5.6 ± 1:2%.
[231] Golden Key Tool + VGG model Cvd.Acc: 100 DS1- N: 55, C-19: 349 2
Cvd.Acc: 93.478, Cvd.Pre: 97.33, F1-score: 87.5 DS2- N: 55, C-19: 349, NC: 20 3
Cvd.Acc: 90.12, Cvd.Pre: 90.6 DS3- C-19: 349, NC: 396 2
[232] PatchShuffle Stochastic Pooling Neural Network (PSSPNN) F1-score(avg.): 95.79 Total:521 4(N,C-19, P, TB)
[233] Clinical information and chest CT features + XGBoost Cvd.Sen: 90.91, Cvd.Spec: 97.96, AUC: 0.924 Total: 198 2 (M,S)
[234] 3D CU-Net DSC: 0.960, 0.963, 0.771, Cvd.Sen: 0.969, 0.966, 0.837, Cvd.Spe: 0.998, 0.998, 0.998 C-19: 70 for detecting C-19 infection
[235] Tensor + COVID-19-Net (VGG16) + Transfer-Net (ResNet50) Cvd.Acc: 94, Cvd.Sen: 96, Cvd.Spe: 92 N: 700, C-19: 700 2
[236] Ensemble model (using Resnet18, Densenet201, Mobilenetv2 and Shufflenet) Cvd.Acc: 96.51, Cvd.Sen: 96.96, Cvd.Spe: 96.00,F1-Score: 0.97,AUC: 0.99 C-19: 349,NC: 397 2
[237] LungINFseg, model for segmentation Cvd.Acc (avg.): 98.92, Cvd.Sen(avg.): 83.10, Cvd.Spe(avg.): 99.52, DSC(avg.):80.34
intersection over union (IoU) (avg.): 0.6877
20 labeled COVID-19 CT scans (1800 + annotated
Slices)
[238] Feature Pyramid Network(FPN) DenseNet201 for detection Cvd.Sen: 98.3 (m), Cvd.Sen: 71.2(mod), Cvd.Sen: 77.8(s), Cvd.Sen: 100(cr) 1110 subjects Severity classification
[239] Volume of interest based DenseNet-201 Cvd.Acc: 88.88, Cvd.Sen:89.77, Cvd.Spe: 94.73, F1-Score: 88.88 C-19: -moderate risk:40
severe risk:40 extreme risk:40
3
[240] Various deep network architectures are analyzed using publicly available two COVID-19 CT datasets 2
[241] UNet F1-Score, improvement of 5.394 ± 3.015%. +ve:492. -ve: 447
[242] Stationary wavelets + CNN model (Best: ResNet18) Cvd.Acc: 99.4, Cvd.Sen: 100, Cvd.Spe: 98.6,AUC: 0.9965 C-19:349, NC:397 2
[243] Gabor filter + convolution and pooling layers + RF F1 score: 0.99 C-19: 349,NC: 397 2
[244] Stacked autoencoder detector model Cvd.Acc(avg.):94.7, Cvd.Sen(avg.):94.1, Cvd.Pre(avg.):96.54, F1-score (avg.):94.8 C-19: 275,NC: 195 2
[245] DenseNet201 model + k-NN Cvd.Acc, Cvd.Sen, Cvd.Pre, & F1-score:100 C-19:2740,Suspected Cases: 2740 (Private) 2
[246] CNN model + MI and Relief-F and DA +SVM Cvd.Acc: 98.39, Cvd.Sen: 97.78, Cvd.Pre: 98.21, F1-score: 0.98, AUC: 0.9952 SARS-CoV-2: 1252
Non SARS-CoV-2: 1230
2
Cvd.Acc: 90.0, Cvd.Sen: 84.06, Cvd.Pre: 93.55,F1-score: 0.8855, AUC: 0.9414 C-19:349, NC: 463
[247] VGG19 model Cvd.Acc: 94.52 C-19: 349,NC: 463 2
[248] VGG16 model Cvd.Acc: 98.0, Cvd.Sen: 99.0, Cvd.Spe: 94.9 N: 275, C-19: 195 2
[249] Radiological features + Chi-square test + Ensemble classifier Cvd.Acc: 91.94, Cvd.Sen: 93.54, Cvd.Spe: 90.32,AUC: 0.965 C-19: 306,non-COVID-19 pneumonia: 306 (Pvt.) 2
[250] Various CNN and texture based approaches Cvd.Acc (avg.): 95.99, Cvd.Sen(avg.): 94.04, Cvd.Spe(avg.): 99.01,F1-score(avg.): 0.9284, AUC (avg.): 0.9903 COVID-19: 386, NC: 1010 2
[251] Worried deep neural network + pre-trained models (InceptionV3, ResNet50, and VGG19) Cvd.Acc: 99.04, Cvd.Prec: 98.68, Cvd.Rec: 99.11,F-score: 98.90 Total: 2623 (Pvt.) 2(I,NI)
[252] Density peak clustering approach Structural similarity index (SSIM): 89 Total images: 12 (Pvt.) C-19 Seg.
[253] EfficientNet-b0 model Cvd.Acc: 99.83, Cvd.Sen: 92.86, Cvd.Spe: 98.32, Cvd.PPV:91.92 Total images: 107,675 (Pvt.) 2(C-19,NC)
Cvd.Acc: 97.32, Cvd.Sen: 99.71, Cvd.Spe: 95.98, Cvd.PPV: 93.26 2 (C-19,P)
[254] EfficientNetB3 Cvd.Sen: 97.2, Cvd.Spe: 96.8,F1-score: 0.970, AUC: 0.997 N:105,C-19:143,P:147 (Pvt.) 3
Cvd.Sen: 92.4, Cvd.Spe: 98.3,F1-score: 0.953,AUC: 0.989 N: 121,C-19: 119, P: 117(Pvt.) 3
Cvd.Sen: 93.9, Cvd.Spe: 83.1,AUC: 0.954 C-19: 856,Non-P: 254 (Pvt.) 2
[255] COVID Segnet For COVID-19 segmentation: Dice Score: 0.726, Cvd.Sen.: 0.751, Cvd.Pre.: 0.726 Train: 731 Test: 130 patients (Pvt.) Lung and infected regions seg.
For lung segmentation: Dice Score: 0.987, Cvd.Sen.: 0.986, Cvd.Pre.: 0.990
[256] Anam-Net Dice Score: 0.956, Cvd.Acc.: 98.5, Cvd.Sen.: 92.7, Cvd.Spe.: 99.8 N:929, AB:880 Anomalies seg.
Table 4.

State-of-the-art AI techniques to detect COVID-19 using lung US imagery.

Paper Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification or CNN + Classification Result Obtained Dataset
(Most Are Public)
No. of Classes
[257] Features from various layers deep CNN model is fused Cvd.Acc (avg.): 92.5, Cvd.Sen(avg.): 93.2, Cvd.Pre(avg.): 91.8 N: 53 + 15,C-19: 45+18,BP: 23 + 7 3
[258] Autoencoder network and separable convolutional branches attached with a modified DenseNet201 17% more than the traditional DenseNet Convex:38, Linear: 20
Score 0 (healthy) to Score 3 (worst-case)
4
[39] Frame- and video-based CNN models (Best: VGG) Cvd.Sen: 0.90 ± 0.08, Cvd.Spe: 0.96 ± 0.04 N: 90,C-19:92, BP: 73,VP: 6
(It includes videos and images)
3
Table 5.

State-of-the-art AI techniques to detect COVID-19 using X-ray and CT scans.

Paper Method Used: Preprocessing + Segmentation + Feature Extraction + Feature Selection + Classification or CNN + Classification Result Obtained Dataset
(Most Are Public)
No. of Classes
[259] VGG19 model Cvd.Acc: 89.47, Cvd.Sen: 76.19, Cvd.Spe: 97.22 X-ray: 673 radiology images of 342 patients 2(N,C-19)
Cvd.Acc: 95.61, Cvd.Sen: 96.55, Cvd.Spe: 95.29 SARS-CoV-2 CT: C-19:1252, NC: 1230 2(C-19,P)
Cvd.Acc: 95, Cvd.Sen: 94.04, Cvd.Spe: 95.86 X-ray: 5856 images 2(C-19,NC)
[260] VGG19 + CNN model Cvd.Acc: 98.05, Cvd.Spe: 99.5, Cvd.Rec: 98.05, Cvd.Pre: 98.43,
F1-Score: 98.24,AUC: 99.66
Total images: 33,676 4(N,C-19,P,LC)
[65] LBP and MFrLFM + SFS Cvd.Acc: 99.3±0.2, F1-score: 93.1±0.2, AUC: 94.9±0.1 Chest X-ray: 1926 2(C-19,NC)
Cvd.Acc: 93.2±0.3, F1- score: 92.1±0.3,AUC: 93.2±0.3 CT scan: 2482
[261] COVID-ResNet53 Cvd.Acc: 97.1, Cvd.Sen: 98.9, Cvd.Spe: 95.7, Cvd.Pre: 94.5 X-ray: C-19: 4045, NC: 5500 2(C-19,NC)
Cvd.Acc: 97.7, Cvd.Sen: 98.7, Cvd.Spe: 95.6, Cvd.Pre: 97.9 CT: C-19: 5427, NC: 2628
[262] CNN model Cvd.Acc: 96.68, Cvd.Sen: 96.24, Cvd.Spe: 95.65 N: 7021,C-19: 1066, P:7021 3(N,C-19, P)
[263] PF+ GraphCovidNet Cvd.Acc, Cvd.Pre, Cvd.Sen,F1- score:100 SARS-CoV-2 CT
N: 1229, C-19:1252
2
Cvd.Acc, Cvd.Pre, Cvd.Sen,F1- score:100 CT: N: 407, C-19: 349 2
Cvd.Acc, Cvd.Pre, Cvd.Sen,F1- score: 99.84 X-ray: N: 1592,C-19:504,P: 4343 3
[264] HE and WF + Haralick texture feature and VGG16 model Cvd.Acc: 93, Cvd.Sen: 90, Cvd.Pre: 91 N: 1349,C-19: 407,BP: 2538,VP: 1345 4
[265] HE and WF + DenseNet103 + Haralick texture feature and ResNet101 model Cvd.Acc: 94.9, Cvd. Sen: 93, Cvd. Pre: 93 Total images: 12,520, N: 4100, C-19: 220
P: 4100,Lung opacity: 4100
4
[266] DenseNet121 + Bagging tree classifier Cvd.Acc: 99 Total images: 274 2(N,C-19)
[267] Contrastive multi-task convolutional neural network (CMT-CNN)
CNN Model: EfficientNet
Cvd.Acc (avg.): 93.46, Cvd.Sen (avg.): 90.57, Cvd.Spe (avg.): 90.84
AUC (avg.): 89.33 (2-class)
CT scan: N: 1164,C-19: 1980,P:1614 2(C-19,O)
3(N,C-19,P)
Cvd.Acc (avg.): 91.45 (3-class)
Cvd.Acc (avg.): 97.23, Cvd.Sen (avg.): 92.97, Cvd.Spe (avg.): 91.91
AUC (avg.): 92.13 (2-class)
X-ray: N: 1583, C-19: 231,P: 4007
Cvd.Acc (avg.): 93.49 (3-class)
[268] Contextual features reduced by convolutional filters (CFRCF) Cvd.Acc: 94.23 CT: C-19: 349, NC: 397 2(C-19,NC)
X-ray: C-19: 187, NC: 73
[269] CNN model Cvd. Sen: 97.92, Cvd.Spe: 94.64, Cvd. Pre: 94.81,AUC: 0.9808 Total images: 672 (X-ray:336 and CT:336) 2(C-19,NC)
[270] VGG16 + InceptionV3 models Cvd.Sen: 100, Cvd.Pre: 0.97, F1: 0.98 CT: 746
X-ray: 268
2(N,C-19)
[271] CovidNet model Cvd. Acc: 100, Cvd. Sen: 100 CT: C-19: 1252, NC: 1230 2
Cvd. Acc: 96.84, Cvd. Sen: 92.19 X-ray: N: 445, C-19:321, P:500 3
Using all X-ray, CT, and US imageries
[272] Pre-trained deep learning models: DenseNet-161, ResNet-34, VGG-16 and MobileNet-V2 are used Cvd.Sen: 97.91, Cvd.Spe: 99.57, Cvd.Pre: 99.57,F1-score: 98.73 X-ray: C-19: 234, NC:234 2
Cvd.Acc: 64.41, Cvd.Sen: 66.28, Cvd.Spe: 62.93, Cvd.Pre:58.67,F1-Score: 0.6225 CT: C-19: 392, NC:392
Cvd.Acc: 99.36, Cvd.Sen: 98.74, Cvd.Spe: 100, Cvd.Pre:100,F1-Score: 0.9973 US: C-19:19, NC:14
[273] VGG19 model Cvd.Pre: 86 X-ray: N: 60,361,C-19:140,P:322 3
Cvd.Pre: 84 CT: C-19: 349, NC: 397 2
Cvd.Pre: 100 US: N: 235,C-19: 399,P: 277 3

4. Results

From our extensive literature review, it was observed that many CAD tools in several medical fields have used accuracy, sensitivity or recall, specificity, positive predictive value (PPV) or precision, F-measure or F-score, and area under the curve (AUC) to evaluate the performance of the system [274,275,276]. Similarly, the performance of CAD tools for the identification of COVID-19 was evaluated using the same performance parameters. Let TP, TN, FP, and FN indicate true positives, true negatives, false positives, and false negatives, respectively. The performance measures are given by the following equations:

Accuracy = (TP + TN)⁄(TP + TN + FP + FN) (1)
Sensitivity = TP⁄(TP + FN) (2)
Specificity = TN⁄(TN + FP) (3)
PPV = TP⁄(TP + FP) (4)
F1-score = 2TP/(2TP + FP + FN) (5)
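The following short function is a direct transcription of Equations (1)-(5) into Python; the confusion-matrix counts in the usage line are illustrative.

```python
def covid_metrics(TP, TN, FP, FN):
    accuracy = (TP + TN) / (TP + TN + FP + FN)    # Eq. (1)
    sensitivity = TP / (TP + FN)                  # Eq. (2), also called recall
    specificity = TN / (TN + FP)                  # Eq. (3)
    ppv = TP / (TP + FP)                          # Eq. (4), also called precision
    f1 = 2 * TP / (2 * TP + FP + FN)              # Eq. (5)
    return accuracy, sensitivity, specificity, ppv, f1

print(covid_metrics(TP=90, TN=85, FP=5, FN=10))   # illustrative counts
```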

In all performance measures, the higher the value, the better the performance of the model. The developed AI models for COVID-19 detection using various medical images, such as X-ray, CT, and US, can be categorized into 2, 3, 4, and 5 classes per imaging modality, as shown in Figure 5.

Figure 5. Percentage of various classes in the assessment of COVID-19 by imaging modalities (X-ray, CT, and X-ray and CT).

Figure 5 reveals that two-class classification (healthy vs. COVID-19, or COVID-19 vs. non-COVID-19) was the most frequently reported among the different imaging modalities. Combinations of different class categorizations were also observed in CADTs which used X-ray images. Table 6 presents the average performance outcomes of the systems considered in the present review, irrespective of the number of cases. Many of the studies used publicly available datasets and achieved comparable results.

Table 6.

Average (Avg.) performance of COVID-19 detection systems.

X-ray
Class Cvd.Acc (%) Cvd.Sen (%) Cvd.Spe (%) F1-score (%) AUC (%)
2 97.05 95.37086 94.79 96.11 95.45
3 94.78 95.63542 97.10 85.71 93.55
4 91.69 94.335 97.16 83.32 64.74
5 92.41 82.96 95.24 81.49 88.1
CT
Class Cvd.Acc (%) Cvd.Sen (%) Cvd.Spe (%) F1-score (%) AUC (%)
2 92.99 92.61897 93.28 94.57 91.40
3 94.55 95.016 95.55 92.08 99.3
4 97.02 98.83 98.82 97.9 98.28
5 -- -- -- -- 94.9
X-ray and CT
Class Cvd.Acc (%) Cvd.Sen (%) Cvd.Spe (%) F1-score (%) AUC (%)
2 96.54 94.35 95.81 97.38 93.87
3 94.99 94.21 95.65 99.84
4 95.52 94.75 -- 98.24 99.66

It is observed from Table 6 that five-class classification was reported for systems developed with X-ray and CT images, and that a Cvd.Acc (avg.) of 92.41% was achieved using X-ray images. It is also observed that two-class models are no longer valid when other diseases with similar symptoms are presented [178]. It is noted from Table 2, Table 3, Table 4 and Table 5 that a few studies have performed four-class (normal vs. COVID-19 vs. viral pneumonia (VP) vs. bacterial pneumonia (BP)) classification [114,118,138,154,161,179,189,194,264], obtaining a Cvd.Acc (avg.) of 89.91%. Hence, for further analysis we considered models that can categorize three or more classes. Box plot analysis was carried out to obtain the overall performance of the three-class classification systems used in COVID-19 detection. Figure 6 shows the box plots for the Cvd.Acc, Cvd.Sen, Cvd.Spe, F1-Score, and AUC values of the reported AI methods in the three-class classification scenario. Box plots represent the distribution characteristics of performance measures based on the minimum, first quartile, median, third quartile, and maximum.
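A minimal sketch of such a box-plot comparison with matplotlib, assuming per-modality lists of three-class accuracies; the values below are illustrative placeholders, not the figures reported in Tables 2-5.

```python
import matplotlib.pyplot as plt

# Illustrative three-class Cvd.Acc values per modality (placeholders only).
three_class_acc = {
    "X-ray": [91.4, 94.5, 95.2, 97.0, 98.8],
    "CT": [89.0, 93.6, 95.8, 99.7],
    "X-ray and CT": [91.5, 93.5, 95.0, 96.7],
}
plt.boxplot(list(three_class_acc.values()))
plt.xticks(range(1, len(three_class_acc) + 1), list(three_class_acc.keys()))
plt.ylabel("Cvd.Acc (%)")
plt.title("Three-class classification accuracy by modality")
plt.show()
```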

Figure 6. Comparison of Cvd.Acc, Cvd.Sen, Cvd.Spe, F1-Score, and AUC of AI techniques to detect COVID-19 using box plots.

It is noted from Figure 6 that AI techniques using X-ray imagery had acceptable performance when compared to other medical images. For the three-class scenario, considering all state-of-the-art techniques, the methods achieved a Cvd.Acc (avg.) of 94.78%, 94.55%, and 94.99% using X-ray, CT, and both X-ray and CT, respectively. Further, we also analyzed the systems which can categorize three or more classes. It is observed from Table 2 that ResNet50 with DWT and GLCM [114], customized CNNs [118,154,179,189], GoogLeNet [138], InceptionNet [141], AlexNet [160], a combination of DenseNet103 and ResNet18 [148], an ensemble of InceptionResNetV2, ResNet152V2, VGG16, and DenseNet201 [153], and a grouping of MobileNet and InceptionV3 [161] were effectively used for four-class classification using X-ray images. Authors also used CNN models for five-class classification using X-ray images [129,168,177]. From Table 2, it is also noted that only RF [114], SVM [179], and an ensemble of classifiers [194] have achieved comparable results for four-class categorization. Herein, the RF classifier shows its suitability for multiclass categorization by achieving a Cvd.Acc of 98.48%. From Table 3, it is observed that a grouping of ResNet152V2, DenseNet201, and VGG16 [212], a deep learning model [216], and PSSPNN [232] were used by the authors to categorize four-class CT images. The combination of various DNN models achieved a Cvd.Acc of 98.83% [212]. From Table 4, it is noted that minimal work has been reported using lung US imagery. In [258], an autoencoder and a modified DenseNet201 were used for four-class classification and achieved a result over 17% better than the traditional DenseNet. In [260,264], the systems were tested with X-ray and CT modalities and achieved better classification for four classes. The use of VGG19 [260] and VGG16 [264] has shown significance in four-class classification, as noted in Table 5. In [265], a combination of DenseNet103 with Haralick textural features and the ResNet101 model also showed promising performance. It is furthermore observed that, across all modalities, only the VGG19 model was used for three-class categorization [273]; it achieved better results for US images than for X-ray and CT.

5. Discussion

Investigators have developed many models to detect COVID-19 during the past two years and have shown that there is a role for AI in detecting COVID-19 [19,21,22,23,24,25,26,27,28,29,277,278,279,280,281]. The 184 technical papers reviewed in this study provide up-to-date knowledge on the usage of AI techniques in detecting COVID-19. The developed models were categorized based on DNN, HCFL, and hybrid methodologies. The number of articles based on each of the three methodologies is highlighted in Figure 7.

Figure 7. Various methodologies adopted by state-of-the-art techniques using different modalities.

It is observed from Figure 7 that 70% of the papers reported the use of a DNN-based approach, which includes pre-trained networks and customized CNNs. Very few papers attempted to quantify the severity of COVID-19 [282,283,284,285,286]. It is also noted that the computational cost of various deep learning approaches is high [287,288]. From Figure 5, 40%, 78.26%, and 50% of the papers using X-ray, CT, and all modalities, respectively, reported only two-class classification. However, such binary models are of limited value in real-world settings, where multiple diseases with similar symptoms must be distinguished. It is also observed from Table 6 that, for four-class classification, the Cvd.Sen and Cvd.Spe of the methods increased by 4.5% and 1.66%, respectively, using CT images compared with X-ray images. In most cases, CNNs were able to extract significant information from lung tissue with pneumonia (i.e., BP and VP). Pre-trained networks such as ResNet, DenseNet, and VGG were used successfully in all of the modalities for categorization of three or more classes. However, comparing pre-trained networks on binary classification alone may not be as useful, since such models may fail to distinguish diseases whose symptoms are similar to those of COVID-19.
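For reference, the COVID-19-specific sensitivity and specificity discussed above can be derived from a multiclass confusion matrix by treating COVID-19 as the positive class in a one-vs-rest fashion. The short sketch below illustrates this computation; the class ordering and confusion-matrix values are hypothetical and serve only to show the arithmetic.

```python
import numpy as np

# Hypothetical 4-class confusion matrix (rows = true class, cols = predicted class)
# Assumed class order: [normal, COVID-19, viral pneumonia, bacterial pneumonia]
cm = np.array([
    [95,  2,  2,  1],
    [ 3, 90,  4,  3],
    [ 2,  5, 88,  5],
    [ 1,  3,  6, 90],
])
covid = 1  # index of the COVID-19 class

tp = cm[covid, covid]              # COVID-19 correctly predicted
fn = cm[covid, :].sum() - tp       # COVID-19 cases missed
fp = cm[:, covid].sum() - tp       # other classes predicted as COVID-19
tn = cm.sum() - tp - fn - fp       # everything else

cvd_sen = tp / (tp + fn)           # sensitivity (recall) for the COVID-19 class
cvd_spe = tn / (tn + fp)           # one-vs-rest specificity for the COVID-19 class
acc = np.trace(cm) / cm.sum()      # overall multiclass accuracy (assumed definition)
f1 = 2 * tp / (2 * tp + fp + fn)   # F1-score for the COVID-19 class

print(f"Sen={cvd_sen:.3f}, Spe={cvd_spe:.3f}, Acc={acc:.3f}, F1={f1:.3f}")
```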

In short, it is very difficult to make a comprehensive comparison of the methodologies at present, because the methods were evaluated on datasets of different origins and sizes. Hence, general conclusions about any single algorithm are weakened. Few investigators performed k-fold cross-validation; in most cases the hold-out method was used. It is therefore difficult to assess the consistency of the developed models.
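Where consistency is the concern, stratified k-fold cross-validation yields a mean and spread of scores rather than a single hold-out estimate. The following minimal scikit-learn sketch, using placeholder feature and label arrays, illustrates the idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder data: 200 samples, 64 hand-crafted features, 3 class labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# One accuracy per fold; the spread indicates how consistent the model is
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Fold accuracies: {np.round(scores, 3)}")
print(f"Mean ± std: {scores.mean():.3f} ± {scores.std():.3f}")
```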

Although several models have been developed to detect COVID-19, there are many factors involved in the analysis of COVID-19 imagery, which are listed as follows:

Implementation of multiclass categorization models: Many of the studies implemented two-class categorization; however, these are restricted to learning the features of normal and COVID-19 images only. For diseases with symptoms similar to COVID-19, there is a need for algorithms that can discriminate among multiple classes, such as normal, COVID-19, pneumonia, BP, VP, tuberculosis, and lung opacity. Hence, there is a need for models that can learn the inherent characteristics of the various diseases and predict the severity level. Investigators should therefore concentrate on the generalization of the developed models by considering all image modalities.

Implementation aspects: State-of-the-art techniques have trained models using a transfer learning approach (a minimal sketch is given after this list). Although the results are promising, the underlying architectures were originally developed to handle real-world color images. Hence, there is a need for DNNs trained from scratch on real medical images. In addition, the selection of appropriate hyperparameters to obtain improved accuracy will play a significant role in training future networks. The discrimination power of AI techniques can be improved by training the system with multiple views of medical images, which, however, requires extra time. Hence, there is a need for compact feature representations of COVID-19 and other similar diseases to handle huge datasets.

CADTs to analyze prognosis of COVID-19: Researchers should exploit hybrid methodologies to help medical doctors understand treatment outcomes for COVID-19. It is also important to develop models to assess the health condition of post-COVID-19 patients for better health management.
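As referenced in the implementation-aspects item above, the sketch below shows one common way such transfer learning is set up in PyTorch: an ImageNet-pretrained ResNet-50 is adapted to a four-class chest X-ray problem (normal, COVID-19, VP, BP). The directory layout, hyperparameters, and class count are assumptions for illustration, not those of any specific study reviewed here.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed folder layout: data/train/<class_name>/*.png with four classes
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

# Start from ImageNet weights, replace the classifier head with a 4-class output
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters

for epoch in range(5):                              # few epochs, for illustration only
    model.train()
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss = {loss.item():.4f}")
```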

5.1. Future Trends

Since the onset of the COVID-19 pandemic, home isolation and quarantine have been implemented by governments across the world to control the spread of the disease [289,290]. In addition, risk factors such as fever, weakness, heart disease, and dry cough are among the most critical issues in patient mortality [291]. A person who has tested positive for COVID-19, or who has been in close contact with a confirmed COVID-19 case, has to undergo a period of quarantine. In cases where home quarantine is required, especially in rural areas of developing countries, the hospital may require frequent health updates from the patient. This can be done via smartphone, whereby the patient monitors his/her own temperature and/or SpO2 level and reports the results to the medical doctor. In this way, the doctor is able to monitor patient health remotely and provide suitable prescriptions or medications when required. There is also a chance that the result obtained from an antigen rapid self-test kit may be negative despite the patient showing symptoms of COVID-19. In addition, there may be other considerations, such as people with disabilities and elderly people who depend on caregivers. Considering all of these issues, the best solution would be to monitor the patient remotely without the need for frequent visits to the hospital.

Recent advancements in the Internet of Things (IoT) have paved the way for improved healthcare support services [292]. In the future, a cloud-based wireless healthcare system could be used to support the epidemiological monitoring of COVID-19, as shown in Figure 8. X-ray images of the patient's chest can be taken at selected rural hospitals; X-ray imaging is a fast, inexpensive, and minimally invasive procedure, and X-ray units are available in most rural hospitals. Before collecting the data, the institute's ethical committee approval should be granted, and the imaging data should be collected after obtaining written consent from the patients. The collected data are stored on a secured cloud-based server with a unique identification number for each patient. The X-ray images are then analyzed by a cloud-based system, and the observations are sent to the medical doctors. On close examination of the imagery, the doctor provides suitable advice to the patient along with prescriptions and treatment instructions. Hence, medical doctors and their patients can interact remotely for any further treatment, even in rural communities.

Figure 8. IoT-based smart healthcare system to detect COVID-19.
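A minimal sketch of the client side of such a workflow is given below: a rural clinic uploads a de-identified X-ray to a cloud inference service and receives the system's observation for the attending doctor. The endpoint URL, patient identifier scheme, and response fields are entirely hypothetical.

```python
import requests

# Hypothetical cloud endpoint; in practice this would be an authenticated HTTPS service.
ENDPOINT = "https://example-cloud-cadt.org/api/v1/analyze"

def submit_xray(image_path: str, patient_uid: str) -> dict:
    """Upload a chest X-ray and return the cloud system's observation."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            files={"image": f},
            data={"patient_uid": patient_uid},   # unique, de-identified patient ID
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response schema, e.g. {"prediction": "...", "confidence": ...}
    return response.json()

if __name__ == "__main__":
    report = submit_xray("patient_0001_cxr.png", patient_uid="RURAL-0001")
    print("Observation forwarded to the doctor:", report)
```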

5.2. Limitations of the Review

1. This review considered only manuscripts written in English.

2. In this review process, many databases were explored using different search queries; thus, a few relevant works may have been missed in the search. The review focused on technical papers on COVID-19 detection rather than on clinical studies.

3. The present work provides a systematic review of AI techniques, their analysis, and their advancement. However, the transformation of these techniques before and after the COVID-19 pandemic is not given great emphasis in this study.

The scope of this review was the comprehension of AI techniques using different imaging modalities. It is observed that the CT scan, a fast and feasible method, has been reported to be more sensitive in the diagnosis of COVID-19 than the RT-PCR test [293]. However, the technique involves a high radiation dose and is not available in the rural healthcare sectors of developing countries [294,295]. In contrast, the chest X-ray is a universally available technique with 30–70 times lower radiation exposure, and the test is performed during the initial investigational process for COVID-19 [296]. Lung US is an alternative modality that produces results similar to those of chest CT and is considered superior to the chest X-ray for diagnosing lung pathology in COVID-19 infection; nonetheless, it is not useful when the pleura is spared from the pneumonic pathology during the early course of the disease [297]. Recent developments in the diagnosis of COVID-19 using signals such as respiratory sounds, speech signals, and coughing sounds have also attracted many researchers [298,299]. In the future, these signals can be combined with the imaging modalities to enhance the performance of the system using various deep learning approaches.
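One straightforward way to combine an imaging modality with an acoustic signal, as suggested above, is late fusion of learned features. The sketch below, a toy PyTorch module with assumed input shapes and embedding sizes, concatenates an X-ray embedding with a cough-spectrogram embedding before a joint classifier; it is not drawn from any specific study in this review.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Toy late-fusion classifier: X-ray branch + cough-spectrogram branch."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        # Image branch: assumes a 1x224x224 chest X-ray as input
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 32-dim embedding
        )
        # Audio branch: assumes a 1x64x128 log-mel spectrogram of a cough recording
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 16-dim embedding
        )
        self.classifier = nn.Linear(32 + 16, n_classes)  # fused decision

    def forward(self, xray: torch.Tensor, cough: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.img_branch(xray), self.audio_branch(cough)], dim=1)
        return self.classifier(fused)

# Smoke test with random tensors standing in for real data
model = LateFusionNet()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 64, 128))
print(logits.shape)  # torch.Size([2, 3])
```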

6. Conclusions

AI techniques are not a substitute for medical doctors and expert radiologists; however, they can efficiently and automatically support the analysis of medical imagery. The development of CAD tools to detect COVID-19 has grown significantly in recent years, contributing to the body of clinical and medical research. The early detection of COVID-19 using AI techniques would help to prevent the progression of the pandemic by enabling rapid decision-making. This study aimed to observe and analyze the growth and improvement of AI techniques for the detection of COVID-19. In this review, 184 papers were selected and summarized. The results showed that DNN, HCFL, and hybrid approaches all have a high potential to predict COVID-19 cases. The classification, segmentation, and quantification of the severity level of COVID-19 on heterogeneous datasets can be improved if medical experts play a significant role in building the framework of AI techniques, providing significant knowledge of image features and real-world requirements.

Author Contributions

Conceptualization, A.G., U.R. and U.R.A.; Methodology, A.G. and U.R.; Validation and review, C.P.O., W.Y.C., N.A.K., K.H., P.D.B. and S.C.; Review and editing, C.P.O., E.J.C. and U.R.A.; Original draft preparation, A.G., U.R., S.N., M.R.G., C.D. and J.S.; Visualization, U.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by the Ministry of Higher Education, Malaysia (grant number MRUN2019-3D).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Zhu N., Zhang D., Wang W., Li X., Yang B., Song J., Zhao X., Huang B., Shi W., Lu R., et al. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J Med. 2020;382:727–733. doi: 10.1056/NEJMoa2001017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Li Q., Guan X., Wu P., Wang X., Zhou L., Tong Y., Ren R., Leung K.S.M., Lau E.H.Y., Wong J.Y., et al. Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus–Infected Pneumonia. N. Engl. J. Med. 2020;382:1199–1207. doi: 10.1056/NEJMoa2001316. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Zheng M., Gao Y., Wang G., Song G., Liu S., Sun D., Xu Y., Tian Z. Functional exhaustion of antiviral lymphocytes in COVID-19 patients. Cell. Mol. Immunol. 2020;17:533–535. doi: 10.1038/s41423-020-0402-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Zhang J., Litvinova M., Wang W., Wang Y., Deng X., Chen X., Li M., Zheng W., Yi L., Chen X., et al. Evolving epidemiology and transmission dynamics of coronavirus disease 2019 outside Hubei province, China: A descriptive and modelling study. Lancet Infect. Dis. 2020;20:793–802. doi: 10.1016/S1473-3099(20)30230-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Cascella M., Rajnik M., Aleem A., Dulebohn S.C., Di Napoli R. Features, Evaluation and Treatment Coronavirus (COVID-19) Stat Pearls Publishing; Treasure Island, FL, USA: 2020. [PubMed] [Google Scholar]
  • 6.Wan Y., Shang J., Graham R., Baric R.S., Li F. Receptor recognition by novel coronavirus from Wuhan: An analysis based on decade-long structural studies of SARS. J. Virol. 2020;94:e00127-20. doi: 10.1128/JVI.00127-20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Hoffmann M., Kleine-Weber H., Schroeder S., Krüger N., Herrler T., Erichsen S., Schiergens T.S., Herrler G., Wu N.H., Nitsche A., et al. SARS-CoV-2 cell entry depends on ACE2 and TMPRSS2 and is blocked by a clinically proven protease inhibitor. Cell. 2020;181:271–280. doi: 10.1016/j.cell.2020.02.052. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Sims A.C., Baric R.S., Yount B., Burkett S.E., Collins P.L., Pickles R.J. Severe acute respiratory syndrome coronavirus infection of human ciliated airway epithelia: Role of ciliated cells in viral spread in the conducting airways of the lungs. J. Virol. 2005;79:15511–15524. doi: 10.1128/JVI.79.24.15511-15524.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Tang N.L.-S., Chan P.K.-S., Wong C.-K., To K.-F., Wu A.K.-L., Sung Y.-M., Hui D.S.-C., Sung J.J.-Y., Lam C.W.-K. Early Enhanced Expression of Interferon-Inducible Protein-10 (CXCL-10) and Other Chemokines Predicts Adverse Outcome in Severe Acute Respiratory Syndrome. Clin. Chem. 2005;51:2333–2340. doi: 10.1373/clinchem.2005.054460. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Xu Z., Shi L., Wang Y., Zhang J., Huang L., Zhang C., Liu S., Zhao P., Liu H., Zhu L., et al. Pathological findings of COVID-19 associated with acute respiratory distress syndrome. Lancet Respir. Med. 2020;8:420–422. doi: 10.1016/S2213-2600(20)30076-X. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Yuki K., Fujiogi M., Koutsogiannaki S. COVID-19 pathophysiology: A review. Clin. Immunol. 2020;215:108427. doi: 10.1016/j.clim.2020.108427. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Donnelly C.A., Ghani A.C., Leung G.M., Hedley A.J., Fraser C., Riley S., Abu-Raddad L.J., Ho L.M., Thach T.Q., Chau P., et al. Epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong. Lancet. 2003;361:1761–1766. doi: 10.1016/S0140-6736(03)13410-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Goyal P., Choi J.J., Pinheiro L.C., Schenck E.J., Chen R., Jabri A., Satlin M.J., Campion T.R., Jr., Nahid M., Ringel J.B., et al. Clinical Characteristics of Covid-19 in New York City. N. Engl. J. Med. 2020;382:2372–2374. doi: 10.1056/NEJMc2010419. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Guan W.-J., Ni Z.-Y., Hu Y., Liang W.-H., Ou C.-Q., He J.-X., Liu L., Shan H., Lei C.-L., Hui D.S.C., et al. Clinical Characteristics of Coronavirus Disease 2019 in China. N. Engl. J. Med. 2020;382:1708–1720. doi: 10.1056/NEJMoa2002032. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Young B.E., Ong S., Kalimuddin S., Low J.G., Tan S.Y., Loh J., Ng O.T., Marimuthu K., Ang L.W., Mak T.M., et al. Epidemiologic features and clinical course of patients infected with SARS-CoV-2 in Singapore. JAMA. 2020;323:1488–1494. doi: 10.1001/jama.2020.3204. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Cheung K.S., Hung I., Chan P., Lung K.C., Tso E., Liu R., Ng Y.Y., Chu M.Y., Chung T., Tam A.R., et al. Gastrointestinal manifestations of SARS-CoV-2 infection and virus load in fecal samples from the hong kong cohort and systematic review and meta-analysis. Gastroenterology. 2020;159:81–95. doi: 10.1053/j.gastro.2020.03.065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. [(accessed on 15 June 2020)]. Available online: https://www.webmd.com/lung/COVID19-digestive-symptoms.
  • 18.Liu R., Han H., Liu F., Lv Z., Wu K., Liu Y., Feng Y., Zhu C. Positive rate of RT–PCR detection of SARS-CoV-2 infection in 4880 cases from one hospital in Wuhan, China, from Jan to Feb 2020. Clin. Chim. Acta. 2020;505:172–175. doi: 10.1016/j.cca.2020.03.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Kugunavar S., Prabhakar C.J. Convolutional neural networks for the diagnosis and prognosis of the coronavirus disease pandemic. Vis. Comput. Ind. Biomed. Art. 2021;4:12. doi: 10.1186/s42492-021-00078-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Shuja J., Alanazi E., Alasmary W., Alashaikh A. COVID-19 open source data sets: A comprehensive survey. Appl. Intell. 2021;51:1296–1325. doi: 10.1007/s10489-020-01862-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Rehman A., Iqbal M., Xing H., Ahmed I. COVID-19 Detection Empowered with Machine Learning and Deep Learning Techniques: A Systematic Review. Appl. Sci. 2021;11:3414. doi: 10.3390/app11083414. [DOI] [Google Scholar]
  • 22.Low W.C.S., Chuah J.H., Tee C.A.T.H., Anis S., Shoaib M.A., Faisal A., Khalil A., Lai K.W. An Overview of Deep Learning Techniques on Chest X-ray and CT Scan Identification of COVID-19. Comput. Math. Methods Med. 2021;2021:5528144. doi: 10.1155/2021/5528144. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Ghaderzadeh M., Asadi F. Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review. J. Health Eng. 2021;2021:6677314. doi: 10.1155/2021/6677314. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Ozsahin I., Sekeroglu B., Musa M.S., Mustapha M.T., Ozsahin D.U. Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence. Comput. Math. Methods Med. 2020;2020:9756518. doi: 10.1155/2020/9756518. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Aishwarya T., Kumar V.R. Machine Learning and Deep Learning Approaches to Analyze and Detect COVID-19: A Review. SN Comput. Sci. 2021;2:1–9. doi: 10.1007/s42979-021-00605-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Nayak J., Naik B., Dinesh P., Vakula K., Dash P.B., Pelusi D. Significance of deep learning for Covid-19: State-of-the-art review. Res. Biomed. Eng. 2021:1–24. doi: 10.1007/s42600-021-00135-6. [DOI] [Google Scholar]
  • 27.Alballa N., Al-Turaiki I. Machine Learning Approaches in COVID-19 Diagnosis, Mortality, and Severity Risk Prediction: A Review. Inform. Med. Unlocked. 2021;24:100564. doi: 10.1016/j.imu.2021.100564. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Bhattacharya S., Maddikunta P.K.R., Pham Q.-V., Gadekallu T.R., Chowdhary C.L., Alazab M., Piran J. Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. Sustain. Cities Soc. 2021;65:102589. doi: 10.1016/j.scs.2020.102589. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Tayarani M.H. Applications of artificial intelligence in battling against covid-19: A literature review. Chaos Solitons Fractals. 2021;142:110338. doi: 10.1016/j.chaos.2020.110338. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Moher D., Liberati A., Tetzlaff J., Altman D.G., The PRISMA Group Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010;8:336–341. doi: 10.1016/j.ijsu.2010.02.007. [DOI] [PubMed] [Google Scholar]
  • 31.Huang P., Liu T., Huang L., Liu H., Lei M., Xu W., Hu X., Chen J., Liu B. Use of Chest CT in Combination with Negative RT-PCR Assay for the 2019 Novel Coronavirus but High Clinical Suspicion. Radiology. 2020;295:22–23. doi: 10.1148/radiol.2020200330. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Cellina M., Martinenghi C., Marino P., Oliva G. COVID-19 pneumonia—ultrasound, radiographic, and computed tomography findings: A comprehensive pictorial essay. Emerg. Radiol. 2021;28:519–526. doi: 10.1007/s10140-021-01905-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Yan T. Ccap: A Chest Ct Dataset. 2020. [(accessed on 29 November 2021)]. Available online: https://ieee-dataport.org/authors/tao-yan.
  • 34. [(accessed on 5 October 2020)]. Available online: https://iclus-web.bluetensor.ai/
  • 35.Zhao J., Zhang Y., He X., Xie P. COVID-CT-Dataset: A CT scan dataset about COVID-19. arXiv. 2020. arXiv:2003.13865 [Google Scholar]
  • 36.Soares E., Angelov P., Biaso S., Froes M.H., Abe D.K. SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. MedRxiv. 2020:20078584. doi: 10.1101/2020.04.24.20078584. [DOI] [Google Scholar]
  • 37.Morozov S.P., Andreychenko A.E., Pavlov N.A., Vladzymyrskyy A.V., Ledikhova N.V., Gombolevskiy V.A., Blokhin I.A., Gelezhe P.B., Gonchar A.V., Chernina V.Y. MosMedData: Chest CT scans with COVID-19 related findings dataset. arXiv. 2020. arXiv:2005.06465 [Google Scholar]
  • 38.Ma J., Cheng G., Wang Y., An X., Gao J., Yu Z., Zhang M., Liu X., Deng X., Cao S., et al. COVID-19 CT Lung and Infection Segmentation Dataset. Zenodo. 2020;20:3757476. doi: 10.5281/zenodo.3757476. [DOI] [Google Scholar]
  • 39.Born J., Wiedemann N., Cossio M., Buhre C., Brändle G., Leidermann K., Aujayeb A., Moor M., Rieck B., Borgwardt K. Accelerating Detection of Lung Pathologies with Explainable Ultrasound Image Analysis. Appl. Sci. 2021;11:672. doi: 10.3390/app11020672. [DOI] [Google Scholar]
  • 40.Zuiderveld K. Contrast Limited Adaptive Histogram Equalization. Graphics Gems IV. Academic Press Professional; San Diego, CA, USA: 1994. pp. 474–485. [Google Scholar]
  • 41.Kirsch R. Computer determination of the constituent structure of biological images. Comput. Biomed. Res. 1971;4:315–328. doi: 10.1016/0010-4809(71)90034-6. [DOI] [PubMed] [Google Scholar]
  • 42.Gonzalez R.C., Woods R.E. Digital Image Processing. 2nd ed. Prentice Hall; Englewood Cliffs, NJ, USA: 2002. [Google Scholar]
  • 43.Charles P. Digital Video and HDTV Algorithms and Interfaces. Morgan Kaufman Publishers; San Francisco, CA, USA: 2003. [Google Scholar]
  • 44.Haralick R.M., Shapiro L.G. Image segmentation techniques. Comput. Vis. Graph. Image Process. 1985;29:100–132. doi: 10.1016/S0734-189X(85)90153-7. [DOI] [Google Scholar]
  • 45.Meyer F. Topographic distance and watershed lines. Signal Process. 1994;38:113–125. doi: 10.1016/0165-1684(94)90060-4. [DOI] [Google Scholar]
  • 46.Ronneberger O., Fischer P., Brox T. U-net: Convolutional networks for biomedical image segmentation; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Munich, Germany. 5–9 October 2015; Berlin/Heidelberg, Germany: Springer; 2015. pp. 234–241. [Google Scholar]
  • 47.Chaurasia A., Culurciello E. LinkNet: Exploiting encoder representations for efficient semantic segmentation; Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP); Saint Petersburg, FL, USA. 10–13 December 2017; pp. 1–4. [Google Scholar]
  • 48.Selvan R., Dam E.B., Detlefsen N.S., Rischel S., Sheng K., Nielsen M., Pai A. Lung segmentation from chest X-rays using variational data imputation. arXiv. 2020. arXiv:2005.10052 [Google Scholar]
  • 49.Rahman S., Sarker S., Al Miraj A., Nihal R.A., Haque A.K.M.N., Al Noman A. Deep Learning–Driven Automated Detection of COVID-19 from Radiography Images: A Comparative Analysis. Cogn. Comput. 2021:1–30. doi: 10.1007/s12559-020-09779-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Chandra T.B., Verma K., Singh B.K., Jain D., Netam S.S. Coronavirus disease (COVID-19) detection in Chest X-ray images using majority voting based classifier ensemble. Expert Syst. Appl. 2021;165:113909. doi: 10.1016/j.eswa.2020.113909. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Chawla N.V., Bowyer K.W., Hall L.O., Kegelmeyer W.P. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res. 2002;16:321–357. doi: 10.1613/jair.953. [DOI] [Google Scholar]
  • 52.Radford A., Metz L., Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks; Proceedings of the 4th International Conference on Learning Representations, ICLR 2016—Conference Track Proceedings; San Juan, PR, USA. 2–4 May 2016. [Google Scholar]
  • 53.Mirza M., Osindero S. Conditional generative adversarial nets. arXiv. 2014. arXiv:1411.1784 [Google Scholar]
  • 54.Waheed A., Goyal M., Gupta D., Khanna A., Al-Turjman F., Pinheiro P.R. CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection. IEEE Access. 2020;8:91916–91923. doi: 10.1109/ACCESS.2020.2994762. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Mallat S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989;11:674–693. doi: 10.1109/34.192463. [DOI] [Google Scholar]
  • 56.Haralick R.M., Shanmugam K., Dinstein I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973;SMC-3:610–621. doi: 10.1109/TSMC.1973.4309314. [DOI] [Google Scholar]
  • 57.Górszczyk A., Malinowski M., Bellefleur G. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophys Prospect. 2015;63:903–918. doi: 10.1111/1365-2478.12234. [DOI] [Google Scholar]
  • 58.Tuncer T., Dogan S., Ozyurt F. An automated Residual Exemplar Local Binary Pattern and iterative ReliefF based COVID-19 detection method using chest X-ray image. Chemom. Intell. Lab. Syst. 2020;203:104054. doi: 10.1016/j.chemolab.2020.104054. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Dalal N., Triggs B. Histograms of Oriented Gradients for Human Detection; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05); San Diego, CA, USA. 20–25 June 2005; [DOI] [Google Scholar]
  • 60.Yin M., Liu W., Zhao X., Guo Q.-W., Bai R.-F. Image denoising using trivariate prior model in nonsubsampled dual-tree complex contourlet transform domain and non-local means filter in spatial domain. Optik. 2013;124:6896–6904. doi: 10.1016/j.ijleo.2013.05.132. [DOI] [Google Scholar]
  • 61.Ranjbarzadeh R., Ghoushchi S.J., Bendechache M., Amirabadi A., Ab Rahman M.N., Saadi S.B., Aghamohammadi A., Forooshani M.K. Lung Infection Segmentation for COVID-19 Pneumonia Based on a Cascade Convolutional Network from CT Images. BioMed Res. Int. 2021;2021:5544742. doi: 10.1155/2021/5544742. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. [(accessed on 20 May 2020)]. Available online: https://pillow.readthedocs.io/en/3.1.x/reference/Image.html.
  • 63.Elaziz M.A., Hosny K.M., Salah A., Darwish M.M., Lu S., Sahlol A.T. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE. 2020;15:e0235187. doi: 10.1371/journal.pone.0235187. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Ojala T., Pietikainen M., Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002;24:971–987. doi: 10.1109/TPAMI.2002.1017623. [DOI] [Google Scholar]
  • 65.Hosny K.M., Darwish M.M., Li K., Salah A. COVID-19 diagnosis from CT scans and chest X-ray images using low-cost Raspberry Pi. PLoS ONE. 2021;16:e0250688. doi: 10.1371/journal.pone.0250688. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.He K., Zhang X., Ren S., Sun J. Deep Residual Learning for Image Recognition; Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA. 27–30 June 2016; pp. 770–778. [DOI] [Google Scholar]
  • 67.Krizhevsky A., Sutskever I., Hinton G.E. ImageNet Classification with Deep Convolutional Neural Net Works. Commun. ACM. 2017;60:84–90. doi: 10.1145/3065386. [DOI] [Google Scholar]
  • 68.Huang G., Liu Z., van der Maaten L., Weinberger K.Q. Densely Connected Convolutional Networks; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Honolulu, HI, USA. 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  • 69.Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv. 2014. arXiv:1409.1556 [Google Scholar]
  • 70.Zhou B., Lapedriza A., Torralba A., Oliva A. Places: An Image Database for Deep Scene Understanding. arXiv. 2016. arXiv:1610.02055. doi: 10.1167/17.10.296. [DOI] [Google Scholar]
  • 71.Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA. 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  • 72.Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna Z. Rethinking the inception architecture for computer vision; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA. 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  • 73.Iandola F.N., Han S., Moskewicz M.W., Ashraf K., Dally W.J., Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv. 2016. arXiv:1602.07360 [Google Scholar]
  • 74.Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S., Huang Z., Karpathy A., Khosla A., Bernstein M., et al. ImageNet Large Scale Visual Recognition Challenge. arXiv. 2015. arXiv:1409.0575. doi: 10.1007/s11263-015-0816-y. [DOI] [Google Scholar]
  • 75.Chollet F. Xception: Deep Learning with Depthwise Separable Convolutions; Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Honolulu, HI, USA. 21–26 July 2017; pp. 1800–1807. [DOI] [Google Scholar]
  • 76.Ahsan M., Alam T.E., Trafalis T., Huebner P. Deep MLP-CNN Model Using Mixed-Data to Distinguish between COVID-19 and Non-COVID-19 Patients. Symmetry. 2020;12:1526. doi: 10.3390/sym12091526. [DOI] [Google Scholar]
  • 77.Ahmed S., Hossain T., Hoque O.B., Sarker S., Rahman S., Shah F.M. Automated COVID-19 Detection from Chest X-ray Images: A High-Resolution Network (HRNet) Approach. SN Comput. Sci. 2021;2:1–17. doi: 10.1007/s42979-021-00690-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Guyon I., Elisseeff A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003;3:1157–1182. [Google Scholar]
  • 79.Mirjalili S., Gandomi A.H., Mirjalili S.Z., Saremi S., Faris H., Mirjalili S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng Softw. 2017;114:163–191. doi: 10.1016/j.advengsoft.2017.07.002. [DOI] [Google Scholar]
  • 80.El-Kenawy E.-S.M., Mirjalili S., Ibrahim A., Alrahmawy M., El-Said M., Zaki R.M., Eid M.M. Advanced Meta-Heuristics, Convolutional Neural Networks, and Feature Selectors for Efficient COVID-19 X-ray Chest Image Classification. IEEE Access. 2021;9:36019–36037. doi: 10.1109/ACCESS.2021.3061058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Geem Z.W., Kim J.H., Loganathan G. A new heuristic optimization algorithm: Harmony search. Simulation. 2001;76:60–68. doi: 10.1177/003754970107600201. [DOI] [Google Scholar]
  • 82.Tuncer T., Akbal E., Dogan S. An automated snoring sound classification method based on local dual octal pattern and iterative hybrid feature selector, Biomed. Signal Process Contr. 2021;63:102173. doi: 10.1016/j.bspc.2020.102173. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Mirjalili S., Mirjalili S.M., Lewis A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014;69:46–61. doi: 10.1016/j.advengsoft.2013.12.007. [DOI] [Google Scholar]
  • 84.Singh A.K., Kumar A., Mahmud M., Kaiser M.S., Kishore A. COVID-19 Infection Detection from Chest X-ray Images Using Hybrid Social Group Optimization and Support Vector Classifier. Cogn. Comput. 2021:1–13. doi: 10.1007/s12559-021-09848-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.Sahlol A.T., Yousri D., Ewees A.A., Al-Qaness M.A.A., Damasevicius R., Elaziz M.A. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci. Rep. 2020;10:1–15. doi: 10.1038/s41598-020-71294-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Peng H., Long F., Ding C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005;27:1226–1238. doi: 10.1109/TPAMI.2005.159. [DOI] [PubMed] [Google Scholar]
  • 87.van der Maaten L., Hinton G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008;9:2579–2605. [Google Scholar]
  • 88.Krzanowski W.J. Principles of Multivariate Analysis. Oxford University Press; Oxford, UK: 1988. [Google Scholar]
  • 89.Kojadinovic I. Relevance measures for subset variable selection in regression problems based on k-additive mutual information. Comput. Stat. Data Anal. 2005;49:1205–1227. doi: 10.1016/j.csda.2004.07.026. [DOI] [Google Scholar]
  • 90.Spolaor N., Cherman E.A., Monard M.C., Lee H.D. ReliefF for Multi-label Feature Selection; Proceedings of the 2013 Brazilian Conference on Intelligent Systems; Fortaleza, Brazil. 19–24 October 2013; pp. 6–11. [Google Scholar]
  • 91.Mirjalili S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016;27:1053–1073. doi: 10.1007/s00521-015-1920-1. [DOI] [Google Scholar]
  • 92.El-Kenawy E.-S.M., Ibrahim A., Mirjalili S., Eid M.M., Hussein S.E. Novel Feature Selection and Voting Classifier Algorithms for COVID-19 Classification in CT Images. IEEE Access. 2020;8:179317–179335. doi: 10.1109/ACCESS.2020.3028012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.Hasan A.M., Al-Jawad M.M., Jalab H.A., Shaiba H., Ibrahim R.W., Al-Shamasneh A.R. Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features. Entropy. 2020;22:517. doi: 10.3390/e22050517. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94.Kingma D.P., Ba J. Adam: A method for stochastic optimization; Proceedings of the International Conference Learn Represent (ICLR); San Diego, CA, USA. 5–8 May 2015. [Google Scholar]
  • 95.Tieleman T., Hinton G. Divide the gradient by a running average of its recent magnitude. Coursera: Neural Networks for Machine Learning. Tech. Rep. 2012;4:26–31. [Google Scholar]
  • 96.Mirjalili S., Lewis A. The whale optimization algorithm. Adv. Eng. Softw. 2016;95:51–67. doi: 10.1016/j.advengsoft.2016.01.008. [DOI] [Google Scholar]
  • 97.Pathan S., Siddalingaswamy P., Ali T. Automated Detection of Covid-19 from Chest X-ray scans using an optimized CNN architecture. Appl. Soft Comput. 2021;104:107238. doi: 10.1016/j.asoc.2021.107238. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 98.Simon D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008;12:702–713. doi: 10.1109/TEVC.2008.919004. [DOI] [Google Scholar]
  • 99.Singh D., Kumar V., Kaur M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution–based convolutional neural networks. Eur. J. Clin. Microbiol. Infect. Dis. 2020;39:1379–1389. doi: 10.1007/s10096-020-03901-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Ho T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998;20:832–844. doi: 10.1109/34.709601. [DOI] [Google Scholar]
  • 101.Sain S.R., Vapnik V.N. The Nature of Statistical Learning Theory. Technometrics. 1996;38:409. doi: 10.1080/00401706.1996.10484565. [DOI] [Google Scholar]
  • 102.Breiman L. Bagging predictors. Mach. Learn. 1996;24:123–140. doi: 10.1007/BF00058655. [DOI] [Google Scholar]
  • 103.Larose D.T. Data Mining Methods and Models. Wiley; Hoboken, NJ, USA: 2005. [Google Scholar]
  • 104.Breiman L., Friedman J.H., Olshen R.A., Stone C.J. Classification and Regression Trees. Chapman & Hall; Boca Raton, FL, USA: 1984. [Google Scholar]
  • 105.Schutze H., Manning C.D., Raghavan P. Introduction to Information Retrieval. Cambridge University Press; Cambridge, UK: 2008. [Google Scholar]
  • 106.Hopfield J.J. Artificial neural networks. IEEE Circuits Devices Mag. 1988;4:3–10. doi: 10.1109/101.8118. [DOI] [Google Scholar]
  • 107.Wasserman P.D. Advanced Methods in Neural Computing. Van Nostrand Reinhold; New York, NY, USA: 1993. [Google Scholar]
  • 108.Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer; New York, NY, USA: 2009. [Google Scholar]
  • 109.Specht D.F. Probabilistic neural networks. Neural Netw. 1990;3:109–118. [Google Scholar]
  • 110.Huang G.-B., Zhou H., Ding X., Zhang R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst Man Cybern B Cybern. 2012;42:513–529. doi: 10.1109/TSMCB.2011.2168604. [DOI] [PubMed] [Google Scholar]
  • 111.Freund Y., Schapire R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997;55:119–139. doi: 10.1006/jcss.1997.1504. [DOI] [Google Scholar]
  • 112.Chen T., Guestrin C. Xgboost: A scalable tree boosting system; Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining; San Francisco, CA, USA. 13–17 August 2016; pp. 785–794. [Google Scholar]
  • 113.Wright R. Logistic regression. In: Grimm L.C., Yarnold P.R., editors. Reading & Understanding Multivariate Statistics. American Psychological Association; Washington, DC, USA: 1995. [Google Scholar]
  • 114.Mostafiz R., Uddin M.S., Nur-A-Alam R.M., Rahman M.M. Covid-19 detection in chest X-ray through random forest classifier using a hybridization of deep CNN and DWT optimized features. J. King Saud Univ.-Comput. Inf. Sci. 2020 doi: 10.1016/j.jksuci.2020.12.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 115.Brunese L., Martinelli F., Mercaldo F., Santone A. Machine learning for coronavirus covid-19 detection from chest X-rays. Procedia Comput. Sci. 2020;176:2212–2221. doi: 10.1016/j.procs.2020.09.258. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.Islam Z., Islam M., Asraf A. A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Inform. Med. Unlocked. 2020;20:100412. doi: 10.1016/j.imu.2020.100412. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117.Rahimzadeh M., Attar A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Inform. Med. Unlocked. 2020;19:100360. doi: 10.1016/j.imu.2020.100360. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 118.Khan A.I., Shah J.L., Bhat M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 2020;196:105581. doi: 10.1016/j.cmpb.2020.105581. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Dorr F., Chaves H., Serra M.M., Ramirez A., Costa M.E., Seia J., Cejas C., Castro M., Eyheremendy E., Slezak D.F., et al. COVID-19 pneumonia accurately detected on chest radiographs with artificial intelligence. Intell. Med. 2020;3:100014. doi: 10.1016/j.ibmed.2020.100014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 120.Ozturk T., Talo M., Yildirim E.A., Baloglu U.B., Yildirim O., Acharya U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020;121:103792. doi: 10.1016/j.compbiomed.2020.103792. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Altan A., Karasu S. Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. Chaos Solitons Fractals. 2020;140:110071. doi: 10.1016/j.chaos.2020.110071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 122.Brunese L., Mercaldo F., Reginelli A., Santone A. Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays. Comput. Methods Programs Biomed. 2020;196:105608. doi: 10.1016/j.cmpb.2020.105608. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123.Jain G., Mittal D., Thakur D., Mittal M.K. A deep learning approach to detect Covid-19 coronavirus with X-ray images. Biocybern. Biomed. Eng. 2020;40:1391–1405. doi: 10.1016/j.bbe.2020.08.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Heidari M., Mirniaharikandehei S., Khuzani A.Z., Danala G., Qiu Y., Zheng B. Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms. Int. J. Med. Inform. 2020;144:104284. doi: 10.1016/j.ijmedinf.2020.104284. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 125.Minaee S., Kafieh R., Sonka M., Yazdani S., Soufi G.J. Deep-COVID: Predicting COVID-19 from chest X-ray images using deep transfer learning. Med. Image Anal. 2020;65:101794. doi: 10.1016/j.media.2020.101794. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 126.Afshar P., Heidarian S., Naderkhani F., Oikonomou A., Plataniotis K.N., Mohammadi A. COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images. Pattern Recognit. Lett. 2020;138:638–643. doi: 10.1016/j.patrec.2020.09.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 127.Panwar H., Gupta P., Siddiqui M.K., Morales-Menendez R., Singh V. Application of deep learning for fast detection of COVID-19 in X-rays using nCOVnet. Chaos Solitons Fractals. 2020;138:109944. doi: 10.1016/j.chaos.2020.109944. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Azemin M.Z.C., Hassan R., Tamrin M.I.M., Ali M.A.M. COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-ray Images as Training Data: Preliminary Findings. Int. J. Biomed. Imaging. 2020;2020:8828855. doi: 10.1155/2020/8828855. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 129.Sharma A., Rani S., Gupta D. Artificial Intelligence-Based Classification of Chest X-ray Images into COVID-19 and Other Infectious Diseases. Int. J. Biomed. Imaging. 2020;2020:8889023. doi: 10.1155/2020/8889023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 130.Haque K.F., Abdelgawad A. A Deep Learning Approach to Detect COVID-19 Patients from Chest X-ray Images. AI. 2020;1:418–435. doi: 10.3390/ai1030027. [DOI] [Google Scholar]
  • 131.Civit-Masot J., Luna-Perejón F., Morales M.D., Civit A. Deep Learning System for COVID-19 Diagnosis Aid Using X-ray Pulmonary Images. Appl. Sci. 2020;10:4640. doi: 10.3390/app10134640. [DOI] [Google Scholar]
  • 132.Duran-Lopez L., Dominguez-Morales J., Corral-Jaime J., Vicente-Diaz S., Linares-Barranco A. COVID-XNet: A Custom Deep Learning System to Diagnose and Locate COVID-19 in Chest X-ray Images. Appl. Sci. 2020;10:5683. doi: 10.3390/app10165683. [DOI] [Google Scholar]
  • 133.Misra S., Jeon S., Lee S., Managuli R., Jang I.-S., Kim C. Multi-Channel Transfer Learning of Chest X-ray Images for Screening of COVID-19. Electronics. 2020;9:1388. doi: 10.3390/electronics9091388. [DOI] [Google Scholar]
  • 134.Tartaglione E., Barbano C., Berzovini C., Calandri M., Grangetto M. Unveiling COVID-19 from CHEST X-ray with Deep Learning: A Hurdles Race with Small Data. Int. J. Environ. Res. Public Health. 2020;17:6933. doi: 10.3390/ijerph17186933. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 135.Khan I., Aslam N. A Deep-Learning-Based Framework for Automated Diagnosis of COVID-19 Using X-ray Images. Information. 2020;11:419. doi: 10.3390/info11090419. [DOI] [Google Scholar]
  • 136.Lee K.-S., Kim J., Jeon E.-T., Choi W., Kim N., Lee K. Evaluation of Scalability and Degree of Fine-Tuning of Deep Convolutional Neural Networks for COVID-19 Screening on Chest X-ray Images Using Explainable Deep-Learning Algorithm. J. Pers. Med. 2020;10:213. doi: 10.3390/jpm10040213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.De Moura J., Ramos J.J.D.M., Vidal P.L., Novo J., Ortega A.M. Analysis of Separability of COVID-19 and Pneumonia in Chest X-ray Images by Means of Convolutional Neural Networks. Proceedings. 2020;54:31. doi: 10.3390/proceedings2020054031. [DOI] [Google Scholar]
  • 138.Loey M., Smarandache F., Khalifa N.E.M. Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on GAN and Deep Transfer Learning. Symmetry. 2020;12:651. doi: 10.3390/sym12040651. [DOI] [Google Scholar]
  • 139.Zulkifley M.A., Abdani S.R., Zulkifley N.H. COVID-19 Screening Using a Lightweight Convolutional Neural Network with Generative Adversarial Network Data Augmentation. Symmetry. 2020;12:1530. doi: 10.3390/sym12091530. [DOI] [Google Scholar]
  • 140.Apostolopoulos I.D., Mpesiana T.A. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020;43:635–640. doi: 10.1007/s13246-020-00865-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 141.Das D., Santosh K.C., Pal U. Truncated inception net: COVID-19 outbreak screening using chest X-rays. Phys. Eng. Sci. Med. 2020;43:915–925. doi: 10.1007/s13246-020-00888-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Pratiwi N.C., Ibrahim N., Fu’Adah Y.N., Masykuroh K. Computer-Aided Detection (CAD) for COVID-19 based on Chest X-ray Images using Convolutional Neural Network. IOP Conf. Ser. Mater. Sci. Eng. 2020;982:012004. doi: 10.1088/1757-899X/982/1/012004. [DOI] [Google Scholar]
  • 143.Hirano H., Koga K., Takemoto K. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks. PLoS ONE. 2020;15:e0243963. doi: 10.1371/journal.pone.0243963. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 144.Wang L., Wong A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. arXiv. 2020. arXiv:2003.09871. doi: 10.1038/s41598-020-76550-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Wang D., Mo J., Zhou G., Xu L., Liu Y. An efficient mixture of deep and machine learning models for COVID-19 diagnosis in chest X-ray images. PLoS ONE. 2020;15:e0242535. doi: 10.1371/journal.pone.0242535. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Li T., Han Z., Wei B., Zheng Y., Hong Y., Cong J. Robust screening of covid-19 from chest X-ray via discriminative cost-sensitive learning. arXiv. 2020. arXiv:2004.12592 [Google Scholar]
  • 147.Sekeroglu B., Ozsahin I. Detection of COVID-19 from Chest X-ray Images Using Convolutional Neural Networks. SLAS Technol. Transl. Life Sci. Innov. 2020;25:553–565. doi: 10.1177/2472630320958376. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 148.Oh Y., Park S., Ye J.C. Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets. IEEE Trans. Med. Imaging. 2020;39:2688–2700. doi: 10.1109/TMI.2020.2993291. [DOI] [PubMed] [Google Scholar]
  • 149.Sahinbas K., Catak F.O. Data Science for COVID-19. Elsevier; Amsterdam, The Netherlands: 2021. Transfer learning-based convolutional neural network for COVID-19 detection with X-ray images; pp. 451–466. [Google Scholar]
  • 150.Chowdhury M.E.H., Rahman T., Khandakar A., Mazhar R., Kadir M.A., Bin Mahbub Z., Islam K.R., Khan M.S., Iqbal A., Al Emadi N., et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access. 2020;8:132665–132676. doi: 10.1109/ACCESS.2020.3010287. [DOI] [Google Scholar]
  • 151.Sakib S., Tazrin T., Fouda M.M., Fadlullah Z.M., Guizani M. DL-CRC: Deep Learning-Based Chest Radiograph Classification for COVID-19 Detection: A Novel Approach. IEEE Access. 2020;8:171575–171589. doi: 10.1109/ACCESS.2020.3025010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 152.Shah P.M., Ullah F., Shah D., Gani A., Maple C., Wang Y., Shahid A., Abrar M., Islam S.U. Deep GRU-CNN model for COVID-19 detection from chest X-rays data. IEEE Access. 2021 doi: 10.1109/ACCESS.2021.3077592. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 153.Iskanderani A.I., Mehedi I.M., Aljohani A.J., Shorfuzzaman M., Akther F., Palaniswamy T., Latif S.A., Latif A., Alam A. Artificial Intelligence and Medical Internet of Things Framework for Diagnosis of Coronavirus Suspected Cases. J. Health Eng. 2021;2021:3277988. doi: 10.1155/2021/3277988. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 154.Wang W., Li Y., Li J., Zhang P., Wang X. Detecting COVID-19 in Chest X-ray Images via MCFF-Net. Comput. Intell. Neurosci. 2021;2021:3604900. doi: 10.1155/2021/3604900. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 155.Shorfuzzaman M., Masud M., Alhumyani H., Anand D., Singh A. Artificial Neural Network-Based Deep Learning Model for COVID-19 Patient Detection Using X-ray Chest Images. J. Health Eng. 2021;2021:5513679. doi: 10.1155/2021/5513679. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 156.Reshi A.A., Rustam F., Mehmood A., Alhossan A., Alrabiah Z., Ahmad A., Alsuwailem H., Choi G.S. An Efficient CNN Model for COVID-19 Disease Detection Based on X-ray Image Classification. Complex. 2021;2021:6621607. doi: 10.1155/2021/6621607. [DOI] [Google Scholar]
  • 157.Alruwaili M., Shehab A., El-Ghany S.A. COVID-19 Diagnosis Using an Enhanced Inception-ResNetV2 Deep Learning Model in CXR Images. J. Health Eng. 2021;2021:6658058. doi: 10.1155/2021/6658058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 158.Shukla P.K., Sandhu J.K., Ahirwar A., Ghai D., Maheshwary P., Shukla P.K. Multiobjective Genetic Algorithm and Convolutional Neural Network Based COVID-19 Identification in Chest X-ray Images. Math. Probl. Eng. 2021;2021:7804540. doi: 10.1155/2021/7804540. [DOI] [Google Scholar]
  • 159.Taresh M.M., Zhu N., Ali T.A.A., Hameed A.S., Mutar M.L. Transfer Learning to Detect COVID-19 Automatically from X-ray Images Using Convolutional Neural Networks. Int. J. Biomed. Imaging. 2021;2021:8828404. doi: 10.1155/2021/8828404. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Kaur M., Kumar V., Yadav V., Singh D., Kumar N., Das N.N. Metaheuristic-based Deep COVID-19 Screening Model from Chest X-ray Images. J. Health Eng. 2021;2021:8829829. doi: 10.1155/2021/8829829. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 161.Ahmad F., Farooq A., Ghani M.U. Deep Ensemble Model for Classification of Novel Coronavirus in Chest X-ray Images. Comput. Intell. Neurosci. 2021;2021:8890226. doi: 10.1155/2021/8890226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 162.Hasan M.D.K., Ahmed S., Abdullah Z.M.E., Khan M.M., Anand D., Singh A., AlZain M., Masud M. Deep Learning Approaches for Detecting Pneumonia in COVID-19 Patients by Analyzing Chest X-ray Images. Math. Probl. Eng. 2021;2021:9929274. doi: 10.1155/2021/9929274. [DOI] [Google Scholar]
  • 163.Muhammad Y., Alshehri M.D., Alenazy W.M., Hoang T.V., Alturki R. Identification of Pneumonia Disease Applying an Intelligent Computational Framework Based on Deep Learning and Machine Learning Techniques. Mob. Inf. Syst. 2021;2021:9989237. doi: 10.1155/2021/9989237. [DOI] [Google Scholar]
  • 164.Pandit M., Banday S., Naaz R., Chishti M. Automatic detection of COVID-19 from chest radiographs using deep learning. Radiography. 2021;27:483–489. doi: 10.1016/j.radi.2020.10.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 165.Kedia P., Katarya R. CoVNet-19: A Deep Learning model for the detection and analysis of COVID-19 patients. Appl. Soft Comput. 2021;104:107184. doi: 10.1016/j.asoc.2021.107184. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 166.Saha P., Sadi M.S., Islam M. EMCNet: Automated COVID-19 diagnosis from X-ray images using convolutional neural network and ensemble of machine learning classifiers. Inform. Med. Unlocked. 2021;22:100505. doi: 10.1016/j.imu.2020.100505. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 167.Panahi A.H., Rafiei A., Rezaee A. FCOD: Fast COVID-19 Detector based on deep learning techniques. Inform. Med. Unlocked. 2021;22:100506. doi: 10.1016/j.imu.2020.100506. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 168.Xu Y., Lam H.-K., Jia G. MANet: A two-stage deep learning method for classification of COVID-19 from Chest X-ray images. Neurocomputing. 2021;443:96–105. doi: 10.1016/j.neucom.2021.03.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 169.Karakanis S., Leontidis G. Lightweight deep learning models for detecting COVID-19 from chest X-ray images. Comput. Biol. Med. 2021;130:104181. doi: 10.1016/j.compbiomed.2020.104181. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Alorf A. The Practicality of Deep Learning Algorithms in COVID-19 Detection: Application to Chest X-ray Images. Algorithms. 2021;14:183. doi: 10.3390/a14060183. [DOI] [Google Scholar]
  • 171.Ben Jabra M., Koubaa A., Benjdira B., Ammar A., Hamam H. COVID-19 Diagnosis in Chest X-rays Using Deep Learning and Majority Voting. Appl. Sci. 2021;11:2884. doi: 10.3390/app11062884. [DOI] [Google Scholar]
  • 172.Ahrabi S.S., Scarpiniti M., Baccarelli E., Momenzadeh A. An Accuracy vs. Complexity Comparison of Deep Learning Architectures for the Detection of COVID-19 Disease. Computation. 2021;9:3. doi: 10.3390/computation9010003. [DOI] [Google Scholar]
  • 173.Luján-García J., Villuendas-Rey Y., López-Yáñez I., Camacho-Nieto O., Yáñez-Márquez C. NanoChest-Net: A Simple Convolutional Network for Radiological Studies Classification. Diagnostics. 2021;11:775. doi: 10.3390/diagnostics11050775. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 174.Karbhari Y., Basu A., Geem Z.W., Han G.-T., Sarkar R. Generation of Synthetic Chest X-ray Images and Detection of COVID-19: A Deep Learning Based Approach. Diagnostics. 2021;11:895. doi: 10.3390/diagnostics11050895. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 175.Khishe M., Caraffini F., Kuhn S. Evolving Deep Learning Convolutional Neural Networks for Early COVID-19 Detection in Chest X-ray Images. Mathematics. 2021;9:1002. doi: 10.3390/math9091002. [DOI] [Google Scholar]
  • 176.Alam N.A., Ahsan M., Based A., Haider J., Kowalski M. COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning. Sensors. 2021;21:1480. doi: 10.3390/s21041480. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Vantaggiato E., Paladini E., Bougourzi F., Distante C., Hadid A., Taleb-Ahmed A. COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases. Sensors. 2021;21:1742. doi: 10.3390/s21051742. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 178.Muñoz-Saavedra L., Civit-Masot J., Luna-Perejón F., Domínguez-Morales M., Civit A. Does Two-Class Training Extract Real Features? A COVID-19 Case Study. Appl. Sci. 2021;11:1424. doi: 10.3390/app11041424. [DOI] [Google Scholar]
  • 179.Barua P., Gowdh N.M., Rahmat K., Ramli N., Ng W., Chan W., Kuluozturk M., Dogan S., Baygin M., Yaman O., et al. Automatic COVID-19 Detection Using Exemplar Hybrid Deep Features with X-ray Images. Int. J. Environ. Res. Public Health. 2021;18:8052. doi: 10.3390/ijerph18158052. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 180.Abbas A., Abdelsamea M.M., Gaber M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021;51:854–864. doi: 10.1007/s10489-020-01829-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 181.Gaur L., Bhatia U., Jhanjhi N.Z., Muhammad G., Masud M. Medical image-based detection of COVID-19 using Deep Convolution Neural Networks. Multimed. Syst. 2021:1–10. doi: 10.1007/s00530-021-00794-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 182.Luz E., Silva P., Silva R., Silva L., Guimarães J., Miozzo G., Moreira G., Menotti D. Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images. Res. Biomed. Eng. 2021:1–14. doi: 10.1007/s42600-021-00151-6. [DOI] [Google Scholar]
  • 183.Narin A., Kaya C., Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021;24:1207–1220. doi: 10.1007/s10044-021-00984-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 184.Rajagopal R. Comparative Analysis of COVID-19 X-ray Images Classification Using Convolutional Neural Network, Transfer Learning, and Machine Learning Classifiers Using Deep Features. Pattern Recognit. Image Anal. 2021;31:313–322. doi: 10.1134/S1054661821020140. [DOI] [Google Scholar]
  • 185.Castiglioni I., Ippolito D., Interlenghi M., Monti C.B., Salvatore C., Schiaffino S., Polidori A., Gandola D., Messa C., Sardanelli F. Machine learning applied on chest X-ray can aid in the diagnosis of COVID-19: A first experience from Lombardy, Italy. Eur. Radiol. Exp. 2021;5:1–10. doi: 10.1186/s41747-020-00203-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 186.Sarkar A., Vandenhirtz J., Nagy J., Bacsa D., Riley M. Identification of Images of COVID-19 from Chest X-rays Using Deep Learning: Comparing COGNEX VisionPro Deep Learning 1.0™ Software with Open Source Convolutional Neural Networks. SN Comput. Sci. 2021;2:1–16. doi: 10.1007/s42979-021-00496-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 187.De Sousa P.M., Carneiro P.C., Oliveira M.M., Pereira G.M., Junior C.A.D.C., De Moura L.V., Mattjie C., Da Silva A.M.M., Patrocinio A.C. COVID-19 classification in X-ray chest images using a new convolutional neural network: CNN-COVID. Res. Biomed. Eng. 2021:1–11. doi: 10.1007/s42600-020-00120-5. [DOI] [Google Scholar]
  • 188.Turkoglu M. COVIDetectioNet: COVID-19 diagnosis system based on X-ray images using features selected from pre-learned deep features ensemble. Appl. Intell. 2021;51:1213–1226. doi: 10.1007/s10489-020-01888-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 189.Umer M., Ashraf I., Ullah S., Mehmood A., Choi G.S. COVINet: A convolutional neural network approach for predicting COVID-19 from chest X-ray images. J. Ambient. Intell. Humaniz. Comput. 2021:1–13. doi: 10.1007/s12652-021-02917-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 190.Madaan V., Roy A., Gupta C., Agrawal P., Sharma A., Bologa C., Prodan R. XCOVNet: Chest X-ray Image Classification for COVID-19 Early Detection Using Convolutional Neural Networks. New Gener. Comput. 2021;39:583–597. doi: 10.1007/s00354-021-00121-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 191.Das A.K., Ghosh S., Thunder S., Dutta R., Agarwal S., Chakrabarti A. Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network. Pattern Anal. Appl. 2021;24:1111–1124. doi: 10.1007/s10044-021-00970-4. [DOI] [Google Scholar]
  • 192.Agrawal T., Choudhary P. FocusCovid: Automated COVID-19 detection using deep learning with chest X-ray images. Evol. Syst. 2021:1–15. doi: 10.1007/s12530-021-09385-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 193.Swapnarekha H., Behera H.S., Roy D., Das S., Nayak J. Competitive Deep Learning Methods for COVID-19 Detection using X-ray Images. J. Inst. Eng. Ser. B. 2021;102:1177–1190. doi: 10.1007/s40031-021-00589-3. [DOI] [Google Scholar]
  • 194.Aradhya V.N.M., Mahmud M., Guru D.S., Agarwal B., Kaiser M.S. One-shot Cluster-Based Approach for the Detection of COVID-19 from Chest X-ray Images. Cogn. Comput. 2021;13:873–881. doi: 10.1007/s12559-020-09774-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 195.Jain R., Gupta M., Taneja S., Hemanth D.J. Deep learning based detection and analysis of COVID-19 on chest X-ray images. Appl. Intell. 2021;51:1690–1700. doi: 10.1007/s10489-020-01902-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 196.Maior C.B.S., Santana J.M.M., Lins I.D., Moura M.J.C. Convolutional neural network model based on radiological images to support COVID-19 diagnosis: Evaluating database biases. PLoS ONE. 2021;16:e0247839. doi: 10.1371/journal.pone.0247839. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 197.Salam M.A., Taha S., Ramadan M. COVID-19 detection using federated machine learning. PLoS ONE. 2021;16:e0252573. doi: 10.1371/journal.pone.0252573. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 198.Ahsan M., Ahad T., Soma F.A., Paul S., Chowdhury A., Luna S.A., Yazdan M.M.S., Rahman A., Siddique Z., Huebner P. Detecting SARS-CoV-2 From Chest X-ray Using Artificial Intelligence. IEEE Access. 2021;9:35501–35513. doi: 10.1109/ACCESS.2021.3061621. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 199.Zhou J., Jing B., Wang Z., Xin H., Tong H. SODA: Detecting COVID-19 in Chest X-rays with Semi-supervised Open Set Domain Adaptation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021:1. doi: 10.1109/TCBB.2021.3066331. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 200.Mohagheghi S., Alizadeh M., Safavi S.M., Foruzan A.H., Chen Y.-W. Integration of CNN, CBMIR, and Visualization Techniques for Diagnosis and Quantification of Covid-19 Disease. IEEE J. Biomed. Health Inform. 2021;25:1873–1880. doi: 10.1109/JBHI.2021.3067333. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 201.Oyelade O.N., Ezugwu A.E.-S., Chiroma H. CovFrameNet: An Enhanced Deep Learning Framework for COVID-19 Detection. IEEE Access. 2021;9:77905–77919. doi: 10.1109/ACCESS.2021.3083516. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 202.Abbas A., Abdelsamea M.M., Gaber M.M. 4S-DT: Self-Supervised Super Sample Decomposition for Transfer Learning With Application to COVID-19 Detection. IEEE Trans. Neural Netw. Learn. Syst. 2021;32:2798–2808. doi: 10.1109/TNNLS.2021.3082015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 203.Zhou C., Song J., Zhou S., Zhang Z., Xing J. COVID-19 Detection Based on Image Regrouping and Resnet-SVM Using Chest X-ray Images. IEEE Access. 2021;9:81902–81912. doi: 10.1109/ACCESS.2021.3086229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 204.Dong S., Yang Q., Fu Y., Tian M., Zhuo C. RCoNet: Deformable Mutual Information Maximization and High-Order Uncertainty-Aware Learning for Robust COVID-19 Detection. IEEE Trans. Neural Netw. Learn. Syst. 2021;32:3401–3411. doi: 10.1109/TNNLS.2021.3086570. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 205.Ardakani A.A., Kanafi A.R., Acharya U.R., Khadem N., Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020;121:103795. doi: 10.1016/j.compbiomed.2020.103795. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 206.Silva P., Luz E., Silva G., Moreira G., Silva R., Lucio D., Menotti D. COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis. Inform. Med. Unlocked. 2020;20:100427. doi: 10.1016/j.imu.2020.100427. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 207.Li T., Wei W., Cheng L., Zhao S., Xu C., Zhang X., Zeng Y., Gu J. Computer-Aided Diagnosis of COVID-19 CT Scans Based on Spatiotemporal Information Fusion. J. Health Eng. 2021;2021:6649591. doi: 10.1155/2021/6649591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 208.Zhang P., Zhong Y., Deng Y., Tang X., Li X. CoSinGAN: Learning COVID-19 Infection Segmentation from a Single Radiological Image. Diagnostics. 2020;10:901. doi: 10.3390/diagnostics10110901. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 209.Wan Y., Zhou H., Zhang X. An Interpretation Architecture for Deep Learning Models with the Application of COVID-19 Diagnosis. Entropy. 2021;23:204. doi: 10.3390/e23020204. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 210.Gifani P., Shalbaf A., Vafaeezadeh M. Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans. Int. J. Comput. Assist. Radiol. Surg. 2021;16:115–123. doi: 10.1007/s11548-020-02286-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 211.Loey M., Manogaran G., Khalifa N.E.M. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput. Appl. 2020:1–13. doi: 10.1007/s00521-020-05437-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 212.Singh D., Kumar V., Kaur M. Densely connected convolutional networks-based COVID-19 screening model. Appl. Intell. 2021;51:3044–3051. doi: 10.1007/s10489-020-02149-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 213.Fan D.-P., Zhou T., Ji G.-P., Zhou Y., Chen G., Fu H., Shen J., Shao L. Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images. IEEE Trans. Med. Imaging. 2020;39:2626–2637. doi: 10.1109/TMI.2020.2996645. [DOI] [PubMed] [Google Scholar]
  • 214.Ouyang X., Huo J., Xia L., Shan F., Liu J., Mo Z., Yan F., Ding Z., Yang Q., Song B., et al. Dual-Sampling Attention Network for Diagnosis of COVID-19 From Community Acquired Pneumonia. IEEE Trans. Med. Imaging. 2020;39:2595–2605. doi: 10.1109/TMI.2020.2995508. [DOI] [PubMed] [Google Scholar]
  • 215.Zhang Y.-D., Satapathy S.C., Zhu L.-Y., Gorriz J.M., Wang S.-H. A seven-layer convolutional neural network for chest CT based COVID-19 diagnosis using stochastic pooling. IEEE Sensors J. 2020:1. doi: 10.1109/JSEN.2020.3025855. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 216.Qian X., Fu H., Shi W., Chen T., Fu Y., Shan F., Xue X. M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening From CT Imaging. IEEE J. Biomed. Health Inform. 2020;24:3539–3550. doi: 10.1109/JBHI.2020.3030853. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 217.Ozyurt F., Tuncer T., Subasi A. An automated COVID-19 detection based on fused dynamic exemplar pyramid feature extraction and hybrid feature selection using deep learning. Comput. Biol. Med. 2021;132:104356. doi: 10.1016/j.compbiomed.2021.104356. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 218.Rohila V.S., Gupta N., Kaul A., Sharma D.K. Deep learning assisted COVID-19 detection using full CT-scans. Internet Things. 2021;14:100377. doi: 10.1016/j.iot.2021.100377. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 219.Shiri I., Sorouri M., Geramifar P., Nazari M., Abdollahi M., Salimi Y., Khosravi B., Askari D., Aghaghazvini L., Hajianfar G., et al. Machine learning-based prognostic modeling using clinical data and quantitative radiomic features from chest CT images in COVID-19 patients. Comput. Biol. Med. 2021;132:104304. doi: 10.1016/j.compbiomed.2021.104304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 220.Priya C., Fathima S.S.S., Kirubanandasarathy N., Valanarasid A., Begam M.S., Aiswarya N. Automatic Optimized CNN Based COVID-19 Lung Infection Segmentation from CT Image. Elsevier BV; Amsterdam, The Netherlands: 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 221.Wu Z., Li L., Jin R., Liang L., Hu Z., Tao L., Han Y., Feng W., Zhou D., Li W., et al. Texture feature-based machine learning classifier could assist in the diagnosis of COVID-19. Eur. J. Radiol. 2021;137:109602. doi: 10.1016/j.ejrad.2021.109602. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 222.Yu X., Lu S., Guo L., Wang S.-H., Zhang Y.-D. ResGNet-C: A graph convolutional neural network for detection of COVID-19. Neurocomputing. 2021;452:592–605. doi: 10.1016/j.neucom.2020.07.144. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 223.Turkoglu M. COVID-19 Detection System Using Chest CT Images and Multiple Kernels-Extreme Learning Machine Based on Deep Neural Network. IRBM. 2021;42:207–214. doi: 10.1016/j.irbm.2021.01.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 224.He K., Zhao W., Xie X., Ji W., Liu M., Tang Z., Shi Y., Shi F., Gao Y., Liu J., et al. Synergistic learning of lung lobe segmentation and hierarchical multi-instance classification for automated severity assessment of COVID-19 in CT images. Pattern Recognit. 2021;113:107828. doi: 10.1016/j.patcog.2021.107828. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 225.Gao K., Su J., Jiang Z., Zeng L.-L., Feng Z., Shen H., Rong P., Xu X., Qin J., Yang Y., et al. Dual-branch combination network (DCN): Towards accurate diagnosis and lesion segmentation of COVID-19 using CT images. Med. Image Anal. 2021;67:101836. doi: 10.1016/j.media.2020.101836. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 226.Serte S., Demirel H. Deep learning for diagnosis of COVID-19 using 3D CT scans. Comput. Biol. Med. 2021;132:104306. doi: 10.1016/j.compbiomed.2021.104306. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 227.Perumal V., Narayanan V., Rajasekar S.J.S. Prediction of COVID-19 with Computed Tomography Images using Hybrid Learning Techniques. Dis. Markers. 2021;2021:5522729. doi: 10.1155/2021/5522729. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 228.Helwan A., Ma’Aitah M.K.S., Hamdan H., Ozsahin D.U., Tuncyurek O. Radiologists versus Deep Convolutional Neural Networks: A Comparative Study for Diagnosing COVID-19. Comput. Math. Methods Med. 2021;2021:5527271. doi: 10.1155/2021/5527271. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 229.Lawton S., Viriri S. Detection of COVID-19 from CT Lung Scans Using Transfer Learning. Comput. Intell. Neurosci. 2021;2021:5527923. doi: 10.1155/2021/5527923. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 230.Li X., Tan W., Liu P., Zhou Q., Yang J. Classification of COVID-19 Chest CT Images Based on Ensemble Deep Learning. J. Health Eng. 2021;2021:5528441. doi: 10.1155/2021/5528441. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 231.Salamh A.B.S., Salamah A.A., Akyüz H.I. A Study of a New Technique of the CT Scan View and Disease Classification Protocol Based on Level Challenges in Cases of Coronavirus Disease. Radiol. Res. Pract. 2021;2021:5554408. doi: 10.1155/2021/5554408. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 232.Wang S.-H., Zhang Y., Cheng X., Zhang X., Zhang Y.-D. PSSPNN: PatchShuffle Stochastic Pooling Neural Network for an Explainable Diagnosis of COVID-19 with Multiple-Way Data Augmentation. Comput. Math. Methods Med. 2021;2021:6633755. doi: 10.1155/2021/6633755. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • 233.Zheng B., Cai Y., Zeng F., Lin M., Zheng J., Chen W., Qin G., Guo Y. An Interpretable Model-Based Prediction of Severity and Crucial Factors in Patients with COVID-19. BioMed Res. Int. 2021;2021:8840835. doi: 10.1155/2021/8840835. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 234.Zheng R., Zheng Y., Dong-Ye C. Improved 3D U-Net for COVID-19 Chest CT Image Segmentation. Sci. Program. 2021;2021:9999368. doi: 10.1155/2021/9999368. [DOI] [Google Scholar]
  • 235.Oluwasanmi A., Aftab M.U., Qin Z., Ngo S.T., Van Doan T., Nguyen S.B. Transfer Learning and Semisupervised Adversarial Detection and Classification of COVID-19 in CT Images. Complexity. 2021;2021:6680455. doi: 10.1155/2021/6680455. [DOI] [Google Scholar]
  • 236.Manzo M., Pellino S. Fighting Together against the Pandemic: Learning Multiple Models on Tomography Images for COVID-19 Diagnosis. AI. 2021;2:261–273. doi: 10.3390/ai2020016. [DOI] [Google Scholar]
  • 237.Singh V.K., Abdel-Nasser M., Pandey N., Puig D. LungINFseg: Segmenting COVID-19 Infected Regions in Lung CT Images Based on a Receptive-Field-Aware Deep Learning Framework. Diagnostics. 2021;11:158. doi: 10.3390/diagnostics11020158. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 238.Qiblawey Y., Tahir A., Chowdhury M., Khandakar A., Kiranyaz S., Rahman T., Ibtehaz N., Mahmud S., Maadeed S., Musharavati F., et al. Detection and Severity Classification of COVID-19 in CT Images Using Deep Learning. Diagnostics. 2021;11:893. doi: 10.3390/diagnostics11050893. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 239.Chatzitofis A., Cancian P., Gkitsas V., Carlucci A., Stalidis P., Albanis G., Karakottas A., Semertzidis T., Daras P., Giannitto C., et al. Volume-of-Interest Aware Deep Neural Networks for Rapid Chest CT-Based COVID-19 Patient Risk Assessment. Int. J. Environ. Res. Public Health. 2021;18:2842. doi: 10.3390/ijerph18062842. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 240.Alshazly H., Linse C., Barth E., Martinetz T. Explainable COVID-19 Detection Using Chest CT Scans and Deep Learning. Sensors. 2021;21:455. doi: 10.3390/s21020455. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 241.Voulodimos A., Protopapadakis E., Katsamenis I., Doulamis A., Doulamis N. A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images. Sensors. 2021;21:2215. doi: 10.3390/s21062215. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 242.Ahuja S., Panigrahi B.K., Dey N., Rajinikanth V., Gandhi T.K. Deep transfer learning-based automated detection of COVID-19 from lung CT scan slices. Appl. Intell. 2021;51:571–585. doi: 10.1007/s10489-020-01826-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 243.Garain A., Basu A., Giampaolo F., Velasquez J.D., Sarkar R. Detection of COVID-19 from CT scan images: A spiking neural network-based approach. Neural Comput. Appl. 2021;33:12591–12604. doi: 10.1007/s00521-021-05910-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 244.Li D., Fu Z., Xu J. Stacked-autoencoder-based model for COVID-19 diagnosis on CT images. Appl. Intell. 2021;51:2805–2817. doi: 10.1007/s10489-020-02002-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 245.Rezaeijo S.M., Ghorvei M., Abedi-Firouzjah R., Mojtahedi H., Zarch H.E. Detecting COVID-19 in chest images based on deep transfer learning and machine learning algorithms. Egypt. J. Radiol. Nucl. Med. 2021;52:1–12. doi: 10.1186/s43055-021-00524-y. [DOI] [Google Scholar]
  • 246.Sen S., Saha S., Chatterjee S., Mirjalili S., Sarkar R. A bi-stage feature selection approach for COVID-19 prediction using chest CT images. Appl. Intell. 2021:1–16. doi: 10.1007/s10489-021-02292-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 247.Shah V., Keniya R., Shridharani A., Punjabi M., Shah J., Mehendale N. Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg. Radiol. 2021;28:497–505. doi: 10.1007/s10140-020-01886-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 248.Tan W., Liu P., Li X., Liu Y., Zhou Q., Chen C., Gong Z., Yin X., Zhang Y. Classification of COVID-19 pneumonia from chest CT images based on reconstructed super-resolution images and VGG neural network. Health Inf. Sci. Syst. 2021;9:1–12. doi: 10.1007/s13755-021-00140-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 249.Ardakani A.A., Acharya U.R., Habibollahi S., Mohammadi A. COVIDiag: A clinical CAD system to diagnose COVID-19 pneumonia based on CT findings. Eur. Radiol. 2021;31:121–130. doi: 10.1007/s00330-020-07087-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 250.Yasar H., Ceylan M. A novel comparative study for detection of Covid-19 on CT lung images using texture analysis, machine learning, and deep learning methods. Multimed. Tools Appl. 2021;80:5423–5447. doi: 10.1007/s11042-020-09894-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 251.Elmuogy S., Hikal N.A., Hassan E. An efficient technique for CT scan images classification of COVID-19. J. Intell. Fuzzy Syst. 2021;40:5225–5238. doi: 10.3233/JIFS-201985. [DOI] [Google Scholar]
  • 252.Elaziz M.A., Al-Qaness M.A.A., Zaid E.O.A., Lu S., Ibrahim R.A., Ewees A.A. Automatic clustering method to segment COVID-19 CT images. PLoS ONE. 2021;16:e0244416. doi: 10.1371/journal.pone.0244416. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 253.Han C.H., Kim M., Kwak J.T. Semi-supervised learning for an improved diagnosis of COVID-19 in CT images. PLoS ONE. 2021;16:e0249450. doi: 10.1371/journal.pone.0249450. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 254.Yousefzadeh M., Esfahanian P., Movahed S.M.S., Gorgin S., Rahmati D., Abedini A., Nadji S.A., Haseli S., Karam M.B., Kiani A., et al. ai-corona: Radiologist-assistant deep learning framework for COVID-19 diagnosis in chest CT scans. PLoS ONE. 2021;16:e0250952. doi: 10.1371/journal.pone.0250952. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 255.Yan Q., Wang B., Gong D., Luo C., Zhao W., Shen J., Ai J., Shi Q., Zhang Y., Jin S., et al. COVID-19 Chest CT Image Segmentation Network by Multi-Scale Fusion and Enhancement Operations. IEEE Trans. Big Data. 2021;7:13–24. doi: 10.1109/TBDATA.2021.3056564. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 256.Paluru N., Dayal A., Jenssen H.B., Sakinis T., Cenkeramaddi L.R., Prakash J., Yalavarthy P.K. Anam-Net: Anamorphic Depth Embedding-Based Lightweight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images. IEEE Trans. Neural Netw. Learn. Syst. 2021;32:932–946. doi: 10.1109/TNNLS.2021.3054746. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 257.Muhammad G., Hossain M.S. COVID-19 and Non-COVID-19 Classification using Multi-layers Fusion From Lung Ultrasound Images. Inf. Fusion. 2021;72:80–88. doi: 10.1016/j.inffus.2021.02.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 258.Dastider A.G., Sadik F., Fattah S.A. An integrated autoencoder-based hybrid CNN-LSTM model for COVID-19 severity prediction from lung ultrasound. Comput. Biol. Med. 2021;132:104296. doi: 10.1016/j.compbiomed.2021.104296. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 259.Panwar H., Gupta P., Siddiqui M.K., Morales-Menendez R., Bhardwaj P., Singh V. A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images. Chaos Solitons Fractals. 2020;140:110190. doi: 10.1016/j.chaos.2020.110190. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 260.Ibrahim D.M., Elshennawy N.M., Sarhan A.M. Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases. Comput. Biol. Med. 2021;132:104348. doi: 10.1016/j.compbiomed.2021.104348. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 261.Elpeltagy M., Sallam H. Automatic prediction of COVID-19 from chest images using modified ResNet50. Multimed. Tools Appl. 2021:1–13. doi: 10.1007/s11042-021-10783-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 262.Gilanie G., Bajwa U.I., Waraich M.M., Asghar M., Kousar R., Kashif A., Aslam R.S., Qasim M.M., Rafique H. Coronavirus (COVID-19) detection from chest radiology images using convolutional neural networks. Biomed. Signal Process. Control. 2021;66:102490. doi: 10.1016/j.bspc.2021.102490. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 263.Saha P., Mukherjee D., Singh P.K., Ahmadian A., Ferrara M., Sarkar R. GraphCovidNet: A graph neural network based model for detecting COVID-19 from CT scans and X-rays of chest. Sci. Rep. 2021;11:1–16. doi: 10.1038/s41598-021-87523-1. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • 264.Perumal V., Narayanan V., Rajasekar S.J.S. Detection of COVID-19 using CXR and CT images using Transfer Learning and Haralick features. Appl. Intell. 2021;51:341–358. doi: 10.1007/s10489-020-01831-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 265.Lascu M.-R. Deep Learning in Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs on CXR and CT Images. J. Med. Biol. Eng. 2021;10:1–9. doi: 10.1007/s40846-021-00630-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 266.Kassani S.H., Kassani P.H., Wesolowski M.J., Schneider K.A., Deters R. Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning Based Approach. Biocybern. Biomed. Eng. 2021;41:867–879. doi: 10.1016/j.bbe.2021.05.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 267.Li J., Zhao G., Tao Y., Zhai P., Chen H., He H., Cai T. Multi-task contrastive learning for automatic CT and X-ray diagnosis of COVID-19. Pattern Recognit. 2021;114:107848. doi: 10.1016/j.patcog.2021.107848. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 268.Imani M. Automatic diagnosis of coronavirus (COVID-19) using shape and texture characteristics extracted from X-ray and CT-Scan images. Biomed. Signal Process. Control. 2021;68:102602. doi: 10.1016/j.bspc.2021.102602. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 269.Mukherjee H., Ghosh S., Dhar A., Obaidullah S.M., Santosh K.C., Roy K. Deep neural network to detect COVID-19: One architecture for both CT Scans and Chest X-rays. Appl. Intell. 2021;51:2777–2789. doi: 10.1007/s10489-020-01943-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 270.Pal B., Gupta D., Rashed-Al-Mahfuz M., Alyami S., Moni M. Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images. Appl. Sci. 2021;11:4233. doi: 10.3390/app11094233. [DOI] [Google Scholar]
  • 271.Aminu M., Ahmad N.A., Noor M.H.M. Covid-19 detection via deep neural network and occlusion sensitivity maps. Alex. Eng. J. 2021;60:4829–4855. doi: 10.1016/j.aej.2021.03.052. [DOI] [Google Scholar]
  • 272.Mittal H., Pandey A.C., Pal R., Tripathi A. A new clustering method for the diagnosis of CoVID19 using medical images. Appl. Intell. 2021;51:2988–3011. doi: 10.1007/s10489-020-02122-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 273.Horry M.J., Chakraborty S., Paul M., Ulhaq A., Pradhan B., Saha M., Shukla N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access. 2020;8:149808–149824. doi: 10.1109/ACCESS.2020.3016780. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 274.Gudigar A., Raghavendra U., Hegde A., Menon G., Molinari F., Ciaccio E., Acharya U. Automated Detection and Screening of Traumatic Brain Injury (TBI) Using Computed Tomography Images: A Comprehensive Review and Future Perspectives. Int. J. Environ. Res. Public Health. 2021;18:6499. doi: 10.3390/ijerph18126499. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 275.Raghavendra U., Gudigar A., Rao N., Ciaccio E.J., Ng E.Y.K., Acharya U.R. Computer-aided diagnosis for the identification of breast cancer using thermogram images: A comprehensive review. Infrared Phys. Technol. 2019;102:103041. doi: 10.1016/j.infrared.2019.103041. [DOI] [Google Scholar]
  • 276.Gudigar A., Raghavendra U., Hegde A., Kalyani M., Ciaccio E.J., Acharya U.R. Brain pathology identification using computer aided diagnostic tool: A systematic review. Comput. Methods Programs Biomed. 2020;187:105205. doi: 10.1016/j.cmpb.2019.105205. [DOI] [PubMed] [Google Scholar]
  • 277.Abilash V., Geoffrey V., Krishna S.B.R. Coronavirus Pandemic Analysis Using Deep Learning Techniques: A Study; Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI); Tirunelveli, India. 3–5 June 2021; pp. 875–880. [Google Scholar]
  • 278.Sri R.S., Pushpa A.M. Systematic Study on Diagnosis of Lung Disorders using Machine Learning and Deep Learning Algorithms; Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII); Kalavakkam, Tamil Nadu, India. 25–27 March 2021; pp. 1–8. [Google Scholar]
  • 279.Mohammad-Rahimi H., Nadimi M., Ghalyanchi-Langeroudi A., Taheri M., Ghafouri-Fard S. Application of Machine Learning in Diagnosis of COVID-19 Through X-ray and CT Images: A Scoping Review. Front. Cardiovasc. Med. 2021;8:185. doi: 10.3389/fcvm.2021.638011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 280.Desai S.B., Pareek A., Lungren M.P. Deep learning and its role in COVID-19 medical imaging. Intell. Med. 2020;3:100013. doi: 10.1016/j.ibmed.2020.100013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 281.Gazzah S., Bencharef O. A Survey on how computer vision can response to urgent need to contribute in COVID-19 pandemics; Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV); Fez, Morocco. 9–11 June 2020; pp. 1–5. [Google Scholar]
  • 282.Signoroni A., Savardi M., Benini S., Adami N., Leonardi R., Gibellini P., Vaccher F., Ravanelli M., Borghesi A., Maroldi R., et al. BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset. Med. Image Anal. 2021;71:102046. doi: 10.1016/j.media.2021.102046. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 283.Zhang H.-T., Zhang J.-S., Nan Y.-D., Zhao Y., Fu E.-Q., Xie Y.-H., Liu W., Li W.-P., Zhang H.-J., Jiang H., et al. Automated detection and quantification of COVID-19 pneumonia: CT imaging analysis by a deep learning-based software. Eur. J. Nucl. Med. Mol. Imaging. 2020;47:2525–2532. doi: 10.1007/s00259-020-04953-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 284.Goncharov M., Pisov M., Shevtsov A., Shirokikh B., Kurmukov A., Blokhin I., Chernina V., Solovev A., Gombolevskiy V., Morozov S., et al. CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification. Med. Image Anal. 2021;71:102054. doi: 10.1016/j.media.2021.102054. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 285.Oulefki A., Agaian S., Trongtirakul T., Laouar A.K. Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images. Pattern Recognit. 2021;114:107747. doi: 10.1016/j.patcog.2020.107747. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 286.Giordano F., Ippolito E., Quattrocchi C., Greco C., Mallio C., Santo B., D’Alessio P., Crucitti P., Fiore M., Zobel B., et al. Radiation-Induced Pneumonitis in the Era of the COVID-19 Pandemic: Artificial Intelligence for Differential Diagnosis. Cancers. 2021;13:1960. doi: 10.3390/cancers13081960. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 287.Shoeibi A., Khodatars M., Alizadehsani R., Ghassemi N., Jafari M., Moridian P., Khadem A., Sadeghi D., Hussain S., Zare A., et al. Automated detection and forecasting of COVID-19 using deep learning techniques: A review. arXiv. 2020. arXiv:2007.10785. [Google Scholar]
  • 288.Sharifrazi D., Alizadehsani R., Roshanzamir M., Joloudari J.H., Shoeibi A., Jafari M., Hussain S., Sani Z.A., Hasanzadeh F., Khozeimeh F., et al. Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images. Biomed. Signal Process. Control. 2021;68:102622. doi: 10.1016/j.bspc.2021.102622. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 289.Jahmunah V., Sudarshan V.K., Oh S.L., Gururajan R., Gururajan R., Zhou X., Tao X., Faust O., Ciaccio E.J., Ng K.H., et al. Future IoT tools for COVID-19 contact tracing and prediction: A review of the state-of-the-science. Int. J. Imaging Syst. Technol. 2021;31:455–471. doi: 10.1002/ima.22552. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 290.Basiri M.E., Nemati S., Abdar M., Asadi S., Acharya U.R. A novel fusion-based deep learning model for sentiment analysis of COVID-19 tweets. Knowl.-Based Syst. 2021;228:107242. doi: 10.1016/j.knosys.2021.107242. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 291.Alizadehsani R., Sani Z.A., Behjati M., Roshanzamir Z., Hussain S., Abedini N., Hasanzadeh F., Khosravi A., Shoeibi A., Roshanzamir M., et al. Risk factors prediction, clinical outcomes, and mortality in COVID-19 patients. J. Med. Virol. 2021;93:2307–2320. doi: 10.1002/jmv.26699. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 292.Taiwo O., Ezugwu A.E. Smart healthcare support for remote patient monitoring during covid-19 quarantine. Inform. Med. Unlocked. 2020;20:100428. doi: 10.1016/j.imu.2020.100428. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 293.Ardakani A.A., Kwee R.M., Mirza-Aghazadeh-Attari M., Castro H.M., Kuzan T.Y., Altintoprak K.M., Besutti G., Monelli F., Faeghi F., Acharya U.R., et al. A practical artificial intelligence system to diagnose COVID-19 using computed tomography: A multinational external validation study. Pattern Recognit. Lett. 2021;152:42–49. doi: 10.1016/j.patrec.2021.09.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 294.Ai T., Yang Z., Hou H., Zhan C., Chen C., Lv W., Tao Q., Sun Z., Xia L. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology. 2020;296:E32–E40. doi: 10.1148/radiol.2020200642. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 295.Fang Y., Zhang H., Xie J., Lin M., Ying L., Pang P., Ji W. Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR. Radiology. 2020;296:E115–E117. doi: 10.1148/radiol.2020200432. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 296.Lin E.C. Radiation Risk from Medical Imaging. Mayo Clin. Proc. 2010;85:1142–1146. doi: 10.4065/mcp.2010.0260. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 297.Peng Q.-Y., Wang X.-T., Zhang L.-N., Chinese Critical Care Ultrasound Study Group (CCUSG) Findings of lung ultrasonography of novel corona virus pneumonia during the 2019–2020 epidemic. Intensiv. Care Med. 2020;46:849–850. doi: 10.1007/s00134-020-05996-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 298.Despotovic V., Ismael M., Cornil M., Mc Call R., Fagherazzi G. Detection of COVID-19 from voice, cough and breathing patterns: Dataset and preliminary results. Comput. Biol. Med. 2021;138:104944. doi: 10.1016/j.compbiomed.2021.104944. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 299.Kobat M.A., Kivrak T., Barua P.D., Tuncer T., Dogan S., Tan R.-S., Ciaccio E.J., Acharya U.R. Automated COVID-19 and Heart Failure Detection Using DNA Pattern Technique with Cough Sounds. Diagnostics. 2021;11:1962. doi: 10.3390/diagnostics11111962. [DOI] [PMC free article] [PubMed] [Google Scholar]

Data Availability Statement

Not applicable.

