Abstract
Coronavirus disease (COVID‐19) has had a major and sometimes lethal effect on global public health. COVID‐19 detection is a difficult task that necessitates the use of intelligent diagnosis algorithms. Numerous studies have suggested the use of artificial intelligence (AI) and machine learning (ML) techniques to detect COVID‐19 infection in patients through chest X‐ray image analysis. The use of medical imaging with different modalities for COVID‐19 detection has become an important means of containing the spread of this disease. However, medical images are not sufficiently adequate for routine clinical use; there is, therefore, an increasing need for AI to be applied to improve the diagnostic performance of medical image analysis. Regrettably, due to the evolving nature of the COVID‐19 global epidemic, the systematic collection of a large data set for deep neural network (DNN)/ML training is problematic. Inspired by these studies, and to aid in the medical diagnosis and control of this contagious disease, we suggest a novel approach that ensembles the feature selection capability of the optimized artificial immune networks (opt‐aiNet) algorithm with deep learning (DL) and ML techniques for better prediction of the disease. In this article, we experimented with a DNN, a convolutional neural network (CNN), bidirectional long‐short‐term memory, a support vector machine (SVM), and logistic regression for the effective detection of COVID‐19 in patients. We illustrate the effectiveness of this proposed technique by using COVID‐19 image datasets with a variety of modalities. An empirical study using the COVID‐19 image dataset demonstrates that the proposed hybrid approaches, named COVID‐opt‐aiNet, improve classification accuracy by up to 98%–99% for SVM, 96%–97% for DNN, and 70.85%–71% for CNN, to name a few examples. Furthermore, statistical analysis ensures the validity of our proposed algorithms. The source code can be downloaded from Github: https://github.com/faizakhan1925/COVID-opt-aiNet.
Keywords: bidirectional long‐short‐term memory, clinical decision support system, convolution neural network, COVID‐19, deep learning neural network, feature selection, optimized artificial immune network, support vector machine
1. INTRODUCTION
Nowadays, it is common knowledge that the coronavirus outbreak began at the end of 2019 and was first identified in Wuhan, China. According to the Centers for Disease Control and Prevention (CDC), coronavirus disease (COVID‐19) bears some resemblance to the severe acute respiratory syndrome coronavirus (SARS‐COV) and the Middle East respiratory syndrome coronavirus (MERS‐COV). 1 , 2 These viruses spread from one person to another through respiratory droplets. Fever, cough, and shortness of breath are a few of the symptoms of COVID‐19, which can appear anywhere within 2–14 days after infection. Human‐to‐human contact has been the main cause of its spread throughout the world, and the number of cases has increased exponentially. As such, social distancing has been considered essential to prevent the spread of COVID‐19. 3 Other preventive measures taken by authorities around the world include lockdowns, prohibiting international as well as local movement among different regions or cities within a country, the isolation and hospitalization of infected persons, and the closing of schools, offices, shopping malls, and restaurants. Since COVID‐19 can spread rapidly, timely detection and monitoring of this disease are vital. Consequently, from the beginning of the outbreak, authorities have carried out testing on a huge scale. In the meantime, experts have been working around the clock in an attempt to develop an effective vaccine against this dreadful virus. 4 , 5
As of February 13, 2021, there had been 108 986 901 confirmed cases worldwide (+442 098 in a single day). Researchers have hastened to understand the first‐wave space–time dynamics of the virus and to measure the effects of containment measures, and traditional diagnostic and mathematical models in epidemiology have been employed. 6 A powerful tool to combat COVID‐19 is accurate clinical screening, which enables infected individuals to be quickly detected, treated, and quarantined, thus preventing the spread of the virus. Different types of screening techniques have been used for detecting COVID‐19; among these are radiography examinations, where chest radiography images, such as chest X‐ray (CXR) or computed tomography (CT) images, are examined by radiologists for graphical signs related to the SARS‐COV virus. Previous studies have found that irregularities in chest radiography images were signs of COVID‐19 infection, while other researchers have reported that radiography inspection has been used as the main instrument for COVID‐19 screening in infected areas. Radiography is a useful complement to reverse transcription polymerase chain reaction (RT‐PCR) testing because the images can be examined quickly; moreover, in modern healthcare systems, the necessary equipment is generally readily available and portable. One significant challenge, however, is that expert radiologists must completely comprehend radiography images, as visual markers are frequently exaggerated. Thus, the development of intelligent computer‐aided diagnostic systems that can assist radiologists in interpreting radiography images, and thereby aid in the faster and more precise detection of COVID‐19 cases, is necessary. 7 , 8 , 9 This study proposes the use of deep learning (DL) and machine learning (ML) algorithms to analyze medical images from multiple modalities to improve the assessment of suspected or confirmed COVID‐19 patients. We are motivated by a desire to aid in the fight against the COVID‐19 virus through the development of a more accurate method of detecting/predicting COVID‐19 patients via medical image analysis. Encouraged by the research community's earlier efforts, we are primarily interested in determining the utility of artificial intelligence (AI) systems for providing a convenient and accessible medical diagnosis. This study presents an optimized artificial immune network (opt‐aiNet)‐optimized deep neural network (DNN), convolutional neural network (CNN), and bidirectional long short‐term memory (BDLSTM) architecture for detecting COVID‐19 cases in open‐source and publicly accessible medical images.
This research makes the following significant contributions:
A novel COVID‐19 detection framework that combines DL/ML techniques, such as DNN, CNN, two‐dimensional CNN (2DCNN), BDLSTM, support vector machine (SVM), and logistic regression (LR), with opt‐aiNet for better COVID‐19 detection from medical images.
The application, for the first time, of our proposed framework to three COVID‐19 medical image datasets.
The comparison of our results (in terms of accuracy) with those of other state‐of‐the‐art approaches, using the same COVID‐19 datasets.
The rest of the article is structured as follows. Section 2 discusses related work; Section 3 provides the technical background; Section 4 describes the proposed methodology; and Section 5 presents the results and discussion. Finally, the conclusion and possible future work are described in Section 6.
2. LITERATURE REVIEW
Over the past few months, several studies based on medical image patterns have contributed to understanding the indicators of coronavirus infection. One of the most significant recent studies to effectively predict COVID‐19 infection was conducted by Shan et al., 4 who used a DL‐based system for the automatic segmentation of COVID‐19 CT images. To speed up the manual processing of CT images for training purposes, a human‐in‐the‐loop (HITL) approach was developed to help radiologists improve the automatic annotation of each individual case. The system produced high Dice similarity coefficients between automatic and manual lung segmentation. Bandyopadhyay et al. 5 used a DL approach for the confirmation of COVID‐19 cases; a recurrent neural network (RNN) was used to estimate the number of confirmed, negative, recovered, and death cases of COVID‐19. In this work, three proposed models, long short‐term memory (LSTM)‐RNN, gated recurrent unit (GRU)‐RNN, and LSTM‐GRU‐RNN, were compared, and the researchers found that the LSTM‐GRU‐RNN produced better results than the LSTM‐RNN and GRU‐RNN. Wang et al. 7 used a deep CNN (COVID‐Net) to detect COVID‐19 cases from a CXR image dataset and reported that COVID‐Net achieved an accuracy of 93.3%. Abbas et al. 9 used a deep CNN known as DeTraC (Decompose, Transfer, and Compose) to classify COVID‐19 CXR images and found that DeTraC achieved an accuracy of 95.12% in detecting COVID‐19 in X‐ray images.
Khalifa et al. 10 investigated generative adversarial networks (GAN) with deep transfer learning (DTL) models, such as AlexNet, GoogleNet, SqueezeNet, and ResNet18, for detecting pneumonia in a CXR image dataset. The experimental results showed that ResNet18 achieved a 99% accuracy when GAN was used as an image augmenter, thereby outperforming the other models. Barstugan et al. 11 applied feature extraction (FE) approaches, such as the Gray Level Co‐occurrence Matrix (GLCM), the Local Directional Pattern (LDP), the Gray Level Run Length Matrix (GLRLM), the Gray‐Level Size Zone Matrix (GLSZM), and the Discrete Wavelet Transform (DWT), to detect COVID‐19 in four different datasets. Moreover, an SVM was applied for classification using 2‐fold, 5‐fold, and 10‐fold cross‐validation. They reported that GLSZM was the best FE method and that the SVM obtained a 99.68% accuracy with 10‐fold cross‐validation. Hemdan et al. 12 used a DL method, COVIDX‐Net, to detect COVID‐19 in X‐ray images. The COVIDX‐Net framework contained seven different deep CNN models, including the revised Visual Geometry Group Network (VGG19) and the Google MobileNet. Each DNN model was capable of analyzing the X‐ray image intensities to determine whether the patient was negative or positive for COVID‐19. Hemdan et al. reported that the VGG19 and Dense Convolutional Network (DenseNet) models performed well in terms of automatic COVID‐19 classification, with F1‐scores of 0.89 and 0.91 for negative and positive COVID‐19 cases, respectively. Narin et al. 13 used three CNN models, namely, ResNet50, InceptionV3, and InceptionResNetV2, to detect patients infected with pneumonia. They used a dataset of 50 CXR images of COVID‐19 patients and 50 of non‐COVID‐19 ones and reported that the ResNet50 model provided a 98% accuracy, thereby outperforming the other two models. Li et al. 14 developed a DL model for COVID‐19 detection (COVNet) to extract particular features from chest CT scans. Community‐acquired pneumonia (CAP) and other non‐pneumonia CT scans were used to test the model. The researchers reported a sensitivity of 90% and a specificity of 96% for COVID‐19; for detecting CAP, the DL model had an 87% sensitivity and a 92% specificity. The AUC values for COVID‐19 and CAP were 0.96 and 0.95, respectively.
Apostolopoulos et al. 15 investigated a CNN model for the automatic detection of COVID‐19 cases, using a dataset of X‐ray images from patients with common bacterial pneumonia, confirmed COVID‐19, and normal conditions. They found that the CNN with X‐ray images achieved the best accuracy of 96.78%. Farooq et al. 16 started from an existing CNN architecture and proposed a variation of the 50‐layer residual neural network (ResNet50), named COVID‐ResNet. They reported a 96.23% accuracy on the COVIDx dataset, which they described as a significant 13% improvement in performance compared with that of COVID‐Net. Kassani et al. 17 investigated DL‐based methods for COVID‐19 prediction by extracting the most precise features from COVID‐19 datasets, using MobileNet, DenseNet, Xception, ResNet, InceptionV3, InceptionResNetV2, VGGNet, and NASNet. According to the researchers, the DenseNet121 feature extractor combined with a bagging tree classifier achieved a classification accuracy of 99%, which was the best result of all. Zhao et al. 18 developed an open‐source dataset, COVID‐CT, that included 349 COVID‐19 CT images and 463 non‐COVID‐19 CT images. Zhao et al. performed experiments on this dataset, resulting in an AI‐based COVID‐19 analysis model, and claimed promising results for the COVID‐CT dataset. Alom et al. 19 investigated an Inception Residual Recurrent CNN with a transfer learning (TL) approach for COVID‐19 detection, using both X‐ray and CT scan images. Their model performed disease detection with 84.67% accuracy for X‐ray images and 98.78% accuracy for CT images.
Hu et al. 20 investigated a weakly supervised DL strategy for detecting and classifying COVID‐19 infection from CT images. They reported an accuracy of 96.2%, a precision of 97.3%, a sensitivity of 94.5%, a specificity of 95.3%, and an AUC of 0.970. Voulodimos et al. 21 investigated DL models (U‐Nets and fully convolutional networks) for segmenting the infected region of interest and reported that, despite the class imbalance in the dataset, fully convolutional networks are capable of precise segmentation. Hussain et al. 22 investigated a novel CNN model, called CoroDet, for the automatic detection of COVID‐19 using raw chest X‐ray and CT images. They reported an accuracy of 99.1% for 2‐class classification, 94.2% for 3‐class classification, and 91.2% for 4‐class classification. Alshazly et al. 23 investigated advanced deep network architectures and introduced a TL strategy with customized inputs adapted to each deep network architecture to achieve better COVID‐19 detection from CT image analysis.
Sahlol et al. 24 investigated an improved hybrid classification method for COVID‐19 images, merging the strength of CNNs in extracting features with a swarm‐based feature selection algorithm, the Marine Predators Algorithm, to select the most appropriate features. The proposed method was assessed on two public COVID‐19 X‐ray datasets, and the authors reported high performance and reduced computational complexity, achieving the best accuracy on these datasets when compared with recent feature selection algorithms. Yousri et al. 25 investigated an enhanced cuckoo search (CS) optimization algorithm using fractional‐order calculus (FO) and four different heavy‐tailed distributions for COVID‐19 multiclass classification comprising three classes, that is, normal patients, COVID‐19‐infected patients, and pneumonia patients. The distributions used were the Mittag–Leffler, Cauchy, Pareto, and Weibull distributions. The proposed FO‐CS variants were validated on 18 UCI datasets, and the authors reported that the proposed FO‐CS based on heavy‐tailed distributions provided better results than other algorithms for both the UCI datasets and the COVID‐19 X‐ray images. Ismael et al. 26 investigated DL‐based methods, that is, deep FE, fine‐tuning of pretrained CNNs, and end‐to‐end training of a developed CNN model, to categorize COVID‐19 and normal chest X‐ray images. They reported that DL has potential for the detection of COVID‐19 from chest X‐ray images and that DL methods are more efficient than local texture descriptors for this task. Panwar et al. 27 investigated a DTL algorithm for the detection of COVID‐19 cases using chest X‐ray and CT scan images and reported that the proposed DL model can detect COVID‐19‐positive cases in less than 2 s. Khan et al. 28 investigated an optimized DL approach to separate COVID‐19‐infected patients from normal patients using CT scans. A pretrained DenseNet‐201 DL model was trained using TL, with two fully connected layers and an average pooling layer used for FE. The extracted deep features were then optimized with a Firefly algorithm to select the most relevant learning features, and an average classification accuracy of 94.76% was reported with the proposed approach. Khan et al. 29 investigated multi‐type feature fusion and selection to predict COVID‐19 infections from chest CT scans, reporting a best predictive accuracy of 93.9% for the ELM classifier and concluding that the method was effective. Shui‐Hua et al. 30 investigated a deep rank‐based average pooling network (DRAPNet) model for COVID‐19 recognition and reported that DRAPNet achieved an F1‐score of 95.49% and is effective in detecting COVID‐19 and other infectious chest ailments. Zhang et al. 31 investigated a DL method, using the pseudo‐Zernike moment (PZM) as a feature extractor, and reported that the PZM‐DSSAE method achieved an accuracy of 92.31% and an AUC of 0.9576.
Numerous studies on COVID‐19 are currently being conducted, and the research findings are widely available to members of the research community. To our knowledge, this is the first study to combine opt‐aiNet with ML and DL classifiers to improve the analysis of COVID‐19 image datasets. Furthermore, the majority of studies on chest radiography imaging (e.g., CXR) use chest X‐ray datasets for COVID‐19 patient prediction/detection, whereas there is little research on CT image analysis.
3. BACKGROUND
In this section, a brief introduction is provided to the different techniques utilized to perform the experiments. We experimented with DL as well as with some benchmark ML classifiers and applied opt‐aiNet for feature selection, as discussed later.
3.1. Deep neural network
A typical workflow of a DNN (as applied in our experiments), with input data $X = x_1, x_2, x_3, \ldots, x_m$, corresponding weights $w = w_1, w_2, w_3, \ldots, w_m$, and bias term $b$, is depicted in Figure 1. Further details can be found in Refs. 32, 33, 34, 35.
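To make this workflow concrete, the following minimal sketch builds such a fully connected network for binary COVID‐19 classification. The use of TensorFlow/Keras and the layer widths are our own illustrative assumptions; the learning rate, activation, batch size, and epoch count mirror the settings listed later in Table 2.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_dnn(input_dim: int) -> tf.keras.Model:
    """Fully connected DNN for binary COVID-19 classification (layer widths are illustrative)."""
    model = tf.keras.Sequential([
        layers.Dense(256, activation="relu", input_shape=(input_dim,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(COVID-19 positive)
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # initial learning rate as in Table 2
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Usage: X_train holds flattened (or opt-aiNet-selected) image features, y_train holds 0/1 labels.
# model = build_dnn(X_train.shape[1])
# model.fit(X_train, y_train, batch_size=100, epochs=50, validation_split=0.3)
```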
3.2. Convolutional neural network
A CNN or ConvNet is a multilayered structural neural network, comprised of numerous different layers, such as convolutional layers, pooling layers, and classification layers, where each layer performs some predetermined function on its input data. 36 , 37 CNNs differ from each other regarding the way that these central layers are connected and bundled and the method used for training the network. We applied an 8‐layer CNN. When the convolution is applied in a two‐dimensional space, it is known as a two‐dimensional CNN (2DCNN).
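The following is one possible eight‐layer arrangement of a two‐dimensional CNN (2DCNN) for the 150 × 150 images used later; the filter counts and kernel sizes are illustrative assumptions rather than our exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(input_shape=(150, 150, 1)) -> tf.keras.Model:
    """One possible 8-layer CNN (3 convolution, 3 pooling, 2 dense layers); filter counts are illustrative."""
    model = tf.keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(COVID-19 positive)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```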
3.3. Bidirectional long‐short‐term memory
BDLSTMs are an extension of conventional LSTMs that can significantly increase model performance on sequence classification. In a standard LSTM, the present state is reconstructed only from the backward (past) context, whereas in a BDLSTM the forward (future) context also contributes to the current state, something that is not considered in the standard LSTM model. 38 , 39
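A minimal sketch of such a bidirectional model is shown below, treating each image row as one step of a sequence; this sequence framing and the unit counts are assumptions, while the sigmoid activation and the 1e−3 learning rate follow Table 2.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_bdlstm(timesteps=150, features=150) -> tf.keras.Model:
    """BDLSTM classifier; each image row is treated as one timestep (unit counts are illustrative)."""
    model = tf.keras.Sequential([
        layers.Bidirectional(layers.LSTM(128), input_shape=(timesteps, features)),  # forward + backward context
        layers.Dense(64, activation="sigmoid"),
        layers.Dense(1, activation="sigmoid"),  # P(COVID-19 positive)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```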
3.4. Support vector machines
SVMs are a subset of kernel‐based techniques that effectively address classification and regression problems. 40 Consider a dataset $X_i$, $i = 1, 2, \ldots, n$, with corresponding class labels $Y_i$, where $Y_i = 1$ corresponds to class 1 and $Y_i = -1$ corresponds to class 2. An SVM uses a kernel function to map the input space to a higher‐dimensional feature space. We applied the linear kernel given in Equation (1) as:

(1) $K(X_i, X_j) = X_i^{T} X_j + C$

where $C$ is a constant. The linear kernel works well when the features are linearly separable. Secondly, we applied the polynomial kernel of degree two, given in Equation (2) as:

(2) $K(X_i, X_j) = \left(\gamma X_i^{T} X_j + C\right)^{2}$

where $\gamma$ and $C$ are adjustable constants. Our third experiment was with the radial basis function (RBF) kernel, given in Equation (3) by:

(3) $K(X_i, X_j) = \exp\left(-\gamma \left\lVert X_i - X_j \right\rVert^{2}\right)$
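In practice, the three kernels of Equations (1)–(3) can be evaluated as sketched below with scikit‐learn; the hyperparameter values shown are illustrative defaults, not the tuned values used in our experiments.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Linear, degree-2 polynomial, and RBF kernels corresponding to Equations (1)-(3).
kernels = {
    "linear": SVC(kernel="linear", C=1.0),
    "poly":   SVC(kernel="poly", degree=2, gamma="scale", coef0=1.0, C=1.0),
    "rbf":    SVC(kernel="rbf", gamma="scale", C=1.0),
}

def evaluate_kernels(X, y, folds=10):
    """Return the mean cross-validated accuracy of each kernel on features X and labels y."""
    return {name: cross_val_score(clf, X, y, cv=folds).mean() for name, clf in kernels.items()}
```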
3.5. Logistic regression
For any given image $X_i$ with corresponding weights $W_i$, $i = 1, 2, \ldots, n$, the logit model provides a value for the observation that is passed through the logistic cumulative density function to obtain the probability that $Y = 1$ (presence of COVID‐19) for the given image. 41 A simple LR formulation is given in Equations (4) and (5):

(4) $z = \sum_{i=1}^{n} W_i X_i + b$

(5) $P(Y = 1 \mid X) = \dfrac{1}{1 + e^{-z}}$
Y (output) is either 0 (no COVID) or 1 (COVID).
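As a small worked illustration of Equations (4) and (5), the logit z is a weighted sum of the (selected) image features and the sigmoid maps it to a probability; the weights in the example are arbitrary.

```python
import numpy as np

def covid_probability(x, w, b):
    """Logistic regression: z = w.x + b (Equation 4); P(Y=1 | x) = 1 / (1 + e^-z) (Equation 5)."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Example with arbitrary weights: a probability above 0.5 is read as COVID-19 positive.
p = covid_probability(x=np.array([0.7, 0.2]), w=np.array([2.0, -1.0]), b=-0.5)  # ~0.67
```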
3.6. Optimized artificial immune network
De Castro and Timmis 42 , 43 , 44 initially proposed the artificial immune network algorithm (aiNet), derived from immune network theory. Immune networks are intricate idiotypic networks in which antibodies recognize antigens (Ag) as well as other antibodies. 38 A modified version of aiNet, known as opt‐aiNet, was later introduced by De Castro and Timmis for solving optimization problems. 38 The pseudocode of opt‐aiNet is shown in Figure 2.
The opt‐aiNet optimization technique is summarized in the following steps 45 (a minimal illustrative sketch follows the steps):
Step 1: Create an initial population of size N.
Step 2: Antigen representation: repeat the following sub‐steps until the stopping criterion is achieved.
Cloning: Numerous clones (Nc) of a single cell are generated in proportion to its affinity.
Mutation: The mutation of an individual clone is inversely proportional to its fitness.
Affinity Measurement: Calculate the affinity between cells.
Suppression: Retain the higher affinity cells only.
Step 3: End: Exit the loop.
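The sketch below turns these steps into a minimal, self‐contained loop that maximizes an arbitrary fitness function. The clone count, β, and suppression threshold mirror the defaults listed later in Table 3, but the simple distance‐based suppression and the refill strategy are illustrative simplifications rather than the exact implementation. For feature selection, the fitness function is the classification accuracy obtained with the features encoded by a cell, as detailed in Section 4.3.

```python
import numpy as np

def opt_ainet(fitness, dim, n_cells=50, n_clones=10, beta=100.0,
              sigma_s=0.1, generations=100, bounds=(0.0, 1.0), seed=0):
    """Minimal opt-aiNet loop: clone, mutate, select, suppress, and diversify a cell population."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, size=(n_cells, dim))

    for _ in range(generations):
        fit = np.array([fitness(c) for c in pop])
        f_norm = fit / (fit.max() + 1e-12)                    # normalized affinity in [0, 1]
        survivors = []
        for cell, fn, fv in zip(pop, f_norm, fit):
            clones = np.repeat(cell[None, :], n_clones, axis=0)
            alpha = (1.0 / beta) * np.exp(-fn)                # mutation inversely proportional to fitness
            clones = np.clip(clones + alpha * rng.standard_normal(clones.shape), low, high)
            clone_fit = np.array([fitness(c) for c in clones])
            best = int(np.argmax(clone_fit))
            # Keep the best mutated offspring only if it improves on its parent.
            survivors.append(clones[best] if clone_fit[best] > fv else cell)
        pop = np.array(survivors)

        # Suppression: drop a cell that sits too close to a fitter cell.
        fit = np.array([fitness(c) for c in pop])
        keep = np.ones(len(pop), dtype=bool)
        for i in range(len(pop)):
            for j in range(i + 1, len(pop)):
                if keep[i] and keep[j] and np.linalg.norm(pop[i] - pop[j]) < sigma_s:
                    keep[i if fit[i] < fit[j] else j] = False
        pop = pop[keep]

        # Diversification: refill the population with fresh random cells.
        if len(pop) < n_cells:
            pop = np.vstack([pop, rng.uniform(low, high, size=(n_cells - len(pop), dim))])

    fit = np.array([fitness(c) for c in pop])
    return pop[int(np.argmax(fit))]
```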
The following section describes the proposed COVID‐opt‐aiNet CDSS architecture in depth.
4. METHODOLOGY
The workflow of the proposed COVID‐opt‐aiNet CDSS is explained in this section, and Figure 3 depicts the framework of the proposed technique. The novelty of the model lies in combining opt‐aiNet with state‐of‐the‐art DL and ML classifiers for better prediction.
4.1. COVID‐19 dataset
We have demonstrated the performance of our proposed algorithm on three COVID‐19 datasets. The first dataset comprised CT images categorized as being positive or negative for COVID‐19, taken from 216 patient cases. Figure 4 illustrates some sample COVID‐19 CT images. 18
Our second dataset was the COVID‐19 Radiography Dataset, 46 which consisted of 3616 COVID‐19‐infected, 10 192 normal, and 1345 pneumonia cases. The third dataset we exploited for our experiments was the CXR dataset, 47 which contains 683 COVID‐19 images, 3094 normal images, and 4172 pneumonia images. We addressed the problem of class imbalance by applying random oversampling for data augmentation (a sketch of this step is given after Table 1). Table 1 provides a summary of the datasets used for the experiments. We have applied binary classification so far to distinguish between infected and normal cases; the experiments can be further extended to multiclass prediction.
TABLE 1.
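One common way to perform the random oversampling mentioned above is sketched below with the imbalanced‐learn library; this is an illustrative recipe and not necessarily the exact implementation used in our experiments.

```python
from imblearn.over_sampling import RandomOverSampler

def oversample(X, y, seed=42):
    """Randomly duplicate minority-class samples until the class counts are balanced."""
    X_flat = X.reshape(len(X), -1)          # flatten images to feature vectors
    ros = RandomOverSampler(random_state=seed)
    X_res, y_res = ros.fit_resample(X_flat, y)
    return X_res, y_res                     # np.bincount(y_res) now shows equal class counts
```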
4.2. Data segmentation
The images were resized to 150 × 150 pixels and normalized for further processing. We segmented the COVID‐19 images to extract the lungs using the ITK‐SNAP tool; this segmentation was based on thresholding of histogram features and the gray level. Sample segmented CT images of COVID and non‐COVID cases are shown in Figure 5 below.
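The lung masks themselves were produced with the interactive ITK‐SNAP tool; the sketch below merely approximates the resizing, normalization, and gray‐level (histogram‐based) thresholding step with OpenCV and is not the tool's own pipeline.

```python
import cv2
import numpy as np

def preprocess_ct(path):
    """Resize to 150 x 150, apply a gray-level (Otsu) threshold mask, and normalize to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (150, 150))
    # Histogram-based threshold as a rough stand-in for the manual ITK-SNAP lung segmentation.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lungs = cv2.bitwise_and(img, img, mask=mask)
    return lungs.astype(np.float32) / 255.0
```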
4.3. Feature selection using Opt‐aiNet
We selected resilient features using opt‐aiNet and fed them to the DNN, CNN, 2DCNN, and BDLSTM, as well as to the SVM and LR algorithms (in separate experiments), for better prediction. The overall process of COVID‐opt‐aiNet‐based feature selection (FS) is depicted in Figure 3.
4.3.1. Generate random initial antibodies population
First, a random antibody (Ab) population was initialized as a set of integer‐valued vectors C, each of which encodes a subset of selected features as a candidate solution.
The affinity value (AV), also called fitness, was calculated as the classification accuracy of the specific algorithm with the selected features. The AV is given as:
(6) $\mathrm{AV}(C) = \mathrm{Accuracy}\big(\text{classifier trained on the features selected by } C\big)$

(7) $\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$
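Because the affinity value is simply the classification accuracy obtained with the candidate feature subset, the fitness evaluation can be sketched as follows (assuming a Boolean feature mask and any scikit‐learn‐compatible classifier):

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def affinity(cell_mask, X, y, classifier, folds=10):
    """Affinity of a cell = mean cross-validated accuracy of the classifier on the selected features."""
    selected = np.flatnonzero(cell_mask)   # feature indices encoded by the antibody cell
    if selected.size == 0:
        return 0.0                         # empty feature subsets get the lowest affinity
    return cross_val_score(classifier, X[:, selected], y, cv=folds).mean()
```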
4.3.2. Train the classifier
For each cell C, the training dataset was used to train the specific classifier. The classification was conducted using the DL and ML classifiers namely DNN, CNN, 2DCNN, BDLSTM, SVM, and LR. The parameters of these classifiers are given in Table 2 below.
TABLE 2.
Dataset | DL techniques | Initial learning rate | Batch size | Number of epochs | Activation Function | No. of trainable parameters without FS | No. of trainable parameters with FS | Training time without FS | Training time with FS |
---|---|---|---|---|---|---|---|---|---|
DB1 | DNN | 0.0001 | 100 | 50 | ReLU | 2 591 201 | 29 859 | 20 s 80 ms | 20 s 40 ms
 | CNN | 0.0001 | 100 | 50 | ReLU | 12 520 601 | 45 693 | 6 ms | 2.44 ms
 | BDLSTM | 1e−3 | 100 | 50 | Sigmoid | 16 118 657 | 124 545 | 15 ms | 9 ms
 | 2DCNN | 0.0001 | 100 | 50 | ReLU | 2 424 065 | 875 777 | 108 s 3150 ms | 60 s 2600 ms
DB2 | DNN | 0.0001 | 100 | 50 | ReLU | 6 423 001 | 797 501 | 35 s 79 ms | 10 s 35 ms
 | CNN | 0.0001 | 100 | 50 | ReLU | 12 520 601 | 45 693 | 30 s 60 ms | 20 s 44 ms
 | BDLSTM | 1e−3 | 100 | 50 | Sigmoid | 8 187 521 | 124 545 | 5 s 521 ms | 4 s 4 ms
 | 2DCNN | 0.0001 | 100 | 50 | ReLU | 22 430 849 | 875 777 | 145 s 115 ms | 800 s 40 ms
DB3 | DNN | 0.0001 | 100 | 50 | ReLU | 6 755 201 | 5 452 401 | 30 s 70 ms | 22 s 370 ms
 | CNN | 0.0001 | 100 | 50 | ReLU | 12 520 601 | 45 693 | 6 s 700 ms | 4 s 400 ms
 | BDLSTM | 1e−3 | 100 | 50 | Sigmoid | 8 738 945 | 2 614 401 | 2 s 75 ms | 1 s 40 ms
 | 2DCNN | 0.0001 | 100 | 50 | ReLU | 2 424 065 | 187 649 | 20 min 128 ms | 15 min 400 ms
The optimal accuracy achieved in each population was recorded, and the process was then iterated until the specified number of generations was reached.
4.3.3. Cloning and mutation
Given a cell C, a number Nc of clones is produced. The mutation of a cloned cell into C′ is then performed as represented in Equations (8) and (9):

(8) $C' = C + \alpha\, N(0, 1)$

(9) $\alpha = \dfrac{1}{\beta}\, e^{-f^{*}}$

where $C'$ is the mutated cell and $N(0, 1)$ is a Gaussian random number with a mean of zero and a standard deviation of one. $\beta$ is a control parameter that alters the mutation spectrum by adjusting the decay of the inverse exponential function; we considered $\beta = 100$. $\alpha$ is the affinity‐proportional mutation rate, and $f^{*}$ is the affinity value of a single cell in the population normalized by the optimal affinity value $f$ of the parent cells. A mutated cell is accepted only if $C'$ lies within the population space range, and the cells with higher affinity values are selected for further iterations.
Cells with the same fitness are suppressed. Following evolutionary principles, a mutated offspring is selected over its parent cell only if its affinity value is higher; this ensures that cells sitting at the same affinity peak are discarded. Table 3 below provides the values of the different parameters of the opt‐aiNet model.
TABLE 3.
Parameters | Default values |
---|---|
Suppression threshold | 0.1 |
Number of clones generated (Nc) | 5, 10, 50 |
Number of clones multiplier (N) | 50 |
The decay of inverse exponential function (β) | 100 |
Maximum number of generations | 100 |
4.3.4. Population diversity
To diversify the search space population Ab, a randomized set of novel cells is incorporated. 43
4.3.5. Suppress cells
If the fitness values do not stabilize, we suppress cells with the same fitness. The algorithm uses evolutionary principles to select mutated offspring in such a way that the parent cell survives unless it is suppressed by one of its offspring. This is a novel, fitness‐aware method for suppressing cells that avoids selecting multiple cells at the same fitness peak. Given cells $C_1$ and $C_2$ from the search space population $P_c$, we let $q_1 = (C_1, \mathrm{fitness}[C_1])$, $q_2 = (C_2, \mathrm{fitness}[C_2])$, $q_{1/2} = (q_1 + q_2)/2$, and $q'$ be the projection of $q_{1/2}$ onto $q_2$. The interaction/affinity between the cells is then computed from the distance between $q_{1/2}$ and $q'$:
(10) $d = \left\lVert q_{1/2} - q' \right\rVert$
If this distance $d$ is less than the suppression threshold $\sigma_s$, the cell with the worse fitness value is removed, on the basis that the two cells lie close together in the fitness landscape.
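One plausible reading of this fitness‐aware suppression test is sketched below; the projection and distance follow the description above, while the exact formulation of Equation (10) may differ.

```python
import numpy as np

def suppress_worse(c1, f1, c2, f2, sigma_s=0.1):
    """Return the index (0 or 1) of the cell to suppress, or None if the cells are far enough apart."""
    q1 = np.append(c1, f1)                                         # cell augmented with its fitness
    q2 = np.append(c2, f2)
    q_half = (q1 + q2) / 2.0
    q_proj = (np.dot(q_half, q2) / (np.dot(q2, q2) + 1e-12)) * q2  # projection of q_half onto q2
    d = np.linalg.norm(q_half - q_proj)
    if d < sigma_s:
        return 0 if f1 < f2 else 1                                 # remove the cell with the worse fitness
    return None
```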
4.3.6. Termination criteria
COVID‐opt‐aiNet runs until a set number of generations is reached or until the affinity values stabilize; we assume that the optimal solution has been achieved when either of these conditions is met.
Once the optimal feature subset has been selected for a specific classifier, the models are validated and the average accuracy is obtained.
5. RESULTS AND DISCUSSIONS
In this section, we provide a summary of the results obtained using the hybrid approach, which selects an optimal set of features using the opt‐aiNet algorithm and feeds them to DL as well as ML classifiers for the detection of COVID‐19 patients. To avoid overfitting, the dataset was separated into two groups: 70% for training the model and 30% for evaluating the classification accuracy of DNN‐opt‐aiNet (DNN with opt‐aiNet), CNN‐opt‐aiNet (CNN with opt‐aiNet), BDLSTM‐opt‐aiNet (BDLSTM with opt‐aiNet), SVM‐opt‐aiNet (SVM linear with opt‐aiNet), SVMP‐opt‐aiNet (SVM polynomial with opt‐aiNet), SVMRBF‐opt‐aiNet (SVM RBF with opt‐aiNet), and LR‐opt‐aiNet (LR with opt‐aiNet). The experiments were performed with different numbers of epochs and with population sizes in the range [5–100]. The details of the parameters of COVID‐opt‐aiNet, DNN, CNN, 2DCNN, and BDLSTM are presented in Table 2; for the ML classifiers, we used the default parameters. Our experimental results indicate that COVID‐opt‐aiNet improved the prediction accuracy as a result of the feature selection. The documented results were obtained using a population size of 50 cells and 50 epochs.
It is obvious from the results in Table 2 that the prediction algorithms take less training time with the selected features. The time taken by a DL algorithm is directly proportional to the dataset size as well as to the number of layers incorporated, and hence to the number of trainable parameters. The reduction in training time obtained by selecting resilient features is an achievement in itself, and it is accompanied by the same or higher performance.
Table 4 displays the performance, averaged over 10‐fold cross‐validation, of the DL as well as the ML algorithms with and without feature selection. With the DNN on DB1, we achieved 75% accuracy and an AUC value of 0.68; this accuracy was further improved to 82.66% (AUC = 0.79) by the DNN‐opt‐aiNet algorithm. Promising results were also achieved by CNN‐opt‐aiNet, namely 74.3% accuracy and an AUC value of 0.68, while the results with 2DCNN were the least significant.
TABLE 4.
DL approaches | ||||||||
---|---|---|---|---|---|---|---|---|
Dataset | 2DCNN | 2DCNN‐opt‐aiNet | DNN | DNN‐opt‐aiNet | CNN | CNN‐opt‐aiNet | BDLSTM | BDLSTM‐opt‐aiNet |
DB1 | ||||||||
Accuracy | 47% | 47.10% | 75% | 82.66% | 68% | 74.3% | 60% | 63.68% |
Loss | 0.4213 | 0.4987 | 0.792 | 0.349 | 0.43 | 0.59 | 0.63 | 0.64 |
AUC | 0.445 | 0.4610 | 0.68 | 0.79 | 0.51 | 0.68 | 0.48 | 0.61 |
DB2 | ||||||||
Accuracy | 50% | 51.01% | 96.55% | 97.11% | 69.34% | 71.3% | 65.81% | 66.68% |
Loss | 0.5129 | 0.4550 | 0.089 | 0.084 | 0.64 | 0.45 | 0.6221 | 0.615 |
AUC | 0.511 | 0.491 | 0.96 | 0.96 | 0.6712 | 0.6801 | 0.6271 | 0.651 |
DB3 | ||||||||
Accuracy | 48% | 51% | 75% | 82% | 70.85% | 71% | 50% | 52% |
Loss | 0.6229 | 0.565 | 0.6754 | 0.3966 | 0.5684 | 0.4545 | 0.6782 | 0.561 |
AUC | 0.4910 | 0.501 | 0.7461 | 0.811 | 0.6952 | 0.6971 | 0.4912 | 0.510 |
ML approaches | ||||||||
---|---|---|---|---|---|---|---|---|
SVM linear | SVM‐opt‐aiNet | SVM polynomial | SVMP‐opt‐aiNet | SVM RBF | SVMRBF‐opt‐aiNet | Logistic regression | LR‐opt‐aiNet | |
DB1 | ||||||||
Accuracy | 80% | 82% | 68% | 69% | 80% | 82% | 74% | 76% |
Loss | 0.3214 | 0.4512 | 0.678 | 0.61 | 0.4590 | 0.3412 | 0.3401 | 0.3512 |
AUC | 0.8010 | 0.8120 | 0.691 | 0.701 | 0.7912 | 0.8113 | 0.7390 | 0.7601 |
Training time | 10.07 ms | 8.42 ms | 5.4 s | 3.44 s | 24.45 ms | 15.27 ms | 3808 ms | 4561 ms |
DB2 | ||||||||
Accuracy | 98% | 99.01% | 89% | 91% | 95% | 96% | 95% | 97% |
Loss | 0.032 | 0.045 | 0.345 | 0.312 | 0.23 | 0.2113 | 0.456 | 0.378 |
AUC | 0.9881 | 0.991 | 0.891 | 0.901 | 0.9431 | 0.9511 | 0.9423 | 0.9713 |
Training time | 65.12 ms | 54.61 ms | 145.11 ms | 111 ms | 156.23 ms | 112 ms | 973 ms | 657 ms |
DB3 | ||||||||
Accuracy | 77% | 77.01% | 83% | 84% | 83% | 84% | 81% | 83% |
Loss | 0.340 | 0.4450 | 0.291 | 0.209 | 0.4190 | 0.417 | 0.518 | 0.422 |
AUC | 0.7612 | 0.7680 | 0.821 | 0.8401 | 0.8400 | 0.8331 | 0.8011 | 0.8113 |
Training time | 44.67 ms | 56.261 ms | 51.60 ms | 46.56 ms | 158 ms | 112 ms | 8910 ms | 8123 ms |
Note: Bold values indicate the highest values.
Similarly, in the case of DB2, the best results were achieved by the DNN, with 96.55% accuracy and an AUC of 0.96; these results were further improved by DNN‐opt‐aiNet to 97.11% accuracy with the same AUC of 0.96. The next best results were produced by CNN‐opt‐aiNet, with 71.3% accuracy and an AUC of 0.6801. For DB3, the highest accuracy among the plain DL models was achieved by the DNN, at 75% with an AUC of 0.7461; these results were improved by DNN‐opt‐aiNet to 82% accuracy and an AUC of 0.811. CNN‐opt‐aiNet improved the accuracy of the plain CNN from 70.85% to 71%, and the training time was decreased with no loss in accuracy or performance.
Given the ML results in Table 4, it can be inferred that the benchmark ML algorithms perform comparably to the DL algorithms but with lower time complexity. Selecting the optimal features does not hinder their performance; in fact, it helps them achieve better results in most cases. Overall, the performance of the linear SVM is generally the best. In the case of DB1, an accuracy of 80% (AUC = 0.8010) was achieved, which was further enhanced by SVM‐opt‐aiNet to 82% (AUC = 0.812). SVMRBF‐opt‐aiNet achieved an 82% accuracy with an AUC of 0.8113. LR was also used in conjunction with opt‐aiNet, providing an accuracy of 76% with an AUC of 0.7601. The polynomial SVM showed a lower performance overall.
For DB2, the best performance was achieved by the linear SVM and SVM‐opt‐aiNet, at around 98%–99% accuracy (AUC ≈ 0.99). SVM RBF delivered a significant performance of 95%, which was improved by SVMRBF‐opt‐aiNet to 96% (AUC = 0.9511). LR likewise achieved a notable accuracy of 95% (AUC = 0.9423), which was improved by LR‐opt‐aiNet to 97% (AUC = 0.9713). From these results, it can be seen that opt‐aiNet assisted in reducing the training time while simultaneously increasing the prediction performance.
Table 5 shows the precision, recall, and F‐scores on the corresponding datasets with and without FS. These findings indicate that combining FS with DL and ML techniques offers a competitive advantage over the methods presented in the literature: once resilient features are selected, they help reduce the training time and enhance the prediction capability. Our experiments also show that the DNN and the linear SVM perform best overall. However, it cannot be claimed that this will hold for every dataset; more extensive experimentation is still required.
TABLE 5.
DB1 | |||||||
---|---|---|---|---|---|---|---|
COVID | Simple algorithm | Algorithm with feature selection | |||||
Infection | Precision | Recall | F‐score | Precision | Recall | F‐score | |
DNN | 0 | 0.86 | 0.44 | 0.58 | 0.85 | 0.75 | 0.77 |
1 | 0.72 | 0.95 | 0.84 | 0.82 | 0.91 | 0.77 | |
CNN | 0 | 0.61 | 0.86 | 0.76 | 0.75 | 0.93 | 0.83 |
1 | 0.37 | 0.46 | 0.51 | 0.81 | 0.51 | 0.63 | |
2DCNN | 0 | 0.37 | 0.15 | 0.12 | 0.23 | 0.33 | 0.51 |
1 | 0.34 | 1.00 | 0.50 | 0.33 | 0.50 | 0.10 | |
BDLSTM | 0 | 0.52 | 0.92 | 0.71 | 0.62 | 1.00 | 0.77 |
1 | 0.82 | 0.36 | 0.50 | 0.00 | 0.00 | 0.00 | |
SVM linear | 0 | 0.69 | 0.74 | 0.72 | 0.71 | 0.80 | 0.76 |
1 | 0.86 | 0.84 | 0.85 | 0.89 | 0.84 | 0.86 | |
SVM RBF | 0 | 0.75 | 0.72 | 0.88 | 0.91 | 0.64 | 0.75 |
1 | 0.89 | 0.85 | 0.87 | 0.82 | 0.97 | 0.89 | |
SVM polynomial | 0 | 0.53 | 0.67 | 0.59 | 0.77 | 0.71 | 0.62 |
1 | 0.79 | 0.69 | 0.73 | 0.81 | 0.69 | 0.74 | |
Logistic regression | 0 | 0.71 | 0.67 | 0.69 | 0.73 | 0.75 | 0.74 |
1 | 0.81 | 0.89 | 0.89 | 0.75 | 0.74 | 0.74 |
DB2 | |||||||
---|---|---|---|---|---|---|---|
COVID | Simple algorithm | Algorithm with feature selection | |||||
Infection | Precision | Recall | F‐score | Precision | Recall | F‐score | |
DNN | 0 | 0.98 | 0.99 | 0.99 | 0.97 | 0.91 | 0.94 |
1 | 0.99 | 0.97 | 0.98 | 0.94 | 0.86 | 0.89 | |
CNN | 0 | 0.61 | 0.86 | 0.76 | 0.75 | 0.93 | 0.83 |
1 | 0.37 | 0.46 | 0.51 | 0.81 | 0.51 | 0.63 | |
2DCNN | 0 | 0.52 | 0.22 | 0.32 | 0.75 | 0.53 | 0.56 |
1 | 0.41 | 0.77 | 0.53 | 0.81 | 0.51 | 0.65 | |
BDLSTM | 0 | 0.52 | 0.22 | 0.62 | 0.62 | 1.00 | 0.77 |
1 | 0.41 | 0.77 | 0.53 | 0.00 | 0.00 | 0.00 | |
SVM linear | 0 | 0.98 | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 |
1 | 0.99 | 0.97 | 0.98 | 0.99 | 0.96 | 0.99 | |
SVM RBF | 0 | 0.93 | 0.99 | 0.96 | 0.99 | 0.96 | 0.99 |
1 | 0.98 | 0.88 | 0.93 | 0.99 | 0.96 | 0.99 | |
SVM polynomial | 0 | 0.86 | 0.97 | 0.91 | 0.90 | 0.86 | 0.88 |
1 | 0.94 | 0.76 | 0.84 | 0.90 | 0.98 | 0.90 | |
Logistic regression | 0 | 0.94 | 0.95 | 0.94 | 0.98 | 0.98 | 0.98 |
1 | 0.93 | 0.90 | 0.91 | 0.99 | 0.99 | 0.97 |
DB3 | |||||||
---|---|---|---|---|---|---|---|
COVID | Simple algorithm | Algorithm with feature selection | |||||
Infection | Precision | Recall | F‐score | Precision | Recall | F‐score | |
DNN | 0 | 0.62 | 0.79 | 0.76 | 0.79 | 0.89 | 0.84 |
1 | 0.93 | 0.55 | 0.51 | 0.86 | 0.74 | 0.80 | |
CNN | 0 | 0.71 | 0.76 | 0.86 | 0.75 | 0.93 | 0.83 |
1 | 0.57 | 0.66 | 0.61 | 0.81 | 0.51 | 0.63 | |
2DCNN | 0 | 0.47 | 0.32 | 0.42 | 0.45 | 0.33 | 0.21 |
1 | 0.31 | 0.45 | 0.33 | 0.43 | 0.23 | 0.33 | |
BDLSTM | 0 | 0.55 | 0.37 | 0.12 | 0.60 | 0.56 | 0.51 |
1 | 0.31 | 0.14 | 0.13 | 0.45 | 0.50 | 0.44 | |
SVM linear | 0 | 0.77 | 0.76 | 0.76 | 0.78 | 0.77 | 0.75 |
1 | 0.77 | 0.78 | 0.77 | 0.77 | 0.77 | 0.78 | |
SVM RBF | 0 | 0.88 | 0.81 | 0.84 | 0.89 | 0.83 | 0.84 |
1 | 0.83 | 0.89 | 0.86 | 0.84 | 0.90 | 0.85 | |
SVM polynomial | 0 | 0.83 | 0.87 | 0.84 | 0.83 | 0.86 | 0.86 |
1 | 0.87 | 0.86 | 0.84 | 0.84 | 0.86 | 0.87 | |
Logistic regression | 0 | 0.83 | 0.94 | 0.85 | 0.85 | 0.93 | 0.91 |
1 | 0.69 | 0.52 | 0.58 | 0.72 | 0.53 | 0.61 |
The overall accuracy and loss obtained by opt‐aiNet over 50 epochs on DB1 are depicted in Figure 6A–C, Figure 6D displays the training and validation accuracy of DNN‐opt‐aiNet over 20 epochs, and the confusion matrix for DB2 is shown in Figure 7. Figure 6D shows that, over time, the training as well as the validation accuracy stagnates. These results indicate that DNN‐opt‐aiNet was more accurate in predicting COVID‐19‐infected patients than CNN and BDLSTM, which is itself an improvement. Thus, overall, it can be said that accuracy is improved and loss is reduced by applying DNN‐opt‐aiNet for detecting COVID‐19‐infected patients.
DL is anticipated to help radiologists provide precise diagnoses by delivering a quantifiable examination of suspicious lesions, and it may also shorten the clinical workflow. DL has already shown performance comparable to that of humans in recognition and computer vision tasks. These scientific advances make it reasonable to expect important changes in clinical practice. When considering the use of DL in medical imaging, we expect this technology to serve as a collaborative medium that reduces the burden of many tiresome and routine tasks, rather than replacing clinicians. In the clinic, medical image analysis has mostly been performed by human experts such as radiologists and physicians; however, because of large discrepancies in pathology and the potential fatigue of human experts, researchers and doctors have recently begun to benefit from computer‐assisted interventions. Although advances in computational medical image analysis have lagged behind developments in medical imaging technologies, they have recently been accelerating with the help of DL methods. 58 , 59 This study is a limited effort to contribute to scientific research in the area of COVID‐19 prediction, and there is plenty of opportunity to improve it. We had a limited dataset, which could be supplemented with additional online data to improve efficiency. We experimented with fundamental DL and CNN architectures, whereas transfer learning with pretrained networks could be used for better prediction. The classifier parameters could also be tuned automatically to improve performance using other optimization techniques, such as genetic algorithms, ant colony optimization, and artificial immune networks. Additionally, other FE techniques could be used in conjunction with the proposed method to further improve the system's performance.
5.1. Statistical analysis of the results
The efficiency of the proposed hybrid method was compared with the performance of the benchmark ML as well as the DL techniques using the t‐test. For this experiment, let $\mu_1$ and $\mu_2$ be the mean classification accuracies of the baseline techniques and of the opt‐aiNet‐based hybrid methods, respectively. We tested the following hypotheses at the 0.05 level of significance:
(11) $H_0: \mu_1 = \mu_2$

(12) $H_a: \mu_1 \neq \mu_2$
The results are reported in Table 6. Since the p‐values were less than the 0.05 significance level, the differences in classification accuracy were deemed significant. Thus, our proposed hybrid technique that combines opt‐aiNet with DL and ML classifiers is significantly more effective than the plain techniques.
TABLE 6.
Method | Alternate hypothesis Ha | p‐value | t‐value | Null hypothesis H0
---|---|---|---|---
DNN‐opt‐aiNet | μ1 ≠ μ2 | 0.00043 | 1.85 | Rejected
CNN‐opt‐aiNet | μ1 ≠ μ2 | 0.00019 | 1.79 | Rejected
BDLSTM‐opt‐aiNet | μ1 ≠ μ2 | 0.0008 | 1.74 | Rejected
2DCNN‐opt‐aiNet | μ1 ≠ μ2 | 0.0005 | 1.81 | Rejected
SVM‐opt‐aiNet | μ1 ≠ μ2 | 0.0008 | 1.85 | Rejected
LR‐opt‐aiNet | μ1 ≠ μ2 | 0.00017 | 1.76 | Rejected
The t‐test results shown in Table 6 testify to the significance of our proposed approach.
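The paired comparison can be reproduced with SciPy as sketched below, where the two arrays hold per‐fold accuracies of a classifier without and with opt‐aiNet feature selection; the values shown are placeholders, not our measured results.

```python
import numpy as np
from scipy import stats

# Per-fold accuracies of a classifier without and with opt-aiNet feature selection (placeholder values).
acc_baseline  = np.array([0.75, 0.74, 0.76, 0.73, 0.75, 0.74, 0.76, 0.75, 0.74, 0.75])
acc_opt_ainet = np.array([0.82, 0.83, 0.81, 0.84, 0.82, 0.83, 0.82, 0.81, 0.83, 0.82])

t_stat, p_value = stats.ttest_rel(acc_opt_ainet, acc_baseline)
# The null hypothesis of equal mean accuracy is rejected when p_value < 0.05.
```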
5.2. Therapeutic benefit of the COVID‐opt‐aiNet
The field of research into creating more effective CDSSs for combating COVID‐19, as well as the world of computing, is constantly changing. At the intersection of medical research, patient care, and information technology, we see new scientific challenges continually emerging. Without the use of advanced computer concepts, modern medicine or medical science cannot be practiced.
Since COVID‐19 is a respiratory disorder, many clinicians use CT imaging to evaluate patients who exhibit COVID‐19 symptoms while awaiting RT‐PCR results, or when RT‐PCR results are negative but the patient still exhibits COVID‐19 symptoms. Once COVID‐19 is correctly identified, clinicians can schedule the required treatment and make more accurate predictions about the likelihood of recovery. Indeed, there is no universally accepted understanding of what causes symptomatic COVID‐19. The majority of COVID‐19‐positive patients develop mild to moderate respiratory illness and recover without needing special care.
However, with careful and early identification of COVID‐19 via a CT scan, our COVID‐opt‐aiNet paradigm will assist in enhancing patient care. If the disease can be accurately predicted, then the appropriate laboratory tests and treatment can be promptly undertaken.
5.3. Comparison with the state‐of‐the‐art techniques
We have compared our results (in terms of accuracy) with those of other state‐of‐the‐art approaches that use the same COVID‐19 datasets. The summarized comparison is given in Table 7 below, which shows that the proposed approach achieves greater accuracy than the techniques reported in References 18, 54, and 55 in most cases (marked in bold).
TABLE 7.
Paper references | Models | Accuracy% |
---|---|---|
Loey et al. 54 | AlexNet | 67.34 |
VGGNET16 | 72.36 | |
VGGNET19 | 76.88 | |
GoogleNet | 75.38 | |
ResNet50 | 76.38 | |
Pham 55 | AlexNet | 74.50 |
GoogleNet | 78.97 | |
SqueezeNet | 78.52 | |
ShuffleNet | 86.13 | |
Resnet‐18 | 90.16 | |
Resnet‐50 | 92.62 | |
Resnet‐101 | 89.71 | |
Xception | 85.68 | |
Inception‐v3 | 91.28 | |
Inception‐ResNet‐v2 | 86.35 | |
VGG‐16 | 78.52 | |
VGG‐19 | 83.22 | |
DenseNet‐201 | 91.72 | |
MobileNet‐v2 | 87.25 | |
NasNet‐Mobile | 83.45 | |
NasNet‐Large | 85.23 | |
Zhao et al. 18 | DenseNet‐169 | 79.5 |
ResNet‐50 | 77.4 | |
The proposed technique | DNN‐opt‐aiNet (DB1) | 82.66 |
DNN‐opt‐aiNet (DB2) | 97.11 | |
DNN‐opt‐aiNet (DB3) | 82 | |
2DCNN‐opt‐aiNet (DB1) | 47 | |
2DCNN‐opt‐aiNet (DB2) | 51 | |
2DCNN‐opt‐aiNet (DB3) | 51 | |
CNN‐opt‐aiNet (DB1) | 74.3 | |
CNN‐opt‐aiNet (DB2) | 71.3 | |
CNN‐opt‐aiNet (DB3) | 71 | |
BDLSTM‐opt‐aiNet (DB1) | 63.68 | |
BDLSTM‐opt‐aiNet (DB2) | 66.68 | |
BDLSTM‐opt‐aiNet (DB3) | 62 | |
SVMRBF‐opt‐aiNet (DB1) | 82 | |
SVMRBF‐opt‐aiNet (DB2) | 95 | |
SVMRBF‐opt‐aiNet (DB3) | 83 | |
SVMP‐opt‐aiNet (DB1) | 68 | |
SVMP‐opt‐aiNet (DB2) | 89 | |
SVMP‐opt‐aiNet (DB3) | 83 | |
LR‐opt‐aiNet (DB1) | 76 | |
LR‐opt‐aiNet (DB2) | 97 | |
LR‐opt‐aiNet (DB3) | 83 | |
SVM‐opt‐aiNet (DB1) | 82 | |
SVM‐opt‐aiNet (DB2) | 98 | |
SVM‐opt‐aiNet (DB3) | 77 |
Note: Bold values indicate the highest values.
Barstugan et al. 11 applied an SVM and reported a 99.68% performance on data consisting of 150 abdominal CT images belonging to 53 infected cases from the Societa Italiana di Radiologia Medica e Interventistica; patch regions were cropped from the 150 CT images. Other researchers chose various DNN models, but none used the same model that we chose, whose parameters are listed in Table 2 of this article. The SVM outperforms the other approaches both in the cited study and in our experiments. None of the DL and CNN structures mentioned in the review section share the architecture that we used in our experiments. Hussain et al. 22 proposed a 22‐layer CNN model. Wang et al. 7 trained a deep CNN (COVID‐Net), pretrained on ImageNet, which was then applied to a variety of datasets. Abbas et al. 9 classified COVID‐19 CXR images using a deep CNN called DeTraC (Decompose, Transfer, and Compose). Hemdan et al. 12 detected COVID‐19 in X‐ray images using a DL method called COVIDX‐Net; the COVIDX‐Net framework is composed of seven distinct deep CNN models, including the revised Visual Geometry Group Network (VGG19) and the Google MobileNet. Narin et al. 13 used three CNN models to detect pneumonia infection: ResNet50, InceptionV3, and InceptionResNetV2. None of the CNN or DNN architectures available in the literature match our defined architecture; as a result, the performance differs. Additionally, in our experiments, the SVM generally outperformed the DL and CNN algorithms. Thus, in our case, we can state that the performance differs because of our algorithms and settings. The linear SVM is capable, but its success depends on the type of features: it performs best in this scenario because the features are linearly separable, and other algorithms may perform better in other contexts. We intended to boost the basic versions' performance with the optimized versions, and we appear to have done so. In the future, we may test the approach on other well‐defined structures to determine their effect.
6. CONCLUSION AND FUTURE WORK
The COVID‐19 virus is devastating people's lives worldwide. 57 The use of medical imaging with different modalities for COVID‐19 detection has become an important means of containing the spread of this disease. However, medical images alone are not sufficiently adequate for routine clinical use; there is, therefore, an increasing need for AI to be applied to improve the diagnostic performance of medical image analysis. Regrettably, due to the evolving nature of the COVID‐19 global epidemic, the systematic collection of a large dataset for DNN/ML training is problematic. In this study, we introduced a hybrid approach to improve medical image analysis for the detection of patients infected with COVID‐19. To obtain better results, we also segmented the CT image dataset to extract the lung regions. Our proposed technique uses opt‐aiNet as an FS technique in conjunction with DL as well as ML classifiers to detect COVID‐19‐infected patients. The results showed that the hybrid approach outperformed the plain algorithms while also reducing the training time. In the future, we plan to extend this experimental work by validating the method on larger datasets and by using a wider range of methods for the detection of patients infected with COVID‐19. Furthermore, the results from different classifiers could be fused to generate better prediction accuracy.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
AUTHOR CONTRIBUTION
Summrina Kanwal and Faiza Khan conceived and planned the experiments. Summrina Kanwal carried out all the experiments, and Faiza Khan performed the data segmentation. Summrina Kanwal and Faiza Khan contributed to sample preparation. All authors contributed to the interpretation of the results. Summrina Kanwal took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis, and manuscript.
Kanwal S, Khan F, Alamri S, Dashtipur K, Gogate M. COVID‐opt‐aiNet: A clinical decision support system for COVID‐19 detection. Int J Imaging Syst Technol. 2022;32(2):444-461. doi: 10.1002/ima.22695
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available in [GitHub] at [https://github.com/faizakhan1925/COVID-opt-aiNet].
REFERENCES
- 1. Omer SB, Malani P, Del Rio C. The COVID‐19 pandemic in the US: a clinical update. JAMA. 2020;323(18):1767‐1768.
- 2. Khan MA, Alhaisoni M, Tariq U, et al. COVID‐19 case recognition from chest CT images by deep learning, entropy‐controlled firefly optimization, and parallel feature fusion. Sensors. 2021;21(21):7286.
- 3. Daniel J. Education and the COVID‐19 pandemic. Prospects. 2020;49(1):91‐96.
- 4. Shan F, Gao Y, Wang J, et al. Lung infection quantification of COVID‐19 in CT images with deep learning. arXiv. 2020;2003:04655.
- 5. Bandyopadhyay SK, Dutta S. Machine learning approach for confirmation of covid‐19 cases: positive, negative, death and release. medRxiv. 2020.
- 6. Griffith D, Li B. Spatial‐temporal modeling of initial COVID‐19 diffusion: the cases of the Chinese mainland and conterminous United States. Geo Spat Inf Sci. 2021;24:1‐23.
- 7. Wang L, Lin ZQ, Wong A. Covid‐net: a tailored deep convolutional neural network design for detection of covid‐19 cases from chest x‐ray images. Sci Rep. 2020;10(1):1‐12.
- 8. Khan MA, Abbas S, Khan KM, Al Ghamdi MA, Rehman A. Intelligent forecasting model of COVID‐19 novel coronavirus outbreak empowered with deep extreme learning machine. Comput Mater Contin. 2020;64(3):1329‐1342.
- 9. Abbas A, Abdelsamea M, Gaber M. Classification of COVID‐19 in chest X‐ray images using DeTraC deep convolutional neural network. medRxiv. 2020;51(2):854‐864.
- 10. Khalifa NEM, Taha MHN, Hassanien AE, Elghamrawy S. Detection of coronavirus (covid‐19) associated pneumonia based on generative adversarial networks and a fine‐tuned deep transfer learning model using chest x‐ray dataset. arXiv. 2020;2004:01184.
- 11. Barstugan M, Ozkaya U, Ozturk S. Coronavirus (covid‐19) classification using ct images by machine learning methods. arXiv. 2020;2003:09424.
- 12. Hemdan EED, Shouman MA, Karar ME. Covidx‐net: a framework of deep learning classifiers to diagnose covid‐19 in x‐ray images. arXiv. 2020;2003:11055.
- 13. Narin A, Kaya C, Pamuk Z. Automatic detection of coronavirus disease (covid‐19) using x‐ray images and deep convolutional neural networks. Pattern Anal Appl. 2021;24:1‐14.
- 14. Li L, Qin L, Xu Z, et al. Artificial intelligence distinguishes COVID‐19 from community acquired pneumonia on chest CT. Radiology. 2020;296:E65‐E71.
- 15. Apostolopoulos ID, Mpesiana TA. Covid‐19: automatic detection from x‐ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. 2020;43(2):635‐640.
- 16. Farooq M, Hafeez A. Covid‐resnet: a deep learning framework for screening of covid19 from radiographs. arXiv. 2020;2003:14395.
- 17. Kassani SH, Kassasni PH, Wesolowski MJ, Schneider KA, Deters R. Automatic detection of coronavirus disease (covid‐19) in x‐ray and ct images: a machine learning‐based approach. arXiv. 2020;2004:10641.
- 18. Zhao J, Zhang Y, He X, Xie P. Covid‐ct‐dataset: a ct scan dataset about covid‐19. arXiv. 2020;2003:490.
- 19. Alom MZ, Rahman MM, Nasrin MS, Taha TM, Asari VK. Covid_mtnet: Covid‐19 detection with multi‐task deep learning approaches. arXiv. 2020.
- 20. Hu S, Gao Y, Niu Z, et al. Weakly supervised deep learning for covid‐19 infection detection and classification from ct images. IEEE Access. 2020;8:118869‐118883.
- 21. Voulodimos A, Protopapadakis E, Katsamenis I, Doulamis A, Doulamis N. Deep learning models for COVID‐19 infected area segmentation in CT images. medRxiv. 2020;170‐174.
- 22. Hussain E, Hasan M, Rahman MA, Lee I, Tamanna T, Parvez MZ. CoroDet: a deep learning based classification for COVID‐19 detection using chest X‐ray images. Chaos Soliton Fract. 2021;142:110495.
- 23. Alshazly H, Linse C, Barth E, Martinetz T. Explainable COVID‐19 detection using chest CT scans and deep learning. Sensors. 2021;21(2):455.
- 24. Sahlol AT, Yousri D, Ewees AA, Al‐Qaness MA, Damasevicius R, Abd Elaziz M. COVID‐19 image classification using deep features and fractional‐order marine predators algorithm. Sci Rep. 2020;10(1):1‐15.
- 25. Yousri D, Abd Elaziz M, Abualigah L, Oliva D, Al‐Qaness MA, Ewees AA. COVID‐19 X‐ray images classification based on enhanced fractional‐order cuckoo search optimizer using heavy‐tailed distributions. Appl Soft Comput. 2021;101:107052.
- 26. Ismael AM, Şengür A. Deep learning approaches for COVID‐19 detection based on chest X‐ray images. Expert Syst Appl. 2021;164:114054.
- 27. Panwar H, Gupta PK, Siddiqui MK, Morales‐Menendez R, Bhardwaj P, Singh V. A deep learning and grad‐CAM based color visualization approach for fast detection of COVID‐19 cases using chest X‐ray and CT‐scan images. Chaos Soliton Fract. 2020;140:110190.
- 28. Khan MA, Hussain N, Majid A, et al. Classification of positive COVID‐19 CT scans using deep learning. Comput Mater Contin. 2021;66:2923‐2938.
- 29. Khan MA, Majid A, Akram T, et al. Classification of COVID‐19 CT scans via extreme learning machine. Comput Mater Contin. 2021;68:1003‐1019.
- 30. Shui‐Hua W, Khan MA, Govindaraj V, Fernandes SL, Zhu Z, Yu‐Dong Z. Deep rank‐based average pooling network for COVID‐19 recognition. Comput Mater Contin. 2022;70:2797‐2813.
- 31. Zhang YD, Khan MA, Zhu ZQ, Wang SH. Pseudo zernike moment and deep stacked sparse autoencoder for COVID‐19 diagnosis. Comput Mater Contin. 2021;69:3145‐3162.
- 32. Nicholson C. A Beginner's Guide to Neural Networks and Deep Learning. Skymind; 2019.
- 33. Georgevici AI, Terblanche M. Neural Networks and Deep Learning: A Brief Introduction. Springer; 2019.
- 34. Sun D, Wang M, Li A. A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi‐dimensional data. IEEE/ACM Trans Comput Biol Bioinform. 2018;16(3):841‐850.
- 35. Wajid SK, Hussain A, Luo B, Huang K. An investigation of machine learning and neural computation paradigms in the design of clinical decision support systems (CDSSs). International Conference on Brain Inspired Cognitive Systems. Springer; 2016:58‐67.
- 36. Gao XW, Hui R, Tian Z. Classification of CT brain images based on deep learning networks. Comput Methods Programs Biomed. 2017;138:49‐56.
- 37. Liu YH. Feature extraction and image recognition with convolutional neural networks. J Phys Conf Ser. 2018;1087(6):062032.
- 38. de Castro LN, von Zuben FJ. An evolutionary immune network for data clustering. Sixth Brazilian Symposium on Neural Networks; 2000:84‐89.
- 39. Brownlee J. Clever Algorithms: Nature‐Inspired Programming Recipes. Jason Brownlee; 2011.
- 40. Vapnik V. The Nature of Statistical Learning Theory. Springer Science & Business Media; 2013.
- 41. Cramer JS. The origins of logistic regression. 2002. doi: 10.2139/ssrn.360300
- 42. Zhu Q, Chen J, Zhu L, Duan X, Liu Y. Wind speed prediction with spatio–temporal correlation: a deep learning approach. Energies. 2018;11(4):705.
- 43. de Castro LN, von Zuben FJ. Artificial immune systems: part I–basic theory and applications. Universidade Estadual de Campinas, Dezembro de. Tech Rep. 1999;210(1).
- 44. Kanwal S, Hussain A, Huang K. Novel artificial immune networks‐based optimization of shallow machine learning (ML) classifiers. Expert Syst Appl. 2021;165:113834.
- 45. Khan F, Kanwal S, Alamri S, Mumtaz B. Hyper‐parameter optimization of classifiers, using an artificial immune network and its application to software bug prediction. IEEE Access. 2020;8:20954‐20964.
- 46. Chowdhury ME, Rahman T, Khandakar A, et al. Can AI help in screening viral and COVID‐19 pneumonia? IEEE Access. 2020;8:132665‐132676.
- 47. Rahman S, Sarker S, Al Miraj MA, Nihal RA, Haque AN, Al Noman A. Deep learning–driven automated detection of COVID‐19 from radiography images: a comparative analysis. Cognit Comput. 2021;1‐30.
- 48. Ali L, Khelil K, Wajid SK, et al. Machine learning based computer‐aided diagnosis of liver tumours. 2017 IEEE 16th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC); 2017:139‐145.
- 49. Huang G, Liu Z, van der Maaten L, Weinberger KQ. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017:4700‐4708.
- 50. Hua Y, Mou L, Zhu XX. Recurrently exploring class‐wise attention in a hybrid convolutional and bidirectional LSTM network for multi‐label aerial image classification. ISPRS J Photogramm Remote Sens. 2019;149:188‐199.
- 51. Jensen R, Shen Q. Computational Intelligence and Feature Selection: Rough and Fuzzy Approaches. Wiley‐IEEE Press; 2008.
- 52. Liu H, Motoda H, eds. Feature Extraction, Construction and Selection: A Data Mining Perspective. Vol 453. Springer Science & Business Media; 1998.
- 53. Sharma N, Aggarwal LM. Automated medical image segmentation techniques. J Med Phys. 2010;35(1):3.
- 54. Loey M, Manogaran G, Khalifa NEM. A deep transfer learning model with classical data augmentation and cgan to detect covid‐19 from chest ct radiography digital images. Neural Comput Appl. 2020;1‐13.
- 55. Pham TD. A comprehensive study on classification of COVID‐19 on computed tomography with pretrained convolutional neural networks. Sci Rep. 2020;10(1):1‐8.
- 56. Hasan AM, Al‐Jawad MM, Jalab HA, Shaiba H, Ibrahim RW, AL‐Shamasneh AAR. Classification of covid‐19 coronavirus, pneumonia and healthy lungs in ct scans using q‐deformed entropy and deep learning features. Entropy. 2020;22(5):517.
- 57. Burdick H, Lam C, Mataraso S, et al. Prediction of respiratory decompensation in Covid‐19 patients using machine learning: the READY trial. Comput Biol Med. 2020;124:103949.
- 58. Khan MA, Akram T, Sharif M, Kadry S, Nam Y. Computer decision support system for skin cancer localization and classification. Comput Mater Contin. 2021;68:1041‐1064.
- 59. Rehman A, Khan MA, Saba T, Mehmood Z, Tariq U, Ayesha N. Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture. Microsc Res Tech. 2021;84(1):133‐149.