Computers in Biology and Medicine. 2021 Dec 28;141:105172. doi: 10.1016/j.compbiomed.2021.105172

Development of computer-aided model to differentiate COVID-19 from pulmonary edema in lung CT scan: EDECOVID-net

Elena Velichko a, Faridoddin Shariaty a,*, Mahdi Orooji b, Vitalii Pavlov a, Tatiana Pervunina c, Sergey Zavjalov a, Razieh Khazaei d, Amir Reza Radmard d
PMCID: PMC8712746  PMID: 34973585

Abstract

The efforts made to prevent the spread of COVID-19 face specific challenges in diagnosing COVID-19 patients and differentiating them from patients with pulmonary edema. Although systemically administered pulmonary vasodilators and acetazolamide are of great benefit for treating pulmonary edema, they should not be used to treat COVID-19, as they carry the risk of several adverse consequences, including worsening ventilation-perfusion matching, impaired carbon dioxide transport, systemic hypotension, and increased work of breathing. This study proposes a machine learning-based method (EDECOVID-net) that automatically differentiates COVID-19 from pulmonary edema in lung CT scans using radiomic features. To the best of our knowledge, EDECOVID-net is the first method to differentiate COVID-19 from pulmonary edema, and it is a helpful tool for diagnosing COVID-19 at early stages. EDECOVID-net offers the advantages of a simple structure and few mathematical calculations. In total, 13,717 imaging patches, including 5759 COVID-19 and 7958 edema images, were extracted from CT slices by a specialist radiologist. EDECOVID-net can distinguish patients with COVID-19 from those with pulmonary edema with an accuracy of 0.98. In addition, the accuracy of the EDECOVID-net algorithm is compared with that of other machine learning methods: VGG-16 (Acc = 0.94), VGG-19 (Acc = 0.96), Xception (Acc = 0.95), ResNet101 (Acc = 0.97), and DenseNet201 (Acc = 0.97).

Abbreviations: CADs, Computer-Aided Detection system; NPV, Negative Predictive Value; CT, Computed Tomography; NTP, Number of True Positive; WHO, World Health Organization; NTN, Number of True Negative; RT-PCR, Real-Time Polymerase Chain Reaction; NFP, Number of False Positive; ML, Machine Learning; NFN, Number of False Negative; HRCT, High Resolution Computed Tomography; ROC, Receiver Operating Characteristic; CNN, Convolutional Neural Network; AUC, Area Under the Curve; PPV, Positive Predictive Value

Keywords: Pulmonary edema, COVID-19, Lung CT scans, Computer-aided detection system (CADs), Machine learning, CT images

1. Introduction

The coronavirus disease 2019 (COVID-19) originated in Wuhan, China, and has spread rapidly worldwide since December 2019, causing global panic. COVID-19 has been declared a Public Health Emergency of International Concern by the World Health Organization (WHO) [1,2]. According to WHO figures, as of May 16, 2021, more than 163 million laboratory-confirmed cases of COVID-19 had been documented globally [3], with patients often exhibiting pulmonary parenchymal opacity on chest radiography. In a relatively high proportion of individuals, COVID-19 has resulted in complications such as acute respiratory diseases, heart difficulties, secondary infections, and considerable mortality.

Early detection and treatment in severe cases are key to reducing mortality, yielding favorable outcomes, and mitigating the potential spread and impact of infections [1].

One of the challenges that has garnered significant attention is distinguishing, on CT images, the lung injuries caused by COVID-19 from pulmonary edema and other conditions. Early descriptions of COVID-19-related respiratory failure noted that some patients experienced hypoxemia disproportionate to the reported dyspnea or level of radiographic opacity, with greater than typical respiratory system compliance and less work of breathing. One idea that has attracted much attention, especially on social media and in medicine, is the notion that COVID-19-induced lung injury resembles pulmonary edema; this has led to further speculation that therapies commonly used to prevent and treat pulmonary edema and other acute altitude sicknesses may benefit patients with COVID-19-induced lung injuries. However, a review of the pathophysiology of pulmonary edema and a close examination of the mechanisms of action of the drugs used to treat it make it clear that COVID-19-induced lung injury is not comparable to pulmonary edema, and that treatments used for pulmonary edema offer no benefit to a patient with COVID-19 and may make the condition worse [4,5].

Pathological studies of pulmonary edema show that the mechanisms and drugs effective for managing pulmonary edema are not effective in patients with COVID-19; some are even harmful and can injure the patient. Therefore, despite the similarities between pulmonary edema and COVID-19 in clinical characteristics, such as hypoxemia, radiographic opacities, and modified lung compliance, the pathophysiological mechanisms of the two diseases in the lungs are essentially different, and they cannot be viewed as equivalent. Although systemically administered pulmonary vasodilators and acetazolamide are effective in treating pulmonary edema, they should not be used to treat COVID-19 because they have a number of side effects, including worsening ventilation-perfusion matching, impaired carbon dioxide transport, systemic hypotension, and increased work of breathing [6].

Thousands of images per patient are generated in current clinical practice, making it cumbersome for physicians to analyze all the data [7,8]. In addition, human interpretation of medical images can produce errors, so not all information in an image is recognized. Advances in computer systems allow drawing on the expertise of radiologists to extract information from medical images [9,10]. Given the rapid spread of COVID-19, using machine learning algorithms to process chest CT scans might help identify and define the clinical characteristics and severity of the disease [11,12]. Although CT provides rich pathological information, only qualitative assessments are made in radiological reports, as there are no computer-aided tools for precise quantification of the infection regions and their longitudinal changes [13]. Computer vision systems support medical applications such as increasing image quality, organ segmentation, and organ texture classification [14,15]. Many papers have been published in recent years (2019–2021) on the automatic detection of COVID-19 using CT scan images and machine learning algorithms to distinguish patients with COVID-19 from non-infected patients [16].

This paper uses computational methods and CT scan images of the lungs to differentiate COVID-19 from lung edema. For this purpose, we propose EDECOVID-net, which automatically distinguishes CT images containing COVID-19-induced infections from those induced by edema. In addition, we applied several machine learning algorithms, namely VGG-16, VGG-19, Xception, ResNet101, and DenseNet201, to classify these two diseases. Finally, we compared the performance of these algorithms in diagnosing the type of disease, examining the results in depth in the Discussion section (Fig. 1).

Fig. 1. Flowchart of this study.

2. Patients and methods

2.1. Patients

Patients with flu-like symptoms and an initial diagnosis of COVID-19 were selected for the study regardless of demographic characteristics such as age and sex. High-resolution CT (HRCT) scans of the patients were taken. The inclusion criterion for COVID-19 patients was confirmation by RT-PCR test, which distinguished them from patients with lung edema. Imaging was performed for patients with COVID-19 between 3 and 6 days after disease onset, from early 2020 to April 2020. The inclusion criteria for patients with pulmonary edema were as follows:

  • Negative RT-PCR test,

  • No fever,

  • Ground glass opacities,

  • Interlobular septal thickening,

  • Bilateral bronchovascular bundle thickening.

Exclusion criteria for patients with pulmonary edema were as follows:

  • Low image quality due to motion artifact,

  • Close contact with an RT-PCR positive patient over the past two weeks,

  • Clinical suspicion of mixed lung infections,

  • Clinical suspicion of drug-induced lung injury.

A total of 254 COVID-19 patients were investigated, comprising 141 men and 113 women (mean age ± standard deviation of 50.22 ± 10.85 years). A total of 163 individuals with edema were included in the study, comprising 96 men and 67 women (mean age ± standard deviation of 61.45 ± 15.04 years). The gender distribution in the COVID-19 and edema groups did not differ significantly (P > 0.05). The COVID-19 group's mean age, on the other hand, was significantly lower than that of the edema group (P < 0.001, Table 1).

Table 1.

Demographic characteristics of the data.

Characteristics | COVID-19 | Edema | P-value
Age (mean ± SD), years | 50.22 ± 10.85 | 61.45 ± 15.04 | <0.001
Females, n (%) | 113 (44.44%) | 67 (40.70%) | 0.600
Males, n (%) | 141 (55.56%) | 96 (59.30%) | 0.600

In total, 13,717 imaging patches, including 5759 COVID-19 and 7958 edema patches, were extracted from the CT images by a specialist radiologist. The COVID-19 patches were divided into 3455 training, 1152 testing, and 1152 validation samples; the edema patches were divided into 4774 training, 1592 testing, and 1592 validation samples, as sketched below.
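The reported counts correspond to a roughly 60/20/20 split, with the training share taken as the floor of 60%. The snippet below is a minimal sketch of such a split; the loader and file layout are hypothetical, since the paper does not publish its data pipeline.

```python
import numpy as np

def split_patches(patches, seed=0):
    """Shuffle patches and split them roughly 60/20/20 into train/test/val.

    With 5759 COVID-19 patches this yields 3455/1152/1152, and with 7958
    edema patches 4774/1592/1592, matching the counts reported above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    n_train = int(0.6 * len(patches))          # floor of 60%
    n_test = (len(patches) - n_train) // 2     # remainder split evenly
    train = [patches[i] for i in idx[:n_train]]
    test = [patches[i] for i in idx[n_train:n_train + n_test]]
    val = [patches[i] for i in idx[n_train + n_test:]]
    return train, test, val
```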

2.2. Database acquisition

The computer-extracted image features were identified with different prognoses for suspicious nodules on the chest CT scan. To achieve a reliable and extendable conclusion, 417 cases were obtained from the Shariati Hospital and Taleghani Hospital of Tehran (both affiliated with the Shahid Beheshti University of Medical Sciences). Histopathologic confirmation was available for every patient to establish whether the patient had COVID-19 or edema. The number of slices per scan ranged from 226 to 389, and the slice thickness ranged from 1 to 6 mm. The CT images, each slice of which has dimensions of 512 × 512 with 16-bit gray resolution, were acquired on Siemens scanners with a voltage range of 120–140 kV and a current range of 25–40 mA, depending on the patient's condition.

2.3. Pre-processing

An experienced radiologist reviewed the CT scan images after their conversion to gray scale. Next, slices showing symptoms of the disease were selected from each CT scan. Fig. 2 shows examples of images of patients with COVID-19 and edema.

Fig. 2. Slices showing disease symptoms, selected by a radiologist.

Other pre-processing techniques were applied to the lung CT scans to improve their quality for better diagnostic results. This stage is important because the lungs contain several structures that can make accurate diagnosis difficult. Diverse pre-processing methods exist, including linear interpolation [17], median filtering [18], morphological operations [19], Gaussian filtering [20], and weighted-addition filtering [21]. A gradient filter was used to improve the quality of the CT scan images. Filtering involves a neighborhood (usually a small rectangle) and a predefined operation performed on the pixels of the image in that neighborhood. The filtering operation creates a new pixel whose coordinates equal the coordinates of the neighborhood center and whose value equals the result of the filtering operation. The processed (filtered) image is generated as the center of the filter is moved over each pixel of the input image [22,23].

As an approximation of the gradients, the pixel intensity differences of the horizontal and vertical neighboring pixels of the input image are first determined. The computed gradients in darker regions are then enhanced as follows:

q_h(x) = f_h(x) \cdot L(f(x); \beta, \tau) \quad (1)
q_v(x) = f_v(x) \cdot L(f(x); \beta, \tau) \quad (2)

where x represents the pixel position, f(x) indicates the input image's intensity, and f_h(x) and f_v(x) are the input image's horizontal and vertical gradients, respectively. Also, q_h(x) and q_v(x) are the horizontal and vertical enhanced gradients, respectively, and L(f(x); β, τ) is an enhancement function. In this work, the enhancement function is constructed with two parameters, β and τ, as follows:

L(\xi; \beta, \tau) = \begin{cases} (\beta - 1)\dfrac{1}{\tau^2}\xi^2 - 2(\beta - 1)\dfrac{1}{\tau}\xi + \beta, & \xi \le \tau \\ 1, & \xi > \tau \end{cases} \quad (3)

where ξ is the pixel intensity. This enhancement function amplifies the gradient most strongly where the pixel intensity is 0 (by a factor of β). For intensities higher than τ, the amplification ratio is unity; the function decreases the amplification ratio smoothly over the intensity range from 0 to τ. Parameters β and τ are set manually in this paper, as illustrated in the sketch below.
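The following snippet is a minimal numpy sketch of Eqs. (1)–(3), assuming intensities normalized to [0, 1] and forward differences as the gradient approximation; the parameter values are illustrative, not the paper's settings.

```python
import numpy as np

def enhance_gradients(f, beta=2.0, tau=0.3):
    """Amplify gradients in dark regions (Eqs. (1)-(3)).

    f: input image, intensities in [0, 1]; beta, tau: the manually set
    parameters of the enhancement function L (illustrative values here).
    """
    # Forward differences as the horizontal/vertical gradient approximation.
    fh = np.diff(f, axis=1, append=f[:, -1:])
    fv = np.diff(f, axis=0, append=f[-1:, :])

    # Piecewise-quadratic L(xi; beta, tau): equals beta at xi = 0,
    # decreases smoothly, and is 1 for xi >= tau.
    xi = f
    L = np.where(
        xi <= tau,
        (beta - 1.0) * (xi / tau) ** 2 - 2.0 * (beta - 1.0) * (xi / tau) + beta,
        1.0,
    )
    return fh * L, fv * L  # q_h(x), q_v(x)
```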

2.4. Texture feature extraction

In our study, 129 texture features were extracted from the CT scan images. Most texture analysis methods use higher-order statistics [24,25], which evaluate the relationships between several pixels at a time. To assess texture in images, methods such as local binary patterns [26], wavelets [27], and Gabor filters [28] can be utilized. Haralick, Gray, absolute gradient (GRAD), Gabor, Local Binary Pattern (LBP), Laws texture, and Lapped texture are among the extracted features (Fig. 3).

Fig. 3. Heatmap showing expression values of radiomic features for COVID-19.

2.4.1. Haralick

Haralick et al. [29] suggested using a gray-level co-occurrence matrix (GLCM) to quantify the spatial relationship between adjacent pixels in a picture. Haralick texture characteristics calculated from the GLCM are frequently employed because of their simplicity and clear interpretation in various applications, including lung CT scan analysis [30]. Haralick characteristics have become increasingly popular in medical image analysis in recent years, for example, in the analysis of ultrasound and MRI images of the liver and heart [[31], [32], [33]], X-ray mammography [34,35], and MRI images in research on breast cancer [36,37], prostate cancer [[38], [39], [40]], and brain cancer [[41], [42], [43]]. They are also used in radiomics [44,45], a new technology that extracts a vast number of quantitative features from medical images and uses them to develop models that predict, for example, tumor phenotype [46], survival [47,48], and categorization [49].

2.4.2. Gray

The gray texture features introduced by Haralick et al. are second-order structural features based on Gray-Level Co-occurrence Matrices (GLCMs) computed over target areas in the grayscale image. Five gray texture features were extracted in this work: contrast, correlation, inverse difference moment, angular second moment, and entropy. These features are obtained using equations (4), (5), (6), (7), (8):

Angular second moment:

f = \sum_{i,j=0}^{N-1} P_{ij}^2 \quad (4)

Contrast:

f = \sum_{i,j=0}^{N-1} P_{ij}\,(i - j)^2 \quad (5)

Correlation:

f = \sum_{i,j=0}^{N-1} P_{ij}\left[\frac{(i - \mu_i)(j - \mu_j)}{\sqrt{\sigma_i^2 \sigma_j^2}}\right] \quad (6)

Entropy:

f = -\sum_{i,j=0}^{N-1} P_{ij} \ln P_{ij} \quad (7)

Inverse difference moment:

f = \sum_{i,j=0}^{N-1} \frac{P_{ij}}{1 + (i - j)^2} \quad (8)

where f represents the output value of each function, i and j are the indices of the GLCM extracted from the image, P_{ij} is the normalized co-occurrence probability, μ_i and μ_j are the means, and σ_i and σ_j the standard deviations, of the row and column marginals.
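A minimal numpy sketch of Eqs. (4)–(8) on a normalized GLCM is shown below; the GLCM itself could be built, for example, with skimage.feature.graycomatrix. This illustrates the standard formulas, not the authors' exact implementation.

```python
import numpy as np

def gray_glcm_features(P):
    """Compute Eqs. (4)-(8) from a normalized GLCM P (P.sum() == 1)."""
    N = P.shape[0]
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    var_i = ((i - mu_i) ** 2 * P).sum()
    var_j = ((j - mu_j) ** 2 * P).sum()
    eps = 1e-12  # guards against log(0) and division by zero
    return {
        "angular_second_moment": (P ** 2).sum(),
        "contrast": (P * (i - j) ** 2).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum()
                       / np.sqrt(var_i * var_j + eps),
        "entropy": -(P * np.log(P + eps)).sum(),
        "inverse_difference_moment": (P / (1.0 + (i - j) ** 2)).sum(),
    }
```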

2.4.3. Absolute Gradient (GRAD)

A set of 5 texture features was calculated from the gradient matrix of each CT scan: mean, variance, skewness, kurtosis, and moment. The gradient features are derived from the gradient matrix after calculating its histogram (His), determined for gradient values over the range [-255, 255]. The gradient features are as follows [50,51]:

Mean:

f = \frac{\sum_v \mathrm{His}(v + 255)\, v}{\text{Total number of pixels}} \quad (9)

Variance:

f = \frac{\sum_v \mathrm{His}(v + 255)\,(v - \mu)^2}{\text{Total number of pixels}} \quad (10)

Skewness:

f = \frac{\sum_v \mathrm{His}(v + 255)\,(v - \mu)^3}{\text{Total number of pixels}} \quad (11)

Kurtosis:

f = \frac{\sum_v \mathrm{His}(v + 255)\,(v - \mu)^4}{\text{Total number of pixels}} \quad (12)

Moment:

f = \frac{\sum_v \mathrm{His}(v + 255)\,|v - \mu|}{\text{Total number of pixels}} \quad (13)

where v is the gradient value within the range [-255, 255], and μ is the mean defined in equation (9).
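The sketch below computes these histogram statistics directly in numpy. Following the formulas above literally, skewness and kurtosis are left unnormalized (no division by σ³ or σ⁴); this is an illustrative reading, not the authors' code.

```python
import numpy as np

def grad_features(g):
    """Compute Eqs. (9)-(13) from an integer gradient matrix g.

    His(v + 255) is the histogram bin for gradient value v in [-255, 255].
    """
    his, _ = np.histogram(g, bins=np.arange(-255, 257))  # 511 bins, v = -255..255
    v = np.arange(-255, 256)
    n = g.size  # total number of pixels
    mean = (his * v).sum() / n
    return {
        "mean": mean,
        "variance": (his * (v - mean) ** 2).sum() / n,
        "skewness": (his * (v - mean) ** 3).sum() / n,   # unnormalized, per Eq. (11)
        "kurtosis": (his * (v - mean) ** 4).sum() / n,   # unnormalized, per Eq. (12)
        "moment": (his * np.abs(v - mean)).sum() / n,
    }
```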

2.4.4. Gabor

The Gabor filter is widely used to extract texture features from images [[52], [53], [54], [55]] and has shown excellent performance. According to the literature, Gabor texture features outperform alternatives such as tree-structured wavelet transform (TWT) features, the multiresolution simultaneous autoregressive model (MR-SAR), and pyramid-structured wavelet transform (PWT) features [52]. Gabor filters use wavelets, each of which captures energy at a specific direction and frequency, so the extracted features describe the local frequency content. The local features thus capture the signal energy, and texture features can be extracted from this energy distribution [56].

For an input image I(x, y) of size P × Q, the Gabor wavelet transform is computed using a convolution:

G_{mn}(x, y) = \sum_s \sum_t I(x - s,\, y - t)\, \psi_{mn}^{*}(s, t) \quad (14)

where s and t are the filter mask coordinates, and ψ_{mn}^{*} is the complex conjugate of ψ_{mn}, one of a set of self-similar functions generated by rotation and dilation of the mother wavelet:

\psi(x, y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right] \exp(j 2\pi W x) \quad (15)

where W is the modulation frequency, and Gabor-like wavelets are generated through the following generation function:

\psi_{mn}(x, y) = a^{-m}\, \psi(\tilde{x}, \tilde{y}) \quad (16)

where m and n represent the scale and direction of the wavelet, respectively, with m = 0, 1, …, M − 1 and n = 0, 1, …, N − 1. Also, assuming a > 1 and θ = nπ/N, x̃ and ỹ are defined as follows:

\tilde{x} = a^{-m}(x\cos\theta + y\sin\theta); \quad \tilde{y} = a^{-m}(-x\sin\theta + y\cos\theta) \quad (17)

The variables used in this work are specified as follows:

a = \left(U_h / U_l\right)^{\frac{1}{M-1}}, \quad W_{m,n} = a^m U_l, \quad \sigma_{x,m,n} = \frac{(a + 1)\sqrt{2\ln 2}}{2\pi a^m (a - 1) U_l}, \quad \sigma_{y,m,n} = \frac{1}{2\pi \tan\left(\frac{\pi}{2N}\right) \sqrt{\frac{U_h^2}{2\ln 2} - \left(\frac{1}{2\pi\sigma_{x,m,n}}\right)^2}} \quad (18)

Our implementation uses the following constants, which are also common in the literature: U_l = 0.05, U_h = 0.4; s and t range from 0 to 60, i.e., the filter mask size is 60 × 60.
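A hedged sketch of such a filter bank is given below using skimage's gabor_kernel. Note that gabor_kernel derives its Gaussian widths from a bandwidth parameter rather than Eq. (18), and the scale/orientation counts M and N are illustrative choices, so this approximates rather than reproduces the paper's bank.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel

M, N, U_l, U_h = 4, 6, 0.05, 0.4          # M, N are illustrative choices
a = (U_h / U_l) ** (1.0 / (M - 1))        # scale ratio from Eq. (18)

def gabor_energy_features(image):
    """Mean magnitude of the complex Gabor response per (scale, orientation)."""
    image = image.astype(float)
    feats = []
    for m in range(M):
        freq = (a ** m) * U_l             # W_{m,n} = a^m * U_l
        for n in range(N):
            theta = n * np.pi / N         # theta = n*pi/N
            k = gabor_kernel(freq, theta=theta)
            re = ndi.convolve(image, np.real(k), mode="reflect")
            im = ndi.convolve(image, np.imag(k), mode="reflect")
            feats.append(np.hypot(re, im).mean())  # local energy
    return np.array(feats)
```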

2.4.5. Local Binary Pattern (LBP)

By comparing a specific pixel with neighboring pixels, an LBP [57] is calculated as follows:

\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p, \quad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \quad (19)

where g_c is the gray value of the central pixel, g_p is the value of its pth neighbor, P is the total number of neighbors involved, and R is the radius of the neighborhood. If g_c has the coordinates (0, 0), then g_p has the coordinates (R cos(2πp/P), R sin(2πp/P)). Interpolation can approximate the gray values of neighbors that do not fall on the image grid. Assuming the image is I × J in size, a histogram is created to represent the texture image after recognizing each pixel's LBP pattern:

H(k) = \sum_{i=1}^{I} \sum_{j=1}^{J} f\left(\mathrm{LBP}_{P,R}(i, j),\, k\right), \quad k \in [0, K], \quad f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{otherwise} \end{cases} \quad (20)

where K is the maximal LBP pattern value. The number of spatial transitions (bitwise 0/1 changes) in an LBP pattern is defined as its U value:

U(\mathrm{LBP}_{P,R}) = \left|s(g_{P-1} - g_c) - s(g_0 - g_c)\right| + \sum_{p=1}^{P-1} \left|s(g_p - g_c) - s(g_{p-1} - g_c)\right| \quad (21)

In the circular binary representation, uniform LBP patterns correspond to patterns with few transitions or discontinuities (U ≤ 2). In practice, a lookup table of 2^P items is used to transform LBP_{P,R} to LBP^{u2}_{P,R} (superscript "u2" denotes uniform patterns with U ≤ 2), which has P × (P − 1) + 3 distinct output values.

A locally rotation-invariant pattern can be defined as follows to obtain rotation invariance:

\mathrm{LBP}_{P,R}^{riu2} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c), & U(\mathrm{LBP}_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases} \quad (22)

A lookup table can be used to convert LBP_{P,R} to LBP^{riu2}_{P,R} (superscript "riu2" denotes rotation-invariant "uniform" patterns with U ≤ 2), which has P + 2 distinct output values.
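In practice, the riu2 operator and its histogram are available off the shelf; the snippet below is a small sketch using skimage, whose method="uniform" implements the rotation-invariant uniform mapping with P + 2 output values. The parameter defaults are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_riu2_histogram(image, P=8, R=1.0):
    """Normalized histogram of LBP^{riu2}_{P,R} codes (Eq. (22))."""
    codes = local_binary_pattern(image, P, R, method="uniform")  # riu2 mapping
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist  # P + 2 bins
```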

2.5. Feature selection

In this work, 129 texture features were extracted from the CT scan images, out of which the 3 best texture features were selected; combining these three features, one per channel, yields an RGB (Red Green Blue) image.

If ω_1 and ω_2 are the two classes used to distinguish between COVID-19 and edema pixels in this study, and x is a feature vector, we choose ω_1 if:

P(\omega_1 | x) > P(\omega_2 | x) \quad (23)

The difference between P(ω_1|x) and P(ω_2|x) determines the probability of error. The ratio P(ω_1|x)/P(ω_2|x) provides useful information about the discriminatory capability of an adopted feature vector x for the ω_1 and ω_2 classes. For given values of P(ω_1|x) and P(ω_2|x), D_{12}(x) = ln[P(ω_1|x)/P(ω_2|x)] carries the same information and can be used to quantify the discriminating information of class ω_1 versus ω_2. If D_{12} = 0, the classes overlap completely. Because x is not a constant, the average value over class ω_1 is used:

D_{12} = \int P(x | \omega_1) \ln \frac{P(x | \omega_1)}{P(x | \omega_2)}\, dx \quad (24)

An equivalent expression holds for class ω_2:

D_{21} = \int P(x | \omega_2) \ln \frac{P(x | \omega_2)}{P(x | \omega_1)}\, dx \quad (25)

Then:

d_{12} = D_{12} + D_{21} \quad (26)

Known as the divergence, equation (26) can be utilized as a separability metric for the ω_1 and ω_2 classes with respect to the adopted feature vector x. All the criteria considered so far assess two-class separability; in a multiclass situation, C(k) calculates an average or "total" value. The one-dimensional divergence d_{ij} of Ref. [58] was used to calculate the divergence for each pair of classes, and the matching C(k) for each feature was then set to:

C(k) = \min_{i,j} d_{ij} \quad (27)

where C(k) represents the lowest divergence value across all class pairs rather than an average value. To pick features with the best "worst-case" class separability, the highest C(k) values must be used. Dealing with features one at a time has a significant advantage in terms of processing efficiency; such techniques, however, ignore existing feature correlations.

With n = 1, 2, …, N and k = 1, 2, …, m, consider x_{nk} to be the kth feature of the nth pattern. The cross-correlation coefficient between any two features is given by:

\rho_{ij} = \frac{\sum_{n=1}^{N} x_{ni} x_{nj}}{\sqrt{\sum_{n=1}^{N} x_{ni}^2 \sum_{n=1}^{N} x_{nj}^2}} \quad (28)

It can be shown that |ρ_{ij}| ≤ 1. The steps of the selection procedure, sketched in code after this list, are as follows:

  • Choose a class separability criterion and calculate its value C(k) for all features x_k, k = 1, 2, …, m. Rank the features in descending order of C and select the one with the highest value; call it x_{i_1}.

  • To choose the second feature, calculate the cross-correlation coefficient defined in equation (28) between the selected x_{i_1} and each of the remaining m − 1 features (i.e., ρ_{i_1 j}, j ≠ i_1).

  • Choose the feature x_{i_2} such that

i_2 = \arg\max_{j \ne i_1} \left\{ \alpha_1 C(j) - \alpha_2 \left|\rho_{i_1 j}\right| \right\} \quad (29)

where α_1 and α_2 are relative importance (weighting) factors, which are equal in this study. In practice, in addition to the class separability measure C, the correlation with the previously selected feature is considered when choosing the next feature; the procedure is then generalized to the kth step.

  • Select x_{i_3} so that

i_3 = \arg\max_{j} \left\{ \alpha_1 C(j) - \frac{\alpha_2}{2} \sum_{r=1}^{2} \left|\rho_{i_r j}\right| \right\} \quad (30)

for j ≠ i_r, r = 1, 2; the average correlation with all previously chosen features is thus taken into account.
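The snippet below sketches this greedy selection for the two-class case. As assumptions: C(k) is taken to be a closed-form symmetric divergence of one-dimensional Gaussians (with only two classes, the minimum over class pairs in Eq. (27) is d_{12} itself), and Pearson correlation is used in place of Eq. (28)'s uncentered form.

```python
import numpy as np

def select_features(X1, X2, X, n_select=3, a1=1.0, a2=1.0):
    """Greedy divergence-plus-correlation selection (Eqs. (27)-(30)).

    X1, X2: per-class (patterns x features) matrices for the separability
    score; X: pooled matrix for the correlations.
    """
    def divergence(u, v):
        # Symmetric KL divergence of two 1-D Gaussians (assumed model).
        m1, m2 = u.mean(), v.mean()
        s1, s2 = u.var() + 1e-12, v.var() + 1e-12
        return 0.5 * (s1 / s2 + s2 / s1 - 2) + 0.5 * (m1 - m2) ** 2 * (1 / s1 + 1 / s2)

    m = X.shape[1]
    C = np.array([divergence(X1[:, k], X2[:, k]) for k in range(m)])
    rho = np.corrcoef(X, rowvar=False)       # feature cross-correlations
    selected = [int(np.argmax(C))]           # x_{i1}: highest C value
    while len(selected) < n_select:
        penalty = np.abs(rho[selected, :]).sum(axis=0) / len(selected)
        score = a1 * C - a2 * penalty        # Eq. (30) generalized to step k
        score[selected] = -np.inf            # exclude already chosen features
        selected.append(int(np.argmax(score)))
    return selected
```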

2.6. Machine learning methods

In this work, several machine learning methods were used to classify the lung patients: the proposed EDECOVID-net, VGG-16, VGG-19, Xception, ResNet101, and DenseNet201.

VGG-16 consists of three fully connected layers and five convolution blocks (13 convolution layers). A million images from 1000 classes were used to train this deep network. VGG-19 contains nineteen deep layers: five convolution blocks (16 convolution layers) and three fully connected (FC) layers. Compared with VGG-16, VGG-19 offers a deeper architecture with more layers. ResNet is a residual-learning-based network that eases network training by using the inputs of earlier layers as a reference. ResNet-18 comprises 18 deep layers, starting with a convolution layer followed by eight residual blocks and ending in a fully connected layer. ResNet-50 uses a different residual block design and has more blocks (16) across its 50 layers; ResNet-101 extends this to 101 deep layers with 33 residual blocks. Xception is a CNN built on depthwise separable convolution layers: two convolution layers are followed by depthwise separable convolution layers, four convolution layers, and one fully connected layer. DenseNet-201 is a convolutional neural network with 201 deep layers. In a feed-forward fashion, each layer in this network is directly connected to the subsequent ones: the feature maps of all previous layers are treated as separate inputs for each layer, and its own feature maps are passed on to all subsequent layers as inputs [59].

We introduced EDECOVID-net, a machine learning-based method that optimizes CNNs on the dataset utilizing radiomic characteristics and transfer learning. The CNN input layer in each structure was replaced with a new layer compatible with the size of the infected patches (e.g., 60 × 60 × 1), and the dimension of the last fully connected layer was set to the number of classes (Fig. 4). All networks were trained with the same procedure: Adam optimizer, an initial learning rate of 0.01, and a validation frequency of 5. The dataset is shuffled at each epoch, and training is terminated if the training process no longer changes significantly; a hedged sketch of this setup follows.
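The snippet below is a hypothetical Keras sketch of the stated training setup (60 × 60 × 1 patches, two classes, Adam with initial learning rate 0.01, per-epoch shuffling, early stopping). The exact EDECOVID-net architecture is not reproduced here, so the small CNN is a placeholder, not the authors' network.

```python
import tensorflow as tf

# Placeholder backbone for 60x60x1 patches; not the published architecture.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(60, 60, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # COVID-19 vs. edema
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # stated initial LR
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# Assumed reading of "training is terminated if the process no longer
# changes significantly": stop when validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           shuffle=True, epochs=100, callbacks=[early_stop])
```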

Fig. 4. Architecture of EDECOVID-net.

2.7. Statistical analysis

Five performance metrics were generated to measure and compare the CNNs' performance with that of the radiologist [60,61]:

\text{Accuracy} = \frac{N_{TN} + N_{TP}}{N_{TN} + N_{FN} + N_{TP} + N_{FP}} \quad (31)
\text{Sensitivity} = \frac{N_{TP}}{N_{FN} + N_{TP}} \quad (32)
\text{Specificity} = \frac{N_{TN}}{N_{TN} + N_{FP}} \quad (33)
\text{Positive Predictive Value (PPV)} = \frac{N_{TP}}{N_{TP} + N_{FP}} \quad (34)
\text{Negative Predictive Value (NPV)} = \frac{N_{TN}}{N_{TN} + N_{FN}} \quad (35)

COVID-19 and edema infections are treated as the positive and negative instances, respectively, in this investigation. Accordingly, N_TP and N_TN stand for the numbers of correctly diagnosed COVID-19 and edema cases, respectively, while N_FP and N_FN represent the numbers of misdiagnosed COVID-19 and edema cases, respectively. In addition, the AUC (area under the ROC curve) was generated to indicate how well the CNNs and radiologists performed overall. SPSS software was used for statistical analysis.
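For reference, Eqs. (31)–(35) translate directly into code; the small helper below is a sketch of that computation (the function and argument names are our own).

```python
def performance_metrics(n_tp, n_tn, n_fp, n_fn):
    """Compute Eqs. (31)-(35); COVID-19 is the positive class."""
    return {
        "accuracy": (n_tn + n_tp) / (n_tn + n_fn + n_tp + n_fp),
        "sensitivity": n_tp / (n_fn + n_tp),
        "specificity": n_tn / (n_tn + n_fp),
        "ppv": n_tp / (n_tp + n_fp),   # positive predictive value
        "npv": n_tn / (n_tn + n_fn),   # negative predictive value
    }
```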

3. Experimental results and discussion

3.1. The performance of the machine learning method

Table 2 shows the diagnostic performance of the proposed EDECOVID-net. The network can distinguish the COVID-19 group from the edema group with an AUC of 0.994.

Table 2.

Results of the proposed EDECOVID-net.

Method | Accuracy | Sensitivity | Specificity | PPV | NPV
EDECOVID-net | 0.98 | 0.98 | 0.98 | 0.98 | 0.98

3.2. Comparison of the performance of the deep methods

The best machine learning networks were used in this study to classify the medical images for distinguishing COVID-19 from edema. The results showed that these methods can reliably distinguish between these two diseases with similar symptoms and thereby expedite appropriate treatment. According to Fig. 5, Fig. 6 and Table 3, Table 4, EDECOVID-net yielded the best results. The most appropriate method should have high sensitivity for diagnosing patients with COVID-19; in this regard, EDECOVID-net offers the highest AUC together with high sensitivity compared with the other networks, including VGG-16, which is widely regarded as a strong baseline.

Fig. 5. ROC plots of the deep networks used.

Fig. 6. Radar plot for the individual networks and the radiologist on the validation dataset.

Table 3.

Per-class precision, recall, and F-score of the methods used.

Method | Class | Precision | Recall | F-Score
EDECOVID-net | COVID-19 | 0.99 | 0.97 | 0.98
EDECOVID-net | Edema | 0.98 | 0.99 | 0.98
Xception | COVID-19 | 0.93 | 0.93 | 0.93
Xception | Edema | 0.95 | 0.95 | 0.95
ResNet101 | COVID-19 | 0.97 | 0.95 | 0.96
ResNet101 | Edema | 0.96 | 0.98 | 0.97
VGG-16 | COVID-19 | 0.91 | 0.94 | 0.93
VGG-16 | Edema | 0.96 | 0.93 | 0.95
VGG-19 | COVID-19 | 0.98 | 0.92 | 0.95
VGG-19 | Edema | 0.94 | 0.99 | 0.96
DenseNet201 | COVID-19 | 0.97 | 0.97 | 0.96
DenseNet201 | Edema | 0.97 | 0.98 | 0.97

Table 4.

Overall performance metrics of the methods used.

Method | Accuracy | Sensitivity | Specificity | PPV | NPV
EDECOVID-net | 0.98 | 0.98 | 0.98 | 0.98 | 0.98
VGG-16 | 0.94 | 0.91 | 0.96 | 0.94 | 0.96
VGG-19 | 0.96 | 0.98 | 0.94 | 0.92 | 0.94
Xception | 0.95 | 0.99 | 0.93 | 0.89 | 0.93
DenseNet201 | 0.97 | 0.97 | 0.97 | 0.95 | 0.97
ResNet101 | 0.97 | 0.98 | 0.96 | 0.95 | 0.96
EDECOVID-net (without pre-processing) | 0.93 | 0.92 | 0.94 | 0.95 | 0.96

A review of previous studies indicates that researchers have not previously addressed differentiating COVID-19 from edema using a machine learning network. In our study, by contrast, the performance of five established machine learning algorithms was compared with the proposed method to choose the best of them. Table 4 and Fig. 6 also present the performance of EDECOVID-net without pre-processing and feature extraction: the method's accuracy decreases from 0.98 to 0.93 without the extracted features. This demonstrates the contribution of the proposed pre-processing algorithm to differentiating COVID-19 from pulmonary edema.

So far, the principal methods for diagnosing COVID-19 are RT-PCR and CT scans, and many studies have used both to evaluate diagnostic performance. The World Health Organization and the statistics reported in the literature indicate that CT results are more reliable for diagnosing COVID-19. Nevertheless, the manifestations of edema and COVID-19 look very similar on lung CT images, which can mislead radiologists, while the treatments for COVID-19 and edema are entirely different and misdiagnosis can sometimes cause irreversible complications for the patient.

3.3. State of current limitation

In these studies, the basis for differentiating edema from COVID-19 is the RT-PCR test, which may be erroneous, so a patient may be mistakenly transferred from the COVID-19 group to the edema group. Moreover, RT-PCR is a molecular test with many limitations: it is time-consuming and costly and requires a specialized kit and well-equipped laboratories. These limitations matter even more for low-income countries, with dire and dangerous consequences.

The CT scan is widely available as a rapid imaging technique with high accuracy in diagnosing lung diseases. To increase this accuracy, it is essential to determine the type of disease after lung disease (pneumonia) has been detected. As noted above, however, the imaging manifestations of edema and COVID-19 are similar on CT scans. Accordingly, the proposed system can reduce the radiologist's workload, serving as an auxiliary tool for deciding on the type of lung disease (COVID-19 or edema).

Some patients with a positive PCR test (i.e., with COVID-19) show no pulmonary involvement, as the infection's symptoms do not appear in the lungs; pulmonary involvement usually occurs several days after infection. The findings of this study therefore apply to cases in which symptoms of lung disease, such as cough, are present.

3.4. Potential extensions for future research

In our future research, we intend to design a system that diagnoses lung disease on CT scan images, subsequently automatically determines the type of disease, and finally detects the infected tissues and automatically segments them. This helps monitor the progression of the disease.

4. Conclusion

The continuing COVID-19 pandemic has been designated a worldwide health emergency because of the disease's relatively high infection rate. At the time of writing, there was no clinically authorized medication available to treat COVID-19. COVID-19 must be detected early and distinguished from pulmonary edema to prevent person-to-person transmission and to provide proper patient care. Currently, the most efficient strategy to prevent the transmission of COVID-19 is to isolate and quarantine suspicious patients. In COVID-19-positive individuals, diagnostic techniques such as chest X-rays and CT play a significant role in monitoring disease progression and severity. This paper used computational algorithms and CT scan images of the lungs to distinguish COVID-19 from pulmonary edema. For this purpose, EDECOVID-net was proposed, which automatically distinguishes CT scans containing COVID-19 infections from those caused by edema. According to the experimental results on the CT dataset, EDECOVID-net can differentiate patients with COVID-19 from patients with pulmonary edema with an accuracy of 0.98, and it outperforms comparable methods in both clinical and statistical evaluation.

Human rights statements and informed consent

All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and its later amendments. Informed consent was obtained from all patients for being included in the study.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Declaration of competing interest

Elena Velichko, Faridoddin Shariaty, Mahdi Orooji, Vitalii Pavlov, Tatiana Pervunina, Sergey Zavjalov, Razieh Khazaei, and Amir Reza Radmard declare that they have no conflict of interest.

Acknowledgments

The reported study was funded by INSF and RFBR, project numbers 99003992 (INSF) and 20-57-56018 (RFBR).

To obtain the results of our study, computational resources of the supercomputer center in Peter the Great Saint-Petersburg Polytechnic University Supercomputing Center (www.spbstu.ru) were used.

References

  • 1. Abdulameer M.H., Sheikh Abdullah S.N.H., Othman Z.A. A modified active appearance model based on an adaptive artificial bee colony. Sci. World J. 2014;2014. doi: 10.1155/2014/879031.
  • 2. Li Y., Xia L. Coronavirus disease 2019 (covid-19): role of chest ct in diagnosis and management. Am. J. Roentgenol. 2020;214:1280–1286. doi: 10.2214/AJR.20.22954.
  • 3. World Health Organization, WHO coronavirus (COVID-19) dashboard, https://covid19.who.int.
  • 4. Luks A.M., Swenson E.R. Covid-19 lung injury and high-altitude pulmonary edema. A false equation with dangerous implications. Ann. Am. Thorac. Soc. 2020;17:918–921. doi: 10.1513/AnnalsATS.202004-327CME.
  • 5. Ebrahimi M., Novikov B., Ebrahimie E., Spilman A., Ahsan R., Tahsili M.R., Najafi M., Navvabi S., Shariaty F. The First Report of the Most Important Sequential Differences between Covid-19 and Mers Viruses by Attribute Weighting Models, the Importance of Nucleocapsid (N) Protein. 2020.
  • 6. Ur A., Verma K. Pulmonary edema in covid19—a neural hypothesis. ACS Chem. Neurosci. 2020;11:2048–2050. doi: 10.1021/acschemneuro.0c00370.
  • 7. F. Shariaty, S. Hosseinlou, V. Y. Rud, Automatic lung segmentation method in computed tomography scans, in: J. Phys. Conf., volume 1236, IOP Publishing, p. 12028.
  • 8. M. Mousavi, F. Shariaty, M. Orooji, E. Velichko, The performance of active-contour and region growing methods against noises in the segmentation of computed-tomography scans, in: International Youth Conference on Electronics, Telecommunications and Information Technologies, Springer, pp. 573–582.
  • 9. Shariaty F., Mousavi M. Application of cad systems for the automatic detection of lung nodules. Inf. Med. Unlocked. 2019;15:100173.
  • 10. F. Shariaty, V. Davydov, V. Yushkova, A. Glinushkin, V. Y. Rud, Automated pulmonary nodule detection system in computed tomography images based on active-contour and svm classification algorithm, in: J. Phys. Conf., volume 1410, IOP Publishing, p. 12075.
  • 11. Guan W.-j., Ni Z.-y., Hu Y., Liang W.-h., Ou C.-q., He J.-x., Liu L., Shan H., Lei C.-l., Hui D.S., et al. Clinical characteristics of coronavirus disease 2019 in China. N. Engl. J. Med. 2020;382:1708–1720. doi: 10.1056/NEJMoa2002032.
  • 12. F. Shariaty, V. Pavlov, E. Velichko, T. Pervunina, M. Orooji, Severity and progression quantification of covid-19 in ct images: a new deep-learning approach, in: 2021 International Conference on Electrical Engineering and Photonics (EExPolytech), IEEE, pp. 72–76.
  • 13. F. Shariaty, M. Orooji, M. Mousavi, M. Baranov, E. Velichko, Automatic lung segmentation in computed tomography images using active shape model, in: 2020 IEEE International Conference on Electrical Engineering and Photonics (EExPolytech), IEEE, pp. 156–159.
  • 14. Pan F., Ye T., Sun P., Gui S., Liang B., Li L., Zheng D., Wang J., Hesketh R.L., Yang L., Zheng C. Time course of lung changes at chest CT during recovery from coronavirus disease 2019 (COVID-19). Radiology. 2020;295(3):715–721. doi: 10.1148/radiol.2020200370.
  • 15. Bernheim A., Mei X., Huang M., Yang Y., Fayad Z.A., Zhang N., Diao K., Lin B., Zhu X., Li K., et al. Chest ct findings in coronavirus disease-19 (covid-19): relationship to duration of infection. Radiology. 2020:200463. doi: 10.1148/radiol.2020200463.
  • 16. Chiroma H., Ezugwu A.E., Jauro F., Al-Garadi M.A., Abdullahi I.N., Shuib L. Early survey with bibliometric analysis on machine learning approaches in controlling covid-19 outbreaks. PeerJ Computer Science. 2020;6:e313. doi: 10.7717/peerj-cs.313.
  • 17. Cascio D., Magro R., Fauci F., Iacomi M., Raso G. Automatic detection of lung nodules in ct datasets based on stable 3d mass–spring models. Comput. Biol. Med. 2012;42:1098–1109. doi: 10.1016/j.compbiomed.2012.09.002.
  • 18. Chen B., Kitasaka T., Honma H., Takabatake H., Mori M., Natori H., Mori K. Automatic segmentation of pulmonary blood vessels and nodules based on local intensity structure analysis and surface propagation in 3d chest ct images. Int. J. Comput. Assisted. Radiol. Surg. 2012;7:465–482. doi: 10.1007/s11548-011-0638-5.
  • 19. S. Soltaninejad, M. Keshani, F. Tajeripour, Lung nodule detection by knn classifier and active contour modelling and 3d visualization, in: The 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP 2012), IEEE, pp. 440–445.
  • 20. Namin S.T., Moghaddam H.A., Jafari R., Esmaeil-Zadeh M., Gity M. Automated detection and classification of pulmonary nodules in 3d thoracic ct images, in: IEEE International Conference on Systems, Man and Cybernetics. IEEE; 2010. pp. 3774–3779.
  • 21. Y. Mekada, T. Kusanagi, Y. Hayase, K. Mori, J.-i. Hasegawa, J.-i. Toriwaki, M. Mori, H. Natori, Detection of small nodules from 3d chest x-ray ct images based on shape features, in: International Congress Series, volume 1256, Elsevier, pp. 971–976.
  • 22. Suzuki K., Kohlbrenner R., Epstein M.L., Obajuluwa A.M., Xu J., Hori M. Computer-aided measurement of liver volumes in ct by means of geodesic active contour segmentation coupled with level-set algorithms. Med. Phys. 2010;37:2159–2166. doi: 10.1118/1.3395579.
  • 23. Tanaka M., Shibata T., Okutomi M. Gradient-based low-light image enhancement, in: IEEE International Conference on Consumer Electronics (ICCE). IEEE; 2019. pp. 1–2.
  • 24. Shariaty F., Orooji M., Velichko N.E., Zavjalov V.S. Texture appearance model, a new model-based segmentation paradigm, application on the segmentation of lung nodule in the ct scan of the chest. Comput. Biol. Med. 2021:105086. doi: 10.1016/j.compbiomed.2021.105086.
  • 25. Shariaty F., Baranov M., Velichko E., Galeeva M., Pavlov V. Radiomics: extracting more features using endoscopic imaging, in: IEEE International Conference on Electrical Engineering and Photonics (EExPolytech). IEEE; 2019. pp. 181–194.
  • 26. T. Ojala, M. Pietikainen, D. Harwood, Performance evaluation of texture measures with classification based on kullback discrimination of distributions, in: Proceedings of 12th International Conference on Pattern Recognition, volume 1, IEEE, pp. 582–585.
  • 27. Livens S., Scheunders P., Van de Wouwer G., Van Dyck D. Wavelets for Texture Analysis, an Overview. 1997.
  • 28. Clausi D.A., Jernigan M.E. Designing gabor filters for optimal texture separability. Pattern Recogn. 2000;33:1835–1849.
  • 29. Haralick R.M., Shanmugam K., Dinstein I.H. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 1973:610–621.
  • 30. Punithavathy K., Ramya M., Poobal S. Analysis of statistical texture features for automatic lung cancer detection in pet/ct images, in: International Conference on Robotics, Automation, Control and Embedded Systems (RACE). IEEE; 2015. pp. 1–5.
  • 31. Lerski R., Barnett E., Morley P., Mills P., Watkinson G., MacSween R. Computer analysis of ultrasonic signals in diffuse liver disease. Ultrasound Med. Biol. 1979;5:341–343. doi: 10.1016/0301-5629(79)90004-8.
  • 32. Mayerhoefer M.E., Schima W., Trattnig S., Pinker K., Berger-Kulemann V., Ba-Ssalamah A. Texture-based classification of focal liver lesions on mri at 3.0 tesla: a feasibility study in cysts and hemangiomas. J. Magn. Reson. Imag. 2010;32:352–359. doi: 10.1002/jmri.22268.
  • 33. Skorton D.J., Collins S., Woskoff S., Bean J.A., Melton H., Jr. Range- and azimuth-dependent variability of image texture in two-dimensional echocardiograms. Circulation. 1983;68:834–840. doi: 10.1161/01.cir.68.4.834.
  • 34. Chan H.-P., Wei D., Helvie M.A., Sahiner B., Adler D.D., Goodsitt M.M., Petrick N. Computer-aided classification of mammographic masses and normal tissue: linear discriminant analysis in texture feature space. Phys. Med. Biol. 1995;40:857. doi: 10.1088/0031-9155/40/5/010.
  • 35. Li H., Giger M.L., Lan L., Brown J.B., MacMahon A., Mussman M., Olopade O.I., Sennett C. Computerized analysis of mammographic parenchymal patterns on a large clinical dataset of full-field digital mammograms: robustness study with two high-risk datasets. J. Digit. Imag. 2012;25:591–598. doi: 10.1007/s10278-012-9452-z.
  • 36. Chen W., Giger M.L., Li H., Bick U., Newstead G.M. Volumetric texture analysis of breast lesions on contrast-enhanced magnetic resonance images. Magn. Reson. Med. 2007;58:562–571. doi: 10.1002/mrm.21347.
  • 37. Nie K., Chen J.-H., Hon J.Y., Chu Y., Nalcioglu O., Su M.-Y. Quantitative analysis of lesion morphology and texture features for diagnostic prediction in breast mri. Acad. Radiol. 2008;15:1513–1525. doi: 10.1016/j.acra.2008.06.005.
  • 38. Fjeldbo C.S., Julin C.H., Lando M., Forsberg M.F., Aarnes E.-K., Alsner J., Kristensen G.B., Malinen E., Lyng H. Integrative analysis of dce-mri and gene expression profiles in construction of a gene classifier for assessment of hypoxia-related risk of chemoradiotherapy failure in cervical cancer. Clin. Cancer Res. 2016;22:4067–4076. doi: 10.1158/1078-0432.CCR-15-2322.
  • 39. Wibmer A., Hricak H., Gondo T., Matsumoto K., Veeraraghavan H., Fehr D., Zheng J., Goldman D., Moskowitz C., Fine S.W., et al. Haralick texture analysis of prostate mri: utility for differentiating non-cancerous prostate from prostate cancer and differentiating prostate cancers with different gleason scores. Eur. Radiol. 2015;25:2840–2850. doi: 10.1007/s00330-015-3701-8.
  • 40. Vignati A., Mazzetti S., Giannini V., Russo F., Bollito E., Porpiglia F., Stasi M., Regge D. Texture features on t2-weighted magnetic resonance imaging: new potential biomarkers for prostate cancer aggressiveness. Phys. Med. Biol. 2015;60:2685. doi: 10.1088/0031-9155/60/7/2685.
  • 41. Assefa D., Keller H., Ménard C., Laperriere N., Ferrari R.J., Yeung I. Robust texture features for response monitoring of glioblastoma multiforme on T1-weighted and T2-FLAIR MR images: a preliminary investigation in terms of identification and segmentation. Med. Phys. 2010;37:1722–1736. doi: 10.1118/1.3357289.
  • 42. Ryu Y.J., Choi S.H., Park S.J., Yun T.J., Kim J.-H., Sohn C.-H. Glioma: application of whole-tumor texture analysis of diffusion-weighted imaging for the evaluation of tumor heterogeneity. PLoS One. 2014;9. doi: 10.1371/journal.pone.0108335.
  • 43. Brynolfsson P., Nilsson D., Henriksson R., Hauksson J., Karlsson M., Garpebring A., Birgander R., Trygg J., Nyholm T., Asklund T. Adc texture—an imaging biomarker for high-grade glioma? Med. Phys. 2014;41:101903. doi: 10.1118/1.4894812.
  • 44. Lambin P., Rios-Velazquez E., Leijenaar R., Carvalho S., Van Stiphout R.G., Granton P., Zegers C.M., Gillies R., Boellard R., Dekker A., et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur. J. Cancer. 2012;48:441–446. doi: 10.1016/j.ejca.2011.11.036.
  • 45. Kumar V., Gu Y., Basu S., Berglund A., Eschrich S.A., Schabath M.B., Forster K., Aerts H.J., Dekker A., Fenstermacher D., et al. Radiomics: the process and the challenges. Magn. Reson. Imag. 2012;30:1234–1248. doi: 10.1016/j.mri.2012.06.010.
  • 46. Aerts H.J., Velazquez E.R., Leijenaar R.T., Parmar C., Grossmann P., Carvalho S., Bussink J., Monshouwer R., Haibe-Kains B., Rietveld D., et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014;5:1–9. doi: 10.1038/ncomms5006.
  • 47. Lovinfosse P., Polus M., Van Daele D., Martinive P., Daenen F., Hatt M., Visvikis D., Koopmansch B., Lambert F., Coimbra C., et al. Fdg pet/ct radiomics for predicting the outcome of locally advanced rectal cancer. Eur. J. Nucl. Med. Mol. Imag. 2018;45:365–375. doi: 10.1007/s00259-017-3855-5.
  • 48. Li Q., Bai H., Chen Y., Sun Q., Liu L., Zhou S., Wang G., Liang C., Li Z.-C. A fully-automatic multiparametric radiomics model: towards reproducible and prognostic imaging signature for prediction of overall survival in glioblastoma multiforme. Sci. Rep. 2017;7:1–9. doi: 10.1038/s41598-017-14753-7.
  • 49. Cho H.-h., Park H. Classification of low-grade and high-grade glioma using multi-modal image radiomics features, in: 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2017. pp. 3081–3084.
  • 50. Lerski R.A., Straughan K., Schad L., Boyce D., Blüml S., Zuna I. VIII. MR image texture analysis—an approach to tissue characterization. Magn. Reson. Imag. 1993;11:873–887. doi: 10.1016/0730-725x(93)90205-r.
  • 51. Al-Kilidar S.H., George L.E. Texture classification using gradient features with artificial neural network. J. Southwest Jiaot. Univ. 2020;55.
  • 52. Manjunath B.S., Ma W.-Y. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 1996;18:837–842.
  • 53. Smith J.R. Integrated Spatial and Feature Image Systems: Retrieval, Analysis and Compression. Columbia University; 1997.
  • 54. Deng Y. A Region Representation for Image and Video Retrieval. Ph.D. thesis, University of California, Santa Barbara; 1999.
  • 55. W. Ma, B. Manjunath, NeTra: a toolbox for navigating large image databases, in: International Conference on Image Processing, volume 1.
  • 56. Daugman J.G. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. JOSA A. 1985;2:1160–1169. doi: 10.1364/josaa.2.001160.
  • 57. Ojala T., Pietikainen M., Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002;24:971–987.
  • 58. Su K.-Y., Lee C.-H. Speech recognition using weighted hmm and subspace projection approaches. IEEE Trans. Speech Audio Process. 1994;2:69–79.
  • 59. G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
  • 60. Liu Q., Yu L., Luo L., Dou Q., Heng P.A. Semi-supervised medical image classification with relation-driven self-ensembling model. IEEE Trans. Med. Imag. 2020;39:3429–3440. doi: 10.1109/TMI.2020.2995518.
  • 61. Abbasi A.A., Hussain L., Awan I.A., Abbasi I., Majid A., Nadeem M.S.A., Chaudhary Q.-A. Detecting prostate cancer using deep learning convolution neural network with transfer learning approach. Cognitive Neurodynamics. 2020;14:523–533. doi: 10.1007/s11571-020-09587-5.
