Pattern Recognition. 2020 Nov 2;114:107747. doi: 10.1016/j.patcog.2020.107747

Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images

Adel Oulefki a, Sos Agaian b, Thaweesak Trongtirakul c, Azzeddine Kassah Laouar d
PMCID: PMC7605758  PMID: 33162612

Abstract

History shows that an infectious disease such as COVID-19 can stun the world quickly, causing massive losses of health and having a profound impact on the lives of billions of people, from both a safety and an economic perspective. For controlling the COVID-19 pandemic, the best strategy is to provide early intervention to stop the spread of the disease. In general, Computed Tomography (CT) is used to detect tumors, pneumonia, tuberculosis, emphysema, and other lung or pleural (the membrane covering the lungs) diseases. The disadvantages of CT imaging are its inferior soft-tissue contrast compared to MRI and the radiation exposure inherent to an X-ray-based modality. Lung CT image segmentation is a necessary initial step for lung image analysis. The main challenges for segmentation algorithms are exacerbated by intensity inhomogeneity, the presence of artifacts, and the closeness in gray level of different soft tissues. The goal of this paper is to design and evaluate an automatic tool for COVID-19 lung infection segmentation and measurement using chest CT images. Extensive computer simulations show the better efficiency and flexibility of this approach to CT image segmentation with image enhancement compared to state-of-the-art segmentation approaches, namely GraphCut, Medical Image Segmentation (MIS), and Watershed. Experiments were performed on the COVID-CT-Dataset, containing 275 CT scans that are positive for COVID-19, and on new data acquired from the EL-BAYANE center for Radiology and Medical Imaging. The mean values of accuracy, sensitivity, F-measure, precision, MCC, Dice, Jaccard, and specificity are 0.98, 0.73, 0.71, 0.73, 0.71, 0.71, 0.57, and 0.99, respectively, which is better than the methods mentioned above. The achieved results show that the proposed approach is more robust, accurate, and straightforward.

Keywords: Coronavirus Disease (COVID-19), Computer-Aided Detection (CAD), COVID-19 lesion, Segmentation, Color-mapping, 3D Visualization

1. Introduction

The eruption of the severe acute respiratory syndrome COVID-19 continues to grow, with over 25,414,924 confirmed infections reported worldwide as of the end of August 2020, according to Worldometer [http://worldometers.info/coronavirus/].

To control the spread of this virus, screening large numbers of suspected cases for appropriate quarantine and treatment has become an urgent priority. Most COVID-19 screening relies on laboratory pathogen testing, which is the most accurate test available; still, it is time-consuming and produces significant false-negative results [1].

Patients suspected of having a respiratory infection or pneumonia are admitted to the hospital, where specific diagnostic procedures with laboratory and other non-laboratory tests identify the cause, location, and severity of the infection. The laboratory tests include standard procedures, like blood gas analysis, complete blood count (CBC), and pleural effusion analysis [2], in a process that requires transporting samples from the hospital to the lab, which takes up valuable time. On the flip side, the non-laboratory tests are computer-assisted imagery analysis techniques used to inspect and register the lung regions using digital chest radiography (standard 2D X-ray) or CT scans. In contrast to conventional 2D X-ray, which uses a fixed X-ray tube and does not provide much detail, a 3D CT scan is a nondestructive scanning technology with the advantage of providing a very detailed view of the lung, including bone, soft tissue, and blood vessels [2]. Advantages of CT imaging include low cost, wide availability, high spatial resolution with current multi-slice scanners, short scan time, and high sensitivity. Disadvantages include inferior soft-tissue contrast compared to MRI and X-ray-based radiation exposure [3].

Various computer vision methodologies have been proposed to address different aspects of combating the COVID-19 pandemic, including segmentation and classification methods [4]. These approaches can be classified into two fundamental classes: classical machine learning and deep learning methods [5].

In general, image segmentation has become an increasingly important task in radiology research and clinical practice. The goal of segmentation is to separate regions or objects of interest from other parts of the body to make quantitative measurements. More specifically, segmentation enables further diagnostic insights, including measuring the area and volume of segmented structures. The main challenges for segmentation algorithms are exacerbated by intensity inhomogeneity, the presence of artifacts, and the closeness in gray level of different soft tissues.

Various aspects of segmentation algorithms have been explored for many years. Existing segmentation approaches can be classified into three main classes: manual, semi-automatic, and fully-automatic. Manual segmentation methods are time-consuming, monotonous, and can be affected by inter- and intra-observer variability. Semi-automatic approaches are already widespread and integrated with publicly available software packages. Finally, fully-automatic procedures do not need user intervention. Each of these methods has its advantages and limitations. However, even now, investigators try to make segmentation steps as easy as possible with the help of automatic software tools. Still, the problem of segmentation remains challenging [6], because (1) no general solution can be applied to the large and continually growing number of different regions of interest (ROI), (2) ROI properties vary vastly, (3) medical imaging modalities differ, and (4) signal homogeneity changes accordingly, mainly through variability and noise for each object [7]. Besides, as noted in Shi et al. [8]: (a) while plenty of AI systems have been proposed to assist in diagnosing COVID-19 in clinical practice, there are only a few works related to infection segmentation in CT scans [9]; (b) most COVID-19 imaging data-sets focus on diagnosis, with only one data-set providing segmentation labels; and (c) the qualitative evaluation of infection and longitudinal changes in CT scans could thus offer useful and vital information in fighting against COVID-19. However, these methods are not applicable when only a tiny data-set is available. In this paper, we focus on a segmentation system to automatically quantify infection regions of interest (ROIs) and measure the volume of the infection area.

In [10], the authors presented a broad survey of computer vision methods to combat the challenge of the COVID-19 pandemic. Some examples of image segmentation and classification methods in COVID-19 applications are summarized in Table 1, along with the results obtained by each approach.

Table 1.

Summary of combined COVID-19 lesion segmentation-classification approaches.

Authors | Database used | Approach used | Obtained results | Highlights
Zheng et al. [12] | 499 + 131 CT volumes | U-Net | Accuracy of 0.901 | Rapid COVID-19 lesion diagnosis; great potential for clinical application
Cao et al. [13] | 2 patients | U-Net architecture | — | Quantitative pipeline
Gozes et al. [14] | Testing set of 157 patients (China, U.S.) | Deep-learning CT image analysis | 0.996 AUC, 92.2% specificity | Heat map of a 3D volume display; measures the progression of disease over time
Jin et al. [15] | 1136 cases (723 positive) | UNet++, CNN | Sensitivity of 0.974, specificity of 0.922 | The system automatically highlights all lesion regions for faster examination
Ying et al. [8] | 88 patients diagnosed with COVID-19 | ResNet-50 | AUC of 0.95, recall (sensitivity) of 0.96 | Rapid and accurate identification of COVID-19
Shan et al. [8] | 249 COVID-19 patients | VB-Net | Dice similarity coefficient of 91.6 | Deep-learning-based system for automatic segmentation of infection regions as well as the entire lung from chest CT scans
Shen et al. [16] | 44 confirmed COVID-19 cases | Threshold-based region growing | R up to 0.7679, P < 0.05 | Moderate correlation with lesion percentage scores obtained by radiologists

In this article, we take advantage of classical machine-learning-based COVID-19 segmentation methods, since they (a) provide high accuracy while deep networks require large data-sets, (b) are computationally cheap, and (c) are easy to design, interpret, and use, which is helpful for clinician and consumer communities in developing countries.

Finally, clinical detection and diagnosis of COVID-19 by experienced doctors is often a tedious task. There is a need for a simple, fast, automated method, able to run on low-power computational devices (including cellphones), that can provide segmentation and quantification of a patient's infection regions every 3-5 days and monitor the progression of infected patients using lung CT scan images. The qualitative evaluation of longitudinal changes in CT scans could thus offer essential information in fighting against COVID-19. To address the above issues, this paper proposes a novel tool for automatic COVID-19 lung infection segmentation and measurement using chest CT images. The presented architecture has the potential to quantify the COVID-19-infected regions and monitor longitudinal disease changes. The presented system will be very beneficial in helping developing countries, such as those in Africa, through the COVID-19 pandemic [11].

Therefore, the expected outcome of the proposed work is to take into account the coronal view of the CT scan for a better experimental interpretation. Although there are several approaches for medical image segmentation, few studies have been tested on images with COVID-19 lesions to observe their ability to segment the lesion precisely [8]. Consequently, those algorithms may not be the best choice for searching for the smaller homogeneous regions in medical images that may contain the features of a disease such as COVID-19. This challenge motivates us to observe the performance of some state-of-the-art algorithms proposed in the literature when tackling the COVID-19 lesion area. Besides, it leads us to offer a robust scheme with an excellent ability to segment the COVID-19 lesion accurately.

The proposed framework is developed and investigated within the Matlab® environment, and the essential COVID-19 CT scan images are collected from the COVID-CT-Dataset [17]. We also use a data-set from the EL-BAYANE center for Radiology and Medical Imaging, which includes CT images of patients, one of whom underwent CT imaging twice. The main contributions of this paper are as follows:

  • A new image contrast enhancement algorithm combining linear and logarithmic stitching parametric algorithms.

  • An improved image-dependent multilevel image thresholding method.

  • A method to compute the image-dependent number of threshold levels k.

  • An image segmentation approach to minimize the over-segmented regions.

The remainder of this paper is organized as follows. After the related work on COVID-19 segmentation briefly reviewed in Section 1, Section 2 outlines the proposed pipeline. Section 3 presents the measurement comparison from both objective and subjective perspectives, as well as the coloring of the segmentation results. We conclude by highlighting the achieved outcomes in Section 4.

2. Materials and methods

In the remainder of this section, we report the enhancement method, along with the proposed segmentation and visualization functions, whose impact is quite significant in biomedical and medical research [18]. The contributions of this section are:

  • A method to compute the number of image-dependent thresholds k, based on the local minima of a projected 2D histogram.

  • A method to correct thresholds by using the k-largest entries of Kapur's entropy.

  • A method to compute the optimal threshold by taking a weighted combination of corrected thresholds.

  • A method to compute Kapur's entropy using a projected 2D histogram.

  • A new image contrast enhancement algorithm combining linear and logarithmic stitching parametric algorithms. The linear function works well for normally exposed imaging components, while the logarithmic function works well for under-exposed imaging components.

The outline of the proposed COVID-19 enhancement, segmentation, and visualization method is described in Fig. 1. First, the lung region is extracted from the input CT images. Then, the left and right lungs are separated [19]. After that, image enhancement is applied to the right and left lungs separately (detailed in Section 2.1.1). At this stage, we propose a modified local contrast enhancement for detecting small, detailed CT targets. Our modification builds on the local contrast method of Chen et al. [20].
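For illustration, below is a minimal, self-contained sketch of the first two stages of this flow (lung extraction and left/right separation), assuming a 2D grayscale CT slice; the threshold-based extraction and mid-line split are naive stand-ins for the actual preprocessing, not the authors' released code.

```python
import numpy as np
from scipy import ndimage

def extract_lung_region(ct: np.ndarray) -> np.ndarray:
    """Crude lung mask: keep dark (air-filled) regions of the slice."""
    mask = ct < ct.mean()                              # lungs are darker than soft tissue
    mask = ndimage.binary_opening(mask, iterations=2)  # remove small speckles
    return ct * mask

def split_lungs(lungs: np.ndarray):
    """Split the masked slice at the image mid-line into left/right halves."""
    mid = lungs.shape[1] // 2
    return lungs[:, :mid], lungs[:, mid:]
```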

Fig. 1. Flowchart of the proposed COVID-19 enhancement, segmentation, and visualization.

2.1. Enhancement

In this sub-section, we focus on images acquired with degraded quality, to improve the quality of CT images for a more accurate perception of information, both for human viewers and for automated COVID-19 lung-infected-region segmentation and measurement systems. In Fig. 2, we illustrate the steps used for enhancement, separating the considered image into small tiles. In addition, we generate a directional block; in each directional filter, we include a local mean filter.

Fig. 2. Illustrative example of a (3-by-3) directional filter operation with a locally adaptive filter; (a) directional pattern; (b) (3-by-3) local adaptive filter.

2.1.1. Generate a contrast metric

Original CT-scan images present bright regions that carry significant, noticeable information. However, some parts are over-bright, and some parts are dark. Prior to classifying details, it is important to enhance local contrast in order to obtain a more accurate segmentation. Generally, COVID-19 targets have positive local contrast, which means that the lesion areas are brighter than the local background in all directions. To control the proper luminance level, we propose two enhancement functions: (i) an exponential function and (ii) a logarithmic function. The exponential function slightly increases local contrast, and it acts as a preserving function when the exponential parameter is set to 1.0. The other proposed function is written in logarithmic terms: it strongly increases local contrast in dark regions while simultaneously preserving local contrast in bright regions. The essential details can be visualized by combining the two enhanced features, written as:

$Y_{i,j}=(L-1)\,\dfrac{E_{i,j}-\min\{E_{i,j}\}}{\max\{E_{i,j}\}-\min\{E_{i,j}\}}$ (1)
$E_{i,j}=\alpha A_{i,j}+\beta B_{i,j}$ (2)
$A_{i,j}=\left(\dfrac{S_{i,j}^{2}}{|G_{i,j}|+\psi}\right)^{\gamma_a};\qquad B_{i,j}=\log_{\gamma_b}\!\left(1+\dfrac{S_{i,j}^{2}}{|G_{i,j}|+\psi}\right)$ (3)
$|G_{i,j}|=\max_{z}\bigl|I_{i,j}\ast f_{x,y,z}\bigr|$ (4)

where $Y_{i,j}$ is the visualized contrast metric; $L$ is the total number of luminance levels in the visualized domain; $E_{i,j}$ is the contrast metric; $A_{i,j}$ is the linear contrast metric; $B_{i,j}$ is the logarithmic contrast metric; $\alpha$ is a constant of the linear contrast metric; $\beta$ is a constant of the logarithmic contrast metric; $S_{i,j}$ is a filtered (dilated) structural image [21]; $\gamma_a$ is the contrast enhancement parameter of the linear contrast metric; $\gamma_b$ is the contrast enhancement parameter of the logarithmic contrast metric; $\psi$ is a small number that avoids a calculation error when any element of the directional gradient edge metric equals zero; $|G_{i,j}|$ is the directional gradient edge metric; $I_{i,j}$ is the input image; $f_{x,y,z}$ is a directional compass mask in the $z$-direction (compass masks are generated by taking a single mask and rotating it to the eight primary compass orientations); and $\ast$ is the two-dimensional convolution operator.
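As a concrete illustration of Eqs. (1)-(4), the sketch below computes the visualized contrast metric $Y_{i,j}$. The eight center-difference compass kernels, the 3-by-3 dilation window, and the values of $\alpha$, $\beta$, $\gamma_a$, and $\gamma_b$ are illustrative assumptions; the paper does not fix them here.

```python
import numpy as np
from scipy import ndimage

def contrast_metric(img, alpha=1.0, beta=1.0, gamma_a=1.0, gamma_b=2.0,
                    psi=1e-6, levels=256):
    # Eight compass kernels: center-difference toward each of the 8 neighbors.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    responses = []
    for dy, dx in offsets:
        m = np.zeros((3, 3))
        m[1, 1] = -1.0
        m[1 + dy, 1 + dx] = 1.0
        responses.append(np.abs(ndimage.convolve(img.astype(float), m)))
    G = np.max(responses, axis=0)                # directional gradient edge metric, Eq. (4)

    S = ndimage.grey_dilation(img, size=(3, 3))  # dilated structural image, Eq. (5)
    ratio = S.astype(float) ** 2 / (G + psi)
    A = ratio ** gamma_a                         # linear contrast metric, Eq. (3)
    B = np.log1p(ratio) / np.log(gamma_b)        # logarithmic contrast metric, Eq. (3)
    E = alpha * A + beta * B                     # combined contrast metric, Eq. (2)
    return (levels - 1) * (E - E.min()) / (E.max() - E.min() + psi)  # rescaling, Eq. (1)
```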

Grayscale dilation generalizes binary dilation to structuring elements that have a height [22]. The dilation of $A(x,y)$ by $B(x,y)$ is defined as:

$(A\oplus B)(x,y)=\max\{\,A(x-x',\,y-y')+B(x',y')\;|\;(x',y')\in D_b\,\},$ (5)

where $A$ is the original image, $B$ is the structuring element, and $D_b$ is the domain of $B$. A comparison of the proposed masking of pneumonia regions using different structural filters is presented in Fig. 3.
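For clarity, Eq. (5) can be transcribed directly as a reference implementation; it assumes a rectangular structuring-element domain, and in practice a library routine such as scipy.ndimage.grey_dilation computes the same result far more efficiently.

```python
import numpy as np

def grey_dilate(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """(A ⊕ B)(x, y) = max over (x', y') in D_b of A(x - x', y - y') + B(x', y')."""
    h, w = B.shape
    py, px = h // 2, w // 2
    # Pad with -inf so out-of-image positions never win the maximum.
    Ap = np.pad(A.astype(float), ((py, py), (px, px)), constant_values=-np.inf)
    out = np.full(A.shape, -np.inf)
    for dy in range(h):
        for dx in range(w):
            shifted = Ap[dy:dy + A.shape[0], dx:dx + A.shape[1]]
            out = np.maximum(out, shifted + B[h - 1 - dy, w - 1 - dx])
    return out
```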

Fig. 3. The proposed masking pneumonia regions comparison; (a) a CT-scan image; (b) a dilated structural image by using a ‘circle’ dilating filter; (c) a dilated structural image by using a ‘plus’ dilating filter; (d) a dilated structural image by using a ‘square’ dilating filter; (e) a dilated structural image by using a ‘circle’ dilating filter; (f) a visualized contrast metric; (g) a visualized contrast metric by using a ‘circle’ dilating filter; (h) a visualized contrast metric by using a ‘plus’ dilating filter; (i) a visualized contrast metric by using a ‘square’ dilating filter; (j) a visualized contrast metric by using a ‘circle’ dilating filter.

2.2. Masking metric and multilevel image thresholding for image segmentation by optimizing Kapur entropy

In this sub-section, we focus on presenting a method that divides a CT image into COVID-19-infected and non-infected lung regions. Extensive computer simulations show that using a bi-level threshold (pneumonia and non-pneumonia regions) for COVID-19 CT segmentation is not efficient. Otsu- and Kapur-based methods are the most used for multilevel-threshold image segmentation. Otsu's method chooses an optimal threshold by maximizing the between-class variance, while Kapur et al. [23] determine the threshold by maximizing the entropy of the object and background pixels. Several researchers have implemented evolutionary algorithms for optimal multilevel threshold selection. Nevertheless, all these models require more computational time, which makes multilevel thresholding impractical for most image processing and computer vision applications with weak computational resources.
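For reference, a minimal sketch of classic bi-level Kapur thresholding is shown below: the threshold $T$ is chosen to maximize the sum of the entropies of the background and object gray-level distributions.

```python
import numpy as np

def kapur_threshold(img: np.ndarray, levels: int = 256) -> int:
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_T, best_H = 0, -np.inf
    for T in range(1, levels):
        w0, w1 = p[:T].sum(), p[T:].sum()
        if w0 == 0 or w1 == 0:
            continue                                   # skip degenerate splits
        p0, p1 = p[:T] / w0, p[T:] / w1                # class-conditional distributions
        H0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))  # background entropy
        H1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))  # object entropy
        if H0 + H1 > best_H:
            best_H, best_T = H0 + H1, T
    return best_T
```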

2.2.1. Improved Kapur entropy-based multilevel thresholding procedure (masking metric generation)

In this part, we present an improved multilevel Kapur's-entropy-based thresholding technique that adds a new element, namely choosing the image-dependent number of threshold levels automatically. This helps to categorize small targets in CT scan images and reduces the computational complexity of Kapur's multilevel thresholding, which depends heavily on the number of thresholds and rapidly increases as that number grows, as illustrated in the masking metric Algorithm 1 (Steps 1 to 9). Bi-level thresholding segments an image into two different regions: pixels with gray values greater than a specific value T are classified as object pixels, and the others, with gray values less than T, are classified as background pixels. Multilevel thresholding is a process that segments a gray-level image into several different regions, corresponding to one background and several objects of certain brightness. Fig. 5 shows that, by increasing the number of thresholds, the thresholded image tends towards the original image (Fig. 4).
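The sketch below illustrates the idea of deriving the image-dependent number of thresholds $k$ from the local minima of a smoothed histogram; the smoothing window and prominence filter are illustrative choices, not the paper's exact rule.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_k(img: np.ndarray, levels: int = 256, smooth: int = 7) -> int:
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    kernel = np.ones(smooth) / smooth
    h = np.convolve(hist.astype(float), kernel, mode="same")  # suppress noise-induced minima
    valleys, _ = find_peaks(-h, prominence=h.max() * 0.01)    # local minima of the histogram
    return max(1, len(valleys))   # the image-dependent number of thresholds k
```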

Fig. 6. Illustrative example of the proposed segmentation lesion detection; (a) Enhanced CT scan; (b) Ground truth; (c) Proposed segmented mask; (d) Segmented mask using Medical Image Segmentation (MIS) [24]; (e) Segmented mask using GraphCut [25]; (f) Segmented mask using Watershed [26].

Algorithm 1. Masking algorithm: pseudo-code of multilevel thresholding.

Fig. 5. Results using the Kapur threshold: (a) input lung image; (b) 2-level thresholding; (c) 4-level; (d) 8-level of the proposed segmentation, with their histograms. It is based on the modified Kapur's entropy computation (see Fig. 6).

Fig. 4. The proposed masking pneumonia regions comparison; (a) original CT image; (b) one-dimensional histogram of the lung tissue region; (c) dilated image; (d) two-dimensional histogram; (e) one-dimensional histogram projected from the 2D histogram; (f) local minima numbers on histograms (the image-dependent number of thresholds).

2.3. Classify a lung region into small sub-regions

To mask all possible small pneumonia regions, the masking equation needs to control the minimum number of pixels in each region and classify sub-regions into two classes using the proposed fractional threshold correction. This is described in the masking metric algorithm (see Steps 9 and 10).
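A minimal sketch of this size-based filtering is given below, assuming 4-connected components and an illustrative min_pixels value (the paper does not fix this parameter here).

```python
import numpy as np
from scipy import ndimage

def filter_small_regions(mask: np.ndarray, min_pixels: int = 20) -> np.ndarray:
    labels, n = ndimage.label(mask)                     # 4-connected components by default
    if n == 0:
        return mask.astype(bool)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # pixel count of each component
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_pixels                      # keep only sufficiently large regions
    return keep[labels]
```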

3. Results and discussions

In this section, we present the experimental results of the proposed image segmentation pipeline. Firstly, we show that our method achieves state-of-the-art results on the COVID-CT-Dataset, containing 275 CT scans positive for COVID-19, with ground-truth lesions manually labeled by a radiologist. Second, we study new CT images from a local hospital, covering 22 patients tested for the corona-virus. Third, we show the 3D visualization and the effect of the COVID-19 lesion on the patients' lungs.

The empirical results achieved in the proposed work are presented and discussed in this section. The developed scheme is executed on a MacBook Pro® with a 2.5 GHz Intel Core i7 processor and 16 GB of RAM, equipped with the MATLAB® Academic Version. Experimental results confirm that the proposed method requires a mean time of 0.4 s to process one CT image; the execution time could be further improved on a workstation with higher computational capability. The advantage of the proposed system lies in being fully automated, which ensures a short turnaround time for both segmentation and enhancement.

To evaluate and determine the performance of the proposed segmentation approach, the statistical values of the segmented COVID-19 lesions are compared with the results of the GraphCut [25], Watershed [26], Medical Image Segmentation (MIS) [24], U-Net [27], Attention-UNet [28], Gated-UNet [29], Dense-UNet [30], U-Net++ [31], Inf-Net [32], Seg-Net [33], BiSe-Net [34], and ESP-Net [35] methods. The comparison uses CT scans of patients whose COVID-19 lesions were considered suspect by a medical expert. A lesion may also be absent from an image, in which case the patient is considered normal and healthy, and no segmentation is required. Besides, we statistically assessed the segmentation quality of the proposed method against GraphCut, Watershed, and MIS by picking: Accuracy [36], Sensitivity [37], F-Measure [37], Precision [38], MCC (Matthews Correlation Coefficient) [37], Dice [39], Jaccard [39], and Specificity [37]. By definition, higher values on these indexes imply better segmentation quality. The mathematical formulas of these metrics are expressed below:

$\mathrm{Accuracy}=\dfrac{TP+TN}{TP+TN+FP+FN}$ (18)
$\mathrm{Sensitivity}=\dfrac{TP}{TP+FN}$ (19)
$F\text{-}\mathrm{Measure}=\dfrac{2\,TP}{2\,TP+FP+FN}$ (20)
$\mathrm{Precision}=\dfrac{TP}{TP+FP}$ (21)
$\mathrm{MCC}=\dfrac{TP\cdot TN-FP\cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$ (22)
$\mathrm{Dice}=\dfrac{2\,TP}{2\,TP+FP+FN}$ (23)
$\mathrm{Jaccard}=\dfrac{\mathrm{Dice}}{2-\mathrm{Dice}}$ (24)
$\mathrm{Specificity}=\dfrac{TN}{TN+FP}$ (25)

where (TP) stands for True Positives, (FP) stands for False Positives, (FN) stands for False Negatives, and (TN) stands for True Negatives.
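All eight metrics follow directly from the confusion-matrix counts; the sketch below computes them for a binary predicted mask against its ground truth.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    TP = float(np.sum(pred & gt))
    TN = float(np.sum(~pred & ~gt))
    FP = float(np.sum(pred & ~gt))
    FN = float(np.sum(~pred & gt))
    dice = 2 * TP / (2 * TP + FP + FN)
    return {
        "accuracy":    (TP + TN) / (TP + TN + FP + FN),        # Eq. (18)
        "sensitivity": TP / (TP + FN),                         # Eq. (19)
        "f_measure":   2 * TP / (2 * TP + FP + FN),            # Eq. (20)
        "precision":   TP / (TP + FP),                         # Eq. (21)
        "mcc": (TP * TN - FP * FN) /
               np.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)),  # Eq. (22)
        "dice":        dice,                                   # Eq. (23)
        "jaccard":     dice / (2 - dice),                      # Eq. (24)
        "specificity": TN / (TN + FP),                         # Eq. (25)
    }
```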

3.1. Objective evaluation

To display each quantitative metric in one figure, we selected violin plots to present the comparisons, calculating the means of the segmentation quality measurements of the proposed method and of the GraphCut, Watershed, and MIS approaches over the COVID-CT-Dataset [17]. Violin plots show the probability distribution of the data at different values. The asymmetric outer shape (in black) represents all possible results. Furthermore, it depicts the interquartile range, where more than 50% of the data is contained between the two extremities of the black line, with the mean marked in the middle (+), as illustrated in Fig. 7. Cyan, magenta, and yellow illustrate the segmentation results of GraphCut, Watershed, and MIS, respectively, while red illustrates the segmentation results of the proposed method.
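For readers reproducing the figure, a minimal matplotlib sketch of such a violin plot is shown below; the scores are random stand-ins for the per-image measurements.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
scores = [rng.beta(a, 2, 100) for a in (8, 5, 3, 4)]  # stand-in per-image Dice scores
fig, ax = plt.subplots()
ax.violinplot(scores, showmeans=True, showextrema=True)
ax.set_xticks([1, 2, 3, 4])
ax.set_xticklabels(["Proposed", "GraphCut", "Watershed", "MIS"])
ax.set_ylabel("Dice")
plt.show()
```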

Fig. 7. Violin plots with median values of the proposed method against the GraphCut, Watershed, and MIS segmentation methods, using the accuracy, sensitivity, F-Measure, precision, MCC, Dice, Jaccard, and specificity metrics.

The comparison was conducted with the eight segmentation quality metrics listed above. Fig. 7 contains eight separate plots, each showing four different distributions. Interestingly, the means and interquartile ranges differ between the four distributions, as do the shapes of the distributions. Higher overall values of accuracy (Fig. 7a), sensitivity (Fig. 7b), F-Measure (Fig. 7c), precision (Fig. 7d), MCC (Fig. 7e), Dice (Fig. 7f), Jaccard (Fig. 7g), and specificity (Fig. 7h) indicate better segmentation performance.

It is evident that the statistics of the accuracy, F-measure, MCC, Dice, and Jaccard metrics provided by the proposed segmentation are greater than those of the GraphCut, Watershed, and MIS unsupervised methods. However, in sensitivity, precision, and specificity, MIS, GraphCut, and Watershed, respectively, come very close to the results given by the proposed method. Quantitative segmentation results are also summarized in Table 2, which reports the mean and standard deviation of each method. As shown in Table 2, the best results are obtained by the proposed method, but the MIS, GraphCut, and Watershed approaches compete in terms of the sensitivity, precision, and specificity metrics.

Table 2.

Quantitative segmentation using the proposed, GraphCut, Watershed [42], and MIS unsupervised methods on the COVID-CT-Dataset [17]. The best two results are shown in red and blue fonts.

Quality | Proposed | GraphCut | Watershed | MIS
Accuracy | 0.989 ± 0.00 | 0.972 ± 0.02 | 0.982 ± 0.03 | 0.947 ± 0.02
Sensitivity | 0.733 ± 0.16 | 0.530 ± 0.24 | 0.508 ± 0.27 | 0.947 ± 0.09
F-Measure | 0.714 ± 0.14 | 0.582 ± 0.24 | 0.492 ± 0.20 | 0.538 ± 0.20
Precision | 0.739 ± 0.16 | 0.808 ± 0.43 | 0.682 ± 0.34 | 0.405 ± 0.20
MCC | 0.719 ± 0.13 | 0.631 ± 0.30 | 0.533 ± 0.20 | 0.584 ± 0.16
Dice | 0.714 ± 0.14 | 0.582 ± 0.24 | 0.492 ± 0.20 | 0.538 ± 0.20
Jaccard | 0.573 ± 0.15 | 0.451 ± 0.23 | 0.349 ± 0.16 | 0.396 ± 0.20
Specificity | 0.994 ± 0.05 | 0.995 ± 0.05 | 0.996 ± 0.07 | 0.951 ± 0.03

With regard to recent supervised methods, there is a wide range of applications based on neural networks (deep learning) that contribute actively to fighting the COVID-19 pandemic [8]; nevertheless, few of them are currently mature enough to show a viable impact on lesion detection [40]. The main advantage of these methods lies in their ability to outperform shallow techniques, but this comes with disadvantages, such as the requirement to process a large amount of sensed data. They are computationally expensive, and their development process takes longer [41].

Due to the lack of annotated medical images for lung segmentation, both semi-supervised and unsupervised approaches are in high demand for analyzing the COVID-19 lesion [8]. The advantage of the proposed method is also confirmed by Table 3. As can be seen, compared with the U-Net [27], Attention-UNet [28], Gated-UNet [29], Dense-UNet [30], U-Net++ [31], Inf-Net [32], Seg-Net [33], BiSe-Net [34], and ESP-Net [35] supervised methods, the proposed method yields better segmentation results, with higher Dice, precision, and specificity. In contrast, U-Net++ provides good, competitive results, especially on the sensitivity metric.

Table 3.

Quantitative segmentation comparison of the proposed method against the U-Net [27], Attention-UNet [28], Gated-UNet [29], Dense-UNet [30], U-Net++ [31], Inf-Net [32], Seg-Net [33], BiSe-Net [34], and ESP-Net [35] supervised methods. The best two results are shown in red and blue fonts.

Segmentation methods | Dice | Sensitivity | Specificity | Precision
U-Net [27] | 0.308 | 0.678 | 0.836 | 0.265
Attention-UNet [44] | 0.466 | 0.723 | 0.930 | 0.390
Gated-UNet [29] | 0.447 | 0.674 | 0.956 | 0.375
Dense-UNet [30] | 0.410 | 0.607 | 0.977 | 0.415
U-Net++ [31] | 0.444 | 0.877 | 0.929 | 0.369
Inf-Net [43] | 0.579 | 0.870 | 0.974 | 0.500
Seg-Net [33] | 0.705 | 0.852 | 0.954 | —
BiSe-Net [34] | 0.706 | 0.852 | 0.852 | —
ESP-Net [35] | 0.706 | 0.859 | 0.954 | —
Proposed | 0.714 | 0.733 | 0.994 | 0.739

3.2. Subjective evaluation

The COVID-CT-Dataset [17] was built from 760 preprints on COVID-19 posted from January to March 2020; the CT scans are associated with captions describing the clinical findings. Firstly, a medical expert created the ground truth (GT) for the localization of the different COVID-19 lesion structures, since the clinical CT scan images required manual annotation. This manual substructure segmentation meets specific radiological criteria so that it can be recognized by algorithms, rather than offering a biological interpretation of the annotated image patterns. Moreover, we use a small data-set from the EL-BAYANE center for Radiology and Medical Imaging, which includes CT images of ten patients, one of whom was scanned twice.

Fig. 8 illustrates the qualitative comparison of the proposed segmentation results and the ground truths against three segmentation approaches: GraphCut, Watershed, and MIS. We picked disparate images from the COVID-CT-Dataset, together with the corresponding ground truth segmented manually by a confirmed radiologist (in black). After that, we applied the proposed method along with the GraphCut, Watershed, and MIS segmentation methods; the results are shown in Fig. 8. Cyan, magenta, and yellow illustrate the segmentation results of GraphCut, Watershed, and MIS, respectively, while red illustrates the segmentation results of the proposed method.

Fig. 8. Visual comparison of COVID-19 infection segmentation results against GT.

It can be seen clearly that the segmented COVID-19 lesions match the shape of the ground truth almost exactly when using the proposed method. On the other hand, GraphCut, Watershed, and MIS over-segment or miss some of the COVID-19 lesion areas. Note that in the first row, second column, the Watershed method cut the lesion out, since it could not correctly identify the border of the lesion region.

Color-mapping the images is the last step in our work; it will assist the radiologist in picking out details, examining the severity level of the infection maps, estimating quantitative values, and noticing patterns in the COVID-19 regions in a more intuitive fashion. Thus, we picked the ‘Jet’ colormap [45], which has a significant impact in our case. For example, the interpretation of ‘Jet’ maps is split into three color parts (red, blue, green) to distinguish image lesion features. Fig. 9 illustrates six different cases of lesion regions from the COVID-19-CT data. In the first row, we can see that blue and green present low- to moderate-risk regions, while yellow and red in the last row show three high-risk regions.
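A minimal sketch of this color-mapping step is given below: lesion pixels, rendered with the ‘Jet’ colormap, are overlaid on the grayscale CT slice. Using the raw intensity inside the mask as the severity value is an illustrative simplification.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_severity(ct: np.ndarray, lesion_mask: np.ndarray):
    severity = np.ma.masked_where(~lesion_mask.astype(bool), ct)  # hide non-lesion pixels
    plt.imshow(ct, cmap="gray")                  # grayscale anatomy underneath
    plt.imshow(severity, cmap="jet", alpha=0.6)  # blue/green = low, yellow/red = high risk
    plt.axis("off")
    plt.show()
```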

Fig. 9. Visual assessment of the COVID-19 severity level of the infection segmentation.

3.3. 3D visualization and measurements

CT images of a total of 22 participants were retrospectively collected; 10 of them were confirmed with COVID-19 by radiologists. Statistical analysis showed that the COVID-19 lesion burden in the lungs differed significantly among patients (Table 4). There were 6 cases with an infection ratio below 10.00% and 4 cases above 10.00%, the latter presenting expansion of lesions over the lungs.

Table 4.

Summary of the effect of COVID-19 on nine patients, calculated from the volumes of the lungs and lesions; the last column presents the ratio in % between the lesion and lung volumes.

Patient N | Sex | Age | Lungs (in Vx) | Lungs (in cm³) | Lesion (in Vx) | Lesion (in cm³) | Ratio (in %)
1 | F | 34 | 1,076,162 | 3087 | 18,253 | 52.35 | 1.69
2 | M | 41 | 6,899,553 | 3960.31 | 273,585 | 157.03 | 3.96
3 | F | 48 | 6,782,980 | 3893.4 | 634,541 | 364.22 | 9.35
4 | M | 60 | 3,839,583 | 2204.99 | 1,199,414 | 688.79 | 31.23
5 | F | 67 | 4,682,644 | 2687.82 | 24,847 | 14.2621 | 0.05
6 | M | 70 | 5,632,933 | 2197.13 | 1,574,386 | 614.09 | 27.94
7 | F | 72 | 5,101,547 | 2006.84 | 1,136,467 | 447.06 | 22.27
8 | M | 72 | 5,469,199 | 2944.53 | 142,521 | 76.73 | 2.6
9 (T=1) | M | 79 | 7,496,200 | 4302.79 | 140,203 | 80.4759 | 1.87
9 (T=2) | M | 79 | 6,621,017 | 3515.48 | 1,306,384 | 693.63 | 19.73
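As a sketch of how the Table 4 columns relate: the volumes follow from the voxel counts and the scanner's voxel spacing (the spacing below is an assumed example, not a value from the paper), and the last column is the lesion-to-lung voxel ratio in percent.

```python
import numpy as np

def volume_stats(lung_mask: np.ndarray, lesion_mask: np.ndarray,
                 spacing_mm=(1.0, 0.7, 0.7)):
    voxel_cm3 = float(np.prod(spacing_mm)) / 1000.0  # mm^3 per voxel -> cm^3
    lung_vx = int(lung_mask.sum())
    lesion_vx = int(lesion_mask.sum())
    return {
        "lungs_vx": lung_vx,    "lungs_cm3": lung_vx * voxel_cm3,
        "lesion_vx": lesion_vx, "lesion_cm3": lesion_vx * voxel_cm3,
        "ratio_pct": 100.0 * lesion_vx / lung_vx,    # last column of Table 4
    }
```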

The effect of COVID-19 can also be seen in Table 4 (#9T=1 vs. #9T=2). Patient number 9 underwent a first examination on April 27th and a second on May 7th. Applying the proposed segmentation to calculate the disease progression over time for the same person is illustrated in the ninth and tenth rows [cases #9T=1 and #9T=2]. Furthermore, the lesion growth was compared for the same patient at different dates by calculating the Dice Similarity Coefficient (DSC) and the Hausdorff metric [46]. By definition, the Hausdorff distance H(A, B) is the maximum of h(A, B) and h(B, A). It thereby measures the degree of mismatch between two sets (here, A = #9T=1 and B = #9T=2) by measuring the distance of the point of A that is farthest from any point of B, and vice versa, and is defined as: H(A, B) = max(h(A, B), h(B, A)).
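A brute-force sketch of the two directed distances and their maximum is given below for small point sets (e.g., lesion-surface voxel coordinates); scipy.spatial.distance.directed_hausdorff is a faster alternative for larger sets.

```python
import numpy as np

def directed_h(A: np.ndarray, B: np.ndarray) -> float:
    """h(A, B): distance of the point of A farthest from its nearest point of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return float(d.min(axis=1).max())

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_h(A, B), directed_h(B, A))
```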

A comparison of the lesion segmentation on the first examination date with the slice-by-slice lesion segmentation on the follow-up date resulted in a Dice Similarity Coefficient of 5.76% and a maximum Hausdorff distance of 29.71 ± 8.59 mm. This minimal DSC and maximal Hausdorff distance indicate that the severity of the lesion is proportional to the amount of lung tissue destruction, and that the lesion grew drastically in only 10 days. The effect of the lesion is also larger, since the percentage of the lesion (in voxels and cm³) in the CT scans is on the order of 1.87% in the first test, compared to 19.73% in the second test. In addition to the quantitative results, we present in Fig. 10 ten samples of 3D-Slicer segmentation results obtained by applying the proposed method. Case number #9 is the most important; this case was critical because it exhibits the growth of the lesion over time.

Fig. 10. 3D visualization of the segmentation results.

Fig. 10 (#9T=1) was taken when the patient had developed only a fever and a dry cough. The 3D CT scan, after applying the proposed segmentation, clearly shows a small COVID-19 lesion (in red) on the patient's right lung. Ten days after this examination, the lesions had grown drastically, with the patient showing a large quantity of lesions (#9T=2). The differential diagnosis between the two examinations showed that the lesion developed drastically, as shown statistically in Table 4 and visually in Fig. 10.

4. Discussion & conclusion

CT imaging is a widespread, affordable, detailed screening tool that effectively helps to visualize and accelerate the evaluation of the severity of the COVID-19 lesion. In this work, we presented an automated tool for the segmentation and measurement of COVID-19 lung infection using chest CT imagery. The computer simulations on both data-sets, the COVID-CT-Dataset containing 275 CT scans and the EL-BAYANE center's radiology data, show better segmentation efficiency and flexibility in comparison to end-to-end learning approaches as well as other supervised and unsupervised methods. The offered algorithm's performance was evaluated using commonly used assessment scores, such as accuracy, sensitivity, F-measure, precision, MCC, Dice, Jaccard, specificity, and the Hausdorff distance. Strengths of our work include the potential to quantify the COVID-19 lesion, visualize the infected area, and quickly track disease changes. Moreover, the proposed approach has the ability to detect abnormal regions with low-intensity contrast between lesions and healthy tissues. Even though our suggested approach achieved promising results, it is worth noting some limitations.

First, the segmentation quality measurements require reliable ground truth (GT) of the lesion mask structure; a medical expert is the only person who can provide the manual segmentation that serves as a reference. Second, COVID-19 lesions have imaging features similar to pneumonia caused by other types of viruses. Due to the lack of laboratory confirmation of the etiology for each of these cases, we could not detect other viral pneumonias for comparison purposes. As future work, we consider extending the validation of the proposed method by collecting chest CT images covering various severity types of lesions from several institutions and countries.

Furthermore, we are planning to optimize the algorithms to separately pinpoint and segment lesion patterns classified as ground-glass opacity, crazy paving, and consolidation. We are also planning to combine imaging data with clinical manifestations and laboratory examination results to support better examination, detection, and diagnosis of COVID-19. COVID-19 continues to spread across the world following a trajectory that is not easy to predict. We hope that (1) the proposed automatic COVID-19 lesion segmentation tool can be used for large-scale clinical applications; and (2) the advanced tools will be helpful to other health systems facing similar challenges, including abnormalities caused by other viruses and diseases.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Sincere gratitude to the United States Department of State and the J. William Fulbright Fellowship program for allowing us to conduct research as a Fulbright Visiting Scholar at the College of Staten Island (CSI), a senior college within The City University of New York (CUNY), USA.

Biographies


Adel Oulefki received a Ph.D. with honors from B.B.A University, Algeria, and an academic research accreditation (HDR) degree in electrical engineering from the Institute of Electrical Engineering and Electronics (IGEE), in 2014 and 2018, respectively. He has held various academic positions since 2016, such as senior researcher within the BIOSMC team at the Centre de Développement des Technologies Avancées (CDTA). In April 2018, he was admitted to an MRA position, which primarily involved supervising original research projects. Prior to joining the CDTA, he served as a temporary assistant lecturer in the department of electronics at B.B.A University from 2010 to 2014. His current research projects focus on interdisciplinary applications of computer science and engineering in the service of the security, environmental, medical, and agronomic areas. His areas of expertise include signal, image, and video analysis, data clustering, and pattern recognition using both RGB cameras and thermal sensors. In September 2019, he was awarded the prestigious Fulbright scholarship as a visiting scholar at the Department of Computer Science at the College of Staten Island, CUNY, in New York.


Sos Agaian received the M.S. degree (summa cum laude) in mathematics and mechanics from Yerevan University, Yerevan, Armenia, the Ph.D. degree in math and physics from the Steklov Institute of Mathematics, Russian Academy of Sciences, Moscow, and the Doctor of Engineering Sciences degree from the Institute of the Control System, Russian Academy of Sciences. He is a Distinguished Professor of Computer Science at the College of Staten Island and the Graduate Center (CUNY). Before joining CUNY, Dr. Agaian was a Peter T. Flawn Professor of Electrical and Computer Engineering with the University of Texas at San Antonio. He has been a visiting faculty member at Tufts University and the Leading Scientist at AWARE, Inc., MA. His primary research interests are in computational vision and sensing, machine learning, big and small data analytics, cancer sensing, multimodal biometrics, and health informatics. He has authored more than 650 scientific papers and ten books, and holds 44 patents/disclosures. The technologies that he invented have been adopted by multiple institutions, including the U.S. government, and commercialized by industry. He is an editorial board member of several journals, including the Journal of Electronic Imaging and the IEEE Transactions on Image Processing. He is a fellow of IS&T, SPIE, AAAS, and IEEE.


Thaweesak Trongtirakul received the M.Eng. degree in instrumentation engineering from King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand. He is pursuing a D.Eng. degree in Electronics and Telecommunication at King Mongkut's University of Technology Thonburi, Thailand. His primary research interests are in computer vision, machine vision, optical images, and smart cities.

Azzeddine Kassah Laouar was an attending radiologist at the Hospital of Bordj Bou Arreridj. He earned a D.D. from Constantine 1 (ex Mentouri) University. Dr. Kassah Laouar has maintained a keen interest and involvement in all aspects of diagnostic imaging, predominantly practicing interventional procedures, neuroradiology, and cross-sectional imaging for many years. He currently owns the EL-BAYANE center for Radiology and Medical Imaging and has always maintained an active interest and involvement in general imaging.


References

1. Ai T., Yang Z., Hou H., Zhan C., Chen C., Lv W., Tao Q., Sun Z., Xia L. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32–E40. doi: 10.1148/radiol.2020200642.
2. Bhandary A., Prabhu G.A., Rajinikanth V., Thanaraj K.P., Satapathy S.C., Robbins D.E., Shasky C., Zhang Y.-D., Tavares J.M.R., Raja N.S.M. Deep-learning framework to detect lung abnormality – a study with chest X-ray and lung CT scan images. Pattern Recognit. Lett. 2020;129:271–278.
3. Gu Y., Kumar V., Hall L.O., Goldgof D.B., Li C.-Y., Korn R., Bendtsen C., Velazquez E.R., Dekker A., Aerts H., et al. Automated delineation of lung tumors from CT images using a single click ensemble segmentation approach. Pattern Recognit. 2013;46(3):692–702. doi: 10.1016/j.patcog.2012.10.005.
4. Ouyang W., Xu B., Yuan X. Color segmentation in multicolor images using node-growing self-organizing map. Color Res. Appl. 2019;44(2):184–193.
5. Yuan X., Xie L., Abouelenien M. A regularized ensemble framework of deep learning for cancer detection from multi-class, imbalanced training data. Pattern Recognit. 2018;77:160–172.
6. Yuan X. Segmentation of blurry object by learning from examples. In: Medical Imaging 2010: Image Processing. Vol. 7623. International Society for Optics and Photonics; 2010. p. 76234G.
7. Elnakib A., Gimel'farb G., Suri J.S., El-Baz A. Medical image segmentation: a brief survey. In: Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies. Springer; 2011. pp. 1–39.
8. Shi F., Wang J., Shi J., Wu Z., Wang Q., Tang Z., He K., Shi Y., Shen D. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2020. doi: 10.1109/RBME.2020.2987975.
9. Qiu Y., Liu Y., Xu J. MiniSeg: an extremely minimum network for efficient COVID-19 segmentation. arXiv preprint arXiv:2004.09750 (2020).
10. Ulhaq A., Khan A., Gomes D., Pau M. Computer vision for COVID-19 control: a survey. arXiv preprint arXiv:2004.09420 (2020).
11. Li K., Fang Y., Li W., Pan C., Qin P., Zhong Y., Liu X., Huang M., Liao Y., Li S. CT image visual quantitative evaluation and clinical classification of coronavirus disease (COVID-19). Eur. Radiol. 2020;30:4407–4416. doi: 10.1007/s00330-020-06817-6.
12. Zheng C., Deng X., Fu Q., Zhou Q., Feng J., Ma H., Liu W., Wang X. Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv (2020).
13. Cao Y., Xu Z., Feng J., Jin C., Han X., Wu H., Shi H. Longitudinal assessment of COVID-19 using a deep learning-based quantitative CT pipeline: illustration of two cases. Radiology. 2020;2(2):e200082. doi: 10.1148/ryct.2020200082.
14. Gozes O., Frid-Adar M., Greenspan H., Browning P.D., Zhang H., Ji W., Bernheim A., Siegel E. Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. arXiv preprint arXiv:2003.05037 (2020).
15. Jin S., Wang B., Xu H., Luo C., Wei L., Zhao W., Hou X., Ma W., Xu Z., Zheng Z., et al. AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system in four weeks. medRxiv (2020).
16. Shen C., Yu N., Cai S., Zhou J., Sheng J., Liu K., Zhou H., Guo Y., Niu G. Quantitative computed tomography analysis for stratifying the severity of coronavirus disease 2019. J. Pharm. Anal. 2020;10. doi: 10.1016/j.jpha.2020.03.004.
17. Zhao J., Zhang Y., He X., Xie P. COVID-CT-Dataset: a CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865 (2020).
18. Tan K.S., Isa N.A.M. Color image segmentation using histogram thresholding – fuzzy c-means hybrid approach. Pattern Recognit. 2011;44(1):1–15.
19. Gu Y., Kumar V., Hall L.O., Goldgof D.B., Li C.-Y., Korn R., Bendtsen C., Velazquez E.R., Dekker A., Aerts H., et al. Automated delineation of lung tumors from CT images using a single click ensemble segmentation approach. Pattern Recognit. 2013;46(3):692–702. doi: 10.1016/j.patcog.2012.10.005.
20. Chen C.P., Li H., Wei Y., Xia T., Tang Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2013;52(1):574–581.
21. Gaurav K., Ghanekar U. Image steganography based on Canny edge detection, dilation operator and hybrid coding. J. Inf. Secur. Appl. 2018;41:41–51.
22. Van Den Boomgaard R., Van Balen R. Methods for fast morphological image transforms using bitmapped binary images. CVGIP. 1992;54(3):252–258.
23. Kapur T., Grimson W.E.L., Wells III W.M., Kikinis R. Segmentation of brain tissue from magnetic resonance images. Med. Image Anal. 1996;1(2):109–127. doi: 10.1016/S1361-8415(96)80008-9.
24. Xia K.-j., Yin H.-s., Zhang Y.-d. Deep semantic segmentation of kidney and space-occupying lesion area based on SCNN and ResNet models combined with SIFT-flow algorithm. J. Med. Syst. 2019;43(1):2. doi: 10.1007/s10916-018-1116-1.
25. Frants V.A., Agaian S. Dermoscopic image segmentation based on modified GrabCut with octree color quantization. In: Mobile Multimedia/Image Processing, Security, and Applications 2020. Vol. 11399. International Society for Optics and Photonics; 2020. p. 113990K.
26. Agaian S., Madhukar M., Chronopoulos A.T. Automated screening system for acute myelogenous leukemia detection in blood microscopic images. IEEE Syst. J. 2014;8(3):995–1004.
27. Ronneberger O., Fischer P., Brox T. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2015. pp. 234–241.
28. Lian S., Luo Z., Zhong Z., Lin X., Su S., Li S. Attention guided U-Net for accurate iris segmentation. J. Vis. Commun. Image Represent. 2018;56:296–304.
29. Schlemper J., Oktay O., Schaap M., Heinrich M., Kainz B., Glocker B., Rueckert D. Attention gated networks: learning to leverage salient regions in medical images. Med. Image Anal. 2019;53:197–207. doi: 10.1016/j.media.2019.01.012.
30. Li X., Chen H., Qi X., Dou Q., Fu C.-W., Heng P.-A. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans. Med. Imaging. 2018;37(12):2663–2674. doi: 10.1109/TMI.2018.2845918.
31. Zhou Z., Siddiquee M.M.R., Tajbakhsh N., Liang J. UNet++: a nested U-Net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer; 2018. pp. 3–11.
32. Fan D., Zhou T., Ji G., Zhou Y., Chen G., Fu H., Shen J., Shao L. Inf-Net: automatic COVID-19 lung infection segmentation from CT images. IEEE Trans. Med. Imaging. 2020;39(8):2626–2637. doi: 10.1109/TMI.2020.2996645.
33. Badrinarayanan V., Kendall A., Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017;39(12):2481–2495. doi: 10.1109/TPAMI.2016.2644615.
34. Yu C., Wang J., Peng C., Gao C., Yu G., Sang N. BiSeNet: bilateral segmentation network for real-time semantic segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV); 2018. pp. 325–341.
35. Mehta S., Rastegari M., Caspi A., Shapiro L., Hajishirzi H. ESPNet: efficient spatial pyramid of dilated convolutions for semantic segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV); 2018. pp. 552–568.
36. Fernandez-Moral E., Martins R., Wolf D., Rives P. A new metric for evaluating semantic segmentation: leveraging global and contour accuracy. In: 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE; 2018. pp. 1051–1056.
37. Boughorbel S., Jarray F., El-Anbari M. Optimal classifier for imbalanced data using Matthews correlation coefficient metric. PLoS One. 2017;12(6):e0177678. doi: 10.1371/journal.pone.0177678.
38. Gupta S., Girshick R., Arbeláez P., Malik J. Learning rich features from RGB-D images for object detection and segmentation. In: European Conference on Computer Vision. Springer; 2014. pp. 345–360.
39. Trongtirakul T., Oulefki A., Agaian S., Chiracharit W. Enhancement and segmentation of breast thermograms. In: Mobile Multimedia/Image Processing, Security, and Applications 2020. Vol. 11399. International Society for Optics and Photonics; 2020. p. 113990F.
40. Wang L., Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images. arXiv preprint arXiv:2003.09871 (2020).
41. Rajab M., Woolfson M., Morgan S. Application of region-based segmentation and neural network edge detection to skin lesions. Comput. Med. Imaging Graph. 2004;28(1–2):61–68. doi: 10.1016/s0895-6111(03)00054-5.
42. Tarabalka Y., Chanussot J., Benediktsson J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 2010;43(7):2367–2379.
43. Fan D.-P., Zhou T., Ji G.-P., Zhou Y., Chen G., Fu H., Shen J., Shao L. Inf-Net: automatic COVID-19 lung infection segmentation from CT images. IEEE Trans. Med. Imaging. 2020;39. doi: 10.1109/TMI.2020.2996645.
44. Abraham N., Khan N.M. A novel focal Tversky loss function with improved attention U-Net for lesion segmentation. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE; 2019. pp. 683–687.
45. Agaian S., Mosquera-Lopez C. Systems and methods for image/video recoloring, color standardization, and multimedia analytics. US Patent App. 15/082,036; 2016.
46. Takacs B. Comparing face images using the modified Hausdorff distance. Pattern Recognit. 1998;31(12):1873–1881.
