Informatics in Medicine Unlocked. 2023 May 25;40:101280. doi: 10.1016/j.imu.2023.101280

To segment or not to segment: COVID-19 detection for chest X-rays

Sara Al Hajj Ibrahim 1, Khalil El-Khatib 1
PMCID: PMC10211251  PMID: 37346468

Abstract

Artificial intelligence (AI) has been integrated into most of the technologies we use. One of the most promising applications of AI is medical imaging. Research demonstrates that AI has improved the performance of most medical imaging analysis systems, and AI has consequently become a fundamental element of the state of the art, with improved outcomes across a variety of medical imaging applications. Moreover, computer vision (CV) algorithms are believed to be highly effective for image analysis, and recent advances in CV facilitate the recognition of patterns in medical images. Accordingly, we investigate CV segmentation techniques for COVID-19 analysis. We use three segmentation techniques, k-means, U-net, and flood fill, to extract the lung region from chest x-rays (CXRs), and we compare the effectiveness of these three segmentation approaches when applied to CXRs. We then use machine learning (ML) and deep learning (DL) models to identify COVID-19 lesions in both healthy and pathological lung x-rays, and we evaluate the ML and DL findings in the context of the CV techniques. Our results indicate that the segmentation-based CV techniques do not match the performance of the DL and ML techniques applied to the original images. The best-performing AI algorithm yields an accuracy in the range of 0.92–0.94, whereas adding the CV algorithms reduces accuracy to approximately the range of 0.81–0.88. In addition, we test the performance of DL models under real-world noise, such as salt-and-pepper noise, which negatively impacts overall performance.

Keywords: Artificial intelligence, Machine learning, Computer vision, Image processing, COVID-19 detection

1. Introduction

The first case of COVID-19 was reported in Wuhan, China, on December 31, 2019, with symptoms including fever, dry cough, tiredness, nausea, shortness of breath, lung infiltrates, and dyspnea. The World Health Organization (WHO) classified COVID-19 as a Public Health Emergency of International Concern (PHEIC). As of December 4, 2022, COVID-19 had infected 649,866,193 people, of whom 6,646,175 had died and 627,025,938 had recovered [1]. As COVID-19 continues to spread, scientists continue to investigate how it is primarily distributed. Recent findings indicate that long clinical testing times are a major contributor to the pandemic's rapid spread. To address this issue, medical screening techniques such as chest x-rays (CXRs) have been demonstrated to accelerate the identification process [2]. Using x-ray imaging technologies, medical professionals can detect infections much faster. In addition, CXRs are less expensive than blood tests or throat swabs for identifying diseases like COVID-19, and such imaging-based approaches can also be more precise than other current methods for identifying viruses such as COVID-19.

During a pandemic, when scientists must rapidly build knowledge about the virus's spread and distribution, AI technology is used. An artificial intelligence (AI) agent operating in an environment learns rules from gathered information, much as humans do, without specific rules being programmed for each possible scenario. AI is an effective technique for promptly identifying contaminated regions in medical images [3,4], for example, finding cancer lesions in breast x-ray images known as mammograms [5]. As AI has been used to effectively diagnose cancer from x-rays in the past [5,6], it should also be possible to diagnose COVID-19 from x-rays. In the CXR of a patient with COVID-19 symptoms such as fever, coughing, or shortness of breath, the lungs appear hazy grey rather than black, with white boundaries marking the blood vessels. For instance, in Ref. [7], scientists identified such COVID-19 patterns in the lungs within CT scans.

AI technologies are well suited to a crisis such as COVID-19 [8]. Machine learning (ML) and deep learning (DL) are both types of AI. ML algorithms learn from structured data to predict outputs and discover patterns. For most complex problems, scientists employ DL methods. DL is a subset of ML that uses models such as neural networks to mimic the learning process of the human brain. The popularity of DL approaches has grown recently because of their ability to identify complex hidden patterns and to improve in accuracy as more data is collected. A DL model has several layers; in general, the more layers there are, the better the accuracy. The layers are used to extract features, and after the features of the training data have been collected, the model is taught to classify with more control. Weight vectors connect the layers of a DL model [9]. The DL model built around convolutional layers, known as a convolutional neural network (CNN), is particularly well suited to image recognition: a CNN employs convolutional and pooling layers, which reflect the translation-invariant nature of most images.

Computer vision (CV) technologies, in addition to AI algorithms, are well suited to the COVID-19 crisis [8]. CV is the practice of leveraging computers to interpret and comprehend images. In recent years, segmentation has proven to be a successful approach in CV, especially in medical imaging [10,11]. Research has confirmed that segmentation plays a crucial role in real-world applications [12]: it helps to eliminate background information, decrease the chances of data leakage, and focus the model's attention solely on the most significant areas of the image. Segmentation approaches mainly identify the areas of an image that require further examination. In particular, lung segmentation within x-rays can be important for diseases such as pneumonia, tuberculosis, cystic fibrosis, cancer, COVID-19, and others. In this context, developing a computer-assisted method, such as segmentation, for COVID-19 could be of significant use. First, we use a segmentation technique to split a CXR into two segments: the foreground, which includes the lungs, and the background. We employ three segmentation techniques in particular: k-means [13], U-net [14], and flood fill [15]. Then, we test both segmented and non-segmented (original) x-ray images against several ML and DL techniques. The output of the binary classifier is either label 0 (Covid) or 1 (Normal). The objective of this study is to analyse the COVID-19 virus using several CV segmentation algorithms as well as DL and ML techniques. The paper is structured as follows. In Section 2, we analyse related work. In Section 3, we describe the methodology in detail. We discuss the results in Section 4. Finally, we state the conclusion in Section 5.

2. Related work

As the COVID-19 pandemic continues, scientists are exploring new ways to combat the virus. Many studies have shown that CXRs may be used to detect COVID-19 [16,17]. In most cases, ML is used to detect patterns in data [18]; one application of ML is the detection of an infected patient's fever [19]. Increasingly, however, researchers are creating DL-based solutions for COVID-19 that are more effective than ML-based approaches. By recognising patterns in medical images, DL techniques have changed how the COVID-19 virus is detected and treated. For instance, in Ref. [16], researchers detected the virus with a 91.62% classification accuracy by analysing CXRs with a deep CNN. In Ref. [20], Wang et al. presented a deep CNN-based method, the COVID-Net architecture, that finds COVID-19 instances in CXR data; the accuracy of the three-class classifier is 92.6%. Additionally, a novel DL-based system, COVIDX-Net, was proposed in Ref. [21]. It diagnosed COVID-19 from CXRs using VGG19 and DenseNet201 networks with 90% accuracy. Another case study introduced an AI technique that makes a quick diagnosis of COVID-19 cases using both CT and CXR scans [22]: a modified CNN achieved 94.1% accuracy, while a pre-trained network achieved 98%.

Recently, computer-assisted diagnosis techniques based on x-ray imaging have been used to find solutions for many diseases, such as osteoporosis [23], cancer [24], and cardiac disease [25]. When it is hard to determine the primary region of interest in an x-ray, segmentation is used. Lung segmentation within CXRs is used to facilitate the detection of an infected region, and various methods for lung segmentation have been proposed in the literature [10,11]. Lung segmentation has been widely used to improve COVID-19 detection. For instance, Bassi et al. used segmentation techniques to detect multi-category CXRs using U-net and DenseNet201 networks [26]; the experiment uncovered a database bias. Alternatively, the authors in Ref. [27] provided a DL pipeline for detecting multiple COVID-19 classes, non-viral, viral, and COVID-19 pneumonia, as well as a thoracic disease, using segmentation. DeepLabv3, U-net, and fully convolutional networks were compared, and the performance of the proposed method on CXRs was comparable to that of senior radiologists. Another autonomous segmentation method for COVID-19 based on U-net models was presented in Ref. [28]. To improve feature representation and model robustness, the study applied a set of transformations and a soft attention method in which image patches are given some weight; experimental results demonstrated that COVID-19 segmentation of CXRs is highly effective. Another case study also employed the U-net model [29]. This research showed that COVID-19 detection on segmented CXR images had a lower sensitivity than on non-segmented ones, and their results suggest that lung segmentation reduces the ability to distinguish between multiple sources. Our work differs from the previously proposed research in that we conduct a set of experiments with a variety of ML models, in addition to investigating alternative segmentation strategies that were not evaluated in the literature (flood fill and k-means).

3. Methodology

The initial stage of our method is lung segmentation. An x-ray image is used as input, and the output is a binary mask that highlights the region of interest. We anticipate that evaluating models on segmented images will encourage reliance on information from the lung region rather than the x-ray background, which is expected to improve the models' predictions and their reliability when applied to real-world contexts. This step also helps to identify the specific positions and sizes of any lesions. When it comes to partitioning the lungs into segments, there are several options, ranging from simply grouping pixels with comparable values to approaches based on DL models. In this experiment, we compare the effectiveness of three different segmentation approaches on CXRs: k-means, U-net, and flood fill.

  • Flood fill:

The flood fill method uses a simple rule to decide which pixels to include in or exclude from a growing region. This is accomplished with an iterator that drives the region-growing process: it examines intensity levels within a particular interval and then visits neighbouring pixels to create an expanding zone. The algorithm thus provides a rule for assessing whether or not a specific pixel should be included in the current region [15].
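As a rough illustration of this region-growing rule, the following is a minimal sketch using scikit-image's flood function; the file name, seed location, and tolerance are illustrative assumptions, not values from this study.

```python
# Minimal flood-fill lung-region sketch (illustrative values only).
import numpy as np
from skimage import io, segmentation

xray = io.imread("cxr_example.png", as_gray=True)   # hypothetical input CXR

# Seed a point inside a lung field; pixels whose intensity lies within
# +/- tolerance of the seed value are absorbed into the growing region.
seed = (xray.shape[0] // 2, xray.shape[1] // 3)
mask = segmentation.flood(xray, seed, tolerance=0.1)

segmented = np.where(mask, xray, 0.0)                # keep lung pixels, zero the rest
```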

One example of an advanced flood fill algorithm is the fuzzy flood fill mean shift algorithm [30]. This technique builds upon the foundation of the mean shift algorithm, enhancing the segmentation process by incorporating fuzzy kernels and flood fill techniques. The mean shift algorithm itself is a popular method for image segmentation, which aims to iteratively shift each pixel's location towards the mode of its surrounding feature space, effectively clustering pixels with similar properties together. By combining fuzzy kernels, which introduce probabilistic considerations, with the flood fill technique, the fuzzy flood fill mean shift algorithm achieves more accurate and robust segmentation results, particularly in cases where the mean shift algorithm alone may encounter challenges.

In a different application, the flood fill algorithm was adapted to segment blood vessels in an image using the regional parameter expansion method, as demonstrated in a study by Yin et al. [31]. This adaptation of the flood fill algorithm proved effective in accurately segmenting blood vessels, even in the presence of complex backgrounds and blurry edges. The regional parameter expansion method extends the basic flood fill approach by incorporating region-based analysis and expansion techniques. By considering the regional characteristics and expanding the segmentation based on appropriate parameters, this method achieves precise segmentation results for blood vessels, facilitating tasks such as medical diagnostics or research in the field of vascular imaging.

  • K-means clustering:

The k-means clustering method is an unsupervised learning algorithm in which each observation is assigned to the cluster whose centroid is nearest [13]. K-means is frequently employed in the field of image analysis. The procedure minimises the squared Euclidean distance between each observation and its cluster centroid. Here, pixels serve as observations, and the clusters represent the two image colours (0, 1).

K-means has proven to be a valuable technique for addressing image segmentation challenges. Researchers have demonstrated the success of the k-means technique in this area, particularly for segmenting images effectively and efficiently. Its ease of use and ability to produce quality results have made it an ideal choice for various tasks, including medical imaging.
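As a concrete example, a two-cluster k-means segmentation of a CXR can be sketched as follows with scikit-learn; the file name and the choice of the darker cluster as the lung region are illustrative assumptions rather than the exact procedure used in this study.

```python
# Minimal k-means pixel-clustering sketch for a two-class (lung/background) mask.
import numpy as np
from sklearn.cluster import KMeans
from skimage import io

xray = io.imread("cxr_example.png", as_gray=True)   # hypothetical input CXR
pixels = xray.reshape(-1, 1)                        # each pixel is one observation

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(xray.shape)

# Heuristically treat the darker cluster as the lung region.
lung_cluster = int(np.argmin(kmeans.cluster_centers_.ravel()))
mask = labels == lung_cluster
segmented = np.where(mask, xray, 0.0)
```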

  • The U-net model:

The U-net model is a convolutional neural network (CNN) built for biomedical image segmentation. It is made up of an encoder-decoder scheme:

  • Encoder: At each layer, the encoder reduces the spatial dimensions of the data while increasing the number of channels. A previously defined collection of binary masks is required for training. These masks are generated manually by domain experts, who have the requisite knowledge of the attributes and characteristics to look for in the images. The experts carefully analyse the images, identifying and marking the areas of interest, usually termed the 'foreground', and the remaining parts, typically referred to as the 'background'.

The foreground of an image is the primary object or region of interest, while the background constitutes everything else. By labeling the image in this manner, the experts create a binary mask: a two-colour (black and white) representation of the original image in which one colour represents the foreground and the other the background. These binary masks play a vital role in training a U-net model. During training, the model receives an image and its corresponding binary mask as input, and it uses this information to learn the patterns and features that distinguish the foreground from the background.

Over time, as the model is exposed to numerous images and their corresponding binary masks, it learns to generalize the patterns and features that differentiate the foreground from the background. This enables the U-net model to predict segmentation masks for new, unseen images: given a new image, the trained model outputs a predicted binary mask that separates the foreground from the background. This allows automated, efficient, and accurate image segmentation, demonstrating the value of manually created binary masks in training deep learning models such as U-net.

  • Decoder: The decoder is responsible for expanding the spatial dimensions while concurrently reducing the number of channels. The tensor that is passed to the decoder is often termed the 'bottleneck'. The final stage restores the spatial dimensions, enabling a prediction to be made for each pixel of the input image. U-net models have become a foundation of many real-world applications due to their efficiency and effectiveness [14]. In our work, we use a U-net model that was trained on the chest x-ray masks and labels database [34]. The U-net architecture is shown in Fig. 1.
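To make the encoder-decoder scheme concrete, the following is a minimal U-net-style sketch assuming tensorflow.keras; the layer sizes and depth are illustrative and do not reproduce the exact architecture of Fig. 1 or the pre-trained model used in this work.

```python
# Compact encoder-decoder (U-net style) sketch; sizes are illustrative only.
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(input_shape)

    # Encoder: shrink spatial dimensions, grow channels.
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck: the tensor handed to the decoder.
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: restore spatial dimensions, reduce channels, and reuse
    # encoder features through skip connections.
    u1 = layers.Concatenate()([layers.UpSampling2D(2)(b), c2])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    u2 = layers.Concatenate()([layers.UpSampling2D(2)(c3), c1])
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u2)

    # One-channel sigmoid output: a per-pixel foreground/background mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```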

Fig. 1. U-net architecture for x-ray segmentation.

The second step consists of evaluating the DL and ML algorithms on both the non-segmented and the segmented x-ray images. We adopt several ML models. First, we use KNeighborsClassifiers (KNNs); a KNN algorithm performs classification based on voting by the target point's k nearest neighbours [35]. We also examine support vector classifiers (SVCs), which map the data so that the classes are separated with the largest feasible margins [36]. Decision-tree-based ML algorithms are increasingly used in Kaggle competitions. In this context we include Random Forest classifiers (RFs), meta-estimators that fit decision trees on different sub-samples of the dataset [37]. Furthermore, we assess Decision Trees (DTs), which predict the value of a target variable by learning basic decision rules from data properties; such a tree can be thought of as a piecewise constant approximation [38]. We assess another tree-based learning technique, the Light Gradient Boosting Method (LGB) [39], a gradient boosting framework designed to be distributed and efficient, with the following benefits: faster training speed and efficiency, lower memory utilisation, higher accuracy, parallel and GPU learning support, and the ability to manage large-scale data. The top solutions in Kaggle contests, however, often use a different algorithm, Extreme Gradient Boosting (XGBoost), which is a scalable, gradient-boosted, distributed algorithm [40]. In addition, we evaluate gradient-based methods such as Gradient Boosting classifiers (GBs). At each step, a GB classifier iteratively builds an additive model by fitting trees to the negative gradient of the binary or multiclass log-loss function; in the binary classification case, a single regression tree is generated [41]. Deep learning algorithms, particularly CNNs, are the last classifiers that we employ [42]. The CNN network architecture is represented in Fig. 2.
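The following is a hedged sketch of how such a bank of classifiers can be fit with scikit-learn, XGBoost, and LightGBM; X and y are random dummy stand-ins for the prepared image features and the Covid (0) / Normal (1) labels, not the preprocessing used in this study.

```python
# Sketch: fitting several of the listed classifiers on placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X = np.random.rand(200, 64 * 64)          # placeholder flattened image features
y = np.random.randint(0, 2, size=200)     # placeholder Covid/Normal labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVC": SVC(),
    "RF": RandomForestClassifier(),
    "DT": DecisionTreeClassifier(),
    "GB": GradientBoostingClassifier(),
    "XGB": XGBClassifier(),
    "LGB": LGBMClassifier(),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```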

Fig. 2. CNN architecture for COVID-19 classification.
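As an illustration of the kind of CNN classifier evaluated here, the sketch below defines a small binary Covid/Normal network in Keras; the layer sizes are illustrative and are not the exact configuration of Fig. 2.

```python
# Small binary Covid/Normal CNN sketch; layer sizes are illustrative only.
from tensorflow.keras import layers, models

cnn_model = models.Sequential([
    layers.Input((256, 256, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # output: 0 = Covid, 1 = Normal
])
cnn_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# cnn_model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```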

4. Results

The three segmentation models (k-means, U-net, and flood fill) are examined on 3,616 Covid and 10,192 Normal images from the COVID-19 radiography database [34]. Example segmented CXRs are shown in Fig. 3. We observe that, visually, the best CXRs are those created by U-net. In the second stage, the original and segmented images are used to train and test the DL and ML algorithms. For training and testing, the data is divided into 80% training data and 20% testing data. We consider four assessment metrics: training time, accuracy, F1-score, and Area Under the Curve (AUC). The experiment is run on a 3.00 GHz Core(TM) i7-1185G7 processor with 16 GB RAM.1
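A sketch of how the four metrics can be computed with scikit-learn is shown below; the data and the chosen classifier are random dummy stand-ins rather than the actual CXR features and models of this study.

```python
# Illustrative computation of the four reported metrics for one classifier.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 64 * 64)                       # placeholder features
y = np.random.randint(0, 2, size=200)                  # placeholder labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
start = time.time()
model.fit(X_train, y_train)
train_time = time.time() - start                       # training time in seconds

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]             # probability of class 1

print("accuracy:", accuracy_score(y_test, y_pred))
print("F1-score:", f1_score(y_test, y_pred))
print("AUC:", roc_auc_score(y_test, y_prob))
print("training time (s):", round(train_time, 2))
```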

Fig. 3. Example of an x-ray image obtained by different segmentation methods.

From Table 1, the results show that KNN has the fastest training time, completing in only a fraction of a second. As Fig. 4 shows, however, XGBoost and CNN surpass KNN and the other algorithms with regard to accuracy. When flood fill and k-means segmentation are used, XGBoost outperforms CNN; in contrast, CNN outperforms XGBoost on non-segmented images and when U-net segmentation is applied. Fig. 4 also reports the AUC and F1-scores of the AI models for each segmentation procedure; these are comparable to the accuracy results.

Table 1.

Training time completion in seconds among different AI models.

AI model      RF      KNN    DT      LGB     GB       XGB     SVC     CNN
Time (secs)   21.12   0.2    11.79   4.171   201.16   36.90   36.24   536.75

Fig. 4. Evaluation metrics of AI models on different COVID-19 segmentation techniques.

Flood fill is the most effective segmentation technique for the detection algorithms, followed by k-means. Although U-net appears to offer the best visual segmentation of the x-rays, the AI detection techniques perform the worst on images produced by the U-net model. Moreover, the accuracy, AUC, and F1-score on non-segmented images are superior to those on segmented images. Using the original, non-segmented x-ray images, the CNN has the highest accuracy of 0.961 and an F1-score of 0.9475. We expected the models to perform better on segmented images than on images without segmentation. It appears that information was lost during segmentation, removing indicators of COVID status. During the training phase, the classification model may focus on the outlying background regions. It has been suggested, for instance, that patterns within the lungs lead to more bias than those outside the lungs. Another possible explanation is that our masks are automated rather than handcrafted by professional radiologists, which could introduce an error margin. However, the observed differences are not large and are difficult to explain unless explainable AI (XAI) is incorporated. In the future, it would be interesting to investigate XAI as a technique that provides a more comprehensive explanation.

Accuracy assessments on the original benchmark database were used to evaluate the performance of these AI systems. This may not fully reflect how resilient the systems are in real-world environments, where noise or other perturbations could be introduced into samples by external factors. Such perturbations are often easy for humans to interpret, but they can lead to erroneous decisions by AI systems. To address this issue, we implement perturbation algorithms that simulate variability in CXRs. In particular, we examine the robustness of the CNN model against noise and blurring, as it achieved the best results, and we expect such external factors to affect the model's performance. Our findings show that when an x-ray input contains even low levels of noise or filtering, such as salt-and-pepper noise, a median filter [43], or a bilateral filter [44], the performance of the CNN model suffers. Fig. 5 illustrates how the model prediction varies when salt-and-pepper noise is added: before the noise is added in (a), the CXRs obtained by the k-means and U-net segmentation algorithms are classified as COVID; after the noise is added, the predictions change to Normal. This analysis is a crucial step towards identifying problems in AI models used in COVID-19 systems and evaluating the robustness of DL models in real-world scenarios with noisy data.
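The perturbations described above can be sketched as follows with scikit-image; the input array is a random placeholder for a grayscale CXR, and the noise amount is illustrative.

```python
# Illustrative perturbations: salt-and-pepper noise, median and bilateral filters.
import numpy as np
from skimage import util, filters, restoration

xray = np.random.rand(256, 256)                            # placeholder CXR in [0, 1]

noisy = util.random_noise(xray, mode="s&p", amount=0.02)   # salt-and-pepper noise
blur_median = filters.median(xray)                         # median filter [43]
blur_bilateral = restoration.denoise_bilateral(xray)       # bilateral filter [44]

# Each perturbed image would then be passed to the trained CNN, e.g.
# prediction = cnn_model.predict(noisy[np.newaxis, ..., np.newaxis])
```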

Fig. 5. Predicting an x-ray image example against salt-and-pepper noise.

5. Conclusion

In conclusion, we conducted an evaluation of various DL, CV, and ML approaches for detecting COVID-19. We analysed three segmentation strategies in particular: k-means, U-net, and flood fill. Our findings indicate that segmentation using U-net, flood fill, or k-means is not a superior method for detecting COVID-19. However, COVID-19 segmentation studies in the literature have found that models may focus outside the lung region when no segmentation is used, so classification performance without lung segmentation can be skewed. In spite of these limitations, we expect better results in the future with ongoing study and development. We also applied perturbation approaches such as the median filter, the bilateral filter, and salt-and-pepper noise, which may be used in performance evaluation tests; the DL classifier's predictions change once noise or filters are added. Finally, in addition to studying new, successful methods for COVID-19 identification, it is crucial to understand how robustly models can perform with noisy data in a real-world context.

Declaration of competing interest

The authors declare no conflict of interest.

Footnotes

[32] is one of the notable studies highlighting the k-means algorithm in image segmentation, which emphasizes its effectiveness in this domain. In this research, the authors have demonstrated the potential of K-means to address various issues related to image segmentation. Another significant contribution in the field of medical imaging comes from a study conducted by Katkar et al. [33]. In this research, the authors present a novel method that employs k-means for segmenting medical images. The proposed approach has shown promising results in accurately identifying regions of interest within the images, further emphasizing the utility of k-means in the field of medical imaging.

References

  • 1.Info Worldometers. 2023. Coronavirus cases.https://www.worldometers.info/coronavirus [Google Scholar]
  • 2.Mamunur Rahaman Md, Chen Li, Yao Yudong, et al. Identification of COVID-19 samples from chest X-ray images using deep learning: a comparison of transfer learning approaches. J X Ray Sci Technol. 2020;28:821–839. doi: 10.3233/XST-200715. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Tao Ai, Yang Zhenlu, Hou Hongyan, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296:E32–E40. doi: 10.1148/radiol.2020200642. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Yan Li, Xia Liming. Coronavirus disease 2019 (COVID-19): role of chest CT in diagnosis and management. AJR Am J Roentgenol. 2020;214:1280–1286. doi: 10.2214/AJR.20.22954. [DOI] [PubMed] [Google Scholar]
  • 5.Al-Antari Mugahed A., Han Seung-Moo, Kim Tae-Seong. Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital X-ray mammograms. Comput Methods Progr Biomed. 2020;196 doi: 10.1016/j.cmpb.2020.105584. [DOI] [PubMed] [Google Scholar]
  • 6.Worawate Ausawalaithong, Arjaree Thirach, Sanparith Marukatat, Theerawit Wilaiprasitporn. 2018. Automatic lung cancer prediction from chest X-ray images using the deep learning approach in 2018 11th Biomedical Engineering International Conference (BMEiCON):1–5IEEE. [Google Scholar]
  • 7.Bhargava Anuja, Bansal Atul. Novel coronavirus (COVID-19) diagnosis using computer vision and artificial intelligence techniques: a review. Multimed Tool Appl. 2021;80 doi: 10.1007/s11042-021-10714-5. –19946. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Ahmed Afif Monrat, Olov Schelén, Karl Andersson. A survey of blockchain from the perspectives of applications, challenges, and opportunities. IEEE Access. 2019;7:117134–117151. [Google Scholar]
  • 9.Shrestha Ajay, Mahmood Ausif. Review of deep learning algorithms and architectures. IEEE Access. 2019;7:53040–53065. [Google Scholar]
  • 10.Gusztáv Gaál, Balázs Maga, András Lukács. 2020. Attention U-net based adversarial architectures for chest X-ray lung segmentation. arXiv preprint arXiv:2003.10304. [Google Scholar]
  • 11.Ewa Pietka. Lung segmentation in digital radiographs. J Digit Imag. 1994;7:79–84. doi: 10.1007/BF03168427. [DOI] [PubMed] [Google Scholar]
  • 12.Gianluca Maguolo, Loris Nanni. 2021. A critic evaluation of methods for COVID-19 automatic detection from X-ray images Information Fusion. 76:1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Lloyd Stuart. Least squares quantization in PCM. IEEE Trans Inf Theor. 1982;28:129–137. [Google Scholar]
  • 14.Olaf Ronneberger, Philipp Fischer, Thomas Brox. 2015. U-net: convolutional networks for biomedical image segmentation in International Conference on Medical image computing and computer-assisted intervention:234–241Springer. [Google Scholar]
  • 15.Eva-Marie Nosal. 2008 New Trends for Environmental Monitoring Using Passive Systems. 2008. Flood-fill algorithms used for passive acoustic detection and tracking. 1–5IEEE. [Google Scholar]
  • 16.Kumar Das Amit, Ghosh Sayantani, Samiruddin Thunder, Dutta Rohit, Agarwal Sachin, Amlan Chakrabarti. Automatic COVID-19 detection from X-ray images using ensemble learning with Convolutional Neural Network. Pattern Anal Appl. 2021;24:1111–1124. [Google Scholar]
  • 17.Arora Neelima, Banerjee Amit K., Narasu Mangamoori L. The role of artificial intelligence in tackling COVID-19. Future Virol. 2020;15:717–724. [Google Scholar]
  • 18.Bishop Christopher M., Nasrabadi Nasser M. vol. 4. Springer; 2006. (Pattern recognition and machine learning). [Google Scholar]
  • 19.Erickson Bradley J., Panagiotis Korfiatis, Zeynettin Akkus, Kline Timothy L. vol. 37. 2017. p. 505. (Machine learning for medical imaging Radiographics). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Wang Linda, Lin Zhong Qiu, Wong Alexander. Covid-net: a tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Sci Rep. 2020;10:1–12. doi: 10.1038/s41598-020-76550-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.El-Din Hemdan Ezz, Shouman Marwa A., Mohamed Esmail Karar. 2020. Covidx-net: a framework of deep learning classifiers to diagnose COVID- 19 in X-ray images. arXiv preprint arXiv:2003.11055. [Google Scholar]
  • 22.Maghdid Halgurd S., Asaad Aras T., Zrar Ghafoor Kayhan, Ali Safaa Sadiq, Seyedali Mirjalili, Khurram Khan Muhammad. Multimodal image exploitation and learning. Vol. 11734. 2021. Diagnosing COVID- 19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. 99–110SPIE 2021. [Google Scholar]
  • 23.Paola Pisani, Maria Daniela Renna, Francesco Conversano, et al. Screening and early diagnosis of osteoporosis through X-ray and ultrasound based techniques. World J Radiol. 2013;5:398–410. doi: 10.4329/wjr.v5.i11.398. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Al-Antari Mugahed A., Al-Masni Mohammed A., Mun-Taek Choi, Seung-Moo Han, Tae-Seong Kim. A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int J Med Inf. 2018;117:44–54. doi: 10.1016/j.ijmedinf.2018.06.003. [DOI] [PubMed] [Google Scholar]
  • 25.Speidel Michael A., Wilfley Brian P., Star-Lack Josh M., Heanue Joseph A., Van Lysel Michael S. Scanning-beam digital x-ray (SBDX) technology for interventional and diagnostic cardiac angiography. Med Phys. 2006;33:2714–2727. doi: 10.1118/1.2208736. [DOI] [PubMed] [Google Scholar]
  • 26.Bassi Pedro R.A.S., Romis Attux. COVID-19 detection using chest X-rays: is lung segmentation important for generalization? Res Biomed Eng. 2022:1–19. [Google Scholar]
  • 27.Wang Xiaofei, Jiang Lai, Liu Li, et al. Joint learning of 3D lesion segmentation and classification for explainable COVID-19 diagnosis. IEEE Trans Med Imag. 2021;40:2463–2476. doi: 10.1109/TMI.2021.3079709. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Chen Xiaocong, Yao Lina, Zhang Yu. 2020. Residual attention U-net for automated multi-class segmentation of COVID-19 chest CT images. arXiv preprint arXiv:2004.05645. [Google Scholar]
  • 29.Teixeira Lucas O., Pereira Rodolfo M., Diego Bertolini, et al. Impact of lung segmentation on the diagnosis and explanation of COVID-19 in chest X-ray images. Sensors. 2021;21:7116. doi: 10.3390/s21217116. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Kang Hoon, Lee Seung Hwan, Lee Jayong. 2010. Image segmentation based on fuzzy flood fill mean shift algorithm in 2010 Annual Meeting of the North American Fuzzy Information Processing Society; pp. 1–6. [Google Scholar]
  • 31.Zong-Xian Yin, Hong-Ming Xu. An unsupervised image segmentation algorithm for coronary angiography. BioData Min. 2022;15:27. doi: 10.1186/s13040-022-00313-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Nameirakpam Dhanachandra, Khumanthem Manglem, Chanu Yambem Jina Image segmentation using K-means clustering algorithm and subtractive clustering algorithm. Procedia Comput Sci. 2015;54:764–771. [Google Scholar]
  • 33.Juilee Katkar, Trupti Baraskar, Mankar Vijay R. International Conference on Applied and theoretical Computing and communication technology (iCATccT):430–435IEEE. 2015. A novel approach for medical image segmentation using PCA and K-means clustering. [Google Scholar]
  • 34.Rahman. 2022. COVID-19 chest X-ray images and lung masks dataset. https://www.kaggle.com/tawsifurrahman/covid19-radiography-database [Google Scholar]
  • 35.Altman Naomi S. An introduction to kernel and nearest-neighbor nonparametric regression. Am Statistician. 1992;46:175–185. [Google Scholar]
  • 36.Corinna Cortes, Vladimir Vapnik. Vol. 20. 1995. Support-vector networks Machine learning; pp. 273–297. [Google Scholar]
  • 37.Kam Ho Tin. Proceedings of 3rd international conference on document analysis and recognition. Vol. 1. 1995. Random decision forests in; p. 278. 282IEEE. [Google Scholar]
  • 38.Wu Xindong, Kumar Vipin, Ross Quinlan J., et al. Vol. 14. 2008. Top 10 algorithms in data mining Knowledge and information systems; pp. 1–37. [Google Scholar]
  • 39.Ke Guolin, Qi Meng, Thomas Finley, et al. vol. 30. 2017. (Lightgbm: a highly efficient gradient boosting decision tree Advances in neural information processing systems). [Google Scholar]
  • 40.Chen Tianqi, Carlos Guestrin. Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining. 2016. Xgboost: a scalable tree boosting system in; pp. 785–794. [Google Scholar]
  • 41.Friedman Jerome H. 2001. Greedy function approximation: a gradient boosting machine Annals of statistics; pp. 1189–1232. [Google Scholar]
  • 42.Keiron O'Shea, Ryan Nash. 2015. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458. [Google Scholar]
  • 43.Tukey J.W. Nonlinear (nonsuperposable) methods for smoothing data. Cong. Rec. EASCON’74. 1974;673 [Google Scholar]
  • 44.Tomasi C., Manduchi R. IEEE Cat. No.98CH36271; 1998. Bilateral filtering for gray and color images in sixth international Conference on computer vision; pp. 839–846. [Google Scholar]

