Frontiers in Big Data. 2023 Apr 6;6:1120989. doi: 10.3389/fdata.2023.1120989

AI-based radiodiagnosis using chest X-rays: A review

Yasmeena Akhter 1, Richa Singh 1, Mayank Vatsa 1,*
PMCID: PMC10116151  PMID: 37091458

Abstract

Chest radiograph or chest X-ray (CXR) is a common, fast, non-invasive, and relatively inexpensive radiological examination method in medical sciences. CXRs can aid in diagnosing many lung ailments such as pneumonia, tuberculosis, pneumoconiosis, COVID-19, and lung cancer. Among all radiological examinations, around 2 billion CXRs are performed worldwide every year. However, the workforce available to handle this workload in hospitals is limited, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening and diagnostic aids can play a crucial part in social welfare. However, such systems face multiple challenges, including small sample sizes, data privacy, poor-quality samples, adversarial attacks, and, most importantly, model interpretability as a prerequisite for trusting machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases, and, in particular, the challenges faced by AI/ML-based diagnostic systems. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and issued patents. We also present key challenges and open problems in this research domain.

Keywords: chest X-ray, trusted AI, interpretable deep learning, pneumoconiosis, tuberculosis, pneumonia, COVID-19

1. Introduction

Advances in medical technology have enhanced the process of disease diagnosis, prevention, monitoring, treatment, and care. Imaging technologies such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography (USG), and positron emission tomography (PET), along with digital pathology, make it easier for medical practitioners to assess and treat disorders. Table 1 provides a comparative overview of the common imaging modalities used in medical sciences.1 2 Every year across the globe, a massive number of investigations are performed to assess human health for disease diagnosis and treatment, and hospitals generate petabytes of data annually (IDC, 2014). This 'big data' includes all electronic health records (EHR), consisting of medical imaging, lab reports, genomics, clinical notes, and financial and operational data (Murphy, 2019). Of the total data generated by hospitals, the largest contribution comes from radiology or imaging data; however, 97% of this data remains unanalyzed or unused (Murphy, 2019).

Table 1.

Comparative analysis of common and widely used imaging modalities for medical applications.

Specification | CT | MRI | X-ray | PET | SPECT | USG
Full form | Computed Tomography | Magnetic Resonance Imaging | X-radiation / Röntgen radiation | Positron Emission Tomography | Single Photon Emission Computed Tomography | Ultrasound / Ultrasonography
Working principle | Uses multiple X-rays at different angles to generate a 3D image | Uses a magnet and pulsed radio waves to elicit a response from the water molecules inside the human body | An X-ray beam passed through the body is blocked by denser tissue, producing a shadow of that tissue | A radioactive tracer that emits positrons is injected; the positrons are then tracked over time to form a 3D image | Same as PET | Sends short pulses of high-frequency sound waves; reflections from the area of interest are received by a transducer
Usage/application | Recommended for all structures of the human body (soft tissue/bone/blood vessels) | Best suited for soft tissues | Recommended for diseased tissues/organs such as the lungs, and bony structures such as the teeth and skull | Allows tracing of biological processes within the human body | Same as PET | Best suited for internal organs; not recommended for bony structures
Scanner cost ($) | 85–450 K | 225–500 K+ | 40–175 K | 225–750 K | 400–600 K | 20–200 K
Radiation exposure | Yes | None | Yes | Yes | Yes | None
Per-scan cost ($) | 1,200–3,200 | 1,200–4,000 | ~70 | 3,000–6,000 | – | 100–1,000
Scan time | 30 s | 10 min–2 h | A few seconds | 2–4 h | 2–4 h | 10–15 min
Side effects | Excessive exposure can lead to cancer | – | Prolonged exposure is hazardous | Radioactive allergy can occur; excessive exposure can be dangerous | Same as PET | Comparatively safer
Spatial resolution (mm) | 0.5–1 | 0.2 | – | 6–10 | 7–15 | 0.1–1
Soft/hard tissue detail | Generates higher-contrast images; ideal for both tissue types | Provides high detail of soft tissues | Can also be used for soft tissues such as the gall bladder and lungs | Covers biological phenomena such as drug delivery | Allows inspection of the functioning of various body organs; useful in brain disorders, heart problems, and bone disorders | Soft tissues such as muscles and internal organs
Limitations | Patients with a large body size may not fit the scanner | Heavy patients may not fit the scanner; patients with pacemakers or tattoos are also not advised to undergo the scan | Limited to a few body parts | Not recommended for children and pregnant women; expensive; long scan time, low resolution, higher artifact rate | Expensive | Objects deeper in the body or hidden under bone are not captured; the presence of air spaces also defeats the scan

Among all the imaging modalities, X-ray is the most common, fast, and inexpensive, and is used to diagnose many disorders of the human body, such as fractures and dislocations, and ailments such as cancer, osteoporosis of the bones, and chest conditions including pneumonia, tuberculosis, COVID-19, and many more. It is a non-invasive and painless medical examination in which an electrically generated X-ray beam passes through the patient's body and produces a 2-D image bearing the impression of internal body structures. It is estimated that more than 3.5 billion diagnostic X-rays are performed annually worldwide (Mitchell, 2012), contributing 40% of the total imaging count per year (WHO, 2016); among these, around 2 billion CXRs are performed worldwide every year. However, the trained workforce available to handle this workload is limited, particularly in developing and low-income nations. For instance, in some parts of India there is one radiologist for 100,000 patients, whereas in the U.S. there is one radiologist for 10,000 patients.

In recent years, with the unprecedented advancements in deep learning and computer vision, computer-aided diagnosis has started to assist the diagnostic process and ease the workload of doctors. CXR-based analysis with machine learning and deep learning has drawn attention among researchers as an easy and reliable solution for different lung diseases, and many attempts have been made to provide automatic CXR-based diagnosis and increase the acceptance of AI-based solutions. Currently, many commercial products that have received the CE mark (Europe) and/or FDA clearance (United States) are available for clinical use, for instance, qXR by qure.ai (Singh et al., 2018), TIRESYA by Digitec (Kim et al., 2017), Lunit INSIGHT CXR by Lunit (Hwang et al., 2019), Auto Lung by Samsung Healthcare (Sim et al., 2020), AI-Rad Companion by Siemens Healthineers (Fischer et al., 2020), CAD4COVID-XRay by Thirona (Murphy et al., 2020), and many more.

Based on the projection, CXRs fall into three categories: posteroanterior (PA), anteroposterior (AP), and lateral (LL). Figure 1 showcases CXR samples for the three projections. In the PA view, the standard projection, the X-ray beam traverses the patient from posterior to anterior. The AP view is the alternative to PA, in which the X-ray beam passes through the patient's chest from anterior to posterior. The lateral view is usually performed erect in the left lateral position (the default); it demonstrates a better anatomical view of the heart, assesses the posterior costophrenic recesses, and is generally done to examine the retrosternal and retrocardiac airspaces.3 Table 2 tabulates the differences between the AP and PA views. Patient alignment also affects the assessment of the chest X-ray for different organs such as the heart, mediastinum, tracheal position, and lung appearances. Rotation of the patient can lead to certain misleading appearances in CXRs, such as apparent heart size: with a leftward rotation in a PA CXR, the heart appears enlarged, and vice-versa. Moreover, rotation can affect the assessment of soft tissue in CXRs, producing misleading impressions in the lungs, for instance at the costophrenic angle.4 About 25% of all CXRs acquired each year are rejected due to image quality or patient positioning (Little et al., 2017).

Figure 1.

Showcasing the chest X-rays for the three projections. (A) AP view, (B) PA view, and (C) lateral view.

Table 2.

Illustrates the differences between two common CXR projections.

PA view | AP view
Standard frontal chest projection | Alternative frontal projection to the PA
X-ray beam traverses the patient from posterior to anterior | X-ray beam traverses the patient from anterior to posterior
Requires full inspiration and a standing position from the patient | Can be performed with the patient sitting on the bed
Best practice to examine the lungs, mediastinum, and thoracic cavity | Best practice for intubated and sick patients
Heart size appears normal | Heart size appears magnified
Images are of higher quality and a better option for assessing heart size | Not a good option for assessing heart size

In the existing literature, with the release of multiple datasets for different lung diseases, several tasks have been established with CXR data. Below is a list of the tasks addressed in CXR-based analysis using different ML and DL approaches. Figure 2 showcases the transition across different tasks for CXR-based image analysis.

Figure 2.

Showcasing the transition across different tasks in CXR-based analysis for a given input image.

  • Image enhancement: Data collected from hospitals vary in quality and do not always contribute usefully to the detection process. Hence, before proposing a detection pipeline, authors have applied different CXR enhancement techniques for noise reduction, contrast enhancement, edge detection, and more.

  • Segmentation: In CXR analysis, segmentation of the region of interest (ROI) usually gives an edge to the disease detection pipeline, as it removes the uninformative parts of the CXR and reduces the chance of misdiagnosis. Existing work has focused on segmentation of the lung field, ribs, diseased regions, diaphragm, costophrenic angle, and support devices.

  • Image classification: On CXR datasets, both multi-class and multi-label classification tasks have been performed using ML and DL approaches. With datasets such as CheXpert (Irvin et al., 2019) and ChestXray14 (Wang et al., 2017), multi-label classification is done; the labels reflect the different manifestations (local labels) a disease can cause in a CXR. For instance, pneumoconiosis can cause multiple manifestations in the lung tissue, such as atelectasis, nodules, fibrosis, emphysema, and many more. In multi-class classification, by contrast, a CXR is assigned to a single disease class. For instance, detecting pneumonia in a CXR is a multi-class problem: we need to distinguish viral, bacterial, and COVID-19 pneumonia, representing three classes (types) of pneumonia.

  • Disease localization: This specifies the region within the CXR infected by a particular disease, generally indicated by a bounding box, dot, or circular mark.

  • Image generation: The available datasets are generally small and also suffer from class imbalance. To enlarge the training set, approaches beyond affine-transformation-based data augmentation, such as generative adversarial network (GAN)-based approaches, are used, and analyses are performed on both real and synthetic CXRs.

  • Report generation: Generating a report for a given CXR is one of the more recent areas in CXR-based image analysis. The task involves reporting all the findings present in the CXR as text.

  • Model explainability: Given the remarkable performance of deep models, model explainability is essential for building trust in machine-intelligence-based decisions. Explanations justify the decision process, and interpretability promotes understanding of the mechanism behind algorithmic predictions.

The availability of intelligent machine diagnostics for chest X-rays helps reduce the information overload and exhaustion of radiologists by interpreting and reporting radiology scans. Many diseases affect the lungs, including lung cancer, bronchitis, COPD, fibrosis, and many more. The literature review below is based on the publicly available datasets and the work done for these common diseases. Figure 3 showcases the different areas for which literature on CXR-based analysis exists.

Figure 3.

Showcasing research problems which have been studied in the literature.

In this review article, we focus on the use of computer vision, machine learning, and deep learning algorithms for different disorders where CXR is a standard medical investigation. We discuss the related work on the tasks mentioned above for CXR-based analysis. We further present the literature, in terms of publications and patents, for widely studied disorders such as TB, pneumonia, pneumoconiosis, COVID-19, and lung cancer. We also discuss the evaluation metrics used to assess performance on different tasks, and the publicly available datasets for various disorders and tasks. Figure 4 shows the schematic organization of the paper.

Figure 4.

Illustrating the schematic structure of the paper.

2. Task-based literature review

We first present the review of different tasks with respect to CXR-based analysis, such as pre-processing, classification, and disease localization.

2.1. Image pre-processing

Image pre-processing includes enhancement and segmentation tasks, which are either rule-based/handcrafted or deep learning-based.

2.1.1. Pre-deep learning based approaches

Sherrier and Johnson (1987) used a region-based histogram equalization technique to locally improve the image quality of CXRs and obtain an enhanced image. Zhang D. et al. (2021) used a dynamic histogram enhancement technique. Abin et al. (2022) used different image enhancement techniques such as Brightness Preserving Bi-Histogram Equalization (BBHE) (Zadbuke, 2012), Equal Area Dualistic Sub-Image Histogram Equalization (DSIHE) (Yao et al., 2016), and Recursive Mean Separate Histogram Equalization (RMSHE) (Chen and Ramli, 2003), followed by particle swarm optimization (PSO) (Settles, 2005) for further enhancing CXRs to detect pneumonia. Soleymanpour et al. (2011) used adaptive contrast equalization for enhancement and morphological operation-based region growing to find the lung contour for lung segmentation, followed by an oriented spatial Gabor filter (Gabor, 1946) for rib suppression. Candemir et al. (2013) used a graph cut optimization (Boykov and Funka-Lea, 2006) method to find the lung boundary. Van Ginneken et al. (2006) used three approaches, active shape models (Cootes et al., 1994), active appearance models (Cootes et al., 2001), and pixelwise classification, to segment the lung fields in CXRs. Li et al. (2001) used an edge detection-based approach, calculating vertical and horizontal derivatives to find the RoI in CXRs. Annangi et al. (2010) used edge detection with an active contour method-based approach for lung segmentation.
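To make the flavor of these classical pipelines concrete, the snippet below sketches a contrast-enhancement step with OpenCV. It is a minimal illustration in the spirit of the histogram-equalization methods above, not the pipeline of any cited paper; the file path and CLAHE parameters are placeholders.

```python
import cv2
import numpy as np

def enhance_cxr(path: str) -> np.ndarray:
    """Classical CXR enhancement sketch (illustrative only)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Median filtering suppresses salt-and-pepper style acquisition noise.
    img = cv2.medianBlur(img, 3)
    # CLAHE equalizes contrast in local tiles, akin in intent to the
    # region-based equalization cited above; cv2.equalizeHist would be
    # the global variant.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)
```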

2.1.2. Deep learning based approaches

Abdullah-Al-Wadud et al. (2007) proposed enhancing the CXR images that are input to a CNN model for pneumonia detection. Hasegawa et al. (1994) used a shift-invariant CNN-based approach for lung segmentation. Hwang and Park (2017) proposed a multi-stage training approach to perform segmentation using atrous convolutions. Hurt et al. (2020) used UNet-based (Ronneberger et al., 2015) semantic segmentation for extracting the lung field and performed pneumonia classification. Li B. et al. (2019) used the UNet model to segment the lung region, followed by an attention-based CNN for pneumonia classification. Kusakunniran et al. (2021) and Blain et al. (2021) used UNet for lung segmentation for COVID-19 detection. Oh et al. (2020) used an extended fully convolutional DenseNet (Jégou et al., 2017) to perform pixel-wise segmentation of the lung fields in CXRs and improve classification performance for COVID-19 detection. Subramanian et al. (2019) used a UNet-based model to segment central venous catheters (CVCs) in CXRs. Cao and Zhao (2021) used a UNet-based semantic segmentation model with variational auto-encoder features in the encoder and decoder and an attention mechanism to perform automatic lung segmentation. Singh et al. (2021) proposed an approach based on DeepLabV3+ (Chen et al., 2017b) with dilated convolutions for lung field segmentation. Figures 5A, B showcase examples of the preprocessing tasks.
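As a concrete reference point for the UNet-style segmenters cited above, the following is a minimal two-level encoder-decoder in PyTorch. Real lung-field models use four to five levels and are trained on annotated masks; all sizes and layer widths here are illustrative.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic UNet building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level UNet-style network for binary lung-field segmentation."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)   # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel lung/background logit

    def forward(self, x):
        s1 = self.enc1(x)                 # skip-connection features
        s2 = self.enc2(self.pool(s1))
        d1 = self.dec1(torch.cat([self.up(s2), s1], dim=1))
        return self.head(d1)              # train with BCEWithLogitsLoss

mask_logits = TinyUNet()(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)
```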

Figure 5.

Examples of outputs obtained for tasks such as pre-processing and classification: (A) the output of contrast enhancement, (B) the output of the segmentation task, and (C) the classification pipeline.

2.1.3. Patent review

Hong et al. (2009a) proposed a rule-based approach to segment the diaphragm from the CXR. Huo and Zhao (2014) proposed an edge detection-based approach to suppress the clavicle bone in CXRs. Chandalia and Gupta (2022) proposed a deep learning-based model to detect whether an input image is a CT or a CXR. Jiezhi et al. (2018) proposed a deep learning method to determine the quality of an input CXR image.

2.1.4. Discussion

The above literature shows that pre-deep learning approaches require well-defined heuristics to enhance the image or segment the lung region. The major focus has been on noise removal or contrast enhancement and on lung segmentation; limited attention has been given to segmenting diseased ROIs. The datasets commonly used for lung segmentation are Montgomery and Shenzhen (Jaeger et al., 2014); however, their sample counts are limited, and no dataset focuses on local findings.

2.2. Image classification

This section covers the literature on CXR classification in multiclass and multilabel settings. Input CXR images undergo feature extraction followed by classification, using algorithms that are either rule-based/handcrafted or deep learning-based.

2.2.1. Pre-deep learning based approaches

Katsuragawa et al. (1988) developed an automated approach based on the two-dimensional Fourier transform for detecting and characterizing interstitial lung disorder. The approach uses textural information to classify a given CXR as normal or abnormal. Ashizawa et al. (1999) used 16 radiological features from CXRs and ten clinical parameters to classify a given CXR into one of 11 interstitial lung diseases using an ANN; a statistically significant improvement was reported over the diagnostic results of radiologists.
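A minimal sketch of such a texture-plus-classifier pipeline, using uniform LBP histograms (one of the descriptors common in this literature) with an SVM; `X_train` and `y_train` are hypothetical placeholders for a labeled CXR set.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(img: np.ndarray, p: int = 8, r: float = 1.0) -> np.ndarray:
    """Texture descriptor: histogram of uniform LBP codes over the image."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    # "uniform" LBP yields p + 2 distinct code values.
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# X_train: list of 2-D grayscale CXR arrays; y_train: 0 = normal, 1 = abnormal.
# features = np.stack([lbp_histogram(img) for img in X_train])
# clf = SVC(kernel="rbf").fit(features, y_train)
```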

2.2.2. Deep learning based approaches

Thian et al. (2021) combined two large publicly available datasets, ChestXray14 (Wang et al., 2017) and MIMIC-CXR (Johnson et al., 2019), to train a deep learning model for the detection of pneumothorax and assessed its generalizability on six external validation CXR sets independent of the training set.

Homayounieh et al. (2021) proposed an approach to assess the ability of AI for nodule detection in CXRs. The study used an in-house dataset to train a deep model, pretrained on the ChestXray14 (Wang et al., 2017) and ImageNet datasets, for 14-class classification. Lenga et al. (2020) applied existing continual learning approaches to the medical domain for CXR-based analysis. Zech et al. (2018) assessed deep models for pneumonia using training data from different institutions. Figure 5C showcases the classification pipeline using CXRs for different lung diseases.
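Most of the deep classification work above follows the same transfer-learning recipe: take an ImageNet-pretrained backbone, replace the head, and train with a multi-label loss. A minimal PyTorch sketch (assuming torchvision >= 0.13 for the `weights` argument; the 14-label setting mirrors ChestXray14, and the batch is a random placeholder):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # e.g., the 14 ChestXray14 labels

# Start from ImageNet-pretrained weights, the common practice above.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

# Multi-label setting: one independent sigmoid per finding, so the loss is
# binary cross-entropy over logits rather than softmax cross-entropy.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)  # CXRs replicated to 3 channels
y = torch.randint(0, 2, (4, NUM_FINDINGS)).float()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```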

2.2.3. Patent review

Lyman et al. (2019) proposed a model to differentiate CXRs into normal or abnormal; the model is trained to find any abnormality, such as effusion or emphysema, to classify a given CXR as abnormal. Hong et al. (2009b) proposed a feature extraction method for detecting nodules in CXRs while reducing false positives. Hong and Shen (2008) proposed an approach for automatically segmenting the heart region for nodule detection. Guendel et al. (2020) proposed a deep multitask learning approach to classify a CXR for the different findings present in it; the approach also performs segmentation and disease localization simultaneously. Clarke et al. (2022) proposed a computer-assisted diagnostic (CAD) method using wavelet transform-based feature extraction for automatically detecting nodules in CXRs. Putha et al. (2022) proposed a deep learning-based method to predict the lung cancer risk associated with the characteristics (size, calcification, etc.) of nodules present in a CXR. Doi and Aoyama (2002) proposed a neural network-based approach to detect the presence of nodules and further classify them as benign or malignant. Lei et al. (2021) created a cloud-based platform for lung disease detection using CXRs. Ting et al. (2021) proposed a transfer learning approach for detecting lung inflammation from a given CXR. Kang et al. (2019) proposed a transfer learning-based approach for predicting lung disease in CXR images. Qiang et al. (2020) proposed a lung disease classification approach that extracts the lung mask, enhances the segmented image, and uses a CNN-based model for feature extraction and classification. Luojie and Jinhua (2018) proposed a deep learning-based classification of lung disease across 14 different findings. Kai et al. (2019) proposed a deep learning system to classify lung lesions in a given CXR. Harding et al. (2015) proposed an approach for lung segmentation and bone suppression in a given CXR to improve CAD results.

2.2.4. Discussion

Researchers have generally developed classification algorithms using supervised machine learning approaches. Both multilabel and multi-class classification tasks have been studied. Due to the availability of only small, imbalanced datasets, transfer learning is widely used in most research.

2.3. Image generation

This section covers the existing work on the image generation task. This is a newer area in which generative models are mostly used to support other tasks, with model performance verified on both synthetic and real CXRs for disease detection. Figure 6 showcases synthetically generated CXR samples.

Figure 6.

Showcasing synthetically generated chest X-ray images. For a given normal image (A), the approach proposed by Tang et al. (2019b) generates abnormal images together with predicted segmentation masks for the same input, resulting in the mask-image pairs shown in (B–G). Figure adapted from Tang et al. (2019b).

Tang et al. (2019b) proposed XLSor, a deep learning model for generating CXRs for data augmentation, together with a criss-cross attention-based segmentation approach. Eslami et al. (2020) proposed a multi-task GAN-based approach for image-to-image translation, generating bone-suppressed and segmented images using the JSRT dataset. Wang et al. (2018) proposed a hybrid CNN-based model for CXR classification and image reconstruction. Madani et al. (2018) used a GAN-based approach to generate and discriminate CXRs for classification tasks. Sundaram and Hulkund (2021) used a GAN-based approach for data augmentation and evaluated the classification model on synthetically generated and affine-transformed data from the CheXpert dataset.
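For orientation, the sketch below shows a minimal DCGAN-style generator-discriminator pair of the kind used for CXR augmentation. It is deliberately small (64 x 64 single-channel output) and is not the architecture of any cited work; all widths and the noise dimension are placeholders.

```python
import torch
import torch.nn as nn

G = nn.Sequential(  # noise vector (100-d) -> 64x64 synthetic CXR
    nn.ConvTranspose2d(100, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),    # 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),     # 16x16
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),     # 32x32
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                              # 64x64
)
D = nn.Sequential(  # image -> real/fake logit
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 32x32
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 16x16
    nn.Conv2d(64, 1, 16),                                 # 1x1 logit
)

z = torch.randn(8, 100, 1, 1)
fake = G(z)                  # synthetic CXRs in [-1, 1], shape (8, 1, 64, 64)
score = D(fake).flatten(1)   # discriminator logits for the BCE GAN loss
```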

2.3.1. Discussion

The current work in image generation for CXRs has focused on alleviating the data deficiency for training deep models. It is observed that synthetic data generated using GAN-based approaches improves model performance compared with standard data augmentation methods such as rotation and flipping.

2.4. Disease localization

Disease localization identifies the diseased ROIs within a CXR, allowing comparison between the predicted and the actual diseased area. Yu et al. (2020) proposed a multitasking-based approach to simultaneously segment peripherally inserted central catheter (PICC) lines and detect their tips in CXRs. Zhang et al. (2019) proposed SDSLung, a multitasking-based approach adapted from Mask RCNN (Girshick et al., 2014) for lung field detection and segmentation. Wessel et al. (2019) proposed a Mask RCNN-based approach for rib detection and segmentation in CXRs. Schultheiss et al. (2020) used a RetinaNet (Ren et al., 2015) based approach to detect nodules along with lung segmentation in CXRs. Kim et al. (2020) used Mask RCNN and RetinaNet to assess the effect of input size on nodule and mass detection in CXRs. Takemiya et al. (2019) proposed a CNN-based approach for nodule opacity classification and further used R-CNN to detect the nodules in CXRs. Kim et al. (2019) compared existing CNN-based object detection models for nodule and mass detection in CXRs. Cho et al. (2020) used the YOLO (Redmon and Farhadi, 2017) object detection model to detect different findings in CXRs.
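A common starting point for such localization experiments is an off-the-shelf detector from torchvision with its head swapped for the target classes. A minimal sketch (the two-class setup, lesion vs. background, and the 512 x 512 input are placeholders; assumes torchvision >= 0.13):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Replace the box-predictor head so it outputs one lesion class plus background.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes=2)

model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])  # one CXR tensor scaled to [0, 1]
# preds[0]["boxes"], ["labels"], ["scores"] give candidate lesion bounding boxes.
```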

2.4.1. Patent review

Putha et al. (2021) proposed a deep learning-based system for detecting and localizing infectious diseases in CXRs, alongside information from clinical samples of the same patient. Jinpeng et al. (2020) proposed a deep learning approach for automatic disease localization in CXRs based on weakly-supervised learning.

2.4.2. Discussion

The current work in CXR-based analysis has focused on detecting the lung region in a given CXR or marking the disease area with a bounding box. Most of the work has used object detection algorithms such as YOLO and R-CNN and its variants (Mask R-CNN, Faster R-CNN).

2.5. Report generation

This section covers the existing work on report generation for CXR image analysis, a recent area that combines two domains: natural language processing (NLP) and computer vision (CV).

Xue et al. (2018) proposed a multimodal approach consisting of an LSTM and a CNN with an attention mechanism for cohesive indent-based report generation. Li X. et al. (2019) proposed VisPi, a CNN- and LSTM-based approach with attention for generating reports in medical imaging; the algorithm performs classification and localization and then generates a detailed report. Syeda-Mahmood et al. (2020) proposed a novel approach to generate reports for fine-grained labels by fine-tuning a model learned on fine-grained and coarse labels.
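The architecture family above generally pairs a CNN image encoder with a recurrent text decoder. The skeleton below illustrates that coupling in PyTorch; it is a generic sketch (vocabulary size, hidden width, and the teacher-forcing setup are illustrative), not a reimplementation of VisPi or the other cited systems.

```python
import torch
import torch.nn as nn
from torchvision import models

class ReportDecoder(nn.Module):
    """CNN-encoder / LSTM-decoder skeleton for CXR report generation."""
    def __init__(self, vocab_size: int, hidden: int = 512):
        super().__init__()
        cnn = models.resnet18(weights="DEFAULT")       # torchvision >= 0.13
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # drop fc
        self.proj = nn.Linear(cnn.fc.in_features, hidden)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, tokens):
        feat = self.proj(self.encoder(images).flatten(1))  # (B, hidden)
        # The image feature acts as the first "word"; the ground-truth report
        # tokens follow (teacher forcing during training).
        seq = torch.cat([feat.unsqueeze(1), self.embed(tokens)], dim=1)
        hidden_states, _ = self.lstm(seq)
        return self.out(hidden_states)  # next-token logits per position

logits = ReportDecoder(vocab_size=5000)(torch.randn(2, 3, 224, 224),
                                        torch.randint(0, 5000, (2, 20)))
```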

2.5.1. Discussion

This recently explored area requires more attention. In CXR-based analysis, report generation allows multi-modal learning using CNNs and sequential models. However, the task is challenging, as a large text corpus is required alongside the CXR dataset, and only a few datasets are available for this task.

2.6. Model explainability

Jang et al. (2020) trained a CNN on three CXR datasets (Asan Medical Center-Seoul National University Bundang Hospital (AMC-SNUBH), NIH, and CheXpert) to assess the robustness of deep models to label noise. The authors added different levels of noise to the dataset labels and demonstrated that deep models are sensitive to label noise; for huge datasets, labeling is done using report parsing or NLP, which introduces a certain amount of noise into the CXR labels. Kaviani et al. (2022) and Li et al. (2021) reviewed different deep adversarial attacks and defenses in medical imaging. Li and Zhu (2020) proposed an unsupervised learning approach to detect different adversarial attacks on CXRs and assess the robustness of deep models. Gongye et al. (2020) studied the effect of different existing adversarial attacks on the performance of deep models for COVID-19 detection from CXRs. Hirano et al. (2021) studied the effect of universal adversarial perturbations (UAPs) on deep model-based pneumonia detection and reported performance degradation in CXR classification. Ma et al. (2021) studied the effect of altering the textural information present in CXRs, which can lead to misdiagnosis. Seyyed-Kalantari et al. (2021) studied the fairness gaps in existing deep models and datasets for CXR classification. Li et al. (2022) studied how gender bias affects the performance of different deep models on existing datasets. Rajpurkar et al. (2017) used class activation maps (CAMs) to interpret model decisions when detecting different findings in CXRs. Pasa et al. (2019) used a 5-layer CNN architecture for detecting TB in CXRs from two publicly available datasets, Shenzhen and Montgomery, and used Grad-CAM visualization for model interpretability.
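Since several of the works above rely on CAM/Grad-CAM visualizations, the snippet below sketches Grad-CAM (Selvaraju et al., 2017) over a pretrained DenseNet121 using forward/backward hooks. The choice of target layer and the random input are placeholders; assumes torchvision >= 0.13 and PyTorch >= 1.8 (for `register_full_backward_hook`).

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights="DEFAULT").eval()
acts, grads = {}, {}
layer = model.features[-1]  # feature maps of the last dense block
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

cxr = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed CXR
score = model(cxr)[0].max()         # logit of the top class
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # channel importance
cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=cxr.shape[-2:], mode="bilinear")
# `cam` can be overlaid on the CXR to show which regions drove the prediction.
```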

2.6.1. Discussion

Work done so far on model interpretability for CXR-based disease detection relies on post-hoc approaches such as saliency map or CAM analysis. Explainability is essential if AI-based decisions are to be relied upon: healthcare is a challenging domain in which a false positive or false negative puts human lives at risk. There is a need to incorporate built-in model explainability and to handle noisy or adversarial samples, thus improving the robustness of CXR-based systems. Further challenges arise from data imbalance and limited cross-institution generalizability, as models are usually trained on data from a single hospital; this can result in unfair decisions when models learn sensitive information from the data. Future work should open more pathways toward robust and fair CXR-based systems, which will further increase the chances of deploying such systems in places with poor healthcare infrastructure.

3. Disease detection based literature

In this section, we present the literature review of commonly addressed lung diseases. Several CXR datasets have been made publicly available, allowing the development of novel approaches for different disease-related tasks. Figure 7 showcases samples of CXRs affected by different lung diseases.

Figure 7.

Showcasing chest X-rays affected with different lung disorders. (A) Normal, (B) Pneumoconiosis, (C) TB, (D) Pneumonia, (E) Bronchitis, (F) COPD, (G) Fibrosis, and (H) COVID-19.

3.1. Tuberculosis

TB is caused by Mycobacterium tuberculosis and is one of the most common causes of mortality from lung disease worldwide. About 10 million people were affected by TB in 2019 (WHO, 2021), and in 2013 it took 1.5 million lives (WHO, 2013). TB is curable; however, the patient rush at hospitals delays the diagnostic process and treatment. CXRs are the common radiological modality used to diagnose TB, and computer-aided diagnosis (CAD)-based TB detection from CXR images can ease the detection process.

3.1.1. Pre-deep learning based approaches

Govindarajan and Swaminathan (2021) used a reaction-diffusion method for lung segmentation, followed by local feature descriptors such as Median Robust Extended Local Binary Patterns (Liu et al., 2016), local binary patterns (Liu et al., 2017), and Gradient Local Ternary Patterns (Ahmed and Hossain, 2013) with Extreme Learning Machine (ELM) and Online Sequential ELM (OSELM) (Liang et al., 2006) classifiers to detect TB in CXR images from the Montgomery dataset. Alfadhli et al. (2017) used speeded-up robust features (SURF) (Bay et al., 2008) for feature detection and performed classification using an SVM for TB diagnosis. Jaeger et al. (2014) collected different handcrafted features, such as histograms of oriented gradients (HOG) (Dalal and Triggs, 2005), intensity histograms, magnitude, shape, and curvature descriptors, and LBP (Ojala et al., 1996), as set A for detection. They further used edge- and color-based (fuzzy color and color layout) features as set B for image retrieval. Chandra et al. (2020) used two-level hierarchical features (shape and texture) with an SVM for TB classification. Santosh et al. (2016) used thoracic edge map encoding with PHOG (Opelt et al., 2006) for feature extraction, followed by multilayer perceptron (MLP)-based classification of CXRs into TB or normal.

3.1.2. Deep learning based approaches

Duong et al. (2021) created a dataset of 28,672 images by merging different publicly available datasets (Jaeger et al., 2014; Wang et al., 2017; Chowdhury et al., 2020; Cohen et al., 2020) for three-class classification: TB, pneumonia, and normal. The authors performed deep learning-based classification using an EfficientNet (Tan and Le, 2019) pretrained on the ImageNet (Deng et al., 2009) dataset and a pretrained Vision Transformer (ViT) (Dosovitskiy et al., 2020), and finally developed a hybrid of EfficientNet and Vision Transformer. In the proposed hybrid model, the CXR is input to the pretrained EfficientNet to generate features, which are then fed to the ViT to obtain the classification results. Ayaz et al. (2021) proposed a feature-ensemble-based approach for TB detection using the Shenzhen and Montgomery datasets; the authors used Gabor filter-based handcrafted features and seven different deep learning architectures to generate deep features. Dasanayaka and Dissanayake (2021) proposed a deep learning pipeline comprising data generation using DCGAN (Radford et al., 2015), lung segmentation using UNet (Ronneberger et al., 2015), and a transfer learning-based feature ensemble for classification, with genetic algorithm-based hyperparameter tuning. Msonda et al. (2020) used a deep model with spatial pyramid pooling and analyzed its effect on TB detection from CXRs, allowing robustness to the combination of features and thus improving performance. Sahlol et al. (2020) applied Artificial Ecosystem-based Optimization (AEO) (Zhao et al., 2020) as a feature selector on top of features extracted from MobileNet pre-trained on the ImageNet dataset; the authors used two publicly available datasets, Shenzhen and the Pediatric Pneumonia CXR dataset (Kermany et al., 2018b). Rahman et al. (2020b) used a deep learning approach for CXR segmentation and classification into TB or normal. For segmentation, the authors used two deep models, UNet and modified UNet (Azad et al., 2019); for classification, they used nine different deep models. They also used different existing visualization techniques, such as SmoothGrad (Smilkov et al., 2017), Grad-CAM (Selvaraju et al., 2017), Grad-CAM++ (Chattopadhay et al., 2018), and Score-CAM (Wang H. et al., 2020), for interpreting the deep models' classification decisions. Rajaraman and Antani (2020) created three different models for three different lung diseases. The first model was trained and tested on the RSNA pneumonia (Stein et al., 2018), pediatric pneumonia (Kermany et al., 2018b), and Indiana (McDonald et al., 2005) datasets for pneumonia detection. The second model was trained and tested for TB detection using the Shenzhen dataset. Finally, the first model was fine-tuned for TB detection to improve model adaptation to the new task, and majority-voting results were reported for TB classification. Rajpurkar et al. (2020) collected CXRs from HIV-infected patients from two hospitals in South Africa and developed CheXaid, a deep learning algorithm for TB detection to assist clinicians with web-based diagnosis. The proposed model consists of a DenseNet121 trained on the CheXpert (Irvin et al., 2019) dataset and outputs six findings (micronodular, nodularity, pleural effusion, cavitation, and ground-glass) along with the presence or absence of TB in a given CXR. Zhang et al. (2020) proposed an attention-based CNN model using the CBAM module, applying channel and spatial attention to focus on the manifestations present in TB CXRs. The authors used different deep models and analyzed the effect of the attention module on TB detection. Table 3 summarizes the above work for TB detection using CXRs.

Table 3.

Review of the literature for TB detection using CXRs based on different feature extraction methods.

References | Highlights | Pretraining | Dataset
Govindarajan and Swaminathan (2021) | Texture-based feature descriptors with ML classifier | No | Montgomery
Alfadhli et al. (2017) | Used SURF as feature extractor and SVM as classifier | No | Montgomery
Jaeger et al. (2014) | Used texture-based features (LBP, HOG) and statistical features with ML classifier | No | Shenzhen, Montgomery
Chandra et al. (2020) | Used shape and textural features with SVM | No | Shenzhen, Montgomery
Santosh et al. (2016) | Used PHOG features with MLP classifier | No | Shenzhen, Montgomery
Duong et al. (2021) | Used pretrained EfficientNet and ViT, and developed a hybrid of the two | Yes | Shenzhen, Montgomery, ChestXray14, COVID-CXR (Chowdhury et al., 2020)
Ayaz et al. (2021) | Used feature ensemble of handcrafted and deep features | Yes | Shenzhen, Montgomery
Dasanayaka and Dissanayake (2021) | Generated synthetic images, performed segmentation, and used feature ensemble for classification | Yes | Shenzhen, Montgomery
Msonda et al. (2020) | Used spatial pyramid pooling for deep feature extraction | Yes | Shenzhen, Montgomery, private
Sahlol et al. (2020) | Used meta-heuristic approach for deep feature selection | Yes | Shenzhen, Montgomery, PedPneumonia
Rahman et al. (2020b) | Performed segmentation and used different visualization techniques | Yes | Shenzhen, Montgomery, NIAID TB, RSNA
Rajaraman and Antani (2020) | Performed tri-level classification and studied task adaptation | Yes | RSNA pneumonia, PedPneumonia, Indiana, Shenzhen
Rajpurkar et al. (2020) | Developed a web-based system for TB-affected HIV patients | Yes | CheXpert, private
Zhang et al. (2020) | Used deep model with attention-based CNN (CBAM) module | Yes | Shenzhen, Montgomery
Rahman M. et al. (2021) | Merged publicly available CXR datasets with XGBoost as classifier | Yes | Shenzhen, Montgomery
Owais et al. (2020) | Used a feature ensemble combining low- and high-level features | Yes | Shenzhen, Montgomery
Das et al. (2021) | Modified a pre-trained InceptionV3 for TB classification | Yes | Shenzhen, Montgomery
Munadi et al. (2020) | Used enhancement techniques to improve deep classification | Yes | Shenzhen, Montgomery
Oloko-Oba and Viriri (2020) | Used deep learning-based pipeline for classification | Yes | Montgomery
Ul Abideen et al. (2020) | Proposed a Bayesian CNN to deal with uncertain TB and non-TB cases of low discernibility | Yes | Shenzhen, Montgomery
Hwang et al. (2016) | Proposed a modified AlexNet-based model for end-to-end training; also performed cross-database evaluations | Yes | Shenzhen, Montgomery
Gozes and Greenspan (2019) | Proposed MetaChexNet, trained on CXRs and metadata of gender, age, and patient positioning; later fine-tuned for TB classification | Yes | ChestXray14, Shenzhen, Montgomery

Pretraining (yes/no) refers to the use of the weights of a deep model trained on the ImageNet dataset. Private indicates that the data used is in-house and not publicly released.

3.1.3. Patent review

Kaijin (2019) proposed a deep learning-based approach for segmentation and pulmonary TB detection in CXR images. Venkata Hari (2022) proposed a deep learning model for detecting TB in chest X-ray images. Chang-soo (2021) proposed an automatic chest X-ray reader that reads data from the imaging device, segments the lung region, extracts gray-level co-occurrence matrix features, and finally discriminates the image as normal, abnormal, or TB. Minhwa et al. (2017) proposed a CAD-based system for diagnosing and predicting TB in CXRs using deep learning.

3.1.4. Discussion

In most handcrafted approaches, the texture of the CXR is used to define features, followed by an ML classifier. The above work highlights that the major focus for TB detection has been on two datasets, Shenzhen and Montgomery. However, the two datasets contain fewer than 1,000 samples even when combined, which results in poor generalization and necessitates a pretrained backbone network that is fine-tuned later. This is why models pretrained on the ImageNet dataset are widely used for TB classification from CXRs. Thus, there is a need for large-scale TB datasets with segmentation masks and disease annotations to achieve model generalizability and interpretability.

3.2. Pneumoconiosis

Pneumoconiosis is a broad term describing lung diseases among industry workers caused by overexposure to silica, coal, asbestos, and mixed dust. It is an irreversible and progressive occupational disorder prevalent worldwide and is becoming a major cause of death among workers. It is further categorized by the substance inhaled, such as silicosis (silica), brown lung (cotton and other fibers), pneumonoultramicroscopicsilicovolcanoconiosis (ash and dust), coal worker's pneumoconiosis (CWP) or black lung (coal dust), and popcorn lung (diacetyl). People exposed to these substances are at high risk of developing other lung diseases such as lung cancer, lung collapse, and TB.

3.2.1. Pre-deep learning based approaches

Okumura et al. (2011) proposed a rule-based model for detecting regions of interest (ROIs) with nodule patterns based on the Fourier transform, and an ANN-based approach for the ROIs not covered by the power spectrum analysis. The dataset is based on 11 normal and 12 abnormal cases of Pneumoconiosis, where the normal cases were selected from an image database of the Japanese Society of Radiological Technology and the abnormal cases were selected randomly from the digital image database. Ledley et al. (1975) demonstrated the significance of the textural information present in CXRs for detecting coal worker's pneumoconiosis (CWP). Hall et al. (1975) used the textural information present in CXRs and generated features based on spatial and histogram moments for six regions of a given segmented image; classification was performed using maximum likelihood estimation and linear discriminant analysis (LDA), and the authors further performed four-class profusion classification of CXRs from CWP workers. Yu et al. (2011) used active shape modeling to segment the lungs from the CXR; the segmented image is divided into six non-overlapping zones per the ILO guidance, and six separate SVM classifiers are built on histogram- and co-occurrence-based features generated from each zone. The authors also produced a chest-level classification by integrating the prediction results of the six regions. The experiments were carried out on a dataset of 850 PA CXRs, with 600 normal and 250 abnormal cases, collected from Shanghai Pulmonary Hospital, China. Murray et al. (2009) proposed an amplitude-modulation frequency-modulation (AM-FM) approach to extract features and used partial least squares for classification; AM-FM features were extracted at multiple scales, with one classifier per scale, and the results of the individual classifiers were later combined. The experiments were performed on CXRs collected from the Miners' Colfax Medical Center and the Grant's Uranium Miners, Raton, New Mexico, for CWP detection. Xu et al. (2010) collected a private dataset of 427 CXR images, consisting of 252 normal and 175 Pneumoconiosis images. The authors performed segmentation using an active shape model, divided the image into six sub-regions, extracted five co-occurrence-based features per subregion, trained a separate SVM for each subregion, and finally staged Pneumoconiosis using another SVM.
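To make the recurring six-zone recipe concrete, the sketch below splits a segmented lung image into six zones and computes gray-level co-occurrence (GLCM) texture features per zone. The 3 x 2 zoning, the feature choice, and the per-zone SVM comment are illustrative readings of the papers above, not any one paper's exact protocol (assumes scikit-image >= 0.19 for `graycomatrix`).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def zone_features(lung_img: np.ndarray, rows: int = 3, cols: int = 2):
    """lung_img: uint8 grayscale, zero outside the lung mask.
    Returns one GLCM feature vector per zone (shape (rows*cols, 3))."""
    h, w = lung_img.shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            zone = lung_img[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            glcm = graycomatrix(zone, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            feats.append([graycoprops(glcm, p)[0, 0]
                          for p in ("contrast", "homogeneity", "energy")])
    return np.array(feats)

# One SVM per zone, mirroring the per-region classifiers described above:
# zone_clfs = [SVC().fit(F[:, z, :], y) for z in range(6)]
# (F: per-image zone features stacked over the training set; y: labels)
```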

3.2.2. Deep learning based approaches

Yang et al. (2021) proposed a deep learning-based approach for Pneumoconiosis detection consisting of a two-stage pipeline: UNet (Ronneberger et al., 2015) for lung segmentation and a pre-trained ResNet34 for feature extraction on the segmented image. The dataset was collected in-house and includes 1,760 CXR images for two classes, normal and Pneumoconiosis. Zhang L. et al. (2021) proposed a deep model for screening and staging Pneumoconiosis by dividing the given CXR into six subregions; a CNN-based approach detects the level of opacity in each subregion, and finally a 4-class classification determines the normal, I, II, and III stages of Pneumoconiosis on a UNet-segmented image. The results were obtained on in-house data of 805 and 411 subjects for training and testing, respectively. Devnath et al. (2021) applied a deep transfer learning CheXNet (Rajpurkar et al., 2017) model on a private dataset; the approach feeds multilevel features extracted from CheXNet to different configurations of SVMs. Wang X. et al. (2020) collected a dataset of 1,881 samples, including 923 Pneumoconiosis and 958 normal images, and used InceptionV3, a deep learning architecture, to detect Pneumoconiosis in a given CXR and assess the potential of deep learning for Pneumoconiosis screening. Wang D. et al. (2020) generated synthetic data for both normal and Pneumoconiosis classes using CycleGAN (Zhu et al., 2017), followed by a CNN-based classifier; the authors proposed a cascaded framework of a pixel classifier for lung segmentation, a CycleGAN for generating training images, and a CNN-based classifier. Wang et al. (2021) collected an in-house set of 5,424 CXRs, including normal and Pneumoconiosis cases belonging to four different stages (0–3), from centers in Sydney and from Wesley Medical Imaging, Queensland, Australia. The authors used ResNet101 (He et al., 2016) to detect Pneumoconiosis on images segmented by a UNet model and showed improved results compared to radiologists. Hao et al. (2021) collected data consisting of 706 images from the Chongqing CDC, China, with 142 images positive for Pneumoconiosis, and trained two deep architectures, ResNet34 and DenseNet53 (Huang et al., 2017), to classify CXRs as normal or Pneumoconiosis. Table 4 summarizes the above work based on the method of feature extraction.

Table 4.

Review of the literature for Pneumoconiosis detection using CXRs.

References | Highlights | Pretraining | Dataset
Okumura et al. (2011) | Used Fourier transform to characterize the nodule pattern, with neural nets for detection | No | JSRT, private
Hall et al. (1975) | Used textural features for six regions to determine profusion level | No | Private
Yu et al. (2011) | Used active shape model to segment the lung, divided each lung into six regions; features generated from each region are used to train SVMs | No | Private
Xu et al. (2010) | Used textural features generated from six lung regions with SVM for classification and staging | No | Private
Yang et al. (2021) | Two-stage pipeline with segmentation followed by classification | Yes | Private
Zhang L. et al. (2021) | Used deep learning for screening and staging based on six lung regions | Yes | Private
Devnath et al. (2021) | Used feature ensemble of multi-level deep features generated from a model pretrained on CXR data | Yes | ChestXray14, private
Wang X. et al. (2020) | Used InceptionNet for end-to-end classification | No | Private
Wang D. et al. (2020) | Generated synthetic CXR samples and trained a CNN with real and synthetic data | Yes | ChestXray14, private
Wang et al. (2021) | Performed Pneumoconiosis staging on segmented CXR images | Yes | Private
Hao et al. (2021) | Used two deep models of different depths for feature generation, followed by classification | Yes | Private

Pretraining (yes/no) refers to the use of the weights of a deep model trained on the ImageNet dataset. Private indicates that the data used is in-house and not publicly released.

3.2.3. Patent review

Sahadevan (2002) proposed an approach using high-resolution digital CXR images to detect early-stage lung cancer, Pneumoconiosis, and pulmonary diseases. Wanli et al. (2021) proposed a deep learning-based approach for Pneumoconiosis detection using lung CXR images.

3.2.4. Discussion

The above-cited work makes clear that no publicly available dataset exists for Pneumoconiosis; the current work is carried out on in-house datasets with few samples. This draws attention to the fact that automatic detection of Pneumoconiosis from CXRs requires publicly available datasets for developing robust, generalizable, and efficient algorithms.

3.3. Pneumonia

Pneumonia is a viral or bacterial infection of the lungs that affects humans of all ages, including children. CXRs are widely used to examine the manifestations caused by pneumonia infection.

Sousa et al. (2014) compared different machine learning models for classifying pediatric CXRs as normal or pneumonia. Zhao et al. (2019) merged four different CXR datasets for pneumonia classification, performed lung and thoracic cavity segmentation using DeepLabv2 (Chen et al., 2017a), and applied ResNet50 for pneumonia classification on top of the segmented images. Tang et al. (2019a) used CycleGAN to generate synthetic data and proposed TUNA-Net to adapt adult pneumonia classification to pediatric pneumonia classification from CXRs. Narayanan et al. (2020) used UNet for lung segmentation followed by a two-level classification: level 1 classifies a given CXR as pneumonia or normal, and level 2 further classifies a pneumonia CXR as bacterial or viral. Rajaraman et al. (2019) highlighted different visualization techniques for interpreting CNN-based pneumonia detection using CXRs. Ferreira et al. (2020) used VGG16 to classify pediatric CXRs as normal or pneumonia and to further classify pneumonia as bacterial or viral. Zhang J. et al. (2021) proposed an EfficientNet-based confidence-aware anomaly detection model to differentiate viral pneumonia, as a one-class classification problem, from non-viral and normal classes. Elshennawy and Ibrahim (2020), Longjiang et al. (2020), and Yue et al. (2020) used different deep learning models with a transfer learning approach to classify CXRs for pneumonia. Mittal et al. (2020) used an ensemble of a CNN and CapsuleNet (Sabour et al., 2017) for detecting pneumonia from CXR images using a publicly available pediatric dataset (Stein et al., 2018). Rajpurkar et al. (2017) proposed a pre-trained DenseNet121 model for classifying the 14 findings present in the Chestxray14 dataset and further performed binary classification to detect pneumonia. Table 5 summarizes the above work based on the feature extraction methods.

Table 5.

Summarizes the literature for Pneumonia detection using CXR.

References | Highlights | Pretraining | Dataset
Sousa et al. (2014) | Compared different ML classifiers for pediatric pneumonia | No | PedPneumonia
Zhao et al. (2019) | Used multiple datasets and performed semantic lung segmentation | No | PedPneumonia, RSNA-Pneumonia, private
Tang et al. (2019a) | Generated synthetic data and trained a model for adult pneumonia, later adapted for pediatric pneumonia | No | RSNA, PedPneumonia
Narayanan et al. (2020) | Lung segmentation followed by two-level classification | Yes | PedPneumonia
Rajaraman et al. (2019) | Comparison of different visualization techniques for deep model explanation | Yes | PedPneumonia
Ferreira et al. (2020) | Multistage CXR classification: healthy or pneumonia, then viral or bacterial pneumonia | Yes | PedPneumonia
Zhang J. et al. (2021) | EfficientNet-based confidence-aware anomaly detection model | No | PedPneumonia
Mittal et al. (2020) | Used an ensemble of a deep model (CNN) and CapsuleNet | Yes | PedPneumonia
Rajpurkar et al. (2017) | Performed multilabel classification with CAM analysis | Yes | ChestXray14

Pretraining (yes/no) refers to the use of the weights of a deep model trained on the ImageNet dataset. Private indicates that the data used is in-house and not publicly released.

3.3.1. Patent review

Shaoliang et al. (2020) proposed a system for pneumonia detection from CXR using deep learning based on transfer learning.

3.3.2. Discussion

Most of the work centers on the Stein et al. (2018) dataset in multi-class settings. However, beyond the dataset challenge, other challenges need to be addressed, including lung segmentation and model interpretability. Transfer learning is widely used to improve generalization for pneumonia detection on CXRs. Pneumonia is a common manifestation of many lung disorders and thus also needs to be detected in multilabel settings.

3.4. COVID-19

COVID-19 is caused by the SARS-CoV-2 coronavirus, prevalent worldwide and responsible for the ongoing pandemic and for the deaths of more than 6 million people. RT-PCR is the available test to detect the presence of COVID-19; however, CXR offers a rapid method for diagnosis by detecting pneumonia-like manifestations in the lungs.

3.4.1. Pre-deep learning based approaches

Rajagopal (2021) used both transfer learning (a pre-trained VGG16) and ML classifiers (SVM, XGBoost) trained on deep features for three-class COVID-19 detection. Jin et al. (2021) used a pre-trained AlexNet to generate features from CXR images, followed by feature selection and classification using an SVM.
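Both works follow the same deep-features-plus-classical-classifier pattern; below is a minimal sketch with a frozen AlexNet backbone and an SVM (the labels and random batch are placeholders, and the design is illustrative rather than either paper's exact pipeline).

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Frozen pretrained CNN as a feature extractor (torchvision >= 0.13).
backbone = models.alexnet(weights="DEFAULT")
backbone.classifier = nn.Identity()  # keep the 9216-d pooled conv features
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(8, 3, 224, 224)).numpy()  # (8, 9216)

# Placeholder 3-class labels (e.g., normal / viral pneumonia / COVID-19).
labels = [0, 1, 0, 2, 1, 0, 2, 1]
clf = SVC(kernel="linear").fit(feats, labels)
```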

3.4.2. Deep learning based approaches

Chowdhury et al. (2020) proposed a dataset by merging publicly available datasets (Wang et al., 2017; Mooney, 2018; Cohen et al., 2020; ISMIR, 2020; Rahman et al., 2020a; Wang L. et al., 2020) for COVID-19 and used eight pretrained CNN models [MobileNetv2, SqueezeNet, ResNet18, ResNet101, DenseNet201, Inceptionv3, CheXNet (Rajpurkar et al., 2017), and VGG19] for three-class classification: normal, viral pneumonia, and COVID-19 pneumonia. Khan et al. (2020) proposed CoroNet, a transfer learning-based approach built on XceptionNet and trained end-to-end to classify CXR images into normal, bacterial pneumonia, viral pneumonia, and COVID-19 using publicly available datasets. Islam et al. (2020) proposed a CNN-LSTM-based architecture for detecting COVID-19 from CXRs on a dataset of 4,575 images. Pham (2021) compared the fine-tuning approach with recently developed deep architectures for 2-class and 3-class COVID-19 detection in CXRs on three publicly available datasets. Al-Rakhami et al. (2021) extracted deep features from pre-trained models and performed classification using an RNN. Duran-Lopez et al. (2020) proposed COVID-XNet, a CNN for binary classification to detect COVID-19 from CXR images. Gupta et al. (2021) proposed InstaCovNet-19, which stacks different fine-tuned deep models of varying depth to increase model robustness for COVID-19 classification on CXRs. Punn and Agarwal (2021), Wang N. et al. (2020), Khasawneh et al. (2021), Jain et al. (2021), El Gannour et al. (2020), Panwar et al. (2020b), and Panwar et al. (2020a) used transfer learning-based approaches for differentiating COVID-19 from viral pneumonia and normal CXRs. Abbas (2021) proposed DeTraC, a CNN-based class decomposition approach that decomposes classes into subclasses, assigns new labels independent of each other within the datasets via a class decomposition layer, and later merges these subsets back to generate the final predictions; the authors evaluated COVID-19 classification from CXR images on publicly available datasets. Gour and Jain (2020) proposed a stacked CNN-based approach using five different submodules from two deep models, a fine-tuned VGG16 and a 30-layer CNN, whose outputs are combined by logistic regression for three-class COVID-19 classification using CXRs. Malhotra et al. (2022) proposed COMiT-Net, a deep learning-based multitasking approach for COVID-19 detection from CXRs that simultaneously performs semantic lung segmentation and disease localization to improve model interpretability. Pereira et al. (2020) combined handcrafted and deep learning-based features and performed two-level classification for COVID-19 detection. Rahman T. et al. (2021) compared the effect of different enhancement techniques and lung segmentation on transfer learning-based classification of CXRs as COVID-19, normal, or non-COVID. Li et al. (2020) developed COVID-MobileXpert, a knowledge distillation-based approach consisting of three models: a large teacher model trained on a large CXR dataset and two student models, one fine-tuned on a COVID-19 dataset to discriminate COVID-19 pneumonia from normal CXRs, and a small lightweight model to perform on-device screening of CXR snapshots. Ucar and Korkmaz (2020) proposed Bayes-SqueezeNet, based on a pretrained SqueezeNet and Bayesian optimization, for COVID-19 detection in CXRs. Shi et al. (2021) proposed a knowledge distillation-based attention method with transfer learning for COVID-19 detection from CT scans and CXRs. Saha et al. (2021) proposed EMCNet, which extracts deep features from CXRs and trains different machine learning classifiers on them. Mahmud et al. (2020) proposed CovXNet, which trains a deep model (Stacked CovXNet) on CXR data at different resolutions and later fine-tunes it on COVID-19 and non-COVID-19 CXR data as the target task. Table 6 summarizes the above work for COVID-19 detection using CXRs.

Table 6.

Review of the literature for COVID-19 detection using CXRs.

References Highlights Pretraining Dataset
Rajagopal (2021) Combined deep learning and ML classifier Yes PedPneumonia, COVID-CXR, https://github.com/agchung
Jin et al. (2021) Used deep feature followed by feature selection with SVM Yes PedPneumonia, COVID-CXR
Chowdhury et al. (2020) Used deep ensemble feature generation Yes Multiple datasets with different disorders
Khan et al. (2020) XceptionNet based end-to-end training Yes PedPneumonia, COVID-CXR, COVIDDGR
Islam et al. (2020) Used a combination of LSTM-CNN-based architecture Yes Combination of publicly available data
Pham (2021) Used a multi-level classification approach for two and three disease classes Yes COVID-CXR, PedPneumonia, COVID-19 (Kaggle), ActualMed (GitHub)
Al-Rakhami et al. (2021) Approach combines CNNs with sequential deep model Yes Data collected from various available sources
Duran-Lopez et al. (2020) Proposed COVID-XNet, a custom deep learning model for binary classification Yes BIMCV, COVID-CXR
Gupta et al. (2021) Proposed InstaCovNet-19, with ensemble generated from deep features Yes Chowdhury et al. (2020), COVID-CXR
Abbas (2021) Class decomposition into sub-classes with pre-trained models Yes JSRT, COVID-CXR
Gour and Jain (2020) Submodule stacking from pretrained and customized deep models Yes COVID-CXR, ActualMed, PedPneumonia
Malhotra et al. (2022) Multi-task approach with segmentation, disease classification and localization Yes CheXpert, Chestxray14, BIMCV-COVID19, Various online sources
Pereira et al. (2020) Feature ensemble of handcrafted and deep features Yes COVID-CXR, Chestxray14, Radiopedia Encyclopedia
Rahman T. et al. (2021) Employed and compared different enhancement technique for performance improvement Yes PedPneumonia, BIMCV+COVID19
Li et al. (2020) On-device detection approach for CXR snapshots Yes PedPneumonia, COVID-CXR
Ucar and Korkmaz (2020) Used Bayesian optimization with deep models for differentiating Pneumonia Yes PedPneumonia, COVID-CXR
Shi et al. (2021) Knowledge transfer in the form of attention from teacher to student network No COVID-CXR, SIRM
Saha et al. (2021) Used deep features with different ML classifiers Yes COVID-CXR, SIRM, PedPneumonia, Chestxray14
Mahmud et al. (2020) Used feature stacking generated from different resolutions Yes PedPneumonia, private

Pretraining (yes/no) refers to the use of deep model weights trained on the ImageNet dataset. Private indicates that the data used is in-house and has not been released publicly.

3.4.3. Patent review

Shankar et al. (2022) proposed a deep learning- and SVM-based approach for classifying chest X-rays as COVID-19-affected or normal.

3.4.4. Discussion

The research is very recent, and the published papers are based on datasets that either contain few samples or are formed by combining more than one dataset. The CXR data released post-pandemic has been collected from multiple centers across the globe. Further, only a few works have incorporated inherent model interpretability. To the best of our knowledge, no work has been established for segmentation, report generation, or disease localization, and the primary focus is on the classification task.

4. Datasets

Several chest X-ray datasets have been released over the past two decades. These datasets are made available in DICOM, PNG, or JPEG format (a brief DICOM-handling sketch follows the dataset list below). Labeling is done either with the help of domain experts or with label-extraction methods that apply natural language processing techniques to the reports associated with each image. Moreover, a few datasets also include local labels, i.e., disease annotations for a given sample. Some authors have also included lung-field masks as ground truth for segmentation and associated tasks. In this section, we describe the publicly available datasets used in the literature. The statistics are summarized in Table 7. Figure 8 illustrates samples from the different CXR datasets mentioned below.

Table 7.

Illustrates the available CXR datasets in the literature.

Name Number of Images (I)/Patients (P) View position Global labels Local labels Image format Labeling method
JSRT (Shiraishi et al., 2000) I: 247 PA: 247 3 N/A DICOM Radiologist
Open-i (O) (Demner-Fushman et al., 2012) I: 7910 PA: 3955, LL: 3955 N/A N/A DICOM Radiologist
NLST (Team, 2011) I: 5493 No public information is available. The dataset was reported by Lu et al. (2019)
Shenzhen (Jaeger et al., 2014) I: 662 PA: 662 2 N/A PNG Radiologist
Montgomery (Jaeger et al., 2014) I: 138 PA: 138 2 N/A PNG Radiologist
Indiana (Demner-Fushman et al., 2016) I: 7466 PA: 3807, LL: 3659 N/A N/A N/A Radiology reports
Chestxray8 (Wang et al., 2017) I: 108K+, P: 32,717 N/A N/A 8 PNG Report parsing
Chestxray14 (Wang et al., 2017) I: 112K, P: 31K PA: 67K, AP: 45K No 14 PNG Report parsing
RSNA-Pneumonia (Stein et al., 2018) I: 30K PA: 16K, AP: 14K 1 N/A DICOM Radiologist
Ped-Pneumonia (Kermany et al., 2018a) I: 5856 N/A 2 N/A JPEG Radiologist
CheXpert (Irvin et al., 2019) P: 65K, I: 224K PA: 29K, AP: 16K, LL: 32K N/A 14 JPEG Report parsing; cohort of radiologists
CXR14-Rad-Labels (This, 2020) P: 1709, I: 4374 AP: 3244, PA: 1132 4 N/A PNG Radiologist
MIMIC-CXR (Johnson et al., 2019) P: 65K, I: 372K PA+AP: 250K, LL: 122K N/A 14 JPEG(V1) DICOM(V2) Report Parsing
SIIM-ACR (Anuar, 2019) I: 16K, P: 16K PA: 11K, AP: 4799 1 N/A DICOM Radiologist
Padchest (Bustos et al., 2020) P: 67K, I: 160K PA: 96K, AP: 20K, LL: 51K N/A 193 DICOM Report parsing; radiologist interpretation of reports
BIMCV (Vayá et al., 2020) P: 9129, I: 25,554 PA: 8,748, AP: 10,469, LL: 6,337 1 N/A PNG Laboratory Reports
CAAXR (Mittal et al., 2022) P: 1749, I: 1749 Not mentioned 1 N/A PNG Cohort of radiologists
COVIDDSL (Hospitales, 2020) P: 1,725 Mostly AP 1 N/A DICOM Laboratory Reports
COVIDGR (Tabik et al., 2020) I: 852 PA: 852 2 N/A JPEG Radiologist
COVID-CXR (Cohen et al., 2020) I: 866, P: 449 PA: 344, AP: 438, LL: 84 4 N/A JPEG Radiologist
VinDr-CXR (Nguyen et al., 2020) I: 18K PA: 18K 6 22 DICOM Radiologist
Brax (Reis, 2022) P: 19,351, I: 40,967 Numbers are not mentioned N/A 14 DICOM + PNG Report parsing
Belarus (Rosenthal et al., 2017) I: 306, P:169 No other information is available

The table presents the description of each dataset with the number of images, patients, the available image format, view position, and labeling (annotation) method. A global label refers to the single label assigned to an image in multiclass settings, while local labels refer to multiple labels assigned to a single image for the different findings present in multilabel settings. PA stands for posterior to anterior, AP for anterior to posterior, and LL for lateral view. K refers to 1,000. N/A stands for not available.

Figure 8.

Shows sample CXRs from different datasets. The samples belong to Shenzhen (A, B), Montgomery (C, D), JSRT (E, F), Chestxray14 (G, H), VinDr-CXR (I, J), CheXpert (K, L), RSNA Pneumonia (M, N), Covid-CXR (O, P), PedPneumonia (Q, R), and MIMIC-CXR (S, T). The samples across the datasets vary widely in quality, contrast, brightness, and original image size.

  • JSRT: Shiraishi et al. (2000) introduced the dataset in the year 2000, consisting of 247 images across two classes: malignant and benign. The resolution of each image is 2048 × 2048. The dataset can be downloaded from http://db.jsrt.or.jp/eng.php.

  • Open-i (O): Demner-Fushman et al. (2012) proposed the chest X-ray dataset consisting of 3955 samples from 3955 subjects. Images are available in DICOM format, and the findings are available in the form of reports provided by the radiologists. The dataset is collected from the Indiana Network for Patient Care (McDonald et al., 2005) and can be downloaded from https://openi.nlm.nih.gov/.

  • NLST: The dataset was collected from the NLST screening trials (Team, 2011). It covers 26,732 subjects with CXRs, and a subset of the dataset is available on request from https://biometry.nci.nih.gov/cdas/learn/nlst/images/.

  • Shenzhen: Jaeger et al. (2014) introduced the dataset in the year 2014, consisting of 662 CXRs belonging to two classes: Normal and TB. The dataset was collected from Shenzhen No. 3 Hospital in Shenzhen, Guangdong province, China, in September 2012. The samples are shared publicly at original full resolution and include lung segmentation masks. The dataset can be downloaded from https://openi.nlm.nih.gov/imgs/collections/ChinaSet_AllFiles.zip.

  • Montgomery: Jaeger et al. (2014) introduced the dataset in the year 2014, consisting of 138 CXRs belonging to two classes: Normal and TB. The dataset is collected from the tuberculosis control program of the Department of Health and Human Services of Montgomery County, MD, USA. It also includes lung segmentation masks, which are shared as original full-resolution images. The dataset can be downloaded from https://openi.nlm.nih.gov/imgs/collections/NLM-MontgomeryCXRSet.zip.

  • KIT: Ryoo and Kim (2014) proposed the dataset in the year 2014. It consists of 10,848 DICOM CXRs, 7020 normal and 3828 TB. The dataset is collected from the Korean Institute of TB.

  • Indiana: Demner-Fushman et al. (2016) introduced the dataset in the year 2015. It was collected from the Indiana University hospital network and includes 3996 radiology reports and 8121 associated images. The dataset can be downloaded from https://openi.nlm.nih.gov/.

  • Chestxray8: Wang et al. (2017) released the dataset in the year 2017. It includes 108,948 frontal-view CXRs of 32,717 unique patients with eight different findings. The dataset is labeled by parsing the reports (NLP) associated with each sample. The dataset can be downloaded from https://nihcc.app.box.com/v/ChestXray-NIHCC.

  • Chestxray14: Wang et al. (2017) published the dataset in 2017, consisting of 112,120 CXR samples from 30,805 subjects. The dataset consists of 1,024 × 1,024 resolution images collected from the National Institutes of Health (NIH), US. It contains labels for 14 findings, automatically generated from the reports using NLP. The dataset is publicly available and can be downloaded from https://www.kaggle.com/nih-chest-xrays/data.

  • RSNA-Pneumonia: This dataset was generated from samples of the ChestXray14 dataset for pneumonia detection. It contains a total of 30,000 CXRs at 1,024 × 1,024 resolution. The annotations mark lung opacities, resulting in three classes: normal, lung opacity, and not normal (Stein et al., 2018). The dataset can be downloaded from https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data.

  • Ped-Pneumonia: Kermany et al. (2018a) published the dataset in 2018, consisting of 5856 pediatric CXRs collected from Guangzhou Women and Children's Medical Center, Guangzhou, China. Samples are labeled as viral pneumonia, bacterial pneumonia, or normal. The dataset can be downloaded from https://data.mendeley.com/datasets/rscbjbr9sj/2.

  • CheXpert: Irvin et al. (2019) published one of the largest chest X-ray datasets, consisting of 224,316 images from a total of 65,240 subjects, in the year 2019. It took the authors almost 15 years to collect the data from Stanford Hospital, US. The dataset contains labels for the presence, absence, uncertainty, and no mention of 12 abnormalities, plus no finding and the existence of support devices. All these labels are generated automatically from the radiology reports using a rule-based labeler (NLP). The dataset can be downloaded from https://stanfordmlgroup.github.io/competitions/chexpert/.

  • CXR14-Rad-Labels: This (2020) introduced the dataset as the subset of the ChestXray14, consisting of 4 labels for 4,374 studies and 1,709 subjects. The annotations are provided by the cohort of radiologists and are made available along with agreement labels.

  • MIMIC-CXR: Johnson et al. (2019) published the dataset in the year 2019, with 371,920 CXRs collected from 64,588 subjects admitted to the emergency department of the Beth Israel Deaconess Medical Center. It took the authors almost 5 years to collect the data, which is made available in two versions: V1 contains 8-bit grayscale images at full resolution, and V2 contains DICOM images with anonymized radiology reports. The labels are automatically generated by report parsing. The dataset can be downloaded from https://physionet.org/content/mimic-cxr/.

  • SIIM-ACR: Anuar (2019) is a Kaggle challenge dataset for pneumothorax detection and segmentation. Some researchers believe that the samples are taken from the ChestXray14 dataset; however, no official confirmation has been made. CXRs are available as DICOM images at 1,024 × 1,024 resolution.

  • Padchest: Bustos et al. (2020) published the dataset in the year 2020, consisting of 160,868 CXRs from 109,931 studies and 67,000 subjects. It took the authors almost 8 years to collect the data from the San Juan Hospital, Spain. A set of 27,593 images was labeled by domain experts, and for the rest of the data an RNN was trained to generate the labels from the reports.

  • BIMCV: Vayá et al. (2020) introduced the dataset for COVID-19 in the year 2020. It includes CXRs, CT scans, and laboratory test results collected from the Valencian Region Medical ImageBank (BIMCV), and consists of 3,293 CXRs from 1,305 COVID-19-positive subjects.

  • COVID abnormality annotation for X-Rays (CAAXR): Mittal et al. (2022) proposed the dataset by adding radiologist annotations to the existing BIMCV-COVID-19+ dataset. It contains annotations for different findings such as atelectasis, consolidation, pleural effusion, edema, and others. CAAXR contains a total of 1,749 images with 3,943 annotations. The dataset can be downloaded from https://osf.io/b35xu/ and http://covbase4all.igib.res.in/.

  • COVIDDSL: The dataset was released in 2020 for COVID-19 detection (Hospitales, 2020). It was collected from the HM Hospitales group in Spain and includes CXRs from 1,725 subjects along with detailed laboratory results, vital signs, etc.

  • COVIDGR: Tabik et al. (2020) released the dataset, collected from Hospital Universitario Clínico San Cecilio, Granada, Spain. It consists of 852 PA CXRs, with labels for positive and negative COVID-19. The dataset also includes the severity of COVID-19 in positive cases.

  • COVID-CXR: Cohen et al. (2020) released the dataset for COVID-19 with a total of 930 CXRs. The samples come from a wide variety of places and were gathered by different methods, including screenshots from research papers. Each sample carries the label reported by its original source and is available in PNG or JPEG format. The dataset can be downloaded from https://github.com/ieee8023/covid-chestxray-dataset.

  • VinDr-CXR: Nguyen et al. (2020) proposed the dataset collected from the two major hospitals of Vietnam from 2018 to 2020. The dataset includes 18,000 CXRs, 15,000 samples for training and 3,000 for testing. The annotations are made manually by 17 expert radiologists for 22 local labels and six global labels. The samples for the training set are labeled by three radiologists, while the testing set is labeled independently by five radiologists. Images in the dataset are available in DICOM format and can be downloaded from https://vindr.ai/datasets/cxr after signing the license agreement.

  • Brax: Reis (2022) introduced the dataset which includes 40,967 CXRs, 24,959 imaging studies for 19,351 subjects, collected from the Hospital Israelita Albert Einstein, Brazil. The dataset is labeled for 14 radiological findings using report parsing (NLP). Dataset is made available in both DICOM and PNG format. The dataset can be downloaded from https://physionet.org/content/brax/1.0.0/.

  • Belarus: This dataset is used in many papers and consists of 300 CXR images. However, a download link is not available, and further details about the dataset are missing.
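
Since many of the datasets above are distributed as DICOM files while most training pipelines expect 8-bit PNG or JPEG, a common first step is format conversion. The sketch below is a minimal, illustrative example using pydicom and Pillow; the file names are hypothetical, and real pipelines may additionally apply intensity windowing or invert MONOCHROME1 images.

```python
# Minimal sketch: read a DICOM CXR and save an 8-bit PNG.
# File names are hypothetical; not tied to any particular dataset.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("sample_cxr.dcm")          # load the DICOM object
pixels = ds.pixel_array.astype(np.float32)      # raw pixel matrix

# Min-max normalize to [0, 1], then quantize to 8 bits.
pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
Image.fromarray((pixels * 255).astype(np.uint8)).save("sample_cxr.png")
```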

4.1. Discussion

Generating large datasets in the medical domain is always challenging due to data privacy concerns and the need for expert annotators. While several existing datasets have enabled different research threads in CXR-based image analysis for disorders such as TB and pneumonia, the number of annotated samples in these datasets is small for modern deep learning-based algorithm development. Further, local ground-truth labeling plays an important role in disease classification and detection, and improves explainability. Existing datasets, in general, lack variability in terms of sensors and demographics, and for many thoracic disorders, such as Pneumoconiosis, COPD, and lung cancer, there is a lack of publicly available datasets. On the other hand, datasets for the recent COVID-19 pandemic were collected from different hospitals across the globe with few samples and limited labels. Only a few datasets have associated local labels, for instance, Chestxray14 and CheXpert. These labels are generated using report parsing, which results in high label noise: findings that are absent from, or phrased differently in, the radiology reports on which the report parser (NLP algorithm) was designed are easily missed, as illustrated in the sketch below. This draws attention to the need for careful handling of the labeling process when releasing datasets, to avoid errors during deep model training.
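
As a toy illustration of this label-noise problem (far simpler than production rule-based labelers such as the one used for CheXpert), the keyword matcher below assigns a finding whenever its pattern occurs in a report; because it ignores negations and phrasing variations, it produces exactly the kind of erroneous labels discussed above. The finding patterns are hypothetical.

```python
# Toy report-parsing labeler illustrating why NLP-derived labels are
# noisy; the patterns are hypothetical and deliberately simplistic.
import re

FINDINGS = {
    "pneumonia": r"\bpneumonia\b",
    "effusion": r"\b(pleural\s+)?effusion\b",
    "nodule": r"\bnodules?\b",
}

def extract_labels(report):
    """Return every finding whose pattern appears in the report.

    Negations such as "no effusion" are NOT handled here, which is one
    of the main sources of label noise in simplistic parsers.
    """
    report = report.lower()
    return [name for name, pat in FINDINGS.items() if re.search(pat, report)]

print(extract_labels("No pleural effusion. Right basal pneumonia."))
# -> ['pneumonia', 'effusion']; the negated effusion becomes a false label.
```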

5. Evaluation metrics

This section covers different metrics used to evaluate the proposed approach in the existing literature. Table 8 summarizes the various metrics that are used to evaluate different tasks in CXR-based image analysis.

Table 8.

Summarizes the metrics used for assessing the performance of different tasks performed by an ML/DL model.

Image enhancement PSNR SSIM MSE MAXERR L2rat
Segmentation Intersection over Union (IOU) Dice Coefficient Pixel accuracy
Classification Sensitivity Specificity Accuracy Precision F1-score AUC-ROC Curve
Fairness Demographic Parity Equalized odds Degree of bias Disparate impact Predictive Rate Parity Equal opportunity Treatment Equality Individual Fairness Counterfactual fairness
Image captioning BLEU METEOR ROUGE-L CIDEr SPICE

5.1. Image enhancement task

To assess the quality of images produced by different enhancement techniques, the difference between the original and enhanced image is calculated using the following metrics (a short computational sketch follows the list).

  • Peak signal to noise ratio (PSNR): It is a quality assessment metric, expressed as the ratio of the maximum possible power of the original signal to the power of the corrupting noise.

  • Structural Similarity Index (SSIM): It is a quality measure used to compare the similarity between two images.

  • Mean squared error (MSE): It is a quality assessment measure, defined as the average of the squared differences between the enhanced and original images.

  • MAXERR: It is the maximum absolute squared deviation between the enhanced image and the original image of the same size (Huynh-Thu and Ghanbari, 2008).

  • L2rat: It is defined as the ratio of the squared norm of the enhanced image to that of the original image (Huynh-Thu and Ghanbari, 2008).
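
For reference, the sketch below computes the five metrics above for a pair of grayscale images using scikit-image and NumPy; the images are synthetic and purely illustrative.

```python
# Illustrative computation of the enhancement metrics on synthetic data.
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)
original = rng.random((256, 256))                       # stand-in image
enhanced = np.clip(original + 0.05 * rng.standard_normal((256, 256)), 0, 1)

mse = mean_squared_error(original, enhanced)
psnr = peak_signal_noise_ratio(original, enhanced, data_range=1.0)
ssim = structural_similarity(original, enhanced, data_range=1.0)
maxerr = np.max((original - enhanced) ** 2)             # MAXERR
l2rat = np.sum(enhanced ** 2) / np.sum(original ** 2)   # L2RAT
print(mse, psnr, ssim, maxerr, l2rat)
```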

5.2. Segmentation task

Segmentation approaches aim to find the region of interest (ROI) in a given image. To evaluate a segmentation algorithm, the predicted mask is compared with the ground-truth mask using the following performance metrics (a small computational sketch follows the list):

  • Intersection over Union (IOU): Also called the Jaccard Index, it is defined as the area of overlap between the predicted mask and the ground-truth mask divided by the area of their union. The IOU value lies between 0 (no overlap) and 1 (complete overlap); values above 0.5 are considered decent for the algorithm. It is defined as:

IOU = \frac{|\text{predicted mask} \cap \text{ground-truth mask}|}{|\text{predicted mask} \cup \text{ground-truth mask}|}
  • Dice Coefficient: Equivalent to the F1 score for segmentation, it is defined as twice the area of overlap between the predicted and ground-truth masks divided by the total number of pixels in both masks. It is closely related to the IOU. Mathematically, it is defined as

\text{Dice Coefficient} = \frac{2 \times |\text{predicted mask} \cap \text{ground-truth mask}|}{|\text{predicted mask}| + |\text{ground-truth mask}|}
  • Pixel accuracy: Another metric for evaluating semantic segmentation, defined as the percentage of pixels that are correctly classified, i.e., the ratio of correctly classified pixels to the total number of pixels. It can give misleading results under class imbalance, where the minority class contributes little to the score. For a binary mask, with TP, TN, FP, and FN denoting true-positive, true-negative, false-positive, and false-negative pixels, it is defined as:

\text{Pixel Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
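
The sketch referenced in the list above computes IoU, the Dice coefficient, and pixel accuracy for a pair of binary masks with NumPy, directly following the definitions given; the masks are illustrative arrays.

```python
# Segmentation metrics for binary masks (illustrative arrays).
import numpy as np

pred = np.zeros((128, 128), dtype=bool)
pred[30:90, 30:90] = True                 # hypothetical predicted mask
gt = np.zeros((128, 128), dtype=bool)
gt[40:100, 40:100] = True                 # hypothetical ground-truth mask

intersection = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()

iou = intersection / union
dice = 2 * intersection / (pred.sum() + gt.sum())
pixel_acc = (pred == gt).mean()           # (TP + TN) / all pixels
print(iou, dice, pixel_acc)
```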

5.3. Classification task

To evaluate an ML model on the classification task, the following metrics are widely used in the literature, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives (a computational sketch follows the list).

  • Sensitivity: Also known as recall, it is the proportion of actual positive samples that are correctly identified as positive. It indicates what percentage of disease-affected patients were detected by the model. Mathematically, it is defined as:

\text{Sensitivity (Recall)} = \frac{TP}{TP + FN}
  • Specificity: Also known as the true negative rate, it is the fraction of actual negative samples that are correctly identified as negative. It indicates what percentage of disease-negative patients the model correctly identifies as negative. Mathematically, it is defined as:

\text{Specificity (True Negative Rate)} = \frac{TN}{TN + FP}
  • Accuracy: It is defined as the fraction of correctly classified samples out of the total number of samples, showing how often the model predicts the class labels correctly. However, it can be misleading on imbalanced data, so class-wise accuracy is often preferred over overall accuracy.

\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
  • Precision: Also known as positive predictive value, it is the fraction of predicted positive samples that are actually positive, emphasizing how many of the samples predicted as positive (e.g., TB positive) truly are. It is mainly used in cases where false positives are of greater concern than false negatives. Mathematically, it is defined as:

\text{Precision} = \frac{TP}{TP + FP}
  • F1-score: It is defined as the harmonic mean of precision and recall, reaching its maximum value when precision and recall are equal. It is useful in cases where both false positives and false negatives are of equal concern. Mathematically, it is defined as

F1\text{-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}
  • AUC-ROC Curve: It measures how well the model separates negative-class samples from positive-class samples across decision thresholds. The ROC curve plots the True Positive Rate (TPR) against the corresponding False Positive Rate (FPR) for a range of thresholds. It is not always appropriate to fix a single threshold such as 0.5 (classifying a patient as disease-positive if the score is >0.5 and negative otherwise); sweeping a set of thresholds allows finding an operating point at which both positive and negative patients are classified best by the model.

TPR = \text{Sensitivity} = \frac{TP}{TP + FN}

FPR = 1 - \text{Specificity} = \frac{FP}{FP + TN}
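
The sketch referenced above derives these metrics from a binary confusion matrix with scikit-learn; the labels and scores are illustrative only, and AUC-ROC is computed directly from the continuous scores rather than from a single threshold.

```python
# Classification metrics from a binary confusion matrix (toy data).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.6, 0.4, 0.3, 0.1, 0.8, 0.7])
y_pred = (y_score >= 0.5).astype(int)          # one possible threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # recall / TPR
specificity = tn / (tn + fp)                   # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
auc = roc_auc_score(y_true, y_score)           # threshold-independent
print(sensitivity, specificity, accuracy, precision, f1, auc)
```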

5.4. Fairness metrics

DL models are black boxes and may behave differently across protected attributes such as age, gender, race, or socio-economic status. A fair, bias-free decision shows no affinity of the model toward any individual or subgroup in the population based on such inherent characteristics. To evaluate whether a deep model exhibits disparities across subgroups, fairness metrics quantify whether its decisions are fair with respect to the protected attributes. They allow us to detect and avoid ill-treatment of any subgroup after the model is deployed in real-world settings.

To assess model performance across protected attributes in the population, the following fairness metrics are used in the literature for measuring bias or assessing the fairness of AI systems (a small computational sketch follows the list).

  • Demographic parity: It requires that the probability of being classified with the favorable label be independent of group membership (protected or unprotected). It is also known as Statistical Parity (Zafar et al., 2017). For a disease classification problem, demographic parity holds if samples are classified with the favorable label at the same rate irrespective of whether the patient is male or female.

  • Equalized odds: It requires that both the false-positive and true-positive rates be the same for the protected and unprotected groups. It is also known as Separation or Positive Rate Parity (Zafar et al., 2017). For a disease classification problem in which the training data contains only male disease-positive patients and only female normal samples, equalized odds is satisfied if, at test time, the model classifies (or misclassifies) positive samples at the same rates irrespective of whether the subject is male or female.

  • Degree of bias: It is defined as the standard deviation of classification accuracy across different subgroups of a demographic group.

  • Disparate impact: It is defined as the ratio of the probabilities of being classified with the favorable label between the protected and unprotected groups; a ratio close to one indicates fairness. For instance, in a disease classification problem, a model that systematically favors males over females exhibits disparate impact.

  • Predictive rate parity: It is defined as the fraction of correct positive predictions that is the same for protected and unprotected groups (Chouldechova, 2017). For example, the predictive parity rate for the disease classification is achieved if the precision for both subgroups (e.g., male and female) is the same. Predictive rate parity is also known as predictive parity.

  • Equal opportunity: It is defined as the true positive rate being the same for the protected and unprotected groups (Hardt et al., 2016). For example, in a disease classification problem where the disease-positive training patients are only males and all females appear as normal samples, equal opportunity is achieved if the model still detects positive samples equally well irrespective of whether they are male or female (the protected attribute).

  • Treatment equality: It is satisfied when both the protected and unprotected groups have an equal ratio of false negatives to false positives (Berk et al., 2021).

  • Individual fairness: It requires that similar individuals be treated similarly (Dwork et al., 2012). For instance, individual fairness is satisfied if samples from two different individuals with the same disease severity are treated identically by the disease classification model.

  • Counterfactual fairness: It considers a model to be fair for a particular individual or group if its prediction in the real world is the same as that in the counterfactual world where the individual(s) had belonged to a different demographic group. It provides a way to check the possible way to interpret the causes of bias and the impact of replacing only the sensitive attributes (Russell et al., 2017).
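
As a small illustration (referenced in the list above), the sketch below estimates demographic parity, equal opportunity, and disparate impact gaps between two subgroups; the predictions and group memberships are synthetic, and what counts as an acceptable gap is application-dependent.

```python
# Toy estimates of group-fairness metrics from model predictions.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])     # synthetic labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])     # synthetic predictions
group = np.array(["M", "M", "M", "M", "F", "F", "F", "F"])

def positive_rate(mask):
    return y_pred[mask].mean()                  # P(favorable label | group)

def true_positive_rate(mask):
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()             # TPR within the group

m, f = group == "M", group == "F"
demographic_parity_gap = abs(positive_rate(m) - positive_rate(f))
equal_opportunity_gap = abs(true_positive_rate(m) - true_positive_rate(f))
disparate_impact = positive_rate(f) / positive_rate(m)  # ideal: close to 1
print(demographic_parity_gap, equal_opportunity_gap, disparate_impact)
```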

5.5. Report generation

To evaluate report/caption generation for images, the following evaluation metrics are widely used; all of them measure similarity (n-gram matching) solely between the ground-truth and predicted captions, without taking the image into consideration (a small computational sketch follows the list).

  • BLEU: Bilingual Evaluation Understudy measures the quality of generated sentences by their similarity to the reference captions, based on an n-gram matching rule. Its value lies between 0 and 1 (Papineni et al., 2002). It is based on the n-gram co-occurrence frequency between the reference and predicted captions.

  • METEOR: Metric for Evaluating Translation with Explicit Ordering calculates precision and recall and then takes their harmonic mean for the candidate caption (Banerjee and Lavie, 2005). Unlike BLEU, it performs word-to-word matching and incorporates recall for exact word matches.

  • ROUGE-L: Recall-Oriented Understudy for Gisting Evaluation evaluates the co-occurrence of n-tuples and is used to assess the fluency of machine-generated text (Lin and Hovy, 2003). It uses dynamic programming to find the longest common subsequence between the reference and predicted captions and uses it to compute recall as a measure of their similarity. The higher the ROUGE-L value, the better the model; however, it considers neither grammatical accuracy nor the semantic level of the description.

  • CIDEr: Consensus-based Image Description Evaluation computes the similarity between the reference and predicted captions by treating each sentence as a document and calculating the cosine similarity of their term frequency-inverse document frequency (TF-IDF) vectors. The final result is obtained by averaging the similarity over tuples of different lengths (Vedantam et al., 2015).

  • SPICE: Semantic Propositional Image Caption Evaluation uses a graph-based semantic representation to encode the objects, attributes, and relationships in the description sentence and evaluates the caption at the semantic level (Anderson et al., 2016). It struggles with repetitive sentences but produces scores that correlate highly with human judgment.
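
As a concrete example (referenced in the list above), the sketch below scores a single generated report against a reference using NLTK's sentence-level BLEU; the report texts are invented, and smoothing is applied because short clinical sentences often have zero higher-order n-gram matches.

```python
# Sentence-level BLEU for a generated radiology report (toy strings).
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = ["no focal consolidation pleural effusion or pneumothorax".split()]
candidate = "no focal consolidation or pleural effusion".split()

smooth = SmoothingFunction().method1   # avoids zero n-gram counts
score = sentence_bleu(reference, candidate, smoothing_function=smooth)
print(round(score, 3))
```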

6. Open problems

Based on the literature review, here we present the open challenges in AI-based CXR analysis that require attention from the research community.

  • Unavailability of data: Due to the lack of publicly available datasets for many lung diseases, such as the detection of Pneumoconiosis from CXRs, it is challenging to train large-scale models for different lung diseases. In addition, a number of datasets originate from a few specific countries, such as the USA. To build generalizable models, it is important to create large-scale datasets with diversity.

  • Small sample size problem and interoperability: Much of the existing work is based on small, in-house collections of chest X-ray samples, whereas developing a robust and generalizable deep learning-based model requires a large amount of training data. Available datasets are very small compared to those for general object recognition (for instance, the ImageNet dataset). Since scanners vary across locations, deep models must also avoid latching onto scanner- or site-specific characteristics, particularly for datasets pooled from different hospitals.

  • Multilabel and limited label problem: A chest X-ray of a patient suffering from Pneumoconiosis or TB may show multiple manifestations, such as nodules, emphysema, tissue scarring, and fibrosis, which results in a multilabel problem. On top of the limited accessible data, data labeling is itself a challenge and requires detailed input from domain experts. Chest diseases mainly involve the lung fields; however, ground-truth masks for segmenting CXRs are scarce in the literature. Domain experts such as chest radiologists and pulmonologists must be consulted for data annotation and labeling, and collaboration with more hospitals, radiologists, and pulmonologists should be encouraged.

  • Low-quality images: The collected data may not always be of high quality. Samples also suffer from alignment problems that sometimes need to be fixed. Handling noisy data poses a further challenge for algorithm design, and a robust AI-based pipeline is needed to handle noise and image registration for lung disease detection.

  • Lung disease correlation and co-occurrence: Pneumoconiosis and related diseases, such as TB, share similar pathology, often resulting in misdiagnosis. Two diseases can also co-occur in the same patient, for instance, Silicotuberculosis (silicosis and TB). A similar problem arises for pneumonia with its three variants: viral, bacterial, and COVID-19.

  • Trusted AI: Building trust in machine intelligence is crucial, especially for medical diagnosis. Data bias across demographics and sensors can lead to inaccurate diagnostic decisions, and preserving the privacy of patient data is of utmost priority. In addition, incorporating algorithmic explainability is a significant task: explainable models can play an essential role in automated disease detection by easing hospital workloads, decreasing the chances of misdiagnosis, and building trust in diagnostic assistants (one popular post-hoc explainability tool is sketched after this list). Deep models are also vulnerable to data bias and adversarial attacks. To harness their efficacy for automatic disease detection using CXRs, there is a need to build trustworthy systems with high fairness, interpretability, and robustness.
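
The sketch below illustrates Grad-CAM, one widely used post-hoc explanation tool for CNN-based CXR classifiers (the surveyed literature also cites its extension, Grad-CAM++, Chattopadhay et al., 2018). The model weights and input tensor are hypothetical stand-ins; this is a minimal sketch, not the method of any specific paper.

```python
# Minimal Grad-CAM sketch for a ResNet-18 CXR classifier (PyTorch >= 1.8).
# Hypothetical model and input; fine-tuned weights would be loaded in practice.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # assume fine-tuned weights loaded here
model.eval()

activations, gradients = {}, {}
def fwd_hook(module, inp, out):
    activations["value"] = out.detach()
def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4[-1]                        # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                 # stand-in CXR tensor
scores = model(x)
scores[0, scores.argmax()].backward()           # gradient of the top class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # GAP of grads
cam = F.relu((weights * activations["value"]).sum(dim=1))    # weighted sum
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # [0, 1] heatmap
```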

7. Conclusion

CXR-based image analysis is being used for detecting the presence of diseases such as TB, Pneumonia, Pneumoconiosis, and COVID-19. This paper presents a detailed literature survey of AI-based CXR analysis tasks, including enhancement, segmentation, detection, classification, and image and report generation, along with different models for detecting the associated diseases. We also summarize the datasets and metrics used in the literature, as well as the open problems in this domain. It is our assertion that there is vast scope for improving automatic and efficient algorithm development for CXR-based image analysis. The advent of AI/ML techniques, particularly deep learning models, offers responsible, interpretable, privacy-friendly digital assistance for thoracic disorders and a path to addressing several open problems and challenges. Furthermore, novel CXR datasets must be prepared and released to encourage the development of new approaches for various disorders.

Author contributions

YA conducted the literature review. YA, RS, and MV wrote the paper. All authors finalized the manuscript.

Funding Statement

This research was supported by a grant from MIETY, the Government of India. MV is partially supported through the Swarnajayanti Fellowship.

Footnotes

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  1. Abbas A., Abdelsamea M. M., Gaber M. M. (2021). Classification of covid-19 in chest x-ray images using detrac deep convolutional neural network. Appl. Intell. 51, 854–864. 10.1101/2020.03.30.20047456
  2. Abdullah-Al-Wadud M., Kabir M. H., Dewan M. A. A., Chae O. (2007). A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 53, 593–600. 10.1109/TCE.2007.381734
  3. Abin D., Thepade S. D., Mankar H., Raut S., Yadav A. (2022). "Blending of contrast enhancement techniques for chest x-ray pneumonia images," in 2022 International Conference on Electronics and Renewable Systems (ICEARS), 981–985. 10.1109/ICEARS53579.2022.9752286
  4. Ahmed F., Hossain E. (2013). Automated facial expression recognition using gradient-based ternary texture patterns. Chin. J. Eng. 2013, 831747. 10.1155/2013/831747
  5. Alfadhli F. H. O., Mand A. A., Sayeed M. S., Sim K. S., Al-Shabi M. (2017). "Classification of tuberculosis with surf spatial pyramid features," in 2017 International Conference on Robotics, Automation and Sciences (ICORAS) (Melaka: IEEE), 1–5.
  6. Al-Rakhami M. S., Islam M. M., Islam M. Z., Asraf A., Sodhro A. H., Ding W. (2021). Diagnosis of COVID-19 from x-rays using combined cnn-rnn architecture with transfer learning. MedRxiv, 2020–08. 10.1101/2020.08.24.20181339
  7. Anderson P., Fernando B., Johnson M., Gould S. (2016). "SPICE: semantic propositional image caption evaluation," in Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part V, Vol. 9909 (Springer), 382–398. 10.1007/978-3-319-46454-1_24
  8. Annangi P., Thiruvenkadam S., Raja A., Xu H., Sun X., Mao L. (2010). "A region based active contour method for x-ray lung segmentation using prior shape and low level features," in 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro (Rotterdam: IEEE), 892–895.
  9. Anuar A. (2019). Siim-acr Pneumothorax Segmentation. Available online at: https://github.com/sneddy/pneumothorax-segmentation
  10. Ashizawa K., MacMahon H., Ishida T., Nakamura K., Vyborny C. J., Katsuragawa S., et al. (1999). Effect of an artificial neural network on radiologists' performance in the differential diagnosis of interstitial lung disease using chest radiographs. AJR Am. J. Roentgenol. 172, 1311–1315. 10.2214/ajr.172.5.10227508
  11. Ayaz M., Shaukat F., Raja G. (2021). Ensemble learning based automatic detection of tuberculosis in chest x-ray images using hybrid feature descriptors. Phys. Eng. Sci. Med. 44, 183–194. 10.1007/s13246-020-00966-0
  12. Azad R., Asadi-Aghbolaghi M., Fathy M., Escalera S. (2019). "Bi-directional convlstm u-net with densley connected convolutions," in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (Seoul: IEEE).
  13. Banerjee S., Lavie A. (2005). "Meteor: an automatic metric for mt evaluation with improved correlation with human judgments," in Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005 (Ann Arbor, MI: Association for Computational Linguistics), 65–72.
  14. Bay H., Ess A., Tuytelaars T., Gool L. V. (2008). Speeded-up robust features (SURF). Comput. Vis. Image Underst. 110, 346–359. 10.1016/j.cviu.2007.09.014
  15. Berk R., Heidari H., Jabbari S., Kearns M., Roth A. (2021). Fairness in criminal justice risk assessments: the state of the art. Sociol. Methods Res. 50, 3–44. 10.1177/0049124118782533
  16. Blain M., Kassin M. T., Varble N., Wang X., Xu Z., Xu D., et al. (2021). Determination of disease severity in covid-19 patients using deep learning in chest x-ray images. Diagn. Intervent. Radiol. 27, 20. 10.5152/dir.2020.20205
  17. Boykov Y., Funka-Lea G. (2006). Graph cuts and efficient nd image segmentation. Int. J. Comput. Vis. 70, 109–131. 10.1007/s11263-006-7934-5
  18. Bustos A., Pertusa A., Salinas J.-M., de la Iglesia-Vayá M. (2020). Padchest: a large chest x-ray image dataset with multi-label annotated reports. Med. Image Anal. 66, 101797. 10.1016/j.media.2020.101797
  19. Candemir S., Jaeger S., Palaniappan K., Musco J. P., Singh R. K., Xue Z., et al. (2013). Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans. Med. Imaging 33, 577–590. 10.1109/TMI.2013.2290491
  20. Cao F., Zhao H. (2021). Automatic lung segmentation algorithm on chest x-ray images based on fusion variational auto-encoder and three-terminal attention mechanism. Symmetry 13, 814. 10.3390/sym13050814
  21. Chandalia A., Gupta H. (2022). Deep learning method to Detect Chest x ray or ct Scan Images Based on Hybrid Yolo Model. U.S. Patent No. IN202223019813.
  22. Chandra T. B., Verma K., Singh B. K., Jain D., Netam S. S. (2020). Automatic detection of tuberculosis related abnormalities in chest x-ray images using hierarchical feature extraction scheme. Expert. Syst. Appl. 158, 113514. 10.1016/j.eswa.2020.113514
  23. Chang-soo P. (2021). Apparatus for Diagnosis of Chest x-ray Employing Artificial Intelligence. U.S. Patent No. KR20210048010A.
  24. Chattopadhay A., Sarkar A., Howlader P., Balasubramanian V. N. (2018). "Grad-cam++: generalized gradient-based visual explanations for deep convolutional networks," in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (Lake Tahoe, NV: IEEE), 839–847.
  25. Chen L.-C., Papandreou G., Kokkinos I., Murphy K., Yuille A. L. (2017a). Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848. 10.1109/TPAMI.2017.2699184
  26. Chen L.-C., Papandreou G., Schroff F., Adam H. (2017b). Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. 10.48550/arXiv.1706.05587
  27. Chen S.-D., Ramli A. R. (2003). Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans. Consum. Electron. 49, 1301–1309. 10.1109/TCE.2003.1261233
  28. Cho Y., Kim Y.-G., Lee S. M., Seo J. B., Kim N. (2020). Reproducibility of abnormality detection on chest radiographs using convolutional neural network in paired radiographs obtained within a short-term interval. Sci. Rep. 10, 1–11. 10.1038/s41598-020-74626-4
  29. Chouldechova A. (2017). Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5, 153–163. 10.1089/big.2016.0047
  30. Chowdhury M. E. H., Rahman T., Khandakar A., Mazhar R., Kadir M. A., Mahbub Z. B., et al. (2020). Can ai help in screening viral and COVID-19 pneumonia? IEEE Access 8, 132665–132676. 10.1109/ACCESS.2020.3010287
  31. Clarke L. P., Qian W., Mao F. (2022). Computer-Assisted Method and Apparatus for the Detection of Lung Nodules. U.S. Patent No. US20220180514.
  32. Cohen J. P., Morrison P., Dao L., Roth K., Duong T. Q., Ghassemi M. (2020). COVID-19 image data collection: prospective predictions are the future. arXiv preprint arXiv:2006.11988. 10.48550/arXiv.2006.11988
  33. Cootes T. F., Edwards G. J., Taylor C. J. (2001). Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell. 23, 681–685. 10.1109/34.927467
  34. Cootes T. F., Hill A., Taylor C. J., Haslam J. (1994). Use of active shape models for locating structures in medical images. Image Vis. Comput. 12, 355–365. 10.1016/0262-8856(94)90060-4
  35. Dalal N., Triggs B. (2005). "Histograms of oriented gradients for human detection," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1 (San Diego, CA: IEEE), 886–893.
  36. Das D., Santosh K., Pal U. (2021). "Inception-based deep learning architecture for tuberculosis screening using chest x-rays," in 2020 25th International Conference on Pattern Recognition (ICPR) (Milan: IEEE), 3612–3619.
  37. Dasanayaka C., Dissanayake M. B. (2021). Deep learning methods for screening pulmonary tuberculosis using chest x-rays. Comput. Methods Biomech. Biomed. Eng. 9, 39–49. 10.1080/21681163.2020.1808532
  38. Demner-Fushman D., Antani S., Simpson M., Thoma G. R. (2012). Design and development of a multimodal biomedical information retrieval system. J. Comput. Sci. Eng. 6, 168–177. 10.5626/JCSE.2012.6.2.168
  39. Demner-Fushman D., Kohli M. D., Rosenman M. B., Shooshan S. E., Rodriguez L., Antani S., et al. (2016). Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc. 23, 304–310. 10.1093/jamia/ocv080
  40. Deng J., Dong W., Socher R., Li L.-J., Li K., Fei-Fei L. (2009). "Imagenet: a large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition (Miami, FL: IEEE), 248–255.
  41. Devnath L., Luo S., Summons P., Wang D. (2021). Automated detection of pneumoconiosis with multilevel deep features learned from chest x-ray radiographs. Comput. Biol. Med. 129, 104125. 10.1016/j.compbiomed.2020.104125
  42. Doi K., Aoyama M. (2002). Automated computerized scheme for distinction between benign and malignant solitary pulmonary nodules on chest images. Med. Phys. 29, 701–708. 10.1118/1.1469630
  43. Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., et al. (2020). An image is worth 16x16 words: transformers for image recognition at scale. CoRR, abs/2010.11929.
  44. Duong L. T., Le N. H., Tran T. B., Ngo V. M., Nguyen P. T. (2021). Detection of tuberculosis from chest x-ray images: boosting the performance with vision transformer and transfer learning. Expert. Syst. Appl. 184, 115519. 10.1016/j.eswa.2021.115519
  45. Duran-Lopez L., Dominguez-Morales J. P., Corral-Jaime J., Vicente-Diaz S., Linares-Barranco A. (2020). COVID-xnet: a custom deep learning system to diagnose and locate COVID-19 in chest x-ray images. Appl. Sci. 10, 5683. 10.3390/app10165683
  46. Dwork C., Hardt M., Pitassi T., Reingold O., Zemel R. (2012). "Fairness through awareness," in Innovations in Theoretical Computer Science 2012 (Cambridge, MA: ACM), 214–226. 10.1145/2090236.2090255
  47. El Gannour O., Hamida S., Cherradi B., Raihani A., Moujahid H. (2020). "Performance evaluation of transfer learning technique for automatic detection of patients with COVID-19 on x-ray images," in 2020 IEEE 2nd International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS) (Kenitra: IEEE), 1–6.
  48. Elshennawy N. M., Ibrahim D. M. (2020). Deep-pneumonia framework using deep learning models based on chest x-ray images. Diagnostics 10, 649. 10.3390/diagnostics10090649
  49. Eslami M., Tabarestani S., Albarqouni S., Adeli E., Navab N., Adjouadi M. (2020). Image-to-images translation for multi-task organ segmentation and bone suppression in chest x-ray radiography. IEEE Trans. Med. Imaging 39, 2553–2565. 10.1109/TMI.2020.2974159
  50. Ferreira J. R., Cardenas D. A. C., Moreno R. A., de Sá Rebelo M. D. F., Krieger J. E., Gutierrez M. A. (2020). "Multi-view ensemble convolutional neural network to improve classification of pneumonia in low contrast chest x-ray images," in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (Montreal, QC: IEEE), 1238–1241.
  51. Fischer A. M., Varga-Szemes A., Martin S. S., Sperl J. I., Sahbaee P., Neumann D., et al. (2020). Artificial intelligence-based fully automated per lobe segmentation and emphysema-quantification based on chest computed tomography compared with global initiative for chronic obstructive lung disease severity of smokers. J. Thorac. Imaging 35, S28–S34. 10.1097/RTI.0000000000000500
  52. Gabor D. (1946). Theory of communication. Part 1: the analysis of information. J. Inst. Electr. Eng. III 93, 429–441. 10.1049/ji-3-2.1946.0074
  53. Girshick R., Donahue J., Darrell T., Malik J. (2014). "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Columbus, OH: IEEE), 580–587.
  54. Gongye C., Li H., Zhang X., Sabbagh M., Yuan G., Lin X., et al. (2020). "New passive and active attacks on deep neural networks in medical applications," in IEEE/ACM International Conference On Computer Aided Design, ICCAD 2020 (San Diego, CA: IEEE), 1–9. 10.1145/3400302.3418782
  55. Gour M., Jain S. (2020). Stacked convolutional neural network for diagnosis of COVID-19 disease from x-ray images. arXiv preprint arXiv:2006.13817. 10.48550/arXiv.2006.13817
  56. Govindarajan S., Swaminathan R. (2021). Extreme learning machine based differentiation of pulmonary tuberculosis in chest radiographs using integrated local feature descriptors. Comput. Methods Programs Biomed. 204, 106058. 10.1016/j.cmpb.2021.106058
  57. Gozes O., Greenspan H. (2019). "Deep feature learning from a hospital-scale chest x-ray dataset with application to tb detection on a small-scale dataset," in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Berlin: IEEE), 4076–4079.
  58. Guendel S., Ghesu F.-C., Gibson E., Grbic S., Georgescu B., Comaniciu D. (2020). Multi-task learning for chest x-ray abnormality classification. arXiv:1905.06362 [cs.CV]. 10.48550/arXiv.1905.06362
  59. Gupta A., Gupta S., Katarya R., et al. (2021). Instacovnet-19: a deep learning classification model for the detection of COVID-19 patients using chest x-ray. Appl. Soft. Comput. 99, 106859. 10.1016/j.asoc.2020.106859
  60. Hall E. L., Crawford W. O., Roberts F. E. (1975). Computer classification of pneumoconiosis from radiographs of coal workers. IEEE Trans. Biomed. Eng. BME-22, 518–527. 10.1109/TBME.1975.324475
  61. Hao C., Jin N., Qiu C., Ba K., Wang X., Zhang H., et al. (2021). Balanced convolutional neural networks for pneumoconiosis detection. Int. J. Environ. Res. Public Health 18, 9091. 10.3390/ijerph18179091
  62. Harding D. S., Sridharan Kamalakanan a. S. K., Katsuhara S., Pike J. H., Sabir M. F., et al. (2015). Lung Segmentation and Bone Suppression Techniques for Radiographic Images. U.S. Patent No. WO2015157067.
  63. Hardt M., Price E., Srebro N. (2016). "Equality of opportunity in supervised learning," in Advances in Neural Information Processing Systems, Vol. 29.
  64. Hasegawa A., Lo S.-C. B., Freedman M. T., Mun S. K. (1994). Convolution neural-network-based detection of lung structures. Med. Imaging 2167, 654–662. 10.1117/12.175101
  65. He K., Zhang X., Ren S., Sun J. (2016). "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV: IEEE), 770–778.
  66. Hirano H., Minagi A., Takemoto K. (2021). Universal adversarial attacks on deep neural networks for medical image classification. BMC Med. Imaging 21, 1–13. 10.1186/s12880-020-00530-y
  67. Homayounieh F., Digumarthy S., Ebrahimian S., Rueckel J., Hoppe B. F., Sabel B. O., et al. (2021). An artificial intelligence-based chest x-ray model on human nodule detection accuracy from a multicenter study. JAMA Netw. Open 4, e2141096. 10.1001/jamanetworkopen.2021.41096
  68. Hong L., Li Y., Shen H. (2009a). Method and System for Diaphragm Segmentation in Chest X-ray Radiographs. U.S. Patent No. US20090087072.
  69. Hong L., Li Y., Shen H. (2009b). Method and System for Nodule Feature Extraction Using Background Contextual Information in Chest x-ray Images. U.S. Patent No. US20090103797.
  70. Hong L., Shen H. (2008). Method and System for Locating Opaque Regions in Chest x-ray Radiographs. U.S. Patent No. US20080181481.
  71. Hospitales H. (2020). Covid Data Save Lives. Available online at: https://www.hmhospitales.com/coronavirus/covid-data-save-lives
  72. Huang G., Liu Z., Van Der Maaten L., Weinberger K. Q. (2017). "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, HI: IEEE), 4700–4708.
  73. Huo Z., Zhao H. (2014). Clavicle Suppression in Radiographic Images. U.S. Patent No. US20140140603.
  74. Hurt B., Yen A., Kligerman S., Hsiao A. (2020). Augmenting interpretation of chest radiographs with deep learning probability maps. J. Thorac. Imaging 35, 285. 10.1097/RTI.0000000000000505
  75. Huynh-Thu Q., Ghanbari M. (2008). Scope of validity of psnr in image/video quality assessment. Electron. Lett. 44, 800–801. 10.1049/el:20080522
  76. Hwang E. J., Park S., Jin K.-N., Im Kim J., Choi S. Y., Lee J. H., et al. (2019). Development and validation of a deep learning-based automated detection algorithm for major thoracic diseases on chest radiographs. JAMA Netw. Open 2, e191095. 10.1001/jamanetworkopen.2019.1095
  77. Hwang S., Kim H.-E., Jeong J., Kim H.-J. (2016). A novel approach for tuberculosis screening based on deep convolutional neural networks. Med. Imaging 9785, 750–757. 10.1117/12.2216198
  78. Hwang S., Park S. (2017). "Accurate lung segmentation via network-wise training of convolutional networks," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Proceedings (Quebec, QC: Springer), 92–99. 10.1007/978-3-319-67558-9_11
  79. IDC (2014). The Digital Universe-Driving Data Growth in Healthcare. Framingham, MA: IDC.
  80. Irvin J., Rajpurkar P., Ko M., Yu Y., Ciurea-Ilcus S., Chute C., et al. (2019). Chexpert: a large chest radiograph dataset with uncertainty labels and expert comparison. Proc. AAAI Conf. Artif. Intell. 33, 590–597. 10.1609/aaai.v33i01.3301590
  81. Islam M. Z., Islam M. M., Asraf A. (2020). A combined deep cnn-lstm network for the detection of novel coronavirus (COVID-19) using x-ray images. Inf. Med. Unlocked 20, 100412. 10.1016/j.imu.2020.100412 [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. ISMIR (2020). Italian Society of Medical and Interventional Radiology, Radiology, COVID-19 Database, 2020. Available online: https://www.sirm.org/category/senza-categoria/covid-19/
  83. Jaeger S., Candemir S., Antani S., Wáng Y.-X. J., Lu P.-X., Thoma G. (2014). Two public chest x-ray datasets for computer-aided screening of pulmonary diseases. Quant Imaging Med. Surg. 4, 475. 10.3978/j.issn.2223-4292.2014.11.20 [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Jain R., Gupta M., Taneja S., Hemanth D. J. (2021). Deep learning based detection and analysis of COVID-19 on chest x-ray images. Appl. Intell. 51, 1690–1700. 10.1007/s10489-020-01902-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Jang R., Kim N., Jang M., Lee K. H., Lee S. M., Lee K. H., et al. (2020). Assessment of the robustness of convolutional neural networks in labeling noise by using chest x-ray images from multiple centers. JMIR Med. Inform. 8, e18089. 10.2196/18089 [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Jégou S., Drozdzal M., Vazquez D., Romero A., Bengio Y. (2017). “The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (Honolulu, HI: IEEE; ), 11–19. [Google Scholar]
  87. Jiezhi Z., Zaiwen G., Hengze Z., Yiqiang Z. (2018). X-ray Chest Radiography Image Quality Determination Method and Device. U.S. Patent No CN113052795. [Google Scholar]
  88. Jin W., Dong S., Dong C., Ye X. (2021). Hybrid ensemble model for differential diagnosis between COVID-19 and common viral pneumonia by chest x-ray radiograph. Comput. Biol. Med. 131, 104252. 10.1016/j.compbiomed.2021.104252 [DOI] [PMC free article] [PubMed] [Google Scholar]
89. Jinpeng L., Jie W., Ting C. (2020). X-ray Lung Disease Automatic Positioning Method Based on Weak Supervised Learning. Patent No CN112116571. [Google Scholar]
90. Johnson A. E., Pollard T. J., Berkowitz S. J., Greenbaum N. R., Lungren M. P., Deng C.-y., et al. (2019). Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 6, 1–8. 10.1038/s41597-019-0322-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
91. Kai K., Masaomi T., Kenji F. (2019). Lesion Detection Method Using Artificial Intelligence, and System Therefor. Patent No JP2019154943. [Google Scholar]
92. Kaijin X. (2019). DR Image Pulmonary Tuberculosis Intelligent Segmentation and Detection Method Based on Deep Learning. Patent No CN110782441. [Google Scholar]
93. Kang Z., Rui H., Lianghong Z. (2019). Deep Learning-Based Diagnosis and Referral of Diseases and Disorders. Patent No WO2019157214. [Google Scholar]
  94. Katsuragawa S., Doi K., MacMahon H. (1988). Image feature analysis and computer-aided diagnosis in digital radiography: detection and characterization of interstitial lung disease in digital chest radiographs. Med. Phys. 15, 311–319. 10.1118/1.596224 [DOI] [PubMed] [Google Scholar]
  95. Kaviani S., Han K. J., Sohn I. (2022). Adversarial attacks and defenses on ai in medical imaging informatics: a survey. Expert Syst. Appl. 2022, 116815. 10.1016/j.eswa.2022.116815 [DOI] [Google Scholar]
96. Kermany D. S., Zhang K., Goldbaum M. (2018a). Labeled Optical Coherence Tomography (OCT) and Chest X-ray Images for Classification. 10.17632/rscbjbr9sj.2 [DOI] [Google Scholar]
  97. Kermany D. S., Goldbaum M., Cai W., Valentim C. C., Liang H., Baxter S. L., et al. (2018b). Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172, 1122–1131. 10.1016/j.cell.2018.02.010 [DOI] [PubMed] [Google Scholar]
98. Khan A. I., Shah J. L., Bhat M. M. (2020). Coronet: a deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Comput. Methods Programs Biomed. 196, 105581. 10.1016/j.cmpb.2020.105581 [DOI] [PMC free article] [PubMed] [Google Scholar]
99. Khasawneh N., Fraiwan M., Fraiwan L., Khassawneh B., Ibnian A. (2021). Detection of COVID-19 from chest x-ray images using deep convolutional neural networks. Sensors 21, 5940. 10.3390/s21175940 [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Kim J. R., Shim W. H., Yoon H. M., Hong S. H., Lee J. S., Cho Y. A., et al. (2017). Computerized bone age estimation using deep learning based program: evaluation of the accuracy and efficiency. Am. J. Roentgenol. 209, 1374–1380. 10.2214/AJR.17.18224 [DOI] [PubMed] [Google Scholar]
  101. Kim Y.-G., Cho Y., Wu C.-J., Park S., Jung K.-H., Seo J. B., et al. (2019). Short-term reproducibility of pulmonary nodule and mass detection in chest radiographs: comparison among radiologists and four different computer-aided detections with convolutional neural net. Sci. Rep. 9, 1–9. 10.1038/s41598-019-55373-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Kim Y.-G., Lee S. M., Lee K. H., Jang R., Seo J. B., Kim N. (2020). Optimal matrix size of chest radiographs for computer-aided detection on lung nodule or mass with deep learning. Eur. Radiol. 30, 4943–4951. 10.1007/s00330-020-06892-9 [DOI] [PubMed] [Google Scholar]
103. Kusakunniran W., Karnjanapreechakorn S., Siriapisith T., Borwarnginn P., Sutassananon K., Tongdee T., et al. (2021). COVID-19 detection and heatmap generation in chest x-ray images. J. Med. Imaging 8, 014001. 10.1117/1.JMI.8.S1.014001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Ledley R. S., Huang H., Rotolo L. S. (1975). A texture analysis method in classification of coal workers' pneumoconiosis. Comput. Biol. Med. 5, 53–67. 10.1016/0010-4825(75)90018-9 [DOI] [PubMed] [Google Scholar]
105. Lei R., Xiaobao W., Tianshi X. (2021). Lung Disease Auxiliary Diagnosis Cloud Platform Based on Deep Learning. Patent No CN113192625. [Google Scholar]
106. Lenga M., Schulz H., Saalbach A. (2020). “Continual learning for domain adaptation in chest x-ray classification,” in International Conference on Medical Imaging with Deep Learning, MIDL 2020, Vol. 121 (Montreal, QC: PMLR), 413–423. [Google Scholar]
107. Li B., Kang G., Cheng K., Zhang N. (2019). “Attention-guided convolutional neural network for detecting pneumonia on chest x-rays,” in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Berlin: IEEE), 4851–4854. [DOI] [PubMed] [Google Scholar]
  108. Li D., Lin C. T., Sulam J., Yi P. H. (2022). Deep learning prediction of sex on chest radiographs: a potential contributor to biased algorithms. Emerg. Radiol. 29, 365–370. 10.1007/s10140-022-02019-3 [DOI] [PubMed] [Google Scholar]
109. Li L., Zheng Y., Kallergi M., Clark R. A. (2001). Improved method for automatic identification of lung regions on chest radiographs. Acad. Radiol. 8, 629–638. 10.1016/S1076-6332(03)80688-8 [DOI] [PubMed] [Google Scholar]
  110. Li X., Cao R., Zhu D. (2019). Vispi: automatic visual perception and interpretation of chest x-rays. arXiv preprint arXiv:1906.05190. 10.48550/arXiv.1906.05190 [DOI] [Google Scholar]
  111. Li X., Li C., Zhu D. (2020). Covid-mobilexpert: on-device COVID-19 screening using snapshots of chest x-ray. arXiv preprint arXiv:2004.03042. 10.48550/arXiv.2004.03042 [DOI] [Google Scholar]
112. Li X., Pan D., Zhu D. (2021). “Defending against adversarial attacks on medical imaging ai system, classification or detection?” in 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI) (Nice: IEEE), 1677–1681. [Google Scholar]
113. Li X., Zhu D. (2020). “Robust detection of adversarial attacks on medical images,” in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) (Iowa City, IA: IEEE), 1154–1158. [Google Scholar]
  114. Liang N.-Y., Huang G.-B., Saratchandran P., Sundararajan N. (2006). A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 17, 1411–1423. 10.1109/TNN.2006.880583 [DOI] [PubMed] [Google Scholar]
  115. Lin C.-Y., Hovy E. (2003). “Automatic evaluation of summaries using n-gram co-occurrence statistics,” in Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, eds. M. A. Hearst and M. Ostendorf (Edmonton: The Association for Computational Linguistics). [Google Scholar]
116. Little K. J., Reiser I., Liu L., Kinsey T., Sánchez A. A., Haas K., et al. (2017). Unified database for rejected image analysis across multiple vendors in radiography. J. Am. Coll. Radiol. 14, 208–216. 10.1016/j.jacr.2016.07.011 [DOI] [PubMed] [Google Scholar]
  117. Liu L., Fieguth P., Guo Y., Wang X., Pietikäinen M. (2017). Local binary features for texture classification: taxonomy and experimental study. Pattern Recognit. 62, 135–160. 10.1016/j.patcog.2016.08.032 [DOI] [Google Scholar]
  118. Liu L., Lao S., Fieguth P. W., Guo Y., Wang X., Pietikäinen M. (2016). Median robust extended local binary pattern for texture classification. IEEE Trans. Image Process. 25, 1368–1381. 10.1109/TIP.2016.2522378 [DOI] [PubMed] [Google Scholar]
  119. Longjiang E., Zhao B., Liu H., Zheng C., Song X., Cai Y., et al. (2020). Image-based deep learning in diagnosing the etiology of pneumonia on pediatric chest x-rays. Pediatr. Pulmonol. 56, 1036–1044. 10.1002/ppul.25229 [DOI] [PubMed] [Google Scholar]
  120. Lu M. T., Ivanov A., Mayrhofer T., Hosny A., Aerts H. J., Hoffmann U. (2019). Deep learning to assess long-term mortality from chest radiographs. JAMA Netw. Open 2, e197416-e197416. 10.1001/jamanetworkopen.2019.7416 [DOI] [PMC free article] [PubMed] [Google Scholar]
121. Luojie L., Jinhua M. (2018). A Lung Disease Detection Method Based on Deep Learning. Patent No CN109598719. [Google Scholar]
  122. Lyman K., Bernard D., Li Yao D. A., Covington B., Upton A. (2019). Chest x-ray Differential Diagnosis System. U.S. Patent No US20190066835. [Google Scholar]
  123. Ma X., Niu Y., Gu L., Wang Y., Zhao Y., Bailey J., et al. (2021). Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognit. 110, 107332. 10.1016/j.patcog.2020.107332 [DOI] [Google Scholar]
124. Madani A., Moradi M., Karargyris A., Syeda-Mahmood T. (2018). “Semi-supervised learning with generative adversarial networks for chest x-ray classification with ability of data domain adaptation,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) (Washington, DC: IEEE), 1038–1042. [Google Scholar]
  125. Mahmud T., Rahman M. A., Fattah S. A. (2020). Covxnet: a multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest x-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 122, 103869. 10.1016/j.compbiomed.2020.103869 [DOI] [PMC free article] [PubMed] [Google Scholar]
  126. Malhotra A., Mittal S., Majumdar P., Chhabra S., Thakral K., Vatsa M., et al. (2022). Multi-task driven explainable diagnosis of COVID-19 using chest x-ray images. Pattern Recognit. 122, 108243. 10.1016/j.patcog.2021.108243 [DOI] [PMC free article] [PubMed] [Google Scholar]
  127. McDonald C. J., Overhage J. M., Barnes M., Schadow G., Blevins L., Dexter P. R., et al. (2005). The indiana network for patient care: a working local health information infrastructure. Health Aff. 24, 1214–1220. 10.1377/hlthaff.24.5.1214 [DOI] [PubMed] [Google Scholar]
128. Minhwa L., Hyoeun K., Sangheum H., Seungwook P., Jungin L., Minhong J. (2017). System for Automatic Diagnosis and Prognosis of Tuberculosis by Cad-Based Digital x-ray. Patent No WO2017069596. [Google Scholar]
  129. Mitchell C. (2012). World Radiography Day: Two-Thirds of the World's Population has No Access to Diagnostic Imaging. Available online at: https://tinyurl.com/2p952776
  130. Mittal A., Kumar D., Mittal M., Saba T., Abunadi I., Rehman A., et al. (2020). Detecting pneumonia using convolutions and dynamic capsule routing for chest x-ray images. Sensors 20, 1068. 10.3390/s20041068 [DOI] [PMC free article] [PubMed] [Google Scholar]
131. Mittal S., Venugopal V. K., Agarwal V. K., Malhotra M., Chatha J. S., Kapur S., et al. (2022). A novel abnormality annotation database for COVID-19-affected frontal lung x-rays. PLoS ONE 17, e0271931. 10.1371/journal.pone.0271931 [DOI] [PMC free article] [PubMed] [Google Scholar]
132. Mooney P. (2018). Chest x-ray Images (Pneumonia). Kaggle, March. [Google Scholar]
  133. Msonda P., Uymaz S. A., Karaağaç S. S. (2020). Spatial pyramid pooling in deep convolutional networks for automatic tuberculosis diagnosis. Traitement du Signal 2020, 370620. 10.18280/ts.370620 [DOI] [Google Scholar]
  134. Munadi K., Muchtar K., Maulina N., Pradhan B. (2020). Image enhancement for tuberculosis detection using deep learning. IEEE Access 8, 217897–217907. 10.1109/ACCESS.2020.3041867 [DOI] [Google Scholar]
135. Murphy K. (2019). How data will improve healthcare without adding staff or beds. Glob. Innov. Index 2019, 129–134. [Google Scholar]
  136. Murphy K., Smits H., Knoops A. J., Korst M. B., Samson T., Scholten E. T., et al. (2020). COVID-19 on chest radiographs: a multireader evaluation of an artificial intelligence system. Radiology 296, E166. 10.1148/radiol.2020201874 [DOI] [PMC free article] [PubMed] [Google Scholar]
137. Murray V., Pattichis M. S., Davis H., Barriga E. S., Soliz P. (2009). “Multiscale am-fm analysis of pneumoconiosis x-ray images,” in 2009 16th IEEE International Conference on Image Processing (ICIP) (Cairo: IEEE), 4201–4204. [Google Scholar]
  138. Narayanan B. N., Davuluru V. S. P., Hardie R. C. (2020). Two-stage deep learning architecture for pneumonia detection and its diagnosis in chest radiographs. Med. Imaging 11318, 130–139. 10.1117/12.2547635 [DOI] [Google Scholar]
  139. Nguyen H. Q., Lam K., Le L. T., Pham H. H., Tran D. Q., Nguyen D. B., et al. (2020). Vindr-cxr: An Open Dataset of Chest X-rays with Radiologist's Annotations. 10.48550/ARXIV.2012.15029 [DOI] [PMC free article] [PubMed] [Google Scholar]
140. Oh Y., Park S., Ye J. C. (2020). Deep learning COVID-19 features on CXR using limited training data sets. IEEE Trans. Med. Imaging 39, 2688–2700. 10.1109/TMI.2020.2993291 [DOI] [PubMed] [Google Scholar]
  141. Ojala T., Pietikäinen M., Harwood D. (1996). A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 29, 51–59. 10.1016/0031-3203(95)00067-4 [DOI] [Google Scholar]
  142. Okumura E., Kawashita I., Ishida T. (2011). Computerized analysis of pneumoconiosis in digital chest radiography: effect of artificial neural network trained with power spectra. J. Digit. Imaging 24, 1126–1132. 10.1007/s10278-010-9357-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
143. Oloko-Oba M., Viriri S. (2020). “Tuberculosis abnormality detection in chest x-rays: a deep learning approach,” in Computer Vision and Graphics: International Conference, ICCVG 2020, Warsaw, Poland, September 14–16, 2020, Proceedings (Berlin, Heidelberg: Springer-Verlag), 121–132. 10.1007/978-3-030-59006-2_11 [DOI] [Google Scholar]
144. Opelt A., Pinz A., Zisserman A. (2006). “Incremental learning of object detectors using a visual shape alphabet,” in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), Vol. 1 (New York, NY: IEEE), 3–10. [Google Scholar]
  145. Owais M., Arsalan M., Mahmood T., Kim Y. H., Park K. R., et al. (2020). Comprehensive computer-aided decision support framework to diagnose tuberculosis from chest x-ray images: data mining study. JMIR Med. Inform. 8, e21790. 10.2196/21790 [DOI] [PMC free article] [PubMed] [Google Scholar]
  146. Panwar H., Gupta P., Siddiqui M. K., Morales-Menendez R., Bhardwaj P., Singh V. (2020a). A deep learning and grad-cam based color visualization approach for fast detection of COVID-19 cases using chest x-ray and ct-scan images. Chaos Solitons Fractals 140, 110190. 10.1016/j.chaos.2020.110190 [DOI] [PMC free article] [PubMed] [Google Scholar]
  147. Panwar H., Gupta P., Siddiqui M. K., Morales-Menendez R., Singh V. (2020b). Application of deep learning for fast detection of COVID-19 in x-rays using ncovnet. Chaos Solitons Fractals 138, 109944. 10.1016/j.chaos.2020.109944 [DOI] [PMC free article] [PubMed] [Google Scholar]
148. Papineni K., Roukos S., Ward T., Zhu W.-J. (2002). “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (Philadelphia, PA: Association for Computational Linguistics), 311–318. 10.3115/1073083.1073135 [DOI] [Google Scholar]
  149. Pasa F., Golkov V., Pfeiffer F., Cremers D., Pfeiffer D. (2019). Efficient deep network architectures for fast chest x-ray tuberculosis screening and visualization. Sci. Rep. 9, 1–9. 10.1038/s41598-019-42557-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  150. Pereira R. M., Bertolini D., Teixeira L. O., Silla C. N., Costa Y. M. G. (2020). COVID-19 identification in chest x-ray images on flat and hierarchical classification scenarios. Comput. Methods Programs Biomed. 194, 105532. 10.1016/j.cmpb.2020.105532 [DOI] [PMC free article] [PubMed] [Google Scholar]
151. Pham T. D. (2021). Classification of COVID-19 chest x-rays with deep learning: new models or fine tuning? Health Inf. Sci. Syst. 9, 1–11. 10.1007/s13755-020-00135-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
152. Punn N. S., Agarwal S. (2021). Automated diagnosis of COVID-19 with limited posteroanterior chest x-ray images using fine-tuned deep neural networks. Appl. Intell. 51, 2689–2702. 10.1007/s10489-020-01900-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  153. Putha P., Tadepalli M., Reddy B., Raj T., Jagirdar A., Pooja Rao A. P. W. (2022). Predicting Lung Cancer Risk. U.S. Patent No US11276173. [Google Scholar]
  154. Putha P., Tadepalli M., Reddy B., Raj T., Jagirdar A., Rao P., et al. (2021). Systems and Methods for Detection of Infectious Respiratory Diseases. U.S. Patent No US20210327055. [Google Scholar]
155. Qiang D., Zebin G., Yuchen G., Nie F., Chao T. (2020). Lung Disease Classification Method and Device, and Equipment. Patent No CN111667469. [Google Scholar]
  156. Radford A., Metz L., Chintala S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. 10.48550/arXiv.1511.06434 [DOI] [Google Scholar]
  157. Rahman M., Cao Y., Sun X., Li B., Hao Y. (2021). Deep pre-trained networks as a feature extractor with xgboost to detect tuberculosis from chest x-ray. Comput. Electr. Eng. 93, 107252. 10.1016/j.compeleceng.2021.107252 [DOI] [Google Scholar]
158. Rahman T., Chowdhury M., Khandakar A. (2020a). COVID-19 Radiography Database. Kaggle. Available online at: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database
  159. Rahman T., Khandakar A., Kadir M. A., Islam K. R., Islam K. F., Mazhar R., et al. (2020b). Reliable tuberculosis detection using chest x-ray with deep learning, segmentation and visualization. IEEE Access 8, 191586–191601. 10.1109/ACCESS.2020.3031384 [DOI] [Google Scholar]
  160. Rahman T., Khandakar A., Qiblawey Y., Tahir A., Kiranyaz S., Kashem S. B. A., et al. (2021). Exploring the effect of image enhancement techniques on COVID-19 detection using chest x-ray images. Comput. Biol. Med. 132, 104319. 10.1016/j.compbiomed.2021.104319 [DOI] [PMC free article] [PubMed] [Google Scholar]
161. Rajagopal R. (2021). Comparative analysis of COVID-19 x-ray images classification using convolutional neural network, transfer learning, and machine learning classifiers using deep features. Pattern Recognit. Image Anal. 31, 313–322. 10.1134/S1054661821020140 [DOI] [Google Scholar]
  162. Rajaraman S., Antani S. K. (2020). Modality-specific deep learning model ensembles toward improving tb detection in chest radiographs. IEEE Access 8, 27318–27326. 10.1109/ACCESS.2020.2971257 [DOI] [PMC free article] [PubMed] [Google Scholar]
  163. Rajaraman S., Candemir S., Thoma G., Antani S. (2019). Visualizing and explaining deep learning predictions for pneumonia detection in pediatric chest radiographs. Med. Imaging 10950, 200–211. 10.1117/12.2512752 [DOI] [Google Scholar]
  164. Rajpurkar P., Irvin J., Zhu K., Yang B., Mehta H., Duan T., et al. (2017). Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225. 10.48550/arXiv.1711.05225 [DOI] [Google Scholar]
  165. Rajpurkar P., O'Connell C., Schechter A., Asnani N., Li J., Kiani A., et al. (2020). Chexaid: deep learning assistance for physician diagnosis of tuberculosis using chest x-rays in patients with hiv. NPJ Digital Med. 3, 1–8. 10.1038/s41746-020-00322-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
166. Redmon J., Farhadi A. (2017). “Yolo9000: better, faster, stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, HI: IEEE), 7263–7271. [Google Scholar]
  167. Reis E. P. (2022). Brax, a Brazilian Labeled Chest x-ray Dataset. Available online at: https://physionet.org/content/brax/1.0.0/ [DOI] [PMC free article] [PubMed]
168. Ren S., He K., Girshick R., Sun J. (2015). “Faster r-cnn: towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015 (Montreal, QC), 91–99. [Google Scholar]
169. Ronneberger O., Fischer P., Brox T. (2015). “U-net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 - 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III (Munich: Springer), 234–241. 10.1007/978-3-319-24574-4_28 [DOI] [Google Scholar]
170. Rosenthal A., Gabrielian A., Engle E., Hurt D. E., Alexandru S., Crudu V., et al. (2017). The tb portals: an open-access, web-based platform for global drug-resistant-tuberculosis data sharing and analysis. J. Clin. Microbiol. 55, 3267–3282. 10.1128/JCM.01013-17 [DOI] [PMC free article] [PubMed] [Google Scholar]
  171. Russell C., Kusner M. J., Loftus J., Silva R. (2017). “When worlds collide: integrating different counterfactual assumptions in fairness,” in Advances in Neural Information Processing Systems, Vol. 30. [Google Scholar]
172. Ryoo S., Kim H. J. (2014). Activities of the Korean Institute of Tuberculosis. Osong Public Health Res. Perspect. 5, S43-S49. 10.1016/j.phrp.2014.10.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
173. Sabour S., Frosst N., Hinton G. E. (2017). “Dynamic routing between capsules,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017 (Long Beach, CA), 3856–3866. [Google Scholar]
174. Saha P., Sadi M. S., Islam M. M. (2021). Emcnet: automated COVID-19 diagnosis from x-ray images using convolutional neural network and ensemble of machine learning classifiers. Inform. Med. Unlocked 22, 100505. 10.1016/j.imu.2020.100505 [DOI] [PMC free article] [PubMed] [Google Scholar]
175. Sahadevan V. (2002). High Resolution Digitized Image Analysis of Chest x-rays for Diagnosis of Difficult to Visualize Evolving Very Early Stage Lung Cancer, Pneumoconiosis and Pulmonary Diseases. U.S. Patent No US20020094119. [Google Scholar]
  176. Sahlol A. T., Abd Elaziz M., Tariq Jamal A., Damaševičius R., Farouk Hassan O. (2020). A novel method for detection of tuberculosis in chest radiographs using artificial ecosystem-based optimisation of deep neural network features. Symmetry 12, 1146. 10.3390/sym12071146 [DOI] [Google Scholar]
  177. Santosh K., Vajda S., Antani S., Thoma G. R. (2016). Edge map analysis in chest x-rays for automatic pulmonary abnormality screening. Int. J. Comput. Assist. Radiol. Surg. 11, 1637–1646. 10.1007/s11548-016-1359-6 [DOI] [PubMed] [Google Scholar]
  178. Schultheiss M., Schober S. A., Lodde M., Bodden J., Aichele J., Müller-Leisse C., et al. (2020). A robust convolutional neural network for lung nodule detection in the presence of foreign bodies. Sci. Rep. 10, 1–9. 10.1038/s41598-020-69789-z [DOI] [PMC free article] [PubMed] [Google Scholar]
179. Selvaraju R. R., Cogswell M., Das A., Vedantam R., Parikh D., Batra D. (2017). “Grad-cam: visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE International Conference on Computer Vision (Venice: IEEE), 618–626. [Google Scholar]
  180. Settles M. (2005). An Introduction to Particle Swarm Optimization. Idaho: Department of Computer Science, University of Idaho. [Google Scholar]
181. Seyyed-Kalantari L., Liu G., McDermott M., Chen I. Y., Ghassemi M. (2021). “Chexclusion: fairness gaps in deep chest x-ray classifiers,” in BIOCOMPUTING 2021: Proceedings of the Pacific Symposium (Kohala Coast, HI: World Scientific), 232–243. 10.1142/9789811232701_0022 [DOI] [PubMed] [Google Scholar]
182. Shankar S., Devi M. R. J., Ananthi S., Lokes M. R. S., K V., et al. (2022). AI in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19. Patent No IN202241024227. [Google Scholar]
183. Shaoliang P., Xiongjun Z., Xiaoqi W., Deshan Z., Li L., Yingjie J. C. (2020). Multidirectional x-ray Chest Radiography Pneumonia Diagnosis Method Based on Deep Learning. Patent No CN111951246B. [Google Scholar]
  184. Sherrier R. H., Johnson G. (1987). Regionally adaptive histogram equalization of the chest. IEEE Trans. Med. Imaging 6, 1–7. 10.1109/TMI.1987.4307791 [DOI] [PubMed] [Google Scholar]
185. Shi W., Tong L., Zhu Y., Wang M. D. (2021). COVID-19 automatic diagnosis with radiographic imaging: explainable attention transfer deep neural networks. IEEE J. Biomed. Health Inform. 25, 2376–2387. 10.1109/JBHI.2021.3074893 [DOI] [PMC free article] [PubMed] [Google Scholar]
186. Shiraishi J., Katsuragawa S., Ikezoe J., Matsumoto T., Kobayashi T., Komatsu K.-i., et al. (2000). Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. Am. J. Roentgenol. 174, 71–74. 10.2214/ajr.174.1.1740071 [DOI] [PubMed] [Google Scholar]
  187. Sim Y., Chung M. J., Kotter E., Yune S., Kim M., Do S., et al. (2020). Deep convolutional neural network-based software improves radiologist detection of malignant lung nodules on chest radiographs. Radiology 294, 199–209. 10.1148/radiol.2019182465 [DOI] [PubMed] [Google Scholar]
188. Singh A., Lall B., Panigrahi B. K., Agrawal A., Agrawal A., Thangakunam B., et al. (2021). Deep lf-net: semantic lung segmentation from Indian chest radiographs including severely unhealthy images. Biomed. Signal Process. Control 68, 102666. 10.1016/j.bspc.2021.102666 [DOI] [Google Scholar]
  189. Singh R., Kalra M. K., Nitiwarangkul C., Patti J. A., Homayounieh F., Padole A., et al. (2018). Deep learning in chest radiography: detection of findings and presence of change. PLoS ONE 13, e0204155. 10.1371/journal.pone.0204155 [DOI] [PMC free article] [PubMed] [Google Scholar]
  190. Smilkov D., Thorat N., Kim B., Viégas F., Wattenberg M. (2017). Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825. 10.48550/arXiv.1706.03825 [DOI] [Google Scholar]
  191. Soleymanpour E., Pourreza H. R., Yazdi M. S., et al. (2011). Fully automatic lung segmentation and rib suppression methods to improve nodule detection in chest radiographs. J. Med. Signals Sens. 1, 191. 10.4103/2228-7477.95412 [DOI] [PMC free article] [PubMed] [Google Scholar]
192. Sousa R. T., Marques O., Curado G. T., Costa R. M. D., Soares A. S., et al. (2014). “Evaluation of classifiers to a childhood pneumonia computer-aided diagnosis system,” in 2014 IEEE 27th International Symposium on Computer-Based Medical Systems (New York, NY: IEEE), 477–478. [Google Scholar]
193. Stein A., Wu C., Carr C., Shih G., Dulkowski J., Kalpathy, et al. (2018). RSNA Pneumonia Detection Challenge. Available online at: https://kaggle.com/competitions/rsna-pneumonia-detection-challenge
194. Subramanian V., Wang H., Wu J. T., Wong K. C., Sharma A., Syeda-Mahmood T. (2019). “Automated detection and type classification of central venous catheters in chest x-rays,” in Medical Image Computing and Computer Assisted Intervention - MICCAI 2019 - 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part VI (Shenzhen: Springer), 522–530. 10.1007/978-3-030-32226-7_58 [DOI] [Google Scholar]
  195. Sundaram S., Hulkund N. (2021). Gan-based data augmentation for chest x-ray classification. arXiv preprint arXiv:2107.02970. 10.48550/arXiv.2107.02970 [DOI] [Google Scholar]
196. Syeda-Mahmood T., Wong K. C., Gur Y., Wu J. T., Jadhav A., Kashyap S., et al. (2020). “Chest x-ray report generation through fine-grained label learning,” in Medical Image Computing and Computer Assisted Intervention - MICCAI 2020 - 23rd International Conference (Lima: Springer), 561–571. 10.1007/978-3-030-59713-9_54 [DOI] [Google Scholar]
197. Tabik S., Gómez-Ríos A., Martín-Rodríguez J. L., Sevillano-García I., Rey-Area M., Charte D., et al. (2020). Covidgr dataset and covid-sdnet methodology for predicting COVID-19 based on chest x-ray images. IEEE J. Biomed. Health Inform. 24, 3595–3605. 10.1109/JBHI.2020.3037127 [DOI] [PMC free article] [PubMed] [Google Scholar]
  198. Takemiya R., Kido S., Hirano Y., Mabu S. (2019). Detection of pulmonary nodules on chest x-ray images using R-CNN. Int. Forum Med. Imaging 11050, 147–152. 10.1117/12.2521652 [DOI] [Google Scholar]
  199. Tan M., Le Q. V. (2019). Efficientnet: rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946. 10.48550/arXiv.1905.11946 [DOI] [Google Scholar]
200. Tang Y.-B., Tang Y., Sandfort V., Xiao J., Summers R. M. (2019a). “Tuna-net: task-oriented unsupervised adversarial network for disease recognition in cross-domain chest x-rays,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Shenzhen: Springer), 431–440. 10.1007/978-3-030-32226-7_48 [DOI] [Google Scholar]
201. Tang Y.-B., Tang Y.-X., Xiao J., Summers R. M. (2019b). “Xlsor: a robust and accurate lung segmentor on chest x-rays using criss-cross attention and customized radiorealistic abnormalities generation,” in International Conference on Medical Imaging with Deep Learning (London: PMLR), 457–467. [Google Scholar]
202. National Lung Screening Trial Research Team (2011). Reduced lung-cancer mortality with low-dose computed tomographic screening. N. Engl. J. Med. 365, 395–409. 10.1056/NEJMoa1102873 [DOI] [PMC free article] [PubMed] [Google Scholar]
203. Thian Y. L., Ng D., Hallinan J. T. P. D., Jagmohan P., Sia S. Y., Tan C. H., et al. (2021). Deep learning systems for pneumothorax detection on chest radiographs: a multicenter external validation study. Radiol. Artif. Intell. 3, e200190. 10.1148/ryai.2021200190 [DOI] [PMC free article] [PubMed] [Google Scholar]
204. Majkowska A., Mittal S., Steiner D. F., Reicher J. J., McKinney S. M., et al. (2020). Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology 294, 421–431. 10.1148/radiol.2019191293 [DOI] [PubMed] [Google Scholar]
205. Ting H., Tieqiang L., Xia L. (2021). Lung Inflammation Recognition and Diagnosis Method Based on Deep Learning Convolutional Neural Network. Patent No CN113192041. [Google Scholar]
206. Ucar F., Korkmaz D. (2020). Covidiagnosis-net: deep bayes-squeezenet based diagnosis of the coronavirus disease 2019 (COVID-19) from x-ray images. Med. Hypotheses 140, 109761. 10.1016/j.mehy.2020.109761 [DOI] [PMC free article] [PubMed] [Google Scholar]
  207. Ul Abideen Z., Ghafoor M., Munir K., Saqib M., Ullah A., Zia T., et al. (2020). Uncertainty assisted robust tuberculosis identification with bayesian convolutional neural networks. IEEE Access 8, 22812–22825. 10.1109/ACCESS.2020.2970023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  208. Van Ginneken B., Stegmann M. B., Loog M. (2006). Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database. Med. Image Anal. 10, 19–40. 10.1016/j.media.2005.02.002 [DOI] [PubMed] [Google Scholar]
  209. Vayá M. D. L. I., Saborit J. M., Montell J. A., Pertusa A., Bustos A., Cazorla M., et al. (2020). Bimcv COVID-19+: a large annotated dataset of rx and ct images from COVID-19 patients. arXiv preprint arxiv:2006.01174. 10.48550/arXiv.2006.01174 [DOI] [Google Scholar]
210. Vedantam R., Lawrence Zitnick C., Parikh D. (2015). “Cider: consensus-based image description evaluation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Boston, MA: IEEE), 4566–4575. [Google Scholar]
211. Venkata Hari G. P. (2022). Tuberculosis Detection Using Artificial Intelligence. Patent No IN202241001179. [Google Scholar]
  212. Wang C., Elazab A., Jia F., Wu J., Hu Q. (2018). Automated chest screening based on a hybrid model of transfer learning and convolutional sparse denoising autoencoder. Biomed. Eng. Online 17, 1–19. 10.1186/s12938-018-0496-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  213. Wang D., Arzhaeva Y., Devnath L., Qiao M., Amirgholipour S., Liao Q., et al. (2020). “Automated pneumoconiosis detection on chest x-rays using cascaded learning with real and synthetic radiographs,” in 2020 Digital Image Computing: Techniques and Applications (DICTA), 1–6. 10.1109/DICTA51227.2020.9363416 [DOI] [Google Scholar]
  214. Wang H., Wang Z., Du M., Yang F., Zhang Z., Ding S., et al. (2020). “Score-cam: score-weighted visual explanations for convolutional neural networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 24–25. 10.1109/CVPRW50498.2020.00020 [DOI] [Google Scholar]
  215. Wang L., Wong A., Lin Z. Q., McInnis P., Chung A., Gunraj H. (2020). Actualmed COVID-19 Chest X-ray Dataset Initiative. Available online at: https://github.com/agchung/actualmed-covid-chestxray-dataset
216. Wang N., Liu H., Xu C. (2020). “Deep learning for the detection of COVID-19 using transfer learning and model integration,” in 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC) (Beijing: IEEE), 281–284. [Google Scholar]
217. Wang X., Peng Y., Lu L., Lu Z., Bagheri M., Summers R. M. (2017). “Chestx-ray8: hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, HI: IEEE), 2097–2106. [Google Scholar]
218. Wang X., Yu J., Zhu Q., Li S., Zhao Z., Yang B., et al. (2020). Potential of deep learning in assessing pneumoconiosis depicted on digital chest radiography. Occup. Environ. Med. 77, 597–602. 10.1136/oemed-2019-106386 [DOI] [PubMed] [Google Scholar]
219. Wang Z., Qian Q., Zhang J., Duo C., He W., Zhao L. (2021). Deep learning for computer-aided diagnosis of pneumoconiosis. Res. Sq. 1–14. 10.21203/rs.3.rs-460896/v1 [DOI] [Google Scholar]
220. Wanli J., Xingwang L., Donglei Y. (2021). Deep Learning-Based Pneumoconiosis Grading Method and Device, Medium and Equipment. Patent No CN112819819. [Google Scholar]
  221. Wessel J., Heinrich M. P., von Berg J., Franz A., Saalbach A. (2019). Sequential rib labeling and segmentation in chest x-ray using mask r-cnn. arXiv preprint arXiv:1908.08329. 10.48550/arXiv.1908.08329 [DOI] [Google Scholar]
  222. WHO (2013). Global Tuberculosis Report 2013. Geneva: World Health Organization. [Google Scholar]
  223. WHO (2016). World Health Statistics 2016: Monitoring Health for the SDGs Sustainable Development Goals. Geneva: World Health Organization. [Google Scholar]
  224. WHO (2021). Meeting Report of the WHO Expert Consultation on the Definition of Extensively Drug-Resistant Tuberculosis, 27–29 October 2020. Geneva: World Health Organization. [Google Scholar]
225. Xu H., Tao X., Sundararajan R., Yan W., Annangi P., Sun X., et al. (2010). “Computer aided detection for pneumoconiosis screening on digital chest radiographs,” in Proceedings of the Third International Workshop on Pulmonary Image Analysis, 129–138. [Google Scholar]
226. Xue Y., Xu T., Rodney Long L., Xue Z., Antani S., Thoma G. R., et al. (2018). “Multimodal recurrent model with attention for automated radiology report generation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer), 457–466. 10.1007/978-3-030-00928-1_52 [DOI] [Google Scholar]
  227. Yang F., Tang Z.-R., Chen J., Tang M., Wang S., Qi W., et al. (2021). Pneumoconiosis computer aided diagnosis system based on x-rays and deep learning. BMC Med. Imaging 21, 1–7. 10.1186/s12880-021-00723-z [DOI] [PMC free article] [PubMed] [Google Scholar]
228. Yao Z., Lai Z., Wang C. (2016). “Image enhancement based on equal area dualistic sub-image and non-parametric modified histogram equalization method,” in 2016 9th International Symposium on Computational Intelligence and Design (ISCID), Vol. 1 (Hangzhou: IEEE), 447–450. [Google Scholar]
  229. Yu D., Zhang K., Huang L., Zhao B., Zhang X., Guo X., et al. (2020). Detection of peripherally inserted central catheter (picc) in chest x-ray images: a multi-task deep learning model. Comput. Methods Programs Biomed. 197, 105674. 10.1016/j.cmpb.2020.105674 [DOI] [PubMed] [Google Scholar]
  230. Yu P., Xu H., Zhu Y., Yang C., Sun X., Zhao J. (2011). An automatic computer-aided detection scheme for pneumoconiosis on digital chest radiographs. J. Digit. Imaging 24, 382–393. 10.1007/s10278-010-9276-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  231. Yue Z., Ma L., Zhang R. (2020). Comparison and validation of deep learning models for the diagnosis of pneumonia. Comput. Intell. Neurosci. 2020, 8876798. 10.1155/2020/8876798 [DOI] [PMC free article] [PubMed] [Google Scholar]
  232. Zadbuke A. S. (2012). Brightness preserving image enhancement using modified dualistic sub image histogram equalization. Int. J. Sci. Eng. Res. 3, 1–6. Available online at: https://www.ijser.org/researchpaper/Brightness-Preserving-Image-Enhancementusing-Modifieddualistic-Sub-Image-Histogram-Equalization.pdf [Google Scholar]
233. Zafar M. B., Valera I., Rodriguez M. G., Gummadi K. P. (2017). “Fairness constraints: mechanisms for fair classification,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, eds A. Singh and X. J. Zhu (Fort Lauderdale, FL: PMLR), 962–970. [Google Scholar]
  234. Zech J. R., Badgeley M. A., Liu M., Costa A. B., Titano J. J., Oermann E. K. (2018). Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 15, e1002683. 10.1371/journal.pmed.1002683 [DOI] [PMC free article] [PubMed] [Google Scholar]
235. Zhang D., Ren F., Li Y., Na L., Ma Y. (2021). Pneumonia detection from chest x-ray images based on convolutional neural network. Electronics 10, 1512. 10.3390/electronics10131512 [DOI] [Google Scholar]
  236. Zhang J., Xie Y., Pang G., Liao Z., Verjans J., Li W., et al. (2021). Viral pneumonia screening on chest x-rays using confidence-aware anomaly detection. IEEE Trans. Med. Imaging 40, 879–890. 10.1109/TMI.2020.3040950 [DOI] [PMC free article] [PubMed] [Google Scholar]
  237. Zhang L., Rong R., Li Q., Yang D. M., Yao B., Luo D., et al. (2021). A deep learning-based model for screening and staging pneumoconiosis. Sci. Rep. 11, 1–7. 10.1038/s41598-020-77924-z [DOI] [PMC free article] [PubMed] [Google Scholar]
238. Zhang R., Duan H., Cheng J., Zheng Y. (2020). “A study on tuberculosis classification in chest x-ray using deep residual attention networks,” in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (Montreal, QC: IEEE), 1552–1555. [DOI] [PubMed] [Google Scholar]
239. Zhang W., Li G., Wang F., Yu Y., Lin L., Liang H., et al. (2019). “Simultaneous lung field detection and segmentation for pediatric chest radiographs,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Shenzhen: Springer), 594–602. 10.1007/978-3-030-32226-7_6 [DOI] [Google Scholar]
240. Zhao B., Guo Y., Zheng C., Zhang M., Lin J., Luo Y., et al. (2019). Using deep-learning techniques for pulmonary-thoracic segmentations and improvement of pneumonia diagnosis in pediatric chest radiographs. Pediatr. Pulmonol. 54, 1617–1626. 10.1002/ppul.24431 [DOI] [PubMed] [Google Scholar]
  241. Zhao W., Wang L., Zhang Z. (2020). Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 32, 9383–9425. 10.1007/s00521-019-04452-x [DOI] [Google Scholar]
242. Zhu J.-Y., Park T., Isola P., Efros A. A. (2017). “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE International Conference on Computer Vision (Venice: IEEE), 2223–2232. [Google Scholar]
