Simple Summary
Pulmonary nodules are considered a sign of bronchogenic carcinoma, and detecting them early can halt progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans published in the last two decades, in addition to the limitations and future prospects of AI in this field.
Abstract
Pulmonary nodules are the precursors of bronchogenic carcinoma, and their early detection facilitates early treatment that can save many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and easier-to-use tools. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Keywords: pulmonary nodule, artificial intelligence, deep learning, neural networks
1. Introduction
Lung cancer screening is a very important issue, as the disease is the second most common type of cancer in both males and females. Lung cancer is responsible for of all cancer cases in the USA [1]. Early detection is associated with a higher 5-year survival rate. Risk factors for developing lung cancer include all types of smoking (including electronic cigarettes and passive smoking) [2,3,4], a family history in one or more relatives, especially those who developed cancer at a young age [5], chronic obstructive lung disease [6], and human papilloma virus [7]. Recently, the United States Preventive Services Task Force recommended annual screening for lung cancer with low-dose computed tomography (LDCT) for asymptomatic individuals aged 55 to 80 years who have a 30 pack-year smoking history and currently smoke or have quit smoking within the past 15 years. Patients who have stopped smoking for 15 years, have a co-existing health problem limiting life expectancy, or are not candidates for surgical resection are excluded from annual screening. The screening algorithm considers the number, density, and size of the solid, part-solid, or non-solid components of the nodules, and a follow-up schedule is designed according to these parameters [8,9]. Artificial intelligence was developed to enhance the computational abilities of computers and teach them to think, solve problems, and perform tasks in the same way as human beings. Recently, medical image analysis and disease prediction and detection have become among the most exciting applications of artificial intelligence. Using artificial intelligence techniques, computer-aided diagnosis (CAD) systems have been developed for the analysis of medical imaging and have proved to be very helpful tools. AI techniques can be used to create a learning model suitable for clinical lung cancer screening. Such a model consists of four main steps: lung segmentation, followed by nodule segmentation/detection, then feature analysis, and finally the exclusion of false positive nodules (see Figure 1). Classification of detected pulmonary nodules into benign and malignant is based upon a preset of characteristic features, including shape analysis, estimation of growth rate, and appearance analysis [10,11,12]. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification, covering CT-based studies published over the last two decades.
2. Lung Segmentation
The first step in almost every CAD system dealing with lung disease is segmentation, in which a structure of interest is delineated from its surroundings prior to analysis. Lung segmentation is challenging because several structures with near-similar densities, such as the bronchi, bronchioles, and pulmonary artery and vein branches, lie within or adjacent to the lung. Lung segmentation techniques can be grouped into four main categories based on: (1) Hounsfield unit (HU) thresholds, (2) deformable boundaries, (3) shape models, and (4) region/edge-based models, in addition to machine learning (ML)-based methods and hybrid techniques that combine several methods to overcome the drawbacks of any single one (Figure 2). Details of the different categories are given below.
Hounsfield unit (HU) thresholding: Normal lung parenchyma displays low HU values and appears hypodense on thoracic CT images, in contrast to other structures such as the heart, blood vessels, or bronchial walls. Researchers have tried to determine an HU threshold that defines lung parenchyma using different methods. Hu et al. [13] proposed a three-step technique for lung segmentation. Their method starts by extracting the lung parenchyma using a suitable grey scale threshold; the right and left lungs are then separated using dynamic programming, and finally a series of morphological operations refines the pulmonary margins. This method was further used in the works of Ukil and Reinhardt [14] as well as Van Rikxoort et al. [15]. Armato et al. [16,17] used grey scale thresholding once to extract the thorax from surrounding structures and again to extract the lungs from the rest of the thoracic structures. A rolling ball algorithm is applied to the lung periphery so as not to miss any juxta-pleural nodule and to exclude partial volume pixels. Pu et al. [18] designed an adaptive border marching (ABM) algorithm with the same purpose of refining the lung margins. Gao et al. [19] proposed a four-step method that separates the pulmonary vessels and airways from the lung parenchyma and the right lung from the left based on a grey scale threshold. Other researchers used more sophisticated methods to define the threshold used for lung extraction, such as histogram analysis [20] and 3D fuzzy adaptive thresholding [21]. The limitations of thresholding-based lung segmentation are mainly related to its dependence on image resolution and the type of scanner used (e.g., GE, Philips). Another important issue is that the densities of different lung structures may overlap, making differentiation based on HU alone difficult.
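To make the idea concrete, the following is a minimal sketch (not the pipeline of any cited work) of HU-threshold-based lung extraction using NumPy/SciPy; the −320 HU threshold, structuring element, and iteration counts are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_lungs_hu(ct_hu, threshold=-320):
    """Rough lung mask from a 3D CT volume (axis 0 = slices) in Hounsfield units."""
    # 1. Lung/air voxels lie well below soft tissue (~0 HU and above).
    binary = ct_hu < threshold
    # 2. Remove air outside the body: drop components touching the in-plane borders.
    labels, _ = ndimage.label(binary)
    border = np.unique(np.concatenate([labels[:, 0, :].ravel(), labels[:, -1, :].ravel(),
                                       labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    binary[np.isin(labels, border)] = False
    # 3. Keep the two largest remaining components (right and left lung).
    labels, n = ndimage.label(binary)
    if n > 2:
        sizes = ndimage.sum(binary, labels, index=list(range(1, n + 1)))
        keep = np.argsort(sizes)[-2:] + 1
        binary = np.isin(labels, keep)
    # 4. Morphological closing smooths the pleural border and re-includes small
    #    structures (vessels, juxta-pleural nodules) excluded by the threshold.
    return ndimage.binary_closing(binary, structure=np.ones((3, 3, 3)), iterations=3)

# Usage on a hypothetical volume: lung_mask = segment_lungs_hu(ct_volume_hu)
```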
Deformable boundary models: The second family of methods for lung segmentation comprises deformable boundary models, including snakes, active contours, and level sets. These models start from an initial contour and then evolve toward the shape of the desired structure under the influence of internal and external forces. Itai et al. [22] utilized a 2D parametric deformable model to extract the lung from computed tomography (CT) images using the lung borders as an external guiding force. Silveira et al. [23,24] presented a technique that uses active contours and level sets: segmentation begins with thresholding, then edge detection is initiated with a robust geometric active contour around the lung, which splits into two contours that evolve further and are classified as valid or invalid according to confidence degrees. The major limitation of deformable boundary models is their high sensitivity to the choice of the initial contour, in addition to the inhomogeneity of lung structure, which may lead to unsuccessful adaptation to the lung boundaries [25].
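As an illustration, the level-set member of this family is available off the shelf in scikit-image; the sketch below runs a morphological Chan-Vese contour on a synthetic slice (the synthetic image, iteration count, and smoothing weight are placeholder assumptions, not settings from the cited works).

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic stand-in for a normalized axial CT slice: bright body, two dark
# elliptical "lung" regions (a real pipeline would load and window a CT slice).
yy, xx = np.mgrid[:256, :256]
slice_img = np.ones((256, 256))
slice_img[((yy - 128) / 60) ** 2 + ((xx - 80) / 40) ** 2 < 1] = 0.1
slice_img[((yy - 128) / 60) ** 2 + ((xx - 176) / 40) ** 2 < 1] = 0.1
slice_img += 0.05 * np.random.default_rng(0).normal(size=slice_img.shape)

# Region-based level set (Chan-Vese): the contour evolves to separate the dark
# lung fields from the brighter surroundings; 100 iterations and smoothing=3 are
# illustrative settings. The returned mask may need inverting depending on which
# phase ends up covering the lungs.
lung_mask = morphological_chan_vese(slice_img, 100, smoothing=3)
```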
Shape-based models: In this approach, data stored in the CAD system are used to improve the accuracy of lung segmentation, using either a statistical shape model or a lung appearance model. Unlike the previously discussed methods, this approach is more effective for lungs with moderate to severe pathology and for variations in lung anatomy, as it benefits from trained models [26]. Sun et al. [27] proposed a two-step lung segmentation technique that used a robust active shape model (RASM) matching method, guided by rib cage detection, to segment the outline of the lungs, followed by the optimal surface finding approach of Li et al. [28] to fit the initial segmentation to the lung; the right and left lungs were segmented separately. Sofka et al. [29] designed a multistage learning model that used predefined anatomical data to initialize a statistical shape model. Hua et al. [30] developed a graph-based search algorithm with a cost function that takes into account intensity, gradient, boundary smoothness, and rib anatomical information. Other researchers proposed a user interface framework [31] or Bayesian classification refined by a Markov Gibbs random field (MGRF) method [32,33,34]. A similar approach was introduced by Chung et al. [35], who developed a Bayesian approach based on the Chan Vese (CV) model [36], where data obtained from the previous (upper) slice were used to predict the lung in the current image. False positive juxta-pleural nodule candidates were excluded via concave point detection and a circle/ellipse Hough transform, and the lung contour was finally modified by adding the remaining nodule candidates to the area of the CV model. More recently, Sun et al. [37] presented a new active shape model (ASM) algorithm that detects outlier marker points by a distance method, aiming at a better assessment of the lung periphery and juxta-pleural lung nodules; they also used robust principal component analysis (RPCA) based on low-rank theory to remove noise from the images used to construct the ASM. Despite the many advantages of shape models over other lung segmentation methods, their main limitation is their dependence on the accuracy of the stored data [25].
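The statistical core of an active shape model can be sketched with a PCA over aligned landmark sets; the example below uses randomly generated landmarks as a stand-in for real, Procrustes-aligned lung contours, and the 95% variance cut-off and the mode-weight constraint are conventional but illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for aligned training contours: 40 subjects x 60 (x, y) landmarks
# flattened to 120 values each (real data would come from expert annotations
# after Procrustes alignment).
training_shapes = rng.normal(size=(40, 120))

pca = PCA(n_components=0.95)          # keep the modes explaining 95% of variance
pca.fit(training_shapes)
mean_shape = pca.mean_

# A plausible lung contour is the mean shape plus a weighted sum of the principal
# modes; during matching, each weight b_i is typically constrained to about
# +/- 3 standard deviations so the fitted contour stays in the learned shape space.
b = np.zeros(pca.n_components_)
b[0] = 2.0 * np.sqrt(pca.explained_variance_[0])   # perturb the first mode
new_shape = mean_shape + pca.components_.T @ b
```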
Region-based methods: The main idea of region-based segmentation is that neighboring pixels in a certain region have similar values [38]. An example is the region growing method: if a pixel meets criteria similar to a predefined set, it is included in that region [38,39,40,41,42]. Other examples include watershed segmentation [43], random walks segmentation [44], graph cuts segmentation [45], and fuzzy connectedness [46]. This class of methods is suitable for homogeneous structures, such as lungs with no or mild pathology, airways, and pathological lesions of homogeneous density [25].
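A minimal region-growing sketch using scikit-image's flood fill is shown below; the seed coordinates and HU tolerance are illustrative assumptions.

```python
from skimage.segmentation import flood

def grow_region(ct_hu, seed, tolerance=100):
    """Region growing: the connected set of voxels whose HU values lie within
    +/- tolerance of the seed voxel's value."""
    return flood(ct_hu, seed, tolerance=tolerance)

# Usage on a hypothetical volume, with an illustrative seed inside the left lung:
# mask = grow_region(ct_volume_hu, seed=(120, 250, 180), tolerance=100)
```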
Machine learning-based methods: These methods use learning models built on predefined measurable characteristics (called features) to identify normal and abnormal lung regions as well as different anatomical structures, and finally to construct the lung segmentation. Small image patches are labelled as normal lung, abnormal lung, or neighboring soft tissue; the most common pathological patches used in clinical practice include consolidation, ground glass opacities, and fibrosis. A supervised training process extracts features from each pixel/voxel of the annotated data and classifies them to predict the lung field boundaries and reach the final segmentation. A proper lung segmentation should identify both normal and pathological lung regions in the same process, which is done by examining every voxel in the CT image [47,48,49,50,51]. Several sophisticated algorithms have been developed for this task; for example, Mansoor et al. [52] designed an ML algorithm that identifies a large spectrum of pulmonary pathological lesions, combined with region-based and neighboring-anatomy-guided correction segmentation. This approach is computationally expensive, but its remarkably high accuracy, along with the development of parallel computing and powerful workstations, makes it feasible in clinical practice. One limitation is that it relies on small image patches, which makes it impossible to capture structural information such as the global shape of the lung. It is also difficult to build feature sets that fit the anatomical and physiological lung variations of different subjects. Lastly, the pixel-by-pixel assessment makes this the least efficient of the major classes of lung segmentation [51,53,54,55,56].
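A toy version of such a patch classifier might look like the sketch below: hand-crafted intensity features per patch feeding a random forest. The feature set, patch size, class labels, and the randomly generated training patches are all illustrative stand-ins for a real annotated dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch_hu):
    """Simple per-patch descriptors: intensity statistics plus a coarse HU histogram."""
    hist, _ = np.histogram(patch_hu, bins=16, range=(-1000, 400), density=True)
    stats = [patch_hu.mean(), patch_hu.std(),
             np.percentile(patch_hu, 5), np.percentile(patch_hu, 95)]
    return np.concatenate([stats, hist])

# Dummy stand-ins for annotated training patches (real ones would be sampled from
# expert-labelled CT regions): 200 patches of 32x32 pixels, labels 0 = normal lung,
# 1 = consolidation, 2 = ground glass, 3 = soft tissue.
rng = np.random.default_rng(0)
train_patches = rng.uniform(-1000, 400, size=(200, 32, 32))
train_labels = rng.integers(0, 4, size=200)

X = np.stack([patch_features(p) for p in train_patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, train_labels)
# At inference time every patch (or voxel neighbourhood) is classified and the lung
# mask is assembled from the patches predicted to be lung classes.
```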
Hybrid approaches to lung segmentation: No single lung segmentation method can handle all anatomical and pathological variants on its own, which has encouraged the development of combined approaches, as in the works of Mansoor et al. [52] and Hua et al. [30].
In summary, the lung segmentation systems reviewed above, across these different categories, are presented in Table 1.
Table 1.
| Study | Method | # Subjects | System Evaluation |
|---|---|---|---|
| Armato et al. [16,17] | (1) Grey scale thresholding; (2) rolling ball algorithm. | 17 CT patients. | The area under the ROC curve (AUC) of the system was . |
| Hu et al. [13] | (1) Grey scale thresholding; (2) dynamic programming; (3) morphological operations. | 8 normal CT patients. | The average intrasubject change was . |
| Itai et al. [22] | (1) Grey scale thresholding; (2) active contour model. | 9 CT patients. | Qualitative evaluation only. |
| Silveira et al. [23,24] | (1) Grey scale thresholding; (2) geometric active contour; (3) level sets; (4) expectation–maximization (EM) algorithm. | Stack of chest CT slices. | Qualitative evaluation only. |
| Gao et al. [19] | (1) Grey scale thresholding; (2) anisotropic diffusion; (3) 3D region growing; (4) dynamic programming; (5) rolling ball algorithm. | 8 CT scans. | The average overlap coefficient of the system was . |
| Pu et al. [18] | (1) Grey scale thresholding; (2) geometric border marching. | 20 CT patients. | Average over-segmentation and under-segmentation ratios were and , respectively. |
| Korfiatis et al. [57] | (1) k-means clustering; (2) support vector machine (SVM). | 22 CT patients. | The mean overlap coefficient of the system was higher than . |
| Wang et al. [58] | (1) Grey scale thresholding; (2) 3D gray-level co-occurrence matrix (GLCM) [59,60]. | 76 CT patients. | The mean overlap coefficient of the system was . |
| Van Rikxoort et al. [15] | (1) Region growing; (2) grey scale thresholding; (3) dynamic programming; (4) 3D hole filling; (5) morphological closing. | 100 CT patients. | The accuracy of the system was . |
| Wei et al. [20] | (1) Histogram analysis and connected-component labeling; (2) wavelet transform; (3) Otsu's algorithm. | 9 CT patients. | The accuracy range of the system was . |
| Ye et al. [21] | (1) 3D fuzzy adaptive thresholding; (2) expectation–maximization (EM) algorithm; (3) antigeometric diffusion; (4) volumetric shape index map; (5) Gaussian filter; (6) dot map; (7) weighted support vector machine (SVM) classification. | 108 CT patients. | The average detection rate of the system was . |
| Sun et al. [27] | (1) Active shape model matching method; (2) rib cage detection method; (3) surface finding approach. | 60 CT patients. | The Dice similarity coefficient (DSC) and mean absolute surface distance of the system were and , respectively. |
| Sofka et al. [29] | (1) Shape model; (2) boundary detection. | 260 CT patients. | The errors in segmenting the left and right lung were and , respectively. |
| Hua et al. [30] | Graph-based search algorithm. | 19 pathological lung CT patients. | The sensitivity, specificity, and Hausdorff distance of the system were , , and , respectively. |
| Nakagomi et al. [61] | Min-cut graph algorithm. | 97 CT patients. | The sensitivity and Jaccard index of the system were and , respectively. |
| Mansoor et al. [52] | (1) Fuzzy connectedness segmentation algorithm; (2) texture-based random forest classification; (3) region-based and neighboring anatomy guided correction segmentation. | More than 400 CT patients. | The DSC, Hausdorff distance, sensitivity, and specificity of the system were , , , and , respectively. |
| Yan et al. [62] | Convolutional neural network (CNN). | 861 CT COVID-19 patients. | The system achieved DSC of and , sensitivity of and , and specificity of and for normal and COVID-19-infected lung, respectively. |
| Fan et al. [63] | (1) COVID-19-infected lung segmentation convolutional neural network (Inf-Net); (2) semi-supervised Inf-Net (Semi-Inf-Net). | 100 CT images. | The DSC (sensitivity, specificity) of Inf-Net and Semi-Inf-Net were (, ) and (, ), respectively. |
| Oulefki et al. [64] | Multi-level entropy-based threshold approach. | 297 CT COVID-19 patients. | The DSC, sensitivity, specificity, and precision of the system were , , , and , respectively. |
| Sharafeldeen et al. [65] | (1) Linear combination of Gaussians; (2) expectation–maximization (EM) algorithm; (3) modified k-means clustering approach; (4) 3D MGRF-based morphological constraints. | 32 CT COVID-19 patients. | The overlap coefficient, DSC, absolute lung volume difference (ALVD), and 95th-percentile bidirectional Hausdorff distance (BHD) were , , , and , respectively. |
| Zhao et al. [66] | (1) Grey scale thresholding; (2) 3D V-Net; (3) deformation module. | 112 CT patients. | The DSC, sensitivity, specificity, and mean surface distance error of the system were , , , and , respectively. |
| Sousa et al. [67] | Hybrid deep learning model consisting of U-Net [68] and ResNet-34 [69] architectures. | 385 CT patients, collected from five different datasets. | The mean DSC of the system was higher than , and the average Hausdorff distance was less than . |
| Kim et al. [70] | Otsu's algorithm. | 447 CT patients. | The sensitivity, specificity, accuracy, AUC, and F1-score of the system were , , , , and , respectively. |
3. Pulmonary Nodule Detection and Segmentation
Lung cancer screening programs rely mainly on early detection of pulmonary nodules using LDCT [71,72,73,74,75,76,77]. LDCT provides imaging of the thoracic region with high contrast, temporal, and spatial resolution in a very short acquisition time (a single breath hold). However, detecting lung nodules is not as simple as it looks: a pulmonary nodule usually appears as a white, roughly spherical structure that can mimic a nearby small blood vessel or a collapsed bronchiole. In addition, the detection and characterization of pulmonary nodules show largely subjective inter-reader variation [10,78,79]. This opens the way for artificial intelligence and deep learning to overcome human error and provide more effective procedures. Lung nodule detection proceeds in two stages: first, detection of pulmonary nodule candidates; second, exclusion of the false positive nodules (FPN), keeping only the true positive nodules (TPN). In other words, detection is followed by classification [10,78,79].
Computer-aided diagnosis (CAD) systems: A large public database, the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), was created to provide data for assessing the performance of CAD detection and diagnostic systems and to support further development. Its creation required great effort, as CAD was not used in the annotation of the included images [80]. Other sources, such as data derived from the Dutch-Belgian NELSON lung cancer screening trial and the LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP datasets, have been utilized by most current research on CAD and deep learning (DL) [81]. The first step in nodule detection is to pre-process the CT images by adjusting the image threshold, which improves the discrimination of pulmonary nodules from the surrounding lung parenchyma. Series of 3D cylindrical and spherical filters and template matching have been used to detect small lung nodules [82,83,84,85,86,87,88,89]. However, the geometry of candidate nodules does not always fit these spherical, cylindrical, or circular assumptions, as a nodule may be spiculated by nature or through attachment to a nearby pleural surface or blood vessel [90]. Other studies proposed methods to detect lung nodules using k-means clustering [91,92,93], with further use of rule-based classifiers and linear discriminant analysis (LDA) to eliminate normal lung structures and reduce FPN. One study tried to solve the problem of eliminating an overlapping or contacting blood vessel by choosing a proper region of interest (ROI) in a three-step model [94]. Oda et al. [95] and Siata et al. [96] addressed the same problem with 3D algorithms: a 3D filter based on an orientation map of gradient vectors and a 3D distance transformation. Brown et al. [97] used prior patient images to create a patient-specific model, so that any change in the size and morphology of pulmonary nodules could easily be detected in follow-up images. Messay et al. [98] used a fully automated CAD system that applies intensity thresholding and morphological operations to detect pulmonary nodules with a sensitivity of with 3 FPN/scan; a set of 245 features was computed for each segmented lung nodule and a Fisher linear discriminant (FLD) classifier was used. Similarly, Setio et al. [99] designed a CAD system to detect pulmonary nodules larger than 10 mm. They also used a multi-stage process of thresholding and morphological operations, after which the extracted nodules were segmented, a set of 24 features was computed, and the nodules were finally classified with a radial basis support vector machine (SVM). A recent study aimed to solve the problem of using uncertain class data through a CAD system based on semi-supervised extreme learning machines (SS-ELM), which trains on both labelled feature sets of certain classes and unlabelled feature sets [100].
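The two-stage candidate-detection/false-positive-reduction idea can be sketched as follows; the HU threshold, size limits, feature set, and the dummy training data for the SVM are illustrative assumptions rather than settings from any cited system.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.svm import SVC

def nodule_candidates(ct_hu, lung_mask, hu_threshold=-400, min_vox=10, max_vox=50000):
    """Stage 1: threshold inside the lung mask, take connected components, and
    describe each candidate with a few size/shape/intensity features."""
    binary = (ct_hu > hu_threshold) & lung_mask
    feats, centroids = [], []
    for region in regionprops(label(binary), intensity_image=ct_hu):
        if not (min_vox <= region.area <= max_vox):
            continue  # too small (noise) or too large (chest wall leakage)
        feats.append([region.area, region.extent,
                      region.mean_intensity, region.max_intensity])
        centroids.append(region.centroid)
    return np.array(feats), centroids

# Stage 2: a classifier trained on labelled candidates rejects false positives such
# as vessels and bronchial walls. Dummy arrays stand in for a real annotated set.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((100, 4)), rng.integers(0, 2, 100)
fp_reducer = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
# keep = fp_reducer.predict_proba(candidate_feats)[:, 1] > 0.5
```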
Deep learning: Deep learning is an advanced type of machine learning that uses complex algorithms to model high-level features and recognize characteristics. It is composed of statistical models that predict results based on previous training on annotated or unlabelled datasets [101]. The algorithm can predict the presence of a pulmonary nodule or predict its nature, whether benign or malignant [102]. The convolutional neural network (CNN) is one of the most commonly used DL algorithms in clinical practice. It was originally implemented in LeNet, designed by Yann LeCun et al. [103]. Since then it has gained popularity and has outperformed the previous state-of-the-art texture analysis and support vector machine (SVM) methods. In contrast to tissue radiomics or hand-crafted feature analysis, a CNN model can train itself from scratch, even on new unlabelled features, without a predefined feature set or complex human-designed pipelines. Another advantage of CNNs over other methods is that all of their components are optimized jointly, whereas with tissue radiomics, for instance, there is no guarantee that every component will reach a high level of performance; CNNs also require limited human supervision [10,104,105]. In the last decade, several studies have proposed different CNN algorithms and models for pulmonary nodule detection. Two studies showed exceptionally high accuracy (99–), sensitivity (97.5–96.9), and specificity (97.5–96.3); they proposed algorithms that either combined 2D and 3D artificial neural networks with intensity-based statistical features [106] or used a CAD system with different dimensions of angular histograms of surface normals (AHSN) features [107]. Other researchers used 2D and 3D subsets of features [108], local shape analysis and data-driven local contextual feature learning [109], geometric and intensity statistical features [110], or deep neural networks (DNN) [111]. Bergtholdt et al. [112] found that using a support vector machine classifier improved the accuracy, sensitivity, and specificity of pulmonary nodule detection. One study [113] used a deep belief network (DBN) to detect large nodules (>30 mm) with a high accuracy of about . Jacobs et al. [114] compared the performance of two commercial and one academic state-of-the-art CAD systems and found that the updated commercial CAD system (Herakles) had the highest sensitivity, reaching with 3.1 FPN/scan. About one third of the missed nodules were subsolid, and they recommended adding a CAD scheme designed for subsolid nodules to improve the sensitivity of nodule detection. Another recent study reviewed several research works and found a high sensitivity of DL algorithms on the LUNA16 dataset (in the range of 94.4–) with an average of 4 FPN/scan, and on the LIDC-IDRI dataset (in the range of –) [115].
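For illustration, a toy 3D CNN patch classifier in PyTorch is sketched below; it is far smaller than the published detection networks, and the patch size and layer sizes are arbitrary choices.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Toy 3D CNN for classifying candidate patches (e.g., 32x32x32 voxels)
    as nodule vs. non-nodule."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, 2)   # nodule / non-nodule logits

    def forward(self, x):                    # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

model = Small3DCNN()
logits = model(torch.randn(4, 1, 32, 32, 32))   # dummy batch of 4 patches
```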
Pulmonary nodule segmentation: Nodule size, together with progressive growth on follow-up, is a strong predictor of a neoplastic nature [116]. One large study demonstrated that the risk of developing cancer for nodules smaller than 100 mm3 equals that of individuals with no nodules [117]. Nodule size is better assessed by volumetry than by diameter, as 2D measurements were found to be unreliable and showed wide inter- and intra-observer variation [118]. Automated 3D measurement of pulmonary nodules provides a better assessment of their morphology and growth rate [119]. Accurate nodule volumetry requires good nodule segmentation; manual segmentation of lung nodules is time consuming and far less accurate than deep learning semi-automated methods [120]. Most of the available nodule segmentation algorithms rely on region growing, where a predefined threshold acts as a seed criterion that connects all nearby voxels of higher density [121]. As mentioned before, solid pulmonary nodules display a higher density than the surrounding lung parenchyma, which makes them easy to discriminate by region growing, but difficulties arise when a vessel contacts or passes beside a pulmonary nodule or when the nodule approximates the pleura [121,122]. Detection of ground glass nodules with indistinct margins is very problematic in manual segmentation. Tao et al. and Zhou et al. proposed novel methods based on a multi-level statistical method [123] and a classifier built by boosting a k-nearest neighbor (kNN) whose distance measure is the Euclidean distance between the nonparametric density estimates of two regions [124]. Another, more recent, study segmented subsolid nodules through voxel classification that automatically eliminates blood vessels [125]. Other studies described more complex approaches to segment pulmonary nodules of different densities, including those with vascular or pleural attachment, via analysis of the core of the nodule [79,126,127]. Table 2 presents a summary of state-of-the-art pulmonary nodule detection and segmentation systems.
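Volumetry itself is straightforward once a segmentation mask is available; the sketch below computes nodule volume from the mask and voxel spacing and derives an exponential-growth volume doubling time, using made-up numbers.

```python
import numpy as np

def nodule_volume_mm3(mask, spacing_mm):
    """Volume of a segmented nodule: voxel count times the voxel volume."""
    return mask.sum() * float(np.prod(spacing_mm))

def doubling_time_days(v0_mm3, v1_mm3, interval_days):
    """Volume doubling time under an exponential-growth assumption."""
    return interval_days * np.log(2) / np.log(v1_mm3 / v0_mm3)

# Made-up example: a nodule growing from 80 to 130 mm^3 over 90 days.
dt = doubling_time_days(80.0, 130.0, 90)   # roughly 128 days
```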
Table 2.
| Study | Method | # Subjects | System Evaluation |
|---|---|---|---|
| Brown et al. [97] | (1) Priori model; (2) region growing; (3) mathematical morphology. | 31 CT patients. | The accuracy of the system was . |
| Oda et al. [95] | (1) 3D filter by orientation map of gradient vectors; (2) 3D distance transformation. | 33 CT patients. | The accuracy of the system was . |
| Chang et al. [82] | (1) Cylinder filter; (2) spherical filter; (3) sphericity test. | 8 CT patients. | The detection rate of the system was . |
| Way et al. [78] | (1) k-means clustering; (2) 3D active contour model. | 96 CT patients. | Qualitative evaluation only. |
| Kuhnigk et al. [121] | Automatic morphological and partial volume analysis based method. | Low-dose data from 8 clinical metastasis patients. | The proposed method outperformed conventional methods; both systematic and absolute errors were substantially reduced, and the method successfully accounted for slice thickness and reconstruction kernel variations. |
| Zhou et al. [124] | (1) Detection: boosted kNN with Euclidean distance between the non-parametric density estimates of two regions; (2) segmentation: analysis of the 3D texture likelihood map of the nodule region. | 10 ground glass opacity nodules. | All 10 nodules detected with only 1 false positive nodule. |
| Dehmeshki et al. [122] | Adaptive sphericity-oriented contrast region growing on the fuzzy connectivity map of the object of interest. | Database 1: 608 pulmonary nodules from 343 scans; Database 2: 207 pulmonary nodules from 80 CT scans. | Visual inspection found that of the segmented nodules were correct, while the other nodules required other segmentation solutions. |
| Tao et al. [123] | Multi-level statistical learning-based approach for segmentation and detection of ground glass nodules. | 1100 subvolumes (100 containing ground glass nodules) acquired from 200 subjects. | Classification accuracy: (overall), (ground glass nodule). |
| Messay et al. [98] | (1) Thresholding; (2) morphological operations; (3) Fisher linear discriminant (FLD) classifier. | 84 CT patients. | The sensitivity of the system was . |
| Kubota et al. [126] | Region growing. | LIDC 1: 23 nodules; LIDC 2: 82 nodules; a dataset of 820 nodules with manual diameter measurements. | LIDC 1: average overlap; LIDC 2: average overlap. |
| Liu et al. [128] | (1) Selective enhancement filter [129]; (2) hidden conditional random field (HCRF) [130]. | 24 CT patients. | The sensitivity of the system was with false positives/scan. |
| Choi et al. [107] | (1) Dot enhancement filter; (2) angular histograms of surface normals (AHSN); (3) iterative wall elimination method; (4) support vector machine (SVM) classifier. | 84 CT patients. | The sensitivity of the system was with false positives/scan. |
| Alilou et al. [108] | (1) Thresholding; (2) morphological opening; (3) 3D region growing. | 60 CT patients. | The sensitivity of the system was with false positives/scan. |
| Bai et al. [109] | (1) Local shape analysis; (2) data-driven local contextual feature learning; (3) principal component analysis (PCA). | 99 CT patients. | The number of false positives was reduced by more than . |
| Setio et al. [99] | (1) Thresholding; (2) morphological operations; (3) support vector machine (SVM) classifier. | 888 CT patients. | The sensitivity of the system was and with an average of 1 and 4 false positives/scan, respectively. |
| Akram et al. [106] | (1) Artificial neural network (ANN); (2) geometric and intensity-based features. | 84 CT patients. | The accuracy and sensitivity of the system were and , respectively. |
| Golan et al. [111] | Deep convolutional neural network (CNN). | 1018 CT patients. | The sensitivity of the system was with 20 false positives/scan. |
| Bergtholdt et al. [112] | (1) Geometric features; (2) grayscale features; (3) location features; (4) support vector machine (SVM) classifier. | 1018 CT patients. | The sensitivity of the system was with false positives/scan. |
| Mukhopadhyay [127] | Thresholding approach based on internal texture (solid/part-solid and non-solid) and external attachment (juxta-pleural and juxta-vascular). | 891 nodules from LIDC-IDRI. | Average segmentation accuracy: (solid/part-solid), (non-solid). |
| El-Regaily et al. [110] | (1) Canny edge detector; (2) thresholding; (3) region growing; (4) rule-based classifier. | 400 CT patients. | The accuracy, sensitivity, and specificity of the system were , , and , respectively, with an average of false positives/scan. |
| Zhang et al. [113] | Deep belief network (DBN). | 1018 CT patients. | The accuracy of the system was . |
| Wang et al. [100] | Semi-supervised extreme learning machines (SS-ELM). | 1018 CT patients. | The accuracy of the system was . |
| Zhao et al. [131] | (1) 3D U-Net [132]; (2) generative adversarial network (GAN) [133]. | 800 CT scans. | Qualitative evaluation only. |
| Charbonnier et al. [125] | Subsolid nodule segmentation using voxel classification that eliminates blood vessels. | 170 subsolid nodules from the Multicentric Italian Lung Disease trial. | of segmented vessels and of segmented solid cores were accepted by observers. |
| Luo et al. [134] | 3D sphere center-points matching detection network (SCPM-Net). | 888 CT scans. | The sensitivity of the system was . |
| Yin et al. [135] | Squeeze-and-attention and dense atrous spatial pyramid pooling U-Net (SD-U-Net). | 2236 CT slices. | The Dice similarity coefficient (DSC), sensitivity, specificity, and accuracy of the system were , , , and , respectively. |
| Bianconi et al. [120] | (1) 12 conventional semi-automated methods: active contours (MorphACWE, MorphGAC), clustering (k-means, SLIC), graph-based (Felzenszwalb), region growing (flood fill), thresholding (Kapur, Kittler, Otsu, MultiOtsu), and others (MSER, watershed); (2) 12 deep learning semi-automated methods: CNNs built from 4 standard segmentation models (FPN, LinkNet, PSPNet, U-Net) and 3 well-known encoders (InceptionV3, MobileNet, ResNet34). | Dataset 1: 383 images from a cohort of 111 patients; Dataset 2: 259 images from a cohort of 100 patients. | Semi-automated deep learning methods outperformed the conventional methods. DSCs of the deep learning based methods were and for dataset 1 and dataset 2, respectively; conventional methods recorded DSCs of and . |
4. Nodule Classification
One of the major limitations of CAD systems for lung nodule detection is the high false positive rate, which hinders accuracy and lowers their efficacy as a screening framework that could be applied to a large population. False positive nodules carry extra costs and hazards, as they lead to unnecessary biopsies, prolonged follow-up imaging, and extra worry for patients and their families. Accurate classification of detected pulmonary nodules is therefore of utmost importance. After nodule detection and segmentation comes nodule classification. TPNs are classified by two broad families of approaches: radiomics feature-based schemes or deep learning models [136,137,138,139] (Figure 3). The radiomics scheme uses different sets of features, which may be morphological/shape features (spherical disproportion, circularity, etc.), texture features, grey scale/histogram features (average, standard deviation, skewness, etc.), gradient features (average, standard deviation, kurtosis, etc.), and spatial features (location of the nodule) [140,141]. The data extracted from the image voxels are gathered and transformed into numeric form, called radiomic features [142]; a group of such numeric features forms a feature vector. A classifier (a machine learning model) then differentiates feature vectors according to its training algorithm and labelled data [143]. Popular classifiers include the support vector machine and the random forest [144]. The advantage of the radiomics approach is that it can build high-performing models from limited datasets, yet it requires manual tumor segmentation and hand-crafted feature extraction [145,146,147].
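A toy radiomics pipeline is sketched below: a short hand-crafted feature vector per segmented nodule followed by a random forest classifier. The feature set, the synthetic nodule, and the commented-out training step are illustrative assumptions, far smaller than real radiomic signatures.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def radiomic_vector(nodule_hu, mask, spacing_mm):
    """Tiny hand-crafted feature vector: volume, a bounding-box shape descriptor,
    and first-order intensity statistics."""
    voxels = nodule_hu[mask].astype(float)
    coords = np.argwhere(mask)
    bbox_volume = np.prod(coords.max(axis=0) - coords.min(axis=0) + 1)
    return np.array([
        mask.sum() * float(np.prod(spacing_mm)),   # volume in mm^3
        mask.sum() / bbox_volume,                  # extent (shape descriptor)
        voxels.mean(), voxels.std(),               # grey-scale statistics
        skew(voxels), kurtosis(voxels),            # histogram shape
    ])

# Demo on a synthetic nodule: a noisy HU patch with a central spherical mask.
rng = np.random.default_rng(0)
patch = rng.normal(-100, 150, size=(24, 24, 24))
zz, yy, xx = np.ogrid[:24, :24, :24]
mask = (zz - 12) ** 2 + (yy - 12) ** 2 + (xx - 12) ** 2 < 8 ** 2
vec = radiomic_vector(patch, mask, spacing_mm=(1.0, 0.7, 0.7))

# In practice a classifier such as a random forest is trained on such vectors with
# benign/malignant labels and outputs a malignancy probability per nodule, e.g.:
# clf = RandomForestClassifier(n_estimators=300).fit(X, y)
# risk = clf.predict_proba(X_new)[:, 1]
```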
On the other hand, deep learning approaches build end-to-end convolutional neural networks, fully connected neural networks, or deep neural networks that reach the final nodule classification through learned, semantic feature analysis [12,147,148,149,150,151]. As mentioned earlier, such neural networks do not require segmentation or hand-crafted feature extraction [152,153]. A DNN can assess difficult cases that do not fit predefined feature characteristics and still produce satisfactory results. Deep architectures such as ResNet and DenseNet are usually used to train the DNN model [69,136,154,155].
Nodule classification requires the analysis of data obtained from 3D images; however, most of the available models either use 2D data to build a 3D CNN model [156] or use a multi-view 2D CNN model [157,158,159]. Uthoff et al. [156] developed an ML pipeline using k-medoids clustering and information theory to pick efficient predictor sets for different amounts of surrounding parenchyma; their method had a high sensitivity of 100% and specificity of 96%. Shen et al. [157] used a multiscale two-layered CNN to diagnose lung cancer on chest CT images, reaching an accuracy of 84.86%, while Jung et al. [160] used a 3D deep convolutional neural network (DCNN) with shortcut and dense connections to classify lung nodules. These connections allow gradients to pass directly and quickly, thus overcoming the vanishing gradient problem, in addition to capturing three-dimensional rather than two-dimensional features; their method achieved a higher competition performance metric (CPM) of about 0.9 compared with other state-of-the-art methods. Chen et al. [160] used a neural network ensemble (NNE) to evaluate lung nodules and differentiate between probably malignant, uncertain, and probably benign nodules with an accuracy of 78.7%. Another study using texture features and artificial neural networks found that feed-forward back-propagation networks classified nodules more accurately than plain feed-forward networks and that skewness was the most accurate parameter [161]. Kumar et al. [149] proposed another type of neural network for lung nodule classification, the stacked autoencoder (SAE), with an accuracy of 75.01%. Wilms et al. [78] presented a model-based 4D segmentation of lungs with large tumors in 4D CT data sets, in which a 4D statistical shape model is fitted to the 4D image sequence while respecting inter- and intra-patient variation. Ardila et al. [162] proposed a DL model that extracts data from a patient's prior and current CT images to predict the risk of developing bronchogenic carcinoma. This model had high accuracy when applied to lung cancer screening trial cases and to an independent validation group. They compared their results with a group of six radiologists: the model was comparable to the radiologists when prior and recent CT images were evaluated, but it outperformed them when only the recent CT image was evaluated. Li et al. [163] evaluated the diagnostic performance of a commercial CAD software program, InferRead CT Lung Research (ICLR), based on a 3D CNN, and found that it predicted the risk of bronchogenic carcinoma with high accuracy, unlike benign or metastatic lesions. One recent study [164] used a two-level classification of pulmonary nodules into benign and malignant, with further subdivision of malignant nodules into serious and mild malignancy, using a CNN with transfer learning, and attained high accuracy similar to other published work.
Other studies focused on correlating the morphological features of pulmonary nodules with the genetic mutation fingerprints of the pathological types of lung cancer (radiogenomics); this is particularly important for assessing the success of gene-inhibiting therapy [164,165,166,167,168].
Regarding diagnostic performance, several studies have shown that deep learning is superior to conventional ML models, owing to its self-learning capabilities [78,149,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175]. Song et al. [176] compared three types of neural networks (a convolutional neural network, a deep neural network, and a stacked autoencoder (SAE)) and found that the CNN had the highest accuracy (84.15%), while another more recent study reported high accuracy (AUC of 0.99) using a CNN-based DL approach called NoduleX [177]. Table 3 presents a summary of state-of-the-art pulmonary nodule classification systems.
Table 3.
| Study | Method | # Subjects | System Evaluation |
|---|---|---|---|
| Dehmeshki et al. [148] | Shape-based region growing. | 3D lung CT data where nodules are attached to blood vessels or the lung wall. | Qualitative evaluation only. |
| Lee et al. [169] | Commercial CAD system (IQQA-Chest, EDDA Technology, Princeton Junction, NJ, USA). | 200 chest radiographs (100 normal, 100 with malignant solitary nodules). | Sensitivity of , false positive rate of . |
| Kuruvilla et al. [161] | Feed-forward and feed-forward back-propagation neural networks. | 155 patients from LIDC. | Classification accuracy of . |
| Yamamoto et al. [165] | Random forest. | 172 patients with NSCLC. | Sensitivity of , specificity of , accuracy of in independent testing. |
| Orozco et al. [147] | (1) Wavelet feature descriptor; (2) SVM. | 45 CT scans from ELCAP and LIDC. | Overall precision in classifying cancerous from non-cancerous nodules was ; sensitivity of and specificity of . |
| Kumar et al. [149] | Deep features using an autoencoder. | 4323 nodules from the NCI-LIDC dataset. | Overall accuracy of , sensitivity of , and 0.39 false positives/patient (10-fold cross validation). |
| Hua et al. [175] | (1) Deep belief network (DBN); (2) CNN. | LIDC. | Sensitivity (DBN: , CNN: ), specificity (DBN: , CNN: ). |
| Kang et al. [171] | 3D multi-view CNN (MV-CNN). | LIDC-IDRI. | Error rate of for binary classification (benign and malignant) and for ternary classification (benign, primary malignant, and metastatic malignant). |
| Ciompi et al. [173] | Multi-stream multi-scale convolutional networks. | (1) Italian MILD screening trial; (2) Danish DLCST screening trial. | Best accuracy of . |
| Song et al. [176] | (1) CNN; (2) deep neural network (DNN); (3) stacked autoencoder (SAE). | LIDC-IDRI. | Accuracy of , sensitivity of , and specificity of . |
| Tajbakhsh et al. [138] | (1) Massive training artificial neural networks (MTANN); (2) CNN. | LDCT acquired from 31 patients. | AUC = ( confidence interval (CI): ). |
| Li et al. [145] | Support vector machine (SVM). | 248 GGNs. | Accuracy of classifying GGNs into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IA) was ; accuracy of classification between AIS and MIA nodules was , and between indolent and invasive lesions was . |
| Huang et al. [154] | Dense convolutional network (DenseNet). | (1) CIFAR; (2) SVHN; (3) ImageNet. | Error rates for CIFAR (C10: , C10+: , C100: , C100+: ), SVHN (), and ImageNet (single-crop (10-crop) top-1: 25.02 (23.61), 23.80 (22.08), 22.58 (21.46), 22.33 (20.85); top-5: 7.71 (6.66), 6.85 (5.92), 6.34 (5.54), 6.15 (5.30)). |
| Nibali et al. [158] | ResNet. | LIDC-IDRI. | Sensitivity of , specificity of , precision of , AUC of , and accuracy of . |
| Liu et al. [159] | Multi-view multi-scale CNNs. | LIDC-IDRI and ELCAP. | Classification rate of . |
| Zhao et al. [152] | A deep learning system based on 3D CNNs and multitask learning. | 651 nodules with labels of AAH, AIS, MIA, and IA. | Classification accuracy using the 3-class weighted average F1 score was , compared with radiologists who achieved , , , and . |
| Li et al. [150] | Multivariable linear predictor model built on semantic features. | 100 patients from NLST-LDCT. | AUC at baseline screening: , at first follow-up: , and at second follow-up: . |
| Lyu et al. [172] | Multi-level CNN (ML-CNN). | LIDC-IDRI (1018 cases from 1010 patients). | Accuracy: . |
| Shaffie et al. [174] | (1) Seventh-order Markov Gibbs random field (MGRF) model [178,179,180]; (2) geometric features; (3) deep autoencoder classifier. | 727 nodules from 467 patients (LIDC). | Classification accuracy of . |
| Causey et al. [177] | Deep learning CNN (NoduleX). | LIDC-IDRI. | Malignancy classification with an AUC of approximately . |
| Uthoff et al. [156] | k-medoids clustering and information theory. | Training: 74 malignant, 289 benign; validation: 50 malignant, 50 benign. | AUC = , sensitivity and specificity. |
| Ardila et al. [162] | Deep learning CNN. | 6716 National Lung Screening Trial cases; independent clinical validation set of 1139 cases. | AUC = . |
| Liu et al. [151] | (1) Multivariate logistic regression analysis; (2) least absolute shrinkage and selection operator (LASSO). | Benign and malignant nodules from 875 patients. | Training: AUC = ( CI: 0.793–0.879); validation: AUC = ( CI: 0.745–0.872). |
| Gong et al. [136] | A deep learning-based artificial intelligence system for classifying ground-glass nodules (GGNs) into invasive adenocarcinoma (IA) or non-IA. | 828 GGNs of 644 patients (209 IA and 619 non-IA, including 409 adenocarcinomas in situ and 210 minimally invasive adenocarcinomas). | AUC = . |
| Sim et al. [137] | Radiologists assisted by a deep learning-based CNN. | 600 lung cancer-containing chest radiographs and 200 normal chest radiographs. | Average sensitivity improved from to , and the number of false positives per radiograph declined from to . |
| Wang et al. [153] | A two-stage deep learning strategy: prior-feature learning followed by adaptive-boost deep learning. | 1357 nodules (765 noninvasive (AAH and AIS) and 592 invasive (MIA and IA)). | Classification accuracy of compared with specialists who achieved , , and ; AUC = . |
| Xia et al. [155] | (1) Recurrent residual CNN based on U-Net; (2) information fusion method. | 373 GGNs from 323 patients. | AUC = , accuracy: . |
| Li et al. [163] | ICLR software based on a 3D CNN with a DenseNet architecture as a backbone. | 486 consecutive resected lung lesions (320 adenocarcinomas, 40 other malignancies, 55 metastases, and 71 benign lesions). | Classification accuracy for adenocarcinomas, other malignancies, metastases, and benign lesions was , , , and , respectively. |
| Hu et al. [139] | (1) 3D U-Net; (2) deep neural network. | 513 GGNs (100 benign, 413 malignant). | Accuracy of , F1 score of , weighted average F1 score of , and Matthews correlation coefficient of . |
| Farahat et al. [181] | (1) Three MGRF energies extracted from three different grades of COVID-19 patients; (2) artificial neural network. | 76 CT COVID-19 patients. | accuracy and Cohen's kappa. |
5. Limitations and Future Prospects
The scale of the dataset used to train a CNN is a crucial factor in determining whether the resulting model is good [182]. Collecting a large number of annotated images can take years or even be impossible, owing to the nature of medical imaging. To overcome this problem, large public datasets were introduced. Another solution is to artificially generate datasets similar to those used to train the CNN; one example is the generative adversarial network (GAN) [133]. A further option is transfer learning: transfer models and LeNet5 have been suggested for situations where large datasets are not available. Transfer learning simply reuses knowledge learned on a source task to analyze data from a target task, which is useful when the target task has little data [183] (a minimal sketch is given after this paragraph). A recent study used a CNN and LeNet5 to classify pulmonary nodules into benign or malignant, with further sub-classification of various types of malignancy [184]. A limitation that comes with data sharing and transfer is the legal aspect of patient privacy. Another limitation is the lack of uniform terminology among radiologists (for example, when to describe a nodule as subsolid or non-solid) or among pathologists (minimally invasive carcinoma versus carcinoma in situ), which leads to non-uniform labelling of data and may affect the trained model. The solution to this problem would be a standardized data-reporting system that unifies medical terms, as was done with BI-RADS and LI-RADS. In clinical practice, radiologists usually benefit from clinical data to guide the differential diagnosis and reach a proper decision; however, most of the available algorithms depend only on features derived from the images, with little or no consideration of clinical data such as age or the presence or absence of risk factors (e.g., smoking). Algorithms that combine clinical and imaging data are the solution to this limitation [185]. Finally, many algorithms and models have been proposed, but they lack generalizability and are used mainly in research settings.
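A minimal transfer-learning sketch with torchvision (assuming torchvision ≥ 0.13 and network access for the pretrained ImageNet weights) is shown below; the two-class head and the freeze-all strategy are illustrative choices, not the setup of any cited study.

```python
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the classification head
# with a new two-class layer (benign vs. malignant patch) -- a common way to cope
# with limited annotated medical data.
backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for param in backbone.parameters():
    param.requires_grad = False                       # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new, trainable head
# The head (and optionally the last residual block) is then fine-tuned on the
# target nodule dataset with a standard cross-entropy training loop.
```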
6. Conclusions
AI and its multiple arms, including CAD, ML, and DL, are used to design complex algorithms that detect and further characterize pulmonary nodules in order to predict malignancy risk. Over the last decade, a large number of radiomic features and artificial networks have been proposed, each with its own advantages and drawbacks; to date, no specific method has gained wide enough acceptance to be applied to the general population.
Acknowledgments
This research is supported by Abu Dhabi’s Advanced Technology Research Council via the ASPIRE Award for Research Excellence program.
Abbreviations
The following abbreviations are used in this manuscript:
HU | Hounsfield Unit |
ABM | Adaptive Border Marching |
A-CNN | Amalgamated Convolutional Neural Network |
ASM | Active Shape Model |
CAD | Computer-Aided Diagnosis |
CADe | Computer-Aided Detection System |
CADx | Computer-Aided Diagnosis System |
DL | Deep Learning |
CNN | Convolutional Neural Network |
MV-CNN | Multi-view CNN |
ML-CNN | Multi-level CNN |
AHSN | Angular Histograms of Surface Normals |
CPM | Competition Performance Metrics |
CT | Computed Tomography |
CV | Chan Vese |
DBN | Deep Belief Network |
DCNN | Deep Convolutional Neural Network |
DNN | Deep Neural Network |
ELM | Extreme Learning Machines |
FLD | Fisher Linear Discriminant |
FPN | False Positive Nodule |
GAN | Generative Adversarial Network |
GGO | Ground Glass Opacity |
GGN | Ground Glass Nodule |
ICLR | InferRead CT Lung Research |
KB | Knowledge Bank |
k-NN | K-nearest Neighbor |
LDA | Linear Discriminant Analysis |
LDCT | Low Dose Computed Tomography |
LIDC-IDRI | Lung Image Database Consortium and Image Database Resource Initiative |
MGRF | Markov Gibbs Random Field |
ML | Machine Learning |
MPP | Multilayer Perceptron |
NNE | Neural Network Ensemble |
PNN | Probabilistic Neural Network |
RASM | Robust Active Shape Model |
ROI | Region of Interest |
RPCA | Robust Principal Component Analysis |
SAE | Stacked Autoencoder |
SS-ELM | Semi-Supervised Extreme Learning Machines |
SVM | Support Vector Machine |
TPN | True Positive Nodule |
AUC | Area Under the Curve |
IA | Invasive Adenocarcinoma |
MTANN | Massive training artificial neural networks |
NCI | National Cancer Institute |
SVHN | Street View House Numbers Dataset |
LASSO | Least Absolute Shrinkage and Selection Operator |
AAH | Atypical Adenomatous Hyperplasia |
MIA | minimally invasive adenocarcinoma |
AIS | Adenocarcinoma in Situ |
GLCM | Gray-Level Co-occurrence Matrix |
EM | Expectation–maximization method |
DSC | Dice Similarity Coefficient |
Inf-Net | COVID-19-infected lung segmentation convolution neural network |
Semi-Inf-Net | semi-supervised Inf-Net |
ALVD | absolute lung volume difference |
BHD | bidirectional Hausdorff distance |
HCRF | Hidden conditional random field |
SCPM-Net | sphere center-points matching detection network |
SD-U-Net | Squeeze and attention, and dense atrous spatial pyramid pooling U-Net |
Author Contributions
Conceptualization, D.F., H.K., A.K., M.Y., M.G., A.S., A.M. and A.E.-B.; Project administration, A.E.-B.; Supervision, A.E.-B.; Writing—original draft, D.F., H.K., A.K., M.Y., M.G., A.S., A.M. and A.E.-B.; Writing—review & editing, D.F., H.K., A.K., M.Y., M.G., A.S., A.M. and A.E.-B. All authors have read and agreed to the published version of the manuscript.
Funding
This research is supported by Abu Dhabi’s Advanced Technology Research Council via the ASPIRE Award for Research Excellence program.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.American Cancer Society: Cancer Facts and Figures 2017. [(accessed on 13 November 2021)]. Available online: https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2017/cancer-facts-and-figures-2017.pdf.
- 2.Centers for Disease Control and Prevention (CDC): Smoking and Tobacco Use: Secondhand Smoke (SHS) Facts. [(accessed on 11 November 2021)]; Available online: https://www.cdc.gov/tobacco/data_statistics/fact_sheets/secondhand_smoke/general_facts/index.htm.
- 3.Madsen L.R., Krarup N.H.V., Bergmann T.K., Bærentzen S., Neghabat S., Duval L., Knudsen S.T. A cancer that went up in smoke: Pulmonary reaction to e-cigarettes imitating metastatic cancer. Chest. 2016;149:e65–e67. doi: 10.1016/j.chest.2015.09.003.
- 4.Jenks S. Is Lung Cancer Incidence Increasing Among Never-Smokers? Jnci J. Natl. Cancer Inst. 2016;108:djv418. doi: 10.1093/jnci/djv418.
- 5.Coté M.L., Liu M., Bonassi S., Neri M., Schwartz A.G., Christiani D.C., Spitz M.R., Muscat J.E., Rennert G., Aben K.K., et al. Increased risk of lung cancer in individuals with a family history of the disease: A pooled analysis from the International Lung Cancer Consortium. Eur. J. Cancer. 2012;48:1957–1968. doi: 10.1016/j.ejca.2012.01.038.
- 6.de Torres J.P., Wilson D.O., Sanchez-Salcedo P., Weissfeld J.L., Berto J., Campo A., Alcaide A.B., García-Granero M., Celli B.R., Zulueta J.J. Lung cancer in patients with chronic obstructive pulmonary disease. Development and validation of the COPD Lung Cancer Screening Score. Am. J. Respir. Crit. Care Med. 2015;191:285–291. doi: 10.1164/rccm.201407-1210OC.
- 7.Zhai K., Ding J., Shi H.Z. Author's Reply to "Comments on HPV and Lung Cancer Risk: A Meta-Analysis" [J. Clin. Virol. (In Press)] J. Clin. Virol. Off. Publ. Pan Am. Soc. Clin. Virol. 2015;63:92–93. doi: 10.1016/j.jcv.2014.12.002.
- 8.Team N.L.S.T.R. The national lung screening trial: Overview and study design. Radiology. 2011;258:243–253. doi: 10.1148/radiol.10091808.
- 9.Global Resource for Advancing Cancer Education: Lung Cancer Screening, Part I: The Arguments for CT Screening. [(accessed on 14 November 2021)]. Available online: http://cancergrace.org/lung/2007/01/23/ct-screening-for-lung-ca-advantages/
- 10.Ather S., Kadir T., Gleeson F. Artificial intelligence and radiomics in pulmonary nodule management: Current status and future applications. Clin. Radiol. 2020;75:13–19. doi: 10.1016/j.crad.2019.04.017.
- 11.Prabhakar B., Shende P., Augustine S. Current trends and emerging diagnostic techniques for lung cancer. Biomed. Pharmacother. 2018;106:1586–1599. doi: 10.1016/j.biopha.2018.07.145.
- 12.Firmino M., Morais A.H., Mendoça R.M., Dantas M.R., Hekis H.R., Valentim R. Computer-aided detection system for lung cancer in computed tomography scans: Review and future prospects. Biomed. Eng. Online. 2014;13:1–16. doi: 10.1186/1475-925X-13-41.
- 13.Hu S., Hoffman E.A., Reinhardt J.M. Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images. IEEE Trans. Med. Imaging. 2001;20:490–498. doi: 10.1109/42.929615.
- 14.Ukil S., Reinhardt J.M. Anatomy-guided lung lobe segmentation in X-ray CT images. IEEE Trans. Med. Imaging. 2008;28:202–214. doi: 10.1109/TMI.2008.929101.
- 15.Van Rikxoort E.M., De Hoop B., Van De Vorst S., Prokop M., Van Ginneken B. Automatic segmentation of pulmonary segments from volumetric chest CT scans. IEEE Trans. Med. Imaging. 2009;28:621–630. doi: 10.1109/TMI.2008.2008968.
- 16.Armato S.G., Giger M.L., Moran C.J., Blackburn J.T., Doi K., MacMahon H. Computerized detection of pulmonary nodules on CT scans. Radiographics. 1999;19:1303–1311. doi: 10.1148/radiographics.19.5.g99se181303.
- 17.Armato III S.G., Sensakovic W.F. Automated lung segmentation for thoracic CT: Impact on computer-aided diagnosis. Acad. Radiol. 2004;11:1011–1021. doi: 10.1016/j.acra.2004.06.005.
- 18.Pu J., Roos J., Chin A.Y., Napel S., Rubin G.D., Paik D.S. Adaptive border marching algorithm: Automatic lung segmentation on chest CT images. Comput. Med. Imaging Graph. 2008;32:452–462. doi: 10.1016/j.compmedimag.2008.04.005.
- 19.Gao Q., Wang S., Zhao D., Liu J. Accurate lung segmentation for X-ray CT images; Proceedings of the Third International Conference on Natural Computation (ICNC 2007); Haikou, China. 24–27 August 2007; pp. 275–279.
- 20.Wei Q., Hu Y., Gelfand G., MacGregor J.H. Segmentation of lung lobes in high-resolution isotropic CT images. IEEE Trans. Biomed. Eng. 2009;56:1383–1393. doi: 10.1109/TBME.2009.2014074.
- 21.Ye X., Lin X., Dehmeshki J., Slabaugh G., Beddoe G. Shape-based computer-aided detection of lung nodules in thoracic CT images. IEEE Trans. Biomed. Eng. 2009;56:1810–1820. doi: 10.1109/TBME.2009.2017027.
- 22.Itai Y., Kim H., Ishikawa S., Katsuragawa S., Ishida T., Nakamura K., Yamamoto A. Automatic segmentation of lung areas based on SNAKES and extraction of abnormal areas; Proceedings of the 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'05); Hong Kong, China. 14–16 November 2005; p. 5.
- 23.Silveira M., Marques J. Automatic segmentation of the lungs using multiple active contours and outlier model; Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society; New York, NY, USA. 30 August–3 September 2006; pp. 3122–3125.
- 24.Silveira M., Nascimento J., Marques J. Automatic segmentation of the lungs using robust level sets; Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Lyon, France. 22–26 August 2007; pp. 4414–4417.
- 25.Rani K.V., Jawhar S. Emerging trends in lung cancer detection scheme—A review. Int. J. Res. Anal. Rev. 2018;5:530–542.
- 26.Mansoor A., Bagci U., Foster B., Xu Z., Papadakis G.Z., Folio L.R., Udupa J.K., Mollura D.J. Segmentation and image analysis of abnormal lungs at CT: Current approaches, challenges, and future trends. Radiographics. 2015;35:1056–1076. doi: 10.1148/rg.2015140232.
- 27.Sun S., Bauer C., Beichel R. Automated 3-D segmentation of lungs with lung cancer in CT data using a novel robust active shape model approach. IEEE Trans. Med. Imaging. 2011;31:449–460. doi: 10.1109/TMI.2011.2171357.
- 28.Li K., Wu X., Chen D.Z., Sonka M. Optimal surface segmentation in volumetric images-a graph-theoretic approach. IEEE Trans. Pattern Anal. Mach. Intell. 2005;28:119–134. doi: 10.1109/TPAMI.2006.19.
- 29.Sofka M., Wetzl J., Birkbeck N., Zhang J., Kohlberger T., Kaftan J., Declerck J., Zhou S.K. Multi-stage learning for robust lung segmentation in challenging CT volumes; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Toronto, ON, Canada. 18–22 September 2011; pp. 667–674.
- 30.Hua P., Song Q., Sonka M., Hoffman E.A., Reinhardt J.M. Segmentation of pathological and diseased lung tissue in CT images using a graph-search algorithm; Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro; Chicago, IL, USA. 30 March–2 April 2011; pp. 2072–2075.
- 31.Kockelkorn T.T., van Rikxoort E.M., Grutters J.C., van Ginneken B. Interactive lung segmentation in CT scans with severe abnormalities; Proceedings of the 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro; Rotterdam, The Netherlands. 14–17 April 2010; pp. 564–567.
- 32.El-Baz A., Gimel’farb G., Falk R., El-Ghar M.A. A novel three-dimensional framework for automatic lung segmentation from low dose computed tomography images. In: El-Baz A., Suri J., editors. Lung Imaging and Computer Aided Diagnosis. CRC Press; Boca Raton, FL, USA: 2011. pp. 1–15. [Google Scholar]
- 33.El-Baz A., Gimel’farb G., Falk R., Holland T., Shaffer T. A new stochastic framework for accurate lung segmentation; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; New York, NY, USA. 6–10 September 2008; pp. 322–330. [DOI] [PubMed] [Google Scholar]
- 34.El-Baz A., Gimel’farb G.L., Falk R., Holland T., Shaffer T. A Framework for Unsupervised Segmentation of Lung Tissues from Low Dose Computed Tomography Images; Proceedings of the BMVC; Aberystwyth, UK. 31 August–3 September 2008; pp. 1–10. [Google Scholar]
- 35.Chung H., Ko H., Jeon S.J., Yoon K.H., Lee J. Automatic lung segmentation with juxta-pleural nodule identification using active contour model and bayesian approach. IEEE J. Transl. Eng. Health Med. 2018;6:1–13. doi: 10.1109/JTEHM.2018.2837901. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Chan T.F., Vese L.A. Active contours without edges. IEEE Trans. Image Process. 2001;10:266–277. doi: 10.1109/83.902291. [DOI] [PubMed] [Google Scholar]
- 37.Sun S., Ren H., Dan T., Wei W. 3D segmentation of lungs with juxta-pleural tumor using the improved active shape model approach. Technol. Health Care. 2021;29:385–398. doi: 10.3233/THC-218037. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Adams R., Bischof L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 1994;16:641–647. doi: 10.1109/34.295913. [DOI] [Google Scholar]
- 39.Hojjatoleslami S., Kittler J. Region growing: A new approach. IEEE Trans. Image Process. 1998;7:1079–1084. doi: 10.1109/83.701170. [DOI] [PubMed] [Google Scholar]
- 40.Pavlidis T., Liow Y.T. Integrating region growing and edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1990;12:225–233. doi: 10.1109/34.49050. [DOI] [Google Scholar]
- 41.Tremeau A., Borel N. A region growing and merging algorithm to color segmentation. Pattern Recognit. 1997;30:1191–1203. doi: 10.1016/S0031-3203(96)00147-1. [DOI] [Google Scholar]
- 42.Zhu S.C., Yuille A. Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1996;18:884–900. [Google Scholar]
- 43.Mangan A.P., Whitaker R.T. Partitioning 3D surface meshes using watershed segmentation. IEEE Trans. Vis. Comput. Graph. 1999;5:308–321. doi: 10.1109/2945.817348. [DOI] [Google Scholar]
- 44.Grady L. Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006;28:1768–1783. doi: 10.1109/TPAMI.2006.233. [DOI] [PubMed] [Google Scholar]
- 45.Boykov Y., Jolly M.P. Interactive organ segmentation using graph cuts; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Pittsburgh, PA, USA. 11–14 October 2000; pp. 276–286. [Google Scholar]
- 46.Udupa J.K., Samarasekera S. Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation. Graph. Model. Image Process. 1996;58:246–261. [Google Scholar]
- 47.Song Y., Cai W., Zhou Y., Feng D.D. Feature-based image patch approximation for lung tissue classification. IEEE Trans. Med. Imaging. 2013;32:797–808. doi: 10.1109/TMI.2013.2241448. [DOI] [PubMed] [Google Scholar]
- 48.Xu Y., Sonka M., McLennan G., Guo J., Hoffman E.A. MDCT-based 3-D texture classification of emphysema and early smoking related lung pathologies. IEEE Trans. Med. Imaging. 2006;25:464–475. doi: 10.1109/TMI.2006.870889. [DOI] [PubMed] [Google Scholar]
- 49.Yao J., Dwyer A., Summers R.M., Mollura D.J. Computer-aided diagnosis of pulmonary infections using texture analysis and support vector machine classification. Acad. Radiol. 2011;18:306–314. doi: 10.1016/j.acra.2010.11.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Korfiatis P.D., Karahaliou A.N., Kazantzi A.D., Kalogeropoulou C., Costaridou L.I. Texture-based identification and characterization of interstitial pneumonia patterns in lung multidetector CT. IEEE Trans. Inf. Technol. Biomed. 2009;14:675–680. doi: 10.1109/TITB.2009.2036166. [DOI] [PubMed] [Google Scholar]
- 51.Bagci U., Yao J., Wu A., Caban J., Palmore T.N., Suffredini A.F., Aras O., Mollura D.J. Automatic detection and quantification of tree-in-bud (TIB) opacities from CT scans. IEEE Trans. Biomed. Eng. 2012;59:1620–1632. doi: 10.1109/TBME.2012.2190984. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Mansoor A., Bagci U., Xu Z., Foster B., Olivier K.N., Elinoff J.M., Suffredini A.F., Udupa J.K., Mollura D.J. A generic approach to pathological lung segmentation. IEEE Trans. Med. Imaging. 2014;33:2293–2310. doi: 10.1109/TMI.2014.2337057. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Van Rikxoort E.M., Van Ginneken B. Automated segmentation of pulmonary structures in thoracic computed tomography scans: A review. Phys. Med. Biol. 2013;58:R187. doi: 10.1088/0031-9155/58/17/R187. [DOI] [PubMed] [Google Scholar]
- 54.Bağci U., Yao J., Caban J., Palmore T.N., Suffredini A.F., Mollura D.J. Automatic detection of tree-in-bud patterns for computer assisted diagnosis of respiratory tract infections; Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Boston, MA, USA. 30 August–3 September 2011; pp. 5096–5099. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Bagci U., Yao J., Caban J., Suffredini A.F., Palmore T.N., Mollura D.J. Learning shape and texture characteristics of CT tree-in-bud opacities for CAD systems; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Toronto, ON, Canada. 18–22 September 2011; pp. 215–222. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Caban J.J., Yao J., Bagci U., Mollura D.J. Monitoring pulmonary fibrosis by fusing clinical, physiological, and computed tomography features; Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Boston, MA, USA. 30 August–3 September 2011; pp. 6216–6219. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Korfiatis P., Kalogeropoulou C., Karahaliou A., Kazantzi A., Skiadopoulos S., Costaridou L. Texture classification-based segmentation of lung affected by interstitial pneumonia in high-resolution CT. Med. Phys. 2008;35:5290–5302. doi: 10.1118/1.3003066. [DOI] [PubMed] [Google Scholar]
- 58.Wang J., Li F., Li Q. Automated segmentation of lungs with severe interstitial lung disease in CT. Med. Phys. 2009;36:4592–4599. doi: 10.1118/1.3222872. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Haralick R.M., Shanmugam K., Dinstein I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973;SMC-3:610–621. doi: 10.1109/TSMC.1973.4309314. [DOI] [Google Scholar]
- 60.Sharafeldeen A., Elsharkawy M., Khalifa F., Soliman A., Ghazal M., AlHalabi M., Yaghi M., Alrahmawy M., Elmougy S., Sandhu H.S., et al. Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images. Sci. Rep. 2021;11 doi: 10.1038/s41598-021-83735-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.Nakagomi K., Shimizu A., Kobatake H., Yakami M., Fujimoto K., Togashi K. Multi-shape graph cuts with neighbor prior constraints and its application to lung segmentation from a chest CT volume. Med. Image Anal. 2013;17:62–77. doi: 10.1016/j.media.2012.08.002. [DOI] [PubMed] [Google Scholar]
- 62.Yan Q., Wang B., Gong D., Luo C., Zhao W., Shen J., Shi Q., Jin S., Zhang L., You Z. COVID-19 Chest CT Image Segmentation—A Deep Convolutional Neural Network Solution. arXiv. 2020. arXiv:2004.10987. doi: 10.48550/arXiv.2004.10987. [DOI] [Google Scholar]
- 63.Fan D.P., Zhou T., Ji G.P., Zhou Y., Chen G., Fu H., Shen J., Shao L. Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images. IEEE Trans. Med. Imaging. 2020;39:2626–2637. doi: 10.1109/TMI.2020.2996645. [DOI] [PubMed] [Google Scholar]
- 64.Oulefki A., Agaian S., Trongtirakul T., Laouar A.K. Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images. Pattern Recognit. 2021;114:107747. doi: 10.1016/j.patcog.2020.107747. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Sharafeldeen A., Elsharkawy M., Alghamdi N.S., Soliman A., El-Baz A. Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints. Sensors. 2021;21:5482. doi: 10.3390/s21165482. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Zhao C., Xu Y., He Z., Tang J., Zhang Y., Han J., Shi Y., Zhou W. Lung segmentation and automatic detection of COVID-19 using radiomic features from chest CT images. Pattern Recognit. 2021;119:108071. doi: 10.1016/j.patcog.2021.108071. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.Sousa J., Pereira T., Silva F., Silva M.C., Vilares A.T., Cunha A., Oliveira H.P. Lung Segmentation in CT Images: A Residual U-Net Approach on a Cross-Cohort Dataset. Appl. Sci. 2022;12:1959. doi: 10.3390/app12041959. [DOI] [Google Scholar]
- 68.Ronneberger O., Fischer P., Brox T. Lecture Notes in Computer Science. Springer; Berlin, Germany: 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation; pp. 234–241. [DOI] [Google Scholar]
- 69.He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA. 27–30 June 2016; pp. 770–778. [Google Scholar]
- 70.Kim H.M., Ko T., Choi I.Y., Myong J.P. Asbestosis diagnosis algorithm combining the lung segmentation method and deep learning model in computed tomography image. Int. J. Med. Inform. 2022;158:104667. doi: 10.1016/j.ijmedinf.2021.104667. [DOI] [PubMed] [Google Scholar]
- 71.Miettinen O.S., Henschke C.I. CT screening for lung cancer: Coping with nihilistic recommendations. Radiology. 2001;221:592–596. doi: 10.1148/radiol.2213001644. [DOI] [PubMed] [Google Scholar]
- 72.Henschke C.I., Naidich D.P., Yankelevitz D.F., McGuinness G., McCauley D.I., Smith J.P., Libby D., Pasmantier M., Vazquez M., Koizumi J., et al. Early Lung Cancer Action Project: Initial findings on repeat screening. Cancer. 2001;92:153–159. doi: 10.1002/1097-0142(20010701)92:1<153::AID-CNCR1303>3.0.CO;2-S. [DOI] [PubMed] [Google Scholar]
- 73.Swensen S.J., Jett J.R., Hartman T.E., Midthun D.E., Sloan J.A., Sykes A.M., Aughenbaugh G.L., Clemens M.A. Lung cancer screening with CT: Mayo Clinic experience. Radiology. 2003;226:756–761. doi: 10.1148/radiol.2263020036. [DOI] [PubMed] [Google Scholar]
- 74.Rusinek H., Naidich D.P., McGuinness G., Leitman B.S., McCauley D.I., Krinsky G.A., Clayton K., Cohen H. Pulmonary nodule detection: Low-dose versus conventional CT. Radiology. 1998;209:243–249. doi: 10.1148/radiology.209.1.9769838. [DOI] [PubMed] [Google Scholar]
- 75.Garg K., Keith R.L., Byers T., Kelly K., Kerzner A.L., Lynch D.A., Miller Y.E. Randomized controlled trial with low-dose spiral CT for lung cancer screening: Feasibility study and preliminary results. Radiology. 2002;225:506–510. doi: 10.1148/radiol.2252011851. [DOI] [PubMed] [Google Scholar]
- 76.Nawa T., Nakagawa T., Kusano S., Kawasaki Y., Sugawara Y., Nakata H. Lung cancer screening using low-dose spiral CT: Results of baseline and 1-year follow-up studies. Chest. 2002;122:15–20. doi: 10.1378/chest.122.1.15. [DOI] [PubMed] [Google Scholar]
- 77.Sone S., Li F., Yang Z., Honda T., Maruyama Y., Takashima S., Hasegawa M., Kawakami S., Kubo K., Haniuda M., et al. Results of three-year mass screening programme for lung cancer using mobile low-dose spiral computed tomography scanner. Br. J. Cancer. 2001;84:25–32. doi: 10.1054/bjoc.2000.1531. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Way T.W., Hadjiiski L.M., Sahiner B., Chan H.P., Cascade P.N., Kazerooni E.A., Bogot N., Zhou C. Computer-aided diagnosis of pulmonary nodules on CT scans: Segmentation and classification using 3D active contours. Med. Phys. 2006;33:2323–2337. doi: 10.1118/1.2207129. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79.Tandon Y.K., Bartholmai B.J., Koo C.W. Putting artificial intelligence (AI) on the spot: Machine learning evaluation of pulmonary nodules. J. Thorac. Dis. 2020;12:6954. doi: 10.21037/jtd-2019-cptn-03. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 80.Armato S.G., III, McLennan G., Bidaut L., McNitt-Gray M.F., Meyer C.R., Reeves A.P., Zhao B., Aberle D.R., Henschke C.I., Hoffman E.A., et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys. 2011;38:915–931. doi: 10.1118/1.3528204. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 81.Gu Y., Chi J., Liu J., Yang L., Zhang B., Yu D., Zhao Y., Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput. Biol. Med. 2021;137:104806. doi: 10.1016/j.compbiomed.2021.104806. [DOI] [PubMed] [Google Scholar]
- 82.Chang S., Emoto H., Metaxas D.N., Axel L. Pulmonary micronodule detection from 3D chest CT; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Saint-Malo, France. 26–29 September 2004; pp. 821–828. [Google Scholar]
- 83.Takizawa H., Shigemoto K., Yamamoto S., Matsumoto T., Tateno Y., Iinuma T., Matsumoto M. A recognition method of lung nodule shadows in X-Ray CT images using 3D object models. Int. J. Image Graph. 2003;3:533–545. doi: 10.1142/S0219467803001172. [DOI] [Google Scholar]
- 84.Li Q., Doi K. New selective nodule enhancement filter and its application for significant improvement of nodule detection on computed tomography; Proceedings of the Medical Imaging 2004: Image Processing. International Society for Optics and Photonics; San Diego, CA, USA. 16–19 February 2004; pp. 1–9. [Google Scholar]
- 85.Paik D.S., Beaulieu C.F., Rubin G.D., Acar B., Jeffrey R.B., Yee J., Dey J., Napel S. Surface normal overlap: A computer-aided detection algorithm with application to colonic polyps and lung nodules in helical CT. IEEE Trans. Med. Imaging. 2004;23:661–675. doi: 10.1109/TMI.2004.826362. [DOI] [PubMed] [Google Scholar]
- 86.Mendonça P.R., Bhotika R., Sirohey S.A., Turner W.D., Miller J.V., Avila R.S. Model-based analysis of local shape for lesion detection in CT scans; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Palm Springs, CA, USA. 26–29 October 2005; pp. 688–695. [DOI] [PubMed] [Google Scholar]
- 87.Lee Y., Hara T., Fujita H., Itoh S., Ishigaki T. Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Trans. Med. Imaging. 2001;20:595–604. doi: 10.1109/42.932744. [DOI] [PubMed] [Google Scholar]
- 88.Wiemker R., Rogalla P., Zwartkruis A., Blaffert T. Computer-aided lung nodule detection on high-resolution CT data; Proceedings of the Medical Imaging 2002: Image Processing. International Society for Optics and Photonics; San Diego, CA, USA. 23–28 February 2002; pp. 677–688. [Google Scholar]
- 89.Kostis W.J., Reeves A.P., Yankelevitz D.F., Henschke C.I. Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images. IEEE Trans. Med. Imaging. 2003;22:1259–1274. doi: 10.1109/TMI.2003.817785. [DOI] [PubMed] [Google Scholar]
- 90.Gurcan M.N., Sahiner B., Petrick N., Chan H.P., Kazerooni E.A., Cascade P.N., Hadjiiski L. Lung nodule detection on thoracic computed tomography images: Preliminary evaluation of a computer-aided diagnosis system. Med. Phys. 2002;29:2552–2558. doi: 10.1118/1.1515762. [DOI] [PubMed] [Google Scholar]
- 91.Kanazawa K., Kawata Y., Niki N., Satoh H., Ohmatsu H., Kakinuma R., Kaneko M., Moriyama N., Eguchi K. Computer-aided diagnosis for pulmonary nodules based on helical CT images. Comput. Med. Imaging Graph. 1998;22:157–167. doi: 10.1016/S0895-6111(98)00017-2. [DOI] [PubMed] [Google Scholar]
- 92.Kawata Y., Niki N., Ohmatsu H., Kusumoto M., Kakinuma R., Mori K., Nishiyama H., Eguchi K., Kaneko M., Moriyama N. Computer-aided diagnosis of pulmonary nodules using three-dimensional thoracic CT images; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Utrecht, The Netherlands. 27 September–1 October 2001; pp. 1393–1394. [Google Scholar]
- 93.Betke M., Ko J.P. Detection of pulmonary nodules on CT and volumetric assessment of change over time; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Cambridge, UK. 19–22 September 1999; pp. 245–252. [Google Scholar]
- 94.Kubo M., Kubota K., Yamada N., Kawata Y., Niki N., Eguchi K., Ohmatsu H., Kakinuma R., Kaneko M., Kusumoto M., et al. CAD system for lung cancer based on low-dose single-slice CT image; Proceedings of the Medical Imaging 2002: Image Processing. International Society for Optics and Photonics; San Diego, CA, USA. 19–25 January 2002; pp. 1262–1269. [Google Scholar]
- 95.Oda T., Kubo M., Kawata Y., Niki N., Eguchi K., Ohmatsu H., Kakinuma R., Kaneko M., Kusumoto M., Moriyama N., et al. Detection algorithm of lung cancer candidate nodules on multislice CT images; Proceedings of the Medical Imaging 2002: Image Processing. International Society for Optics and Photonics; San Diego, CA, USA. 19–25 January 2002; pp. 1354–1361. [Google Scholar]
- 96.Saita S., Oda T., Kubo M., Kawata Y., Niki N., Sasagawa M., Ohmatsu H., Kakinuma R., Kaneko M., Kusumoto M., et al. Nodule detection algorithm based on multislice CT images for lung cancer screening; Proceedings of the Medical Imaging 2004: Image Processing. International Society for Optics and Photonics; San Diego, CA, USA. 16–19 February 2004; pp. 1083–1090. [Google Scholar]
- 97.Brown M.S., McNitt-Gray M.F., Goldin J.G., Suh R.D., Sayre J.W., Aberle D.R. Patient-specific models for lung nodule detection and surveillance in CT images. IEEE Trans. Med. Imaging. 2001;20:1242–1250. doi: 10.1109/42.974919. [DOI] [PubMed] [Google Scholar]
- 98.Messay T., Hardie R.C., Rogers S.K. A new computationally efficient CAD system for pulmonary nodule detection in CT imagery. Med. Image Anal. 2010;14:390–406. doi: 10.1016/j.media.2010.02.004. [DOI] [PubMed] [Google Scholar]
- 99.Setio A.A., Jacobs C., Gelderblom J., van Ginneken B. Automatic detection of large pulmonary solid nodules in thoracic CT images. Med. Phys. 2015;42:5642–5653. doi: 10.1118/1.4929562. [DOI] [PubMed] [Google Scholar]
- 100.Wang Z., Xin J., Sun P., Lin Z., Yao Y., Gao X. Improved lung nodule diagnosis accuracy using lung CT images with uncertain class. Comput. Methods Programs Biomed. 2018;162:197–209. doi: 10.1016/j.cmpb.2018.05.028. [DOI] [PubMed] [Google Scholar]
- 101.Baralis E., Chiusano S., Garza P. A lazy approach to associative classification. IEEE Trans. Knowl. Data Eng. 2007;20:156–171. doi: 10.1109/TKDE.2007.190677. [DOI] [Google Scholar]
- 102.Pehrson L.M., Nielsen M.B., Ammitzbøl Lauridsen C. Automatic pulmonary nodule detection applying deep learning or machine learning algorithms to the LIDC-IDRI database: A systematic review. Diagnostics. 2019;9:29. doi: 10.3390/diagnostics9010029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 103.LeCun Y., Bottou L., Bengio Y., Haffner P. Gradient-based learning applied to document recognition. Proc. IEEE. 1998;86:2278–2324. doi: 10.1109/5.726791. [DOI] [Google Scholar]
- 104.Kadir T., Gleeson F. Lung cancer prediction using machine learning and advanced imaging techniques. Transl. Lung Cancer Res. 2018;7:304. doi: 10.21037/tlcr.2018.05.15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 105.Lee S.M., Seo J.B., Yun J., Cho Y.H., Vogel-Claussen J., Schiebler M.L., Gefter W.B., Van Beek E.J., Goo J.M., Lee K.S., et al. Deep learning applications in chest radiography and computed tomography. J. Thorac. Imaging. 2019;34:75–85. doi: 10.1097/RTI.0000000000000387. [DOI] [PubMed] [Google Scholar]
- 106.Akram S., Javed M.Y., Qamar U., Khanum A., Hassan A. Artificial neural network based classification of lungs nodule using hybrid features from computerized tomographic images. Appl. Math. Inf. Sci. 2015;9:183–195. doi: 10.12785/amis/090124. [DOI] [Google Scholar]
- 107.Choi W.J., Choi T.S. Automated pulmonary nodule detection based on three-dimensional shape-based feature descriptor. Comput. Methods Programs Biomed. 2014;113:37–54. doi: 10.1016/j.cmpb.2013.08.015. [DOI] [PubMed] [Google Scholar]
- 108.Alilou M., Kovalev V., Snezhko E., Taimouri V. A comprehensive framework for automatic detection of pulmonary nodules in lung CT images. Image Anal. Stereol. 2014;33:13–27. doi: 10.5566/ias.v33.p13-27. [DOI] [Google Scholar]
- 109.Bai J., Huang X., Liu S., Song Q., Bhagalia R. Learning orientation invariant contextual features for nodule detection in lung CT scans; Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI); Brooklyn, NY, USA. 16–19 April 2015; pp. 1135–1138. [Google Scholar]
- 110.El-Regaily S.A., Salem M.A.M., Aziz M.H.A., Roushdy M.I. Lung nodule segmentation and detection in computed tomography; Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS); Cairo, Egypt. 5–7 December 2017; pp. 72–78. [Google Scholar]
- 111.Golan R., Jacob C., Denzinger J. Lung nodule detection in CT images using deep convolutional neural networks; Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN); Vancouver, BC, Canada. 24–29 July 2016; pp. 243–250. [Google Scholar]
- 112.Bergtholdt M., Wiemker R., Klinder T. Pulmonary nodule detection using a cascaded SVM classifier; Proceedings of the Medical Imaging 2016: Computer-Aided Diagnosis. International Society for Optics and Photonics; San Diego, CA, USA. 27 February–3 March 2016; p. 978513. [Google Scholar]
- 113.Zhang T., Zhao J., Luo J., Qiang Y. Deep belief network for lung nodules diagnosed in CT imaging. Int. J. Perform. Eng. 2017;13:1358. doi: 10.23940/ijpe.17.08.p17.13581370. [DOI] [Google Scholar]
- 114.Jacobs C., van Rikxoort E.M., Murphy K., Prokop M., Schaefer-Prokop C.M., van Ginneken B. Computer-aided detection of pulmonary nodules: A comparative study using the public LIDC/IDRI database. Eur. Radiol. 2016;26:2139–2147. doi: 10.1007/s00330-015-4030-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Wang Y., Wu B., Zhang N., Liu J., Ren F., Zhao L. Research progress of computer aided diagnosis system for pulmonary nodules in CT images. J. X-ray Sci. Technol. 2020;28:1–16. doi: 10.3233/XST-190581. [DOI] [PubMed] [Google Scholar]
- 116.McWilliams A., Tammemagi M.C., Mayo J.R., Roberts H., Liu G., Soghrati K., Yasufuku K., Martel S., Laberge F., Gingras M., et al. Probability of cancer in pulmonary nodules detected on first screening CT. N. Engl. J. Med. 2013;369:910–919. doi: 10.1056/NEJMoa1214726. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 117.Horeweg N., Scholten E.T., de Jong P.A., van der Aalst C.M., Weenink C., Lammers J.W.J., Nackaerts K., Vliegenthart R., ten Haaf K., Yousaf-Khan U.A., et al. Detection of lung cancer through low-dose CT screening (NELSON): A prespecified analysis of screening test performance and interval cancers. Lancet Oncol. 2014;15:1342–1350. doi: 10.1016/S1470-2045(14)70387-0. [DOI] [PubMed] [Google Scholar]
- 118.Revel M.P., Bissery A., Bienvenu M., Aycard L., Lefort C., Frija G. Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable? Radiology. 2004;231:453–458. doi: 10.1148/radiol.2312030167. [DOI] [PubMed] [Google Scholar]
- 119.Korst R.J., Lee B.E., Krinsky G.A., Rutledge J.R. The utility of automated volumetric growth analysis in a dedicated pulmonary nodule clinic. J. Thorac. Cardiovasc. Surg. 2011;142:372–377. doi: 10.1016/j.jtcvs.2011.04.015. [DOI] [PubMed] [Google Scholar]
- 120.Bianconi F., Fravolini M.L., Pizzoli S., Palumbo I., Minestrini M., Rondini M., Nuvoli S., Spanu A., Palumbo B. Comparative evaluation of conventional and deep learning methods for semi-automated segmentation of pulmonary nodules on CT. Quant. Imaging Med. Surg. 2021;11:3286. doi: 10.21037/qims-20-1356. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 121.Kuhnigk J.M., Dicken V., Bornemann L., Bakai A., Wormanns D., Krass S., Peitgen H.O. Morphological segmentation and partial volume analysis for volumetry of solid pulmonary lesions in thoracic CT scans. IEEE Trans. Med. Imaging. 2006;25:417–434. doi: 10.1109/TMI.2006.871547. [DOI] [PubMed] [Google Scholar]
- 122.Dehmeshki J., Amin H., Valdivieso M., Ye X. Segmentation of pulmonary nodules in thoracic CT scans: A region growing approach. IEEE Trans. Med. Imaging. 2008;27:467–480. doi: 10.1109/TMI.2007.907555. [DOI] [PubMed] [Google Scholar]
- 123.Tao Y., Lu L., Dewan M., Chen A.Y., Corso J., Xuan J., Salganicoff M., Krishnan A. Multi-level ground glass nodule detection and segmentation in CT lung images; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; London, UK. 20–24 September 2009; pp. 715–723. [DOI] [PubMed] [Google Scholar]
- 124.Zhou J., Chang S., Metaxas D.N., Zhao B., Ginsberg M.S., Schwartz L.H. An automatic method for ground glass opacity nodule detection and segmentation from CT studies; Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society; New York, NY, USA. 30 August–3 September 2006; pp. 3062–3065. [DOI] [PubMed] [Google Scholar]
- 125.Charbonnier J.P., Chung K., Scholten E.T., Van Rikxoort E.M., Jacobs C., Sverzellati N., Silva M., Pastorino U., Van Ginneken B., Ciompi F. Automatic segmentation of the solid core and enclosed vessels in subsolid pulmonary nodules. Sci. Rep. 2018;8:646. doi: 10.1038/s41598-017-19101-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 126.Kubota T., Jerebko A.K., Dewan M., Salganicoff M., Krishnan A. Segmentation of pulmonary nodules of various densities with morphological approaches and convexity models. Med. Image Anal. 2011;15:133–154. doi: 10.1016/j.media.2010.08.005. [DOI] [PubMed] [Google Scholar]
- 127.Mukhopadhyay S. A segmentation framework of pulmonary nodules in lung CT images. J. Digit. Imaging. 2016;29:86–103. doi: 10.1007/s10278-015-9801-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 128.Liu Y., Wang Z., Guo M., Li P. Hidden conditional random field for lung nodule detection; Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP); Paris, France. 27–30 October 2014; [DOI] [Google Scholar]
- 129.Li Q., Sone S., Doi K. Selective enhancement filters for nodules, vessels, and airway walls in two- and three-dimensional CT scans. Med. Phys. 2003;30:2040–2051. doi: 10.1118/1.1581411. [DOI] [PubMed] [Google Scholar]
- 130.Quattoni A., Wang S., Morency L.P., Collins M., Darrell T. Hidden Conditional Random Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2007;29:1848–1852. doi: 10.1109/TPAMI.2007.1124. [DOI] [PubMed] [Google Scholar]
- 131.Zhao C., Han J., Jia Y., Gou F. Lung Nodule Detection via 3D U-Net and Contextual Convolutional Neural Network; Proceedings of the 2018 International Conference on Networking and Network Applications (NaNA); Xi’an, China. 12–15 October 2018; [DOI] [Google Scholar]
- 132.Çiçek Ö., Abdulkadir A., Lienkamp S.S., Brox T., Ronneberger O. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016. Springer; Berlin, Germany: 2016. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation; pp. 424–432. [DOI] [Google Scholar]
- 133.Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative adversarial nets; Proceedings of the 27th International Conference on Neural Information Processing Systems; Montreal, QC, Canada. 8 December 2014; pp. 2672–2680. [Google Scholar]
- 134.Luo X., Song T., Wang G., Chen J., Chen Y., Li K., Metaxas D.N., Zhang S. SCPM-Net: An anchor-free 3D lung nodule detection network using sphere representation and center points matching. Med. Image Anal. 2022;75:102287. doi: 10.1016/j.media.2021.102287. [DOI] [PubMed] [Google Scholar]
- 135.Yin S., Deng H., Xu Z., Zhu Q., Cheng J. SD-UNet: A Novel Segmentation Framework for CT Images of Lung Infections. Electronics. 2022;11:130. doi: 10.3390/electronics11010130. [DOI] [Google Scholar]
- 136.Gong J., Liu J., Hao W., Nie S., Zheng B., Wang S., Peng W. A deep residual learning network for predicting lung adenocarcinoma manifesting as ground-glass nodule on CT images. Eur. Radiol. 2020;30:1847–1855. doi: 10.1007/s00330-019-06533-w. [DOI] [PubMed] [Google Scholar]
- 137.Sim Y., Chung M.J., Kotter E., Yune S., Kim M., Do S., Han K., Kim H., Yang S., Lee D.J., et al. Deep convolutional neural network–based software improves radiologist detection of malignant lung nodules on chest radiographs. Radiology. 2020;294:199–209. doi: 10.1148/radiol.2019182465. [DOI] [PubMed] [Google Scholar]
- 138.Tajbakhsh N., Suzuki K. Comparing two classes of end-to-end machine-learning models in lung nodule detection and classification: MTANNs vs. CNNs. Pattern Recognit. 2017;63:476–486. doi: 10.1016/j.patcog.2016.09.029. [DOI] [Google Scholar]
- 139.Hu X., Gong J., Zhou W., Li H., Wang S., Wei M., Peng W., Gu Y. Computer-aided diagnosis of ground glass pulmonary nodule by fusing deep learning and radiomics features. Phys. Med. Biol. 2021;66:065015. doi: 10.1088/1361-6560/abe735. [DOI] [PubMed] [Google Scholar]
- 140.Zwanenburg A., Leger S., Vallières M., Löck S. Image biomarker standardisation initiative. arXiv. 2016. arXiv:1612.07003. doi: 10.1148/radiol.2020191145. [DOI] [Google Scholar]
- 141.Sharafeldeen A., Elsharkawy M., Khaled R., Shaffie A., Khalifa F., Soliman A., Razek A.A.K.A., Hussein M.M., Taman S., Naglah A., et al. Texture and shape analysis of diffusion-weighted imaging for thyroid nodules classification using machine learning. Med. Phys. 2021;49:988–999. doi: 10.1002/mp.15399. [DOI] [PubMed] [Google Scholar]
- 142.Lambin P., Rios-Velazquez E., Leijenaar R., Carvalho S., Van Stiphout R.G., Granton P., Zegers C.M., Gillies R., Boellard R., Dekker A., et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur. J. Cancer. 2012;48:441–446. doi: 10.1016/j.ejca.2011.11.036. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 143.Foley F., Rajagopalan S., Raghunath S.M., Boland J.M., Karwoski R.A., Maldonado F., Bartholmai B.J., Peikert T. Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: The future of imaging? Semin. Thorac. Cardiovasc. Surg. 2016;28:120–126. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 144.Wang X., Mao K., Wang L., Yang P., Lu D., He P. An appraisal of lung nodules automatic classification algorithms for CT images. Sensors. 2019;19:194. doi: 10.3390/s19010194. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 145.Li M., Narayan V., Gill R.R., Jagannathan J.P., Barile M.F., Gao F., Bueno R., Jayender J. Computer-aided diagnosis of ground-glass opacity nodules using open-source software for quantifying tumor heterogeneity. Am. J. Roentgenol. 2017;209:1216. doi: 10.2214/AJR.17.17857. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 146.Fan L., Fang M., Li Z., Tu W., Wang S., Chen W., Tian J., Dong D., Liu S. Radiomics signature: A biomarker for the preoperative discrimination of lung invasive adenocarcinoma manifesting as a ground-glass nodule. Eur. Radiol. 2019;29:889–897. doi: 10.1007/s00330-018-5530-z. [DOI] [PubMed] [Google Scholar]
- 147.Madero Orozco H., Vergara Villegas O.O., Cruz Sánchez V.G., Ochoa Domínguez H.D.J., Nandayapa Alfaro M.D.J. Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine. Biomed. Eng. Online. 2015;14:9. doi: 10.1186/s12938-015-0003-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 148.Dehmeshki J., Ye X., Costello J. Shape based region growing using derivatives of 3D medical images: Application to semiautomated detection of pulmonary nodules; Proceedings of the 2003 International Conference on Image Processing; Barcelona, Spain. 14–17 September 2003; pp. I-1085–I-1088. [Google Scholar]
- 149.Kumar D., Wong A., Clausi D.A. Lung nodule classification using deep features in CT images; Proceedings of the 2015 12th Conference on Computer and Robot Vision; Halifax, NS, Canada. 3–5 June 2015; pp. 133–138. [Google Scholar]
- 150.Li Q., Balagurunathan Y., Liu Y., Qi J., Schabath M.B., Ye Z., Gillies R.J. Comparison between radiological semantic features and lung-RADS in predicting malignancy of screen-detected lung nodules in the National Lung Screening Trial. Clin. Lung Cancer. 2018;19:148–156. doi: 10.1016/j.cllc.2017.10.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 151.Liu A., Wang Z., Yang Y., Wang J., Dai X., Wang L., Lu Y., Xue F. Preoperative diagnosis of malignant pulmonary nodules in lung cancer screening with a radiomics nomogram. Cancer Commun. 2020;40:16–24. doi: 10.1002/cac2.12002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 152.Zhao W., Yang J., Sun Y., Li C., Wu W., Jin L., Yang Z., Ni B., Gao P., Wang P., et al. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Res. 2018;78:6881–6889. doi: 10.1158/0008-5472.CAN-18-0696. [DOI] [PubMed] [Google Scholar]
- 153.Wang J., Chen X., Lu H., Zhang L., Pan J., Bao Y., Su J., Qian D. Feature-shared adaptive-boost deep learning for invasiveness classification of pulmonary subsolid nodules in CT images. Med. Phys. 2020;47:1738–1749. doi: 10.1002/mp.14068. [DOI] [PubMed] [Google Scholar]
- 154.Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q. Densely connected convolutional networks; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA. 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- 155.Xia X., Gong J., Hao W., Yang T., Lin Y., Wang S., Peng W. Comparison and fusion of deep learning and radiomics features of ground-glass nodules to predict the invasiveness risk of stage-I lung adenocarcinomas in CT scan. Front. Oncol. 2020;10:418. doi: 10.3389/fonc.2020.00418. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 156.Uthoff J., Stephens M.J., Newell J.D., Jr., Hoffman E.A., Larson J., Koehn N., De Stefano F.A., Lusk C.M., Wenzlaff A.S., Watza D., et al. Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT. Med. Phys. 2019;46:3207–3216. doi: 10.1002/mp.13592. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 157.Shen W., Zhou M., Yang F., Yang C., Tian J. Multi-scale convolutional neural networks for lung nodule classification; Proceedings of the International Conference on Information Processing in Medical Imaging; 2015. pp. 588–599. [DOI] [PubMed] [Google Scholar]
- 158.Nibali A., He Z., Wollersheim D. Pulmonary nodule classification with deep residual networks. Int. J. Comput. Assist. Radiol. Surg. 2017;12:1799–1808. doi: 10.1007/s11548-017-1605-6. [DOI] [PubMed] [Google Scholar]
- 159.Liu X., Hou F., Qin H., Hao A. Multi-view multi-scale CNNs for lung nodule type classification from CT images. Pattern Recognit. 2018;77:262–275. doi: 10.1016/j.patcog.2017.12.022. [DOI] [Google Scholar]
- 160.Chen H., Wu W., Xia H., Du J., Yang M., Ma B. Classification of pulmonary nodules using neural network ensemble; Proceedings of the International Symposium on Neural Networks; Guilin, China. 29 May–1 June 2011; pp. 460–466. [Google Scholar]
- 161.Kuruvilla J., Gunavathi K. Lung cancer classification using neural networks for CT images. Comput. Methods Programs Biomed. 2014;113:202–209. doi: 10.1016/j.cmpb.2013.10.011. [DOI] [PubMed] [Google Scholar]
- 162.Ardila D., Kiraly A.P., Bharadwaj S., Choi B., Reicher J.J., Peng L., Tse D., Etemadi M., Ye W., Corrado G., et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. 2019;25:954–961. doi: 10.1038/s41591-019-0447-x. [DOI] [PubMed] [Google Scholar]
- 163.Li K., Liu K., Zhong Y., Liang M., Qin P., Li H., Zhang R., Li S., Liu X. Assessing the predictive accuracy of lung cancer, metastases, and benign lesions using an artificial intelligence-driven computer aided diagnosis system. Quant. Imaging Med. Surg. 2021;11:3629. doi: 10.21037/qims-20-1314. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 164.Zhou M., Leung A., Echegaray S., Gentles A., Shrager J.B., Jensen K.C., Berry G.J., Plevritis S.K., Rubin D.L., Napel S., et al. Non–small cell lung cancer radiogenomics map identifies relationships between molecular and imaging phenotypes with prognostic implications. Radiology. 2018;286:307–315. doi: 10.1148/radiol.2017161845. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 165.Yamamoto S., Korn R.L., Oklu R., Migdal C., Gotway M.B., Weiss G.J., Iafrate A.J., Kim D.W., Kuo M.D. ALK molecular phenotype in non–small cell lung cancer: CT radiogenomic characterization. Radiology. 2014;272:568–576. doi: 10.1148/radiol.14140789. [DOI] [PubMed] [Google Scholar]
- 166.Aerts H.J., Grossmann P., Tan Y., Oxnard G.R., Rizvi N., Schwartz L.H., Zhao B. Defining a radiomic response phenotype: A pilot study using targeted therapy in NSCLC. Sci. Rep. 2016;6:33860. doi: 10.1038/srep33860. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 167.Rizzo S., Petrella F., Buscarino V., De Maria F., Raimondi S., Barberis M., Fumagalli C., Spitaleri G., Rampinelli C., De Marinis F., et al. CT radiogenomic characterization of EGFR, K-RAS, and ALK mutations in non-small cell lung cancer. Eur. Radiol. 2016;26:32–42. doi: 10.1007/s00330-015-3814-0. [DOI] [PubMed] [Google Scholar]
- 168.Velazquez E.R., Parmar C., Liu Y., Coroller T.P., Cruz G., Stringfield O., Ye Z., Makrigiorgos M., Fennessy F., Mak R.H., et al. Somatic mutations drive distinct imaging phenotypes in lung cancer. Cancer Res. 2017;77:3922–3930. doi: 10.1158/0008-5472.CAN-17-0122. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 169.Lee K.H., Goo J.M., Park C.M., Lee H.J., Jin K.N. Computer-aided detection of malignant lung nodules on chest radiographs: Effect on observers’ performance. Korean J. Radiol. 2012;13:564–571. doi: 10.3348/kjr.2012.13.5.564. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 170.Liu S., Xie Y., Jirapatnakul A., Reeves A.P. Pulmonary nodule classification in lung cancer screening with three-dimensional convolutional neural networks. J. Med. Imaging. 2017;4:041308. doi: 10.1117/1.JMI.4.4.041308. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 171.Kang G., Liu K., Hou B., Zhang N. 3D multi-view convolutional neural networks for lung nodule classification. PLoS ONE. 2017;12:e0188290. doi: 10.1371/journal.pone.0188290. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 172.Lyu J., Ling S.H. Using multi-level convolutional neural network for classification of lung nodules on CT images; Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Honolulu, HI, USA. 18–21 July 2018; pp. 686–689. [DOI] [PubMed] [Google Scholar]
- 173.Ciompi F., Chung K., Van Riel S.J., Setio A.A.A., Gerke P.K., Jacobs C., Scholten E.T., Schaefer-Prokop C., Wille M.M., Marchiano A., et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci. Rep. 2017;7:46479. doi: 10.1038/srep46479. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 174.Shaffie A., Soliman A., Fraiwan L., Ghazal M., Taher F., Dunlap N., Wang B., van Berkel V., Keynton R., Elmaghraby A., et al. A generalized deep learning-based diagnostic system for early diagnosis of various types of pulmonary nodules. Technol. Cancer Res. Treat. 2018;17:1533033818798800. doi: 10.1177/1533033818798800. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 175.Hua K.L., Hsu C.H., Hidayati S.C., Cheng W.H., Chen Y.J. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. OncoTarg. Ther. 2015;8:2015–2022. doi: 10.2147/OTT.S80733. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 176.Song Q., Zhao L., Luo X., Dou X. Using deep learning for classification of lung nodules on computed tomography images. J. Healthc. Eng. 2017;2017:8314740. doi: 10.1155/2017/8314740. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 177.Causey J.L., Zhang J., Ma S., Jiang B., Qualls J.A., Politte D.G., Prior F., Zhang S., Huang X. Highly accurate model for prediction of lung nodule malignancy with CT scans. Sci. Rep. 2018;8:9286. doi: 10.1038/s41598-018-27569-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 178.El-Baz A.S., Gimel’farb G.L., Suri J.S. Stochastic Modeling for Medical Image Analysis. CRC Press; Boca Raton, FL, USA: 2016. [Google Scholar]
- 179.Elsharkawy M., Sharafeldeen A., Soliman A., Khalifa F., Ghazal M., El-Daydamony E., Atwan A., Sandhu H.S., El-Baz A. A Novel Computer-Aided Diagnostic System for Early Detection of Diabetic Retinopathy Using 3D-OCT Higher-Order Spatial Appearance Model. Diagnostics. 2022;12:461. doi: 10.3390/diagnostics12020461. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 180.Elsharkawy M., Sharafeldeen A., Taher F., Shalaby A., Soliman A., Mahmoud A., Ghazal M., Khalil A., Alghamdi N.S., Razek A.A.K.A., et al. Early assessment of lung function in coronavirus patients using invariant markers from chest X-rays images. Sci. Rep. 2021;11:12095. doi: 10.1038/s41598-021-91305-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 181.Farahat I.S., Sharafeldeen A., Elsharkawy M., Soliman A., Mahmoud A., Ghazal M., Taher F., Bilal M., Razek A.A.K.A., Aladrousy W., et al. The Role of 3D CT Imaging in the Accurate Diagnosis of Lung Function in Coronavirus Patients. Diagnostics. 2022;12:696. doi: 10.3390/diagnostics12030696. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 182.Krizhevsky A., Sutskever I., Hinton G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012;25:1097–1105. doi: 10.1145/3065386. [DOI] [Google Scholar]
- 183.Pan S.J., Yang Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010;22:1345–1359. doi: 10.1109/TKDE.2009.191. [DOI] [Google Scholar]
- 184.Zhang S., Sun F., Wang N., Zhang C., Yu Q., Zhang M., Babyn P., Zhong H. Computer-aided diagnosis (CAD) of pulmonary nodule of thoracic CT image using transfer learning. J. Digit. Imaging. 2019;32:995–1007. doi: 10.1007/s10278-019-00204-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 185.Suk H.I., Shen D. Deep learning-based feature representation for AD/MCI classification; Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Nagoya, Japan. 22–26 September 2013; pp. 583–590. [DOI] [PMC free article] [PubMed] [Google Scholar]
Data Availability Statement
Not applicable.