Abstract
Lung cancer is considered more serious than other prevailing cancer types. One of the reasons is that it is usually not diagnosed until it has spread, by which time it has become very difficult to treat. Early detection of lung cancer can significantly increase a patient's chances of survival. An effective nodule detection system can play a key role in early detection of lung cancer, thus increasing the chances of successful treatment. In this research work, we propose a novel classification framework for nodule classification. The framework consists of multiple phases that include image contrast enhancement, segmentation, and optimal feature extraction, followed by the use of these features for training and testing of a Support Vector Machine (SVM). We have empirically tested the efficacy of our technique on the well-known Lung Image Database Consortium (LIDC) dataset. The empirical results suggest that the technique is highly effective in reducing false positive rates, and we achieved a sensitivity rate of 97.45%.
Introduction
Lung cancer is the world's most common cancer in terms of both incidence and deaths every year. One of the reasons for the large number of deaths caused by lung cancer is the inability to diagnose it in the early stages, as the symptoms tend to appear only in the later stages1. Computer-aided diagnosis (CAD) of lung CT images is considered an effective technique for the detection of abnormal lung nodules. The nodules can have a variety of causes, e.g., infections, sarcoidosis, hamartoma, Wegener's granulomatosis, pneumoconiosis, tuberculosis, hypersensitivity pneumonia and cancer. The detected nodules are located in the lung region of the CT scan, which normally covers less than half the area of a CT slice; searching for a nodule in the whole slice therefore takes considerably longer. In order to reduce this complexity, it is better to reduce the search space by considering only that part of the slice where a nodule may exist, which demands a technique for segmenting the lungs. In this research work, we propose a fully automated segmentation technique based on genetic and morphology-based image processing techniques. The proposed technique serves as a pre-processing step of CAD and greatly benefits the nodule detection process.
Segmentation can be defined as a process that partitions a digital image into different sets of segments. The segmentation process transforms an image into a more meaningful form, and the resulting segments are also comparatively easier to analyze2. The resultant set of segments collectively covers the entire image. The pixels within each region share similar characteristics such as intensity, colour, or texture, whereas adjacent regions tend to differ significantly with respect to these characteristics. Figure 1 illustrates a sample lung CT scan image.
Many researchers have proposed different lung segmentation algorithms and authors in3 provide a comprehensive review of these algorithms. The proposed methods can be classified into three basic categories4.
Simple
This class includes low-complexity techniques such as region growing5,6, which are based on simplistic assumptions (e.g., the density range of lung tissue). These techniques have the major advantage of being computationally inexpensive and provide efficient and effective results for the segmentation of normal lungs, but they are not suitable for diseased lungs or images with artifacts.
Advanced
This class includes algorithms that are more robust and do not have the inherent limitations of the simple category algorithms but are generally much more computationally expensive than the simple category algorithms. Approaches based on registration7, advanced thresholding with adaptive border matching8 and texture features9 fall within this category.
Hybrid
This class includes approaches that apply the advanced methods only if the simple methods fail to provide the desired result according to some heuristic (e.g., lung volume assumptions).
The authors in10 have proposed an ensemble classification technique that is aided by clustering. In their approach, clustering is used to organize the training dataset, and the nodules and non-nodules resulting from the clustering step are used for SVM training. Maeda et al.11 have proposed a technique that combines an Artificial Neural Network (ANN), a Genetic Algorithm (GA) and SVM. They use temporal subtraction of consecutive CT scan images for the detection of candidate nodules. In the first phase, the candidate nodules' features are computed and later refined using rule-based feature analysis. They utilize Principal Component Analysis (PCA) for feature space reduction, and the ANN is then employed to classify the nodules. They deploy existing well-known techniques for the segmentation phase: the center of a nodule is estimated using the divergence of the normalized gradient, and nodule and vessel enhancement filtering is utilized for the segmentation of clusters of nodules. This is followed by the calculation of invariant, shape and regional descriptors. Choi et al.12 have adopted a technique in which thresholding, contour correction and morphological operations are used to extract the lung volume. They first utilize a multiple-thresholding scheme to extract candidates from the lung volume; this step is followed by the pruning of the resultant candidates, with pruning rules defined on the basis of the features of the candidate nodules. A Genetic Programming (GP) classifier is then trained and used to classify nodules and non-nodules. Choi et al.13 provide further insights and propose a hierarchical approach to nodule classification by means of SVM. In this study, the input image is first partitioned into non-overlapping blocks, and the non-informative blocks are discarded. Features are then extracted from the enhanced blocks and an SVM is used for the classification of candidate nodules. A new approach for lung nodule classification is presented in14. In this work, instead of using segmentation, the input image is transformed to the frequency domain using the wavelet transform; the gray-level co-occurrence matrix (GLCM) is then employed to extract texture features. Sheeraz et al.15 proposed a novel hybrid-feature-based method for nodule detection, in which 3-dimensional and 2-dimensional statistical features are extracted from candidate nodules; they reported a sensitivity rate of 95.31%. In another interesting approach16, the optimal threshold value for segmentation was obtained using a Gaussian-approximation-based differential evolution technique. For the extraction of optimized features, they proposed a feature descriptor based on gradient intensity; the obtained accuracy and sensitivity were 98.7% and 97.5%, respectively. Naqi et al.17 presented a nodule detection technique based on geometric fit in parametric form. A hybrid geometric feature comprising both 2D and 3D information about the nodules was extracted for better representation, and a sensitivity rate of 95.6% was achieved on Lung Image Database Consortium images. Prewitt and Mendelsohn18 utilized the mode method for the selection of thresholds at the valleys of the histogram. Their technique requires the histogram data to be smoothed in order to automatically search for modes and place thresholds at the minima between them.
Their technique depends heavily on the gray-level histogram having peaks and valleys that correspond to the image's gray-level subpopulations. The major problem with this approach is that a simple heuristic search is inadequate for finding the two peaks; moreover, when the valley is flat, its bottom is difficult to locate, making the exact threshold hard to determine.
Although the techniques discussed above obtain good results on normal lung images, a decrease in image quality may degrade their performance, resulting in the loss of important diagnostic information.
In this paper, we propose a novel framework for lung segmentation that reduces the false positive rate and improves the accuracy for low-contrast and noisy images.
The major contributions of the proposed technique include:
A hierarchical block structure is used to preserve image details such as nodules and blood vessels.
Image contrast is enhanced in the frequency domain while image details are preserved.
Extraction of the most discriminative features from lung nodules.
The rest of the paper is organized as follows: Section 2 provides the description of the materials and methods. Experimental results are presented in Section 3. Conclusion and future directions are provided in Section 4.
Material and Methods
The performance of lung nodule detection depends highly on image contrast enhancement and accurate feature extraction, which is why these are considered the most important steps. The aim of contrast enhancement is to improve the visual quality of an input image before feature extraction. In this paper, we present an effective contrast enhancement technique that not only improves the image contrast but also preserves the brightness.
In order to enhance the image contrast, the input CT scan image is first decomposed into a low-frequency (LF) component and a high-frequency (HF) component using the discrete cosine transform. Contrast-limited adaptive histogram equalization (CLAHE) is then employed to enhance the low-frequency component, while the high-frequency component remains unchanged, since most of the image noise is contained in the high-frequency component.
After contrast enhancement, the image is divided into non-overlapping blocks and the non-informative blocks are filtered out. In the next step, thresholding is applied to extract the lung region. In the feature extraction step, the Weber local descriptor (WLD) is used to compute two components, differential excitation and orientation, which describe and capture the texture information of an image. Finally, an SVM classifier is trained and tested on the extracted features to classify nodules and non-nodules. The details of the proposed method are described in the subsequent sections. Figure 2 illustrates the schematic diagram of our proposed method.
Preprocessing
In this paper, we introduce an efficient and simple framework to enhance the contrast of the image without boosting noise levels in the compressed domain. Figure 3 illustrates the flow diagram of the proposed pre-processing method.
Generally, both spatial-domain and frequency-domain techniques are used for image contrast enhancement. The detail of an image can be captured well by transforming the image from the spatial to the frequency domain19. Spatial-domain techniques focus mainly on local information, while frequency-domain techniques exploit the global information of an image. In the frequency domain, the image is separated into high- and low-frequency components; the low-frequency component contains the image detail, while noise resides in the high-frequency component. The advantage of frequency-domain techniques is that the image contrast can be enhanced without amplifying the noise. In this paper we utilize the discrete cosine transform to produce the low- and high-frequency components of an input image.
We compute the DCT of an input scan image d(x, y) of size M × N using the expression defined in Eq. 1, which must be evaluated for all values of u = 0, 1, 2, …, M − 1 and v = 0, 1, 2, …, N − 1. Conversely, given D(u, v), d(x, y) can be recovered for x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1 by the inverse DCT given in Eq. 2. Equations (1) and (2) form a two-dimensional DCT pair, where x and y are spatial coordinates and u and v are frequency variables20.
$$D(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} d(x,y)\cos\!\left[\frac{(2x+1)u\pi}{2M}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right] \quad (1)$$

where $\alpha(u)=\sqrt{1/M}$ for $u=0$ and $\alpha(u)=\sqrt{2/M}$ otherwise, with $\alpha(v)$ defined analogously using N.

$$d(x,y)=\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}\alpha(u)\,\alpha(v)\,D(u,v)\cos\!\left[\frac{(2x+1)u\pi}{2M}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right] \quad (2)$$
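To make the transform pair concrete, the following Python sketch (our illustration, not code from the paper) computes the forward and inverse 2-D DCT of a slice with SciPy; the random array d simply stands in for a CT slice d(x, y), and the orthonormal normalization is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

d = np.random.rand(512, 512)              # placeholder for a CT slice d(x, y)

D = dctn(d, type=2, norm="ortho")         # forward 2-D DCT, Eq. (1)
d_rec = idctn(D, type=2, norm="ortho")    # inverse 2-D DCT, Eq. (2)

assert np.allclose(d, d_rec)              # the DCT/IDCT pair is lossless
```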
The power spectrum P(u, v) of the image d(x, y) is defined in Eq. 3:

$$P(u,v)=\lvert D(u,v)\rvert^{2} \quad (3)$$
That is, the energy of the image is defined as the sum of squares of the DCT coefficients21.
Because the DCT coefficient at the origin has the largest value, it is generally referred to as the direct current (DC) element of the spectrum, whereas the other coefficients are known as alternating current (AC) elements. The coefficients near the upper-left corner carry the lower-frequency information, while the AC coefficients toward the lower-right corner correspond to the higher frequencies. The fundamental characteristic of the DCT is that it concentrates most of the energy of a representative image in the low-frequency components; this means that the high-frequency coefficients are nearly zero and can be considered negligible in most cases. Most of the information is contained in the low-frequency components of the spatial image, which represent a coarse or blurred version of it22.
The image's low-frequency component is then enhanced with CLAHE. Instead of working on the entire image, CLAHE decomposes the image into different regions and computes a histogram for each region. In order to avoid over-enhancement, CLAHE uses a contrast-limiting approach for each neighborhood from which the transformation function is derived in a particular region. The CLAHE23 equation from which the new gray levels are obtained is:
$$j=j_{\min}+\left(j_{\max}-j_{\min}\right)P(f) \quad (4)$$
where j is the new pixel value to be generated, j_max and j_min are the maximum and minimum pixel values, and P(f) is the cumulative probability distribution.
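The sketch below illustrates one possible reading of this enhancement step: the per-block DC part (the block mean, i.e. what remains when only the DC coefficient of each 8 × 8 block DCT is kept) is enhanced with CLAHE, while the AC residual is added back unchanged. The block size, clip limit and tile grid are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import cv2

def enhance_dct_clahe(img_u8, block=8, clip=2.0, grid=(8, 8)):
    """Enhance only the low-frequency (DC) part of an 8-bit grayscale image."""
    f = img_u8.astype(np.float32)
    lf = np.zeros_like(f)
    h, w = f.shape
    # The DC coefficient of a block DCT corresponds (up to scale) to the block mean,
    # so the block-mean image is used as the low-frequency component here.
    for y in range(0, h, block):
        for x in range(0, w, block):
            lf[y:y + block, x:x + block] = f[y:y + block, x:x + block].mean()
    hf = f - lf                                   # high-frequency residual, left untouched
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    lf_enh = clahe.apply(np.clip(lf, 0, 255).astype(np.uint8)).astype(np.float32)
    return np.clip(lf_enh + hf, 0, 255).astype(np.uint8)
```

Because the noisy AC residual bypasses CLAHE, the contrast of the coarse structure is stretched without amplifying high-frequency noise.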
The display range of the image is stretched over the entire pixel range, from the starting point of the background to the end point before fat, as illustrated in Fig. 3. Results of the contrast stretching are shown in Fig. 4.
Thresholding
Thresholding is a simple and efficient way to segment images, with the segmentation based directly on pixel intensities. Due to the overlap between the background intensities and some sections of the ROI, simple thresholding may not be suitable for extracting the lung region24. To overcome this problem, the background region of the respective images is discarded. The proposed segmentation technique uses a combination of optimal thresholding based on differential evolution25 and corner-seeded region growing.
Once the background of the scan image has been removed, optimal thresholding based on differential evolution is employed to determine the boundary of the lung region and extract the lung area. An initial threshold of −950 HU is applied, as the majority of the lung region lies between −950 HU and −500 HU. The process is iterative, with the threshold recalculated in each iteration.
The image histogram is used to obtain the probability distribution of the gray levels; this distribution is calculated first, using Eq. 5.
$$p(x)=\sum_{i=1}^{K}P_i\,p_i(x),\qquad p_i(x)=\frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left[-\frac{(x-M_i)^{2}}{2\sigma_i^{2}}\right] \quad (5)$$
Here K represents the total number of categories in the scan image, P_i and p_i(x) are the prior probability and probability density of category i, and M_i and σ_i are its mean and standard deviation. For two adjacent categories, the overall probability of error is minimized via Eq. 6, which is used to calculate the optimal threshold.
$$E(T_i)=P_{i+1}E_1(T_i)+P_iE_2(T_i),\qquad E_1(T_i)=\int_{-\infty}^{T_i}p_{i+1}(x)\,dx,\quad E_2(T_i)=\int_{T_i}^{\infty}p_i(x)\,dx \quad (6)$$
This error relates to the Ti threshold. The overall error is then computed in accordance with Eq. 7.
$$E(T_1,\ldots,T_{K-1})=\sum_{i=1}^{K-1}E(T_i) \quad (7)$$
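As a rough illustration of how a threshold can be obtained by minimizing this error with differential evolution, the sketch below assumes two Gaussian classes with hypothetical Hounsfield-unit statistics (the means, standard deviations and priors are invented for the example and are not taken from the paper):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import norm

# Hypothetical class statistics: lung parenchyma (class 1) vs. surrounding tissue (class 2).
P1, mu1, sigma1 = 0.6, -800.0, 120.0
P2, mu2, sigma2 = 0.4, -100.0, 150.0

def total_error(t):
    T = float(t[0])
    e1 = norm.cdf(T, mu2, sigma2)           # class-2 pixels misclassified below T (Eq. 6)
    e2 = 1.0 - norm.cdf(T, mu1, sigma1)     # class-1 pixels misclassified above T (Eq. 6)
    return P2 * e1 + P1 * e2                # overall probability of error (Eq. 7)

result = differential_evolution(total_error, bounds=[(-1000.0, 0.0)], seed=0)
print("optimal threshold (HU):", result.x[0])
```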
A threshold image containing the lung mask is now generated. Figure 5(b) illustrates the CT scan threshold image.
Background Removal
Simply applying an image threshold cannot separate the whole lung region from the background. From Fig. 4, it is clearly evident that the gray levels of the image background and of the lungs are highly similar; therefore, a mechanism is needed to eliminate the entire background. Initially, a background removal operator is used to remove the background26. This operator moves along four directions, beginning at the four corners of the target image. It identifies background pixels using a range of gray-level values and removes them until a pixel exceeds the range or the end of the row or column is reached. The image is additionally traversed from top to bottom along the middle. The resulting image consists only of the chest and lung segments.
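A simplified sketch of such a corner-seeded removal pass is given below; the gray-level range is an assumption, and only the row-wise sweeps from the left and right borders are shown (the original operator in26 also sweeps along columns and down the middle):

```python
import numpy as np

def remove_background(img, lo=0, hi=60):
    """Zero out background pixels whose gray level lies in [lo, hi],
    sweeping inward from the left and right borders of every row."""
    out = img.copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):                       # left-to-right sweep
            if lo <= out[y, x] <= hi:
                out[y, x] = 0
            else:
                break                            # stop once the range is exceeded
        for x in range(w - 1, -1, -1):           # right-to-left sweep
            if lo <= out[y, x] <= hi:
                out[y, x] = 0
            else:
                break
    return out
```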
Candidate Nodule Extraction
The result of the preprocessing step is the 3D lung mask, which is subsequently used to extract the lung volume from the original lung CT. The extracted lung volume contains nodules and vessels, which, due to their intrinsic density characteristics, tend to be denser than the lung parenchyma.
In order to extract the ROIs, the threshold is computed using the median slice. It should be noted that, since vessels and nodules have different densities, multiple threshold values must be calculated based on the nodule type.
Candidate Nodule Pruning
The resulting ROIs are nodules and vessels. The typical diameter of a nodule is between 3 mm and 30 mm; ROIs with a diameter of less than 3 mm are therefore excluded as noise, and ROIs with a diameter greater than 30 mm are pruned as lesions or vessels. To detect the vessels among the ROIs, the elongation property is used.
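A hedged sketch of these pruning rules using scikit-image region properties is shown below; the eccentricity cut-off used to flag elongated (vessel-like) regions is an illustrative assumption, as is treating the mask as a single 2-D slice:

```python
import numpy as np
from skimage import measure

def prune_candidates(binary_mask, pixel_spacing_mm, ecc_max=0.95):
    """Keep candidate regions whose equivalent diameter is 3-30 mm and that are not elongated."""
    labels = measure.label(binary_mask)
    kept = np.zeros_like(binary_mask, dtype=bool)
    for region in measure.regionprops(labels):
        diameter_mm = region.equivalent_diameter * pixel_spacing_mm
        if diameter_mm < 3 or diameter_mm > 30:
            continue                     # too small (noise) or too large (lesion/vessel)
        if region.eccentricity > ecc_max:
            continue                     # strongly elongated regions are treated as vessels
        kept[labels == region.label] = True
    return kept
```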
Feature Extraction
Relevant features play an important role in nodule detection and classification. In this study, we apply the WLD to extract local features.
The WLD, a local descriptor proposed by Chen et al.27, has been used for texture classification and face detection. The technique comprises two components: (a) the differential excitation, which describes a central pixel's relative intensity differences from its neighbors, and (b) the orientation, which describes the central pixel's gradient orientation. These two components provide complementary information for describing local texture. Following27, the Weber magnitude and orientation are defined as follows:
Weber magnitude:
$$\varepsilon_m(x_c)=\arctan\!\left[a\sum_{i=0}^{p-1}\frac{x_i-x_c}{x_c}\right] \quad (8)$$
where the arctan function is applied to prevent the output from becoming too large and thus partly suppresses the effect of noise; x_c is the center pixel, x_i (i = 0, 1, …, p − 1) are the adjacent pixels, (x_i − x_c) is the intensity difference between x_i and x_c, p is the number of neighbors, and a is a parameter that adjusts the intensity differences between adjacent pixels. If ε_m(x_c) is zero or close to zero, the area is mainly flat28.
The orientation component is described as the ratio of the intensity change in the horizontal direction to that in the vertical direction at the current pixel. The Sobel operator is used to obtain the gradient orientation, which can be calculated as:
$$\theta(x_c)=\arctan\!\left(\frac{x_1-x_5}{x_3-x_7}\right) \quad (9)$$
where x_1 − x_5 and x_3 − x_7 denote the intensity differences in the x and y directions, respectively.
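The following sketch (our interpretation of Eqs. (8) and (9), not the authors' code) computes the two WLD components on a 3 × 3 neighborhood with NumPy/SciPy; the parameter a and the use of arctan2 to keep the full angular range are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def wld_components(img, a=1.0, eps=1e-6):
    f = img.astype(np.float64)
    # Differential excitation, Eq. (8): arctan of the summed relative differences
    # between the center pixel and its 8 neighbors.
    ring = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.float64)
    neigh_sum = convolve(f, ring, mode="reflect")
    excitation = np.arctan(a * (neigh_sum - 8.0 * f) / (f + eps))
    # Orientation, Eq. (9): Sobel gradients in x and y, combined with arctan2.
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    gx = convolve(f, sobel_x, mode="reflect")
    gy = convolve(f, sobel_x.T, mode="reflect")
    orientation = np.arctan2(gy, gx)
    return excitation, orientation
```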
Support Vector Machine (SVM) for Nodule and Non-Nodule Classification
Pattern classification is the task of assigning an object to a specified class. The SVM was developed by Vapnik to solve classification problems, and Cortes and Vapnik developed the present soft-margin version of the SVM classifier at AT&T laboratories in 199529. The theoretical characteristics of the SVM are typically defined for binary classification problems with only two distinct classes.
The basic idea of the SVM is to construct a hyperplane that maximizes the margin between positive and negative examples. The hyperplane is determined by the support vectors closest to the decision surface. The decision surface is defined through inner products of the training data, which allows the input vectors to be mapped into a higher-dimensional inner-product space called the feature space. The input feature vectors are arranged in the N × M matrix shown below.
$$V=\begin{bmatrix}v_{1,1}&v_{1,2}&\cdots&v_{1,M}\\ v_{2,1}&v_{2,2}&\cdots&v_{2,M}\\ \vdots&\vdots&\ddots&\vdots\\ v_{N,1}&v_{N,2}&\cdots&v_{N,M}\end{bmatrix} \quad (10)$$
Here, N denotes the total number of feature vectors and v_i represents an M-dimensional feature vector. During training, the SVM finds the hyperplane in the higher-dimensional space that separates the nodules from the non-nodules.
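A minimal scikit-learn sketch of this training stage is shown below; the RBF kernel, its parameters and the placeholder data are illustrative assumptions (X plays the role of the N × M matrix of Eq. (10)):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(200, 14)              # placeholder WLD feature vectors (e.g. FS-14)
y = np.random.randint(0, 2, 200)         # 1 = nodule, 0 = non-nodule (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)                            # learn the separating hyperplane in feature space
predictions = clf.predict(X)
```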
Experimental Results and Discussion
In this section, we evaluate and analyze the performance of the proposed method on the LIDC dataset of chest CT images30. As the LIDC database contains images collected from various institutions, the spatial resolution and X-ray acquisition parameters vary (slice interval, 0.625–3.0 mm; in-plane resolution, 0.488–0.946 mm; tube voltage, 120–140 kV; tube current, 40–499 mA). In this work, we focus on nodules with a diameter of 5–20 mm that were identified as a nodule by at least one of the four radiologists. From the LIDC database, we considered 84 cases containing 103 nodules in total.
Quantitative metrics for evaluation
The performance of the proposed diagnostic system is evaluated by means of well-known metrics, including sensitivity, accuracy and specificity. These measurements are calculated using True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN), where a TP is a cancer patient correctly detected as having cancer, an FP is a healthy person incorrectly detected as having cancer, a TN is a healthy person correctly identified as healthy, and an FN is a cancer patient incorrectly identified as healthy.
Accuracy
Accuracy is the measure of the classification scheme’s overall effectiveness/usefulness. It can be calculated using the following equation.
$$\text{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN} \quad (11)$$
Sensitivity
Sensitivity is termed as the capability of a classifier to detect positive class patterns. The following equation can be used to obtain it.
$$\text{Sensitivity}=\frac{TP}{TP+FN} \quad (12)$$
Specificity
Specificity is termed as the capability of a classifier to detect negative class patterns. Specificity can be obtained from the following equation.
$$\text{Specificity}=\frac{TN}{TN+FP} \quad (13)$$
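A small helper that evaluates Eqs. (11)–(13) directly from the confusion-matrix counts is given below, together with the counts of the 70–30 split from Table 1 as a usage example:

```python
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)     # Eq. (11)
    sensitivity = tp / (tp + fn)                   # Eq. (12)
    specificity = tn / (tn + fp)                   # Eq. (13)
    return accuracy, sensitivity, specificity

print(metrics(tp=233, fp=2, fn=3, tn=234))         # ~ (0.9894, 0.9873, 0.9915)
```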
To enhance the contrast of the images while preserving the image detail, we first preprocess all images. Each image is divided into blocks of size 8 × 8, and each block is converted into the frequency domain via the DCT. The upper-left element of the block DCT, D(0, 0), is the DC coefficient, while the other sixty-three elements are AC coefficients, as shown in Fig. 6. The coefficients are ordered from the upper-left to the lower-right corner in order of increasing spatial frequency.
Figure 7 illustrates the decomposition of the input image into the low-frequency (DC) and high-frequency (AC) components using the DCT. As shown in the figure, the LF component contains most of the image detail and the HF component is mostly noise.
To avoid over-enhancement and preserve the image details, we enhance only the low-frequency component using CLAHE and keep the high-frequency component unchanged. The visual results of the proposed CLAHE-DCT method are compared with the classical histogram equalization method in Fig. 8.
As shown in Fig. 8, histogram equalization does not improve the image detail information and tends to over-enhance the input image. In contrast, our proposed method preserves the edge and texture details while sufficiently improving the image brightness.
In the next step, the background is removed from the images via the operator used in26. For segmentation, we use differential-evolution-based optimal thresholding25, because simple thresholding fails to achieve good performance. After candidate nodule extraction, feature extraction from the candidate nodules is the next important step.
To perform the feature extraction step, we first compute the differential excitation and orientation components of an image. Each of the differential excitation and orientation images is then divided into N non-overlapping blocks R1, R2, R3, …, RN, and a WLD histogram Hn (n = 1, 2, 3, …, N) is constructed for each block of the two component images. In the next step, the WLD histograms of all blocks are integrated to construct an enhanced feature vector that can be used for classification.
In the last step, the feature vectors of the differential excitation component (EHist) and the orientation component (OHist) are fused to generate a more robust representation of the input image. Figure 9 illustrates the feature extraction process.
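The block-wise histogram construction can be sketched as follows; the number of blocks, the number of bins and the histogram range are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def blockwise_histograms(component, blocks_per_side=2, bins=16, value_range=(-np.pi, np.pi)):
    """Split a WLD component image into non-overlapping blocks and concatenate per-block histograms."""
    h, w = component.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    feats = []
    for by in range(blocks_per_side):
        for bx in range(blocks_per_side):
            block = component[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=value_range, density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Fused descriptor: excitation histograms (EHist) followed by orientation histograms (OHist).
# feature_vector = np.concatenate([blockwise_histograms(excitation), blockwise_histograms(orientation)])
```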
Feature vectors of different sizes, FS-10, FS-12, FS-14, FS-16 and FS-18, are extracted in the feature extraction step.
For training and testing purpose, we divided the dataset in the following manner:
70% and 30% training and testing ratio
50% and 50% training and testing ratio
30% and 70% training and testing ratio
SVM results with different training to testing ratio are shown in Fig. 10.
Sensitivity and specificity of FS-14 for 70-30 training and testing ratio
To perform the experiment with the 70–30% training-to-testing ratio, we divided the dataset so that 1100 samples are reserved for training and 472 samples are used for testing; the testing set contains 236 non-nodules and 236 nodules. For the 50–50% training-to-testing ratio, we used 393 samples as the training set and 393 samples as the testing set. Similarly, for the 30–70% training-to-testing ratio, the numbers of training and testing samples are 472 and 1100, respectively. The numbers of correctly classified and misclassified nodules/non-nodules are shown in Table 1.
Table 1.
| Training to Testing Ratio (%) | TP | FP | FN | TN | Total Nodules | Total Non-Nodules |
|---|---|---|---|---|---|---|
| 70–30 | 233 | 2 | 3 | 234 | 236 | 236 |
| 50–50 | 383 | 4 | 10 | 389 | 393 | 393 |
| 30–70 | 456 | 8 | 94 | 542 | 550 | 550 |
TP: True Positive, FP: False Positive, FN: False Negative, TN: True Negative.
We obtained a 98.73% sensitivity, 99.15% specificity and 98.94% accuracy rate for the 70–30% training-to-testing ratio. The results for the 50–50% training-to-testing ratio are 97.45% sensitivity, 98.98% specificity and 98.35% accuracy. We observed that for the 30–70% training-to-testing ratio the performance is reduced, to 82.9% sensitivity, 98.54% specificity and 90.72% accuracy.
Performance evaluation using K-fold cross validation
We have also evaluated the performance of the SVM using k-fold cross validation, with k set to 5, 7 and 10. The performance of the SVM classifier for the different folds is shown in Fig. 11. As shown in Fig. 11, 7-fold cross validation provides better results than 5-fold and 10-fold. We also observed that there is only a small difference in performance across the different values of k, which shows the robustness of the proposed method.
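The k-fold evaluation can be reproduced with scikit-learn as sketched below, assuming the feature matrix X, labels y and SVM pipeline clf from the earlier sketch:

```python
from sklearn.model_selection import cross_val_score

for k in (5, 7, 10):
    scores = cross_val_score(clf, X, y, cv=k, scoring="accuracy")
    print(f"{k}-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```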
To represent the comparison in a better way, we have also plotted curves for the different training-to-testing ratios, which illustrate how well the SVM classifier distinguishes between nodules and non-nodules. The curve of true positive rate (TPR) against false positive rate (FPR) obtained for the SVM is illustrated in Fig. 12. It is worth noting that, although stable results were obtained with the SVM for all three training-to-testing ratios, the true positive rate for the 50–50% ratio is slightly higher than for the 70–30% and 30–70% ratios.
An important part of performance analysis is comparing the results with existing methods reported in the literature. Many methods reported in the literature31–34 (from different domains) follow similar experimental protocols. Such a comparison is necessary to evaluate the significance of a diagnostic method; however, differences in experimental protocols, including the performance metrics, nodule sizes and datasets used, also make this type of comparison very challenging. We have therefore selected those methods that use accuracy, sensitivity and specificity as performance metrics and that also performed experiments on the LIDC dataset. A brief comparison of the proposed method and the methods reported in the literature is provided in Fig. 13. As shown in Fig. 13, our proposed method achieves 99.15% specificity, 98.73% sensitivity and 98.94% accuracy, which shows an improvement over the performance of the existing methods.
Conclusion
In this paper, a novel and effective pulmonary nodule detection framework is proposed. In the initial phase, the contrast of the images is enhanced, which increases the robustness of segmenting images with varying contrast. The transformation from the spatial domain to the frequency domain is performed using the DCT, which reveals features that are difficult to detect in the original spatial domain. A common weakness of most CAD systems is that they fail to perform well on low-contrast medical images. In this study, we have proposed an effective framework for image contrast enhancement in the frequency domain without boosting the noise. The proposed method significantly reduces the false positives among nodule candidates by using the most discriminative texture features. The empirical results provide evidence that the proposed method can efficiently classify nodules and non-nodules. In the future, we plan to use evolutionary algorithms to search for optimal features, and we would also like to ensemble different classifiers to further improve performance.
Acknowledgements
The work reported in this paper was supported by the National Natural Science Foundation of China under Grant 61672080.
Author Contributions
S.A.K. proposed the idea and conceptualization. S.A.K. and S.H. performed data analysis, experimentation and scientific discussions, and prepared the original draft. K.I. and S.Y. supervised the work as well as validated the findings, and helped in revision and organization of the paper. Further, K.I. and S.Y. also supported in funding acquisition.
Competing Interests
The authors declare no competing interests.
Footnotes
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Zhao B, Gamsu G, Ginsberg MS, Jiang L, Schwartz LH. Automatic detection of small lung nodules on CT utilizing a local density maximum algorithm. Journal of Applied Clinical Medical Physics. 2003;4:248–260. doi: 10.1120/1.1582411.
- 2.Silva, A. C., Carvalho, P. C. P. & Gattass, M. Diagnosis of Lung Nodule using Gini Coefficient and skeletonization in computerized Tomography images, in Proceedings of the 2004 ACM symposium on Applied computing, pp. 243–248 (2004).
- 3.Zhang J, Xia Y, Cui H, Zhang Y. Pulmonary nodule detection in medical images: a survey. Biomedical Signal Processing and Control. 2018;43:138–147. doi: 10.1016/j.bspc.2018.01.011.
- 4.Gill G, Beichel RR. An approach for reducing the error rate in automated lung segmentation. Computers in Biology and Medicine. 2016;76:143–153. doi: 10.1016/j.compbiomed.2016.06.022.
- 5.Leader JK, et al. Automated lung segmentation in X-ray computed tomography: development and evaluation of a heuristic threshold-based scheme. Academic Radiology. 2003;10:1224–1236. doi: 10.1016/S1076-6332(03)00380-5.
- 6.Kuhnigk J-M, et al. New tools for computer assistance in thoracic CT. Part 1. Functional analysis of lungs, lung lobes, and bronchopulmonary segments. Radiographics. 2005;25:525–536. doi: 10.1148/rg.252045070.
- 7.Sluimer I, Prokop M, Van Ginneken B. Toward automated segmentation of the pathological lung in CT. IEEE Transactions on Medical Imaging. 2005;24:1025–1038. doi: 10.1109/TMI.2005.851757.
- 8.Pu J, Paik DS, Meng X, Roos J, Rubin GD. Shape break-and-repair strategy and its application to automated medical image segmentation. IEEE Transactions on Visualization and Computer Graphics. 2011;17:115–124. doi: 10.1109/TVCG.2010.56.
- 9.Khan SA, Kenza K, Nazir M, Usman M. Proficient lungs nodule detection and classification using machine learning techniques. Journal of Intelligent & Fuzzy Systems. 2015;28:905–917.
- 10.Lee SLA, Kouzani AZ, Hu EJ. Random forest based lung nodule classification aided by clustering. Computerized Medical Imaging and Graphics. 2010;34:535–542. doi: 10.1016/j.compmedimag.2010.03.006.
- 11.Maeda S, et al. Detection of Lung Nodules in Thoracic MDCT Images Based on Temporal Changes from Previous and Current Images. JACIII. 2011;15:707–713. doi: 10.20965/jaciii.2011.p0707.
- 12.Choi W-J, Choi T-S. Genetic programming-based feature transform and classification for the automatic detection of pulmonary nodules on computed tomography images. Information Sciences. 2012;212:57–78. doi: 10.1016/j.ins.2012.05.008.
- 13.Choi W-J, Choi T-S. Automated pulmonary nodule detection system in computed tomography images: A hierarchical block classification approach. Entropy. 2013;15:507–523. doi: 10.3390/e15020507.
- 14.Orozco HM, Villegas OOV, Sánchez VGC, Domínguez HdJO, Alfaro MdJN. Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine. Biomedical Engineering Online. 2015;14:9. doi: 10.1186/s12938-015-0003-y.
- 15.Akram S, Javed MY, Akram MU, Qamar U, Hassan A. Pulmonary nodules detection and classification using hybrid features from computerized tomographic images. Journal of Medical Imaging and Health Informatics. 2016;6:252–259. doi: 10.1166/jmihi.2016.1600.
- 16.Jaffar, M. A., Siddiqui, A. B. & Mushtaq, M. Ensemble classification of pulmonary nodules using gradient intensity feature descriptor and differential evolution. Cluster Computing, pp. 1–15, 2017.
- 17.Naqi, S. M., Sharif, M. & Jaffar, A. Lung nodule detection and classification based on geometric fit in parametric form and deep learning. Neural Computing and Applications, pp. 1–19, 2018.
- 18.Prewitt JM, Mendelsohn ML. The analysis of cell images. Annals of the New York Academy of Sciences. 1966;128:1035–1053. doi: 10.1111/j.1749-6632.1965.tb11715.x.
- 19.Munir A, Hussain A, Khan SA, Nadeem M, Arshid S. Illumination invariant facial expression recognition using selected merged binary patterns for real world images. Optik. 2018;158:1016–1025. doi: 10.1016/j.ijleo.2018.01.003.
- 20.Lin H-D, Ho D-C. Detection of pinhole defects on chips and wafers using DCT enhancement in computer vision systems. The International Journal of Advanced Manufacturing Technology. 2007;34:567–583. doi: 10.1007/s00170-006-0614-3.
- 21.Rao, K. R. & Hwang, J. J. Techniques and standards for image, video, and audio coding vol. 70: Prentice hall New Jersey (1996).
- 22.Gonzalez, R. & Woods, R. Digital Image Processing. 2nd edn Prentice Hall, New Jersey, vol. 793 (2002).
- 23.Pizer SM, et al. Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing. 1987;39:355–368. doi: 10.1016/S0734-189X(87)80186-X.
- 24.Naqi SM, Sharif M, Yasmin M. Multistage segmentation model and SVM-ensemble for precise lung nodule detection. International Journal of Computer Assisted Radiology and Surgery. 2018;13:1083–1095. doi: 10.1007/s11548-018-1715-9.
- 25.Cuevas E, Zaldivar D, Pérez-Cisneros M. A novel multi-threshold segmentation approach based on differential evolution optimization. Expert Systems with Applications. 2010;37:5265–5271. doi: 10.1016/j.eswa.2010.01.013.
- 26.Jaffar, M. A., Hussain, A., Nazir, M., Mirza, A. M. & Chaudhry, A. GA and morphology based automated segmentation of lungs from CT scan images, in 2008 International Conference on Computational Intelligence for Modelling Control & Automation, pp. 265–270 (2008).
- 27.Chen J, et al. WLD: A robust local image descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2010;32:1705–1720. doi: 10.1109/TPAMI.2009.155.
- 28.Wang B, Li W, Yang W, Liao Q. Illumination normalization based on Weber's law with application to face recognition. IEEE Signal Processing Letters. 2011;18:462–465. doi: 10.1109/LSP.2011.2158998.
- 29.Deka PC. Support vector machine applications in the field of hydrology: a review. Applied Soft Computing. 2014;19:372–386. doi: 10.1016/j.asoc.2014.02.002.
- 30.Armato SG, III, et al. Lung image database consortium: developing a resource for the medical imaging research community. Radiology. 2004;232:739–748. doi: 10.1148/radiol.2323032035.
- 31.Muslim, H. S. M., Khan, S. A., Hussain, S., Jamal, A. & Qasim, H. S. A. A knowledge-based image enhancement and denoising approach. Computational and Mathematical Organization Theory, 1–14 (2018).
- 32.Khan SA, Hussain A, Usman M. Reliable facial expression recognition for multi-scale images using weber local binary image based cosine transform features. Multimedia Tools and Applications. 2018;77:1133–1165. doi: 10.1007/s11042-016-4324-z.
- 33.Khan SA, Ishtiaq M, Nazir M, Shaheen M. Face recognition under varying expressions and illumination using particle swarm optimization. Journal of Computational Science. 2018;28:94–100. doi: 10.1016/j.jocs.2018.08.005.
- 34.Khan SA, Hussain S, Xiaoming S, Yang S. An Effective Framework for Driver Fatigue Recognition Based on Intelligent Facial Expressions Analysis. IEEE Access. 2018;6:67459–67468. doi: 10.1109/ACCESS.2018.2878601.