Medical Physics
2017 May 23;44(7):3556–3569. doi: 10.1002/mp.12208

An integrated segmentation and shape‐based classification scheme for distinguishing adenocarcinomas from granulomas on lung CT

Mehdi Alilou 1,, Niha Beig 1, Mahdi Orooji 1, Prabhakar Rajiah 2, Vamsidhar Velcheti 3, Sagar Rakshit 3, Niyoti Reddy 3, Michael Yang 4, Frank Jacono 5, Robert C Gilkeson 6, Philip Linden 7, Anant Madabhushi 1
PMCID: PMC5988352  NIHMSID: NIHMS970771  PMID: 28295386

Abstract

Purpose

Distinguishing between benign granulomas and adenocarcinomas is confounded by their similar visual appearance on routine CT scans. Unfortunately, owing to the inability to discriminate these lesions radiographically, many patients with benign granulomas are subjected to unnecessary surgical wedge resections and biopsies for pathologic confirmation of the presence or absence of cancer. This suggests the need for improved computerized characterization of these nodules in order to distinguish between the two classes of lesions on CT scans. While there has been substantial interest in the use of textural analysis for radiomic characterization of lung nodules, relatively little work has been done on shape-based characterization of lung nodules, particularly with respect to granulomas and adenocarcinomas. The primary goal of this study is to evaluate the role of 3D shape features for discriminating benign granulomas from malignant adenocarcinomas on lung CT images. Toward this end, we present an integrated framework for segmentation, feature characterization, and classification of these nodules on CT.

Methods

The nodule segmentation method starts with separation of the lung regions from the surrounding anatomy. Next, the lung CT scans are projected into and represented in a three-dimensional spectral embedding (SE) space, allowing for better determination of the boundaries of the nodule. This then enables the application of a gradient vector flow active contour (SEGvAC) model for nodule boundary extraction. A set of 24 shape features is extracted from both the 2D slices and the 3D surface of the segmented nodules, including features pertaining to the angularity, spiculation, elongation, and compactness of the nodule. A feature selection scheme, PCA-VIP, is employed to identify the most discriminating set of features for distinguishing granulomas from adenocarcinomas within a learning set of 82 patients. The features thus identified were then combined with a support vector machine classifier and independently validated on a distinct test set comprising 67 patients. The performance of the classifier on both the training and validation cohorts was evaluated by the area under the receiver operating characteristic (ROC) curve.

Results

We used 82 and 67 studies from two different institutions for training and independent validation, respectively, of the model and the shape features. The Dice coefficient between nodules automatically segmented by SEGvAC and the manual delineations of expert radiologists (readers) was 0.84 ± 0.04, whereas the inter-reader segmentation agreement was 0.79 ± 0.12. We also identified a set of consistent features (Roughness, Convexity, and Sphericity) that were strongly correlated across both manual and automated nodule segmentations (R > 0.80, p < 0.0001) and that capture the marginal smoothness and 3D compactness of the nodules. On the independent validation set of 67 studies, our classifier yielded ROC AUCs of 0.72 and 0.64 for manually and automatically segmented nodules, respectively. On a subset of 20 studies, the AUCs for the two expert radiologists and one pulmonologist were found to be 0.82, 0.68, and 0.58, respectively.

Conclusions

The major finding of this study was that certain shape features appear to differentially express between granulomas and adenocarcinomas and thus computer extracted shape cues could be used to distinguish these radiographically similar pathologies.

Keywords: CADx, Lung CT, nodule characterization, segmentation, shape analysis

1. Introduction

Pulmonary adenocarcinoma represents one of the most common histological types of lung cancer.1 Granulomas in turn represent the most common benign tumor confounders on CT and PET.2 While granulomas only represent one of the several types of possible benign diagnoses, they can be similar in size, shape, and appearance to lung cancers on CT.3 While a PET scan can often be used to distinguish benign from malignant lung nodules, it is less useful for granulomas as these localized areas of infection can also show high uptake on PET during their acute phase of infection.

Over 1 million people in the US annually undergo a CT-guided or bronchoscopic biopsy, and over 60,0004 undergo a surgical wedge resection. These surgical interventions currently represent the only way of definitively ascertaining the presence or absence of cancer within a CT-detected nodule.5 However, more than 30% of pulmonary nodules identified as suspicious on a CT scan and subsequently biopsied or resected turn out to be benign, meaning that nearly $600M is spent annually in the US on unnecessary and invasive surgical procedures.4 While not every patient with a nodule identified on a CT scan will undergo a surgical procedure for diagnostic confirmation of cancer, the majority of patients with an indeterminate nodule will undergo repeat follow-up CT scans. The goal of these repeat scans is to evaluate whether the nodule is increasing in size (and hence possibly cancer), which subjects a number of patients with benign disease to unnecessary radiation. Furthermore, granulomas and slow-growing cancers might both increase at roughly the same rate, rendering the follow-up CT scans largely noninformative.6

Since irregularities along the tumor perimeter can result from internal tumor heterogeneity, differences in the growth patterns and shapes of the nodules could potentially allow for discrimination of granulomas from adenocarcinomas. Studies have been published on characterizing lung nodules based on their appearance.7, 8, 9 However, in these studies,7, 8, 9 characterization is typically subjective and qualitative, or based on very simple quantitative features. For example, the Response Evaluation Criteria In Solid Tumors (RECIST), used to measure tumor response to therapy on CT, is a unidirectional linear measurement that estimates tumor diameter. The RECIST criteria assume a spherical tumor with linear growth uniformly in all directions. Since the linear growth assumption is often violated, high inter-reader variability may occur in the identification of the lesion boundary. Therefore, there is a need for more reliable, reproducible, and quantitative image features.10 Consequently, a number of groups have begun to look at qualitative and semiquantitative shape analysis for shape-based characterization of lung nodules on CT images.

Shape‐based classification of the lung nodules is different from textural analysis. Nonshape features such as textural‐ and intensity‐based features may be more sensitive to intensity and scanner variability.11 In other words, different CT scanners and scanning parameters such as slice thickness and reconstruction algorithms could affect the resulting textural features.12 Shape features on the other hand tend to be less sensitive to differences in image intensity and scanner platforms and potentially more predictive of lesion diagnosis compared to texture features.

In this work, we present an integrated framework for the segmentation, shape-based characterization, and discrimination of adenocarcinomas from granulomas on routine CT images. To the best of our knowledge, this work represents the first attempt to evaluate the role of 3D shape features in discriminating between granulomas and adenocarcinomas on lung CT images.

1.A. Previous work

Table 1 summarizes some recent approaches to shape-based analysis of lung nodules, the approaches most pertinent to this work. One study13 analyzed shape features (e.g., surface area, volume, and surface-to-volume ratio) together with textural and intensity features extracted from CT data of lung and oropharyngeal cancers. Applying unsupervised clustering to the extracted image features, the authors found an association with the underlying gene-expression profiles of lung cancer patients. However, they primarily focused on size-related attributes and not on features relating to the angularity, smoothness, or shape of the nodule.

Table 1.

A summarized list of published studies which employed shape features for image‐based decision support on lung CT images

Study | Type of shape features | Application
Wang et al.7 | Semiquantitative: lobulation, irregularity, spiculation | Association of adenocarcinoma with overall survival rate
Brandman et al.8 | Simple quantitative: maximal nodule diameter | Classifying benign from malignant nodules
Gimenez et al.9 | Qualitative: size | Classifying benign from malignant nodules
Grove et al.14 | A single quantitative feature: convexity | Discrimination of lung adenocarcinomas
Hugo et al.13 | Quantitative: volume, surface area, compactness | Unsupervised clustering of lung, head, and neck cancers

While a number of researchers have proposed lung nodule segmentation and characterization approaches, these approaches have rarely been integrated into a single framework. Tan et al.15 proposed a marker-controlled watershed and active contour-based algorithm for the segmentation of a variety of lesions, ranging from tumors found in patients with advanced lung cancer to small nodules detected in lung cancer screening programs. The limitation of this approach is that the watershed lines used to initialize the active contour are often irregular and sometimes include vessels and other nonlesion structures. A region growing approach was presented16 for segmentation of pulmonary nodules in thoracic CT scans. However, such approaches may not be ideal when the nodule to be segmented shares a large portion of its surface with an adjoining structure of similar density. Wang et al.17 presented a spiral-scanning technique for segmentation of pulmonary nodules in 3D lung CT scans. This method may not be valid for complex lung lesions, as it assumes that each scan line intersects the lesion only once and that the nodule is brighter than its background. A 3D active contour approach has also been proposed.18 Although the problem of pleura-connected nodules was solved there by defining a mask energy to penalize contours that grow beyond the pleura or thoracic wall, this approach, like the region growing approach,16 may not be able to address the issue of vessel-connected nodules or discriminate structures adjacent to or adjoining the nodule.

Figure 1 shows the modules comprising our lung nodule segmentation approach. The approach includes separation of the lung region from the surrounding anatomy, a nonlinear embedding representation of the lung that makes it easier to separate from the rest of the thoracic anatomy, removal of non-nodule structures with a rule-based classifier, and finally a single-click active contour-based segmentation to extract the nodule surface. Our segmentation approach is robust because segmentation errors caused by both pleura- and vessel-attached nodules are eliminated by separating the lung regions (Fig. 1(a)) and removing non-nodule structures. Moreover, a spectral embedding-based active contour is presented. Spectral embedding (SE) is a nonlinear dimensionality reduction method19, 20 that forms an affinity matrix via a prespecified kernel function. The kernel function enables a mapping of the original set of image features (or intensities) to a new kernel space, where spectral decomposition is then applied to the corresponding graph Laplacian. The SE framework itself is general and can be used for other data clustering applications. Laplacian embedding is particularly useful for reducing the dimensionality of data. It yields a low-dimensional representation that best preserves the structure of the original manifold, in the sense that points that are close to each other on the original manifold remain close after embedding. In other words, the embedding encourages small groups of similar pixels to cluster tightly. Each individual pixel in the original lung CT image is then represented by the corresponding values of the eigenvectors obtained via the spectral decomposition step. The SE representation of the lung provides strong gradients at the margins of the nodules, which allows an active contour model to stop evolving at the nodule boundary.
SE has been employed in several studies for image segmentation and clustering;21, 22 in fact, this approach builds on the one initially presented by Agner et al.23 for segmenting breast lesions on MRI. However, our approach, unlike that of Agner et al., also comprises the gradient vector flow (GVF) active contour. The GVF forces, calculated over the whole image domain, are used to drive the AC.
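The spectral decomposition step described above can be sketched in a few lines of numpy. This is a minimal illustration, assuming a Gaussian (heat) kernel on pixel intensities and the symmetric normalized graph Laplacian; the paper's exact kernel and normalization may differ.

```python
import numpy as np

def spectral_embedding(intensities, n_components=3, sigma=1.0):
    """Map pixel intensities to a low-dimensional spectral (Laplacian) embedding.

    intensities : (N,) array of pixel values from a region of interest.
    Returns an (N, n_components) embedding in which similar pixels cluster.
    sigma (kernel bandwidth) is an illustrative assumption.
    """
    # Gaussian (heat) kernel affinity between every pair of pixels.
    d = intensities[:, None] - intensities[None, :]
    W = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    # Symmetric normalized graph Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(intensities)) - D_inv_sqrt @ W @ D_inv_sqrt
    # The eigenvectors of the smallest nontrivial eigenvalues give the
    # embedding coordinates used to represent each pixel.
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]
```

In the full pipeline each pixel would carry the eigenvector values as its new coordinates, from which the tensor gradients of Section 2.A are derived.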

Figure 1.


The process of the segmentation method includes: (a) separation of the lung region from the surrounding anatomy, (b) removal of non-nodule structures with a rule-based classifier and spectral embedding representation of the lung region, and (c) applying single-click SEGvAC to obtain a refined segmentation of the nodules.

The main workflow of our integrated segmentation and feature extraction framework is shown in Fig. 2. Following application of SEGvAC for nodule segmentation, a set of 24 initial 2D and 3D shape features is extracted, which is subsequently reduced to the six most discriminatory attributes using the PCA-VIP feature ranking technique.24, 25 PCA-VIP involves mapping the high-dimensional data via principal component analysis (PCA)26 and subsequently ranking features based on their contributions to both the structure of the PCA embedding and the class labels. Most feature selection schemes attempt to identify important features in the original space, but are often limited by dependencies and interactive effects among features. PCA-VIP, however, performs feature selection by first mapping the original data onto a set of orthogonal vectors, thus mitigating the issue of feature dependencies.24 The top-ranked PCA-VIP features are then used to train a support vector machine (SVM) classifier to distinguish granulomas from adenocarcinomas on a training set of lung CT images. The SVM is then independently validated on a held-out test set drawn from a completely different institution than the training set.

Figure 2.


The main workflow of the framework includes: (a) segmentation of the lung regions and nodules, (b) extraction of the 2D and 3D shape features, (c) feature ranking, and (d) shape-based classification.

This work makes the following contributions: (a) an efficient single-click segmentation method (SEGvAC) for nodule segmentation, which is then used for subsequent feature extraction, and (b) an evaluation of the role of 3D shape features for noninvasive discrimination of granulomas from adenocarcinomas. Pertinently, this is an integrated segmentation, feature extraction, and classification approach for the problem of separating granulomas from adenocarcinomas on lung CT.

The remainder of this paper is organized as follows. Section 2 describes the methods, including the segmentation, feature analysis, and classification schemes used in this work. In Section 3, we present the experimental results and accompanying discussion. Concluding remarks are presented in Section 4.

2. Methods

2.A. Lung and nodule segmentation

In the first step, inspired by the methods in27, 28 the lung regions are isolated from the surrounding anatomy. An optimal threshold separating body voxels from nonbody voxels (i.e., the low-density voxels of the lung and surrounding air) is identified, yielding an initial lung mask. This initial lung mask is further refined by applying a morphological hole-filling algorithm to its logical complement.
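The thresholding and hole-filling steps can be sketched as follows. The fixed HU cutoff and the use of `scipy.ndimage` are illustrative assumptions; the paper identifies its threshold automatically rather than using a preset value.

```python
import numpy as np
from scipy import ndimage

def initial_lung_mask(ct_slice, threshold=-400):
    """Rough lung mask: threshold low-density voxels, then fill holes.

    ct_slice  : 2D array of Hounsfield Units.
    threshold : HU cutoff separating air/lung from body (assumed value;
                the paper derives its threshold from the data).
    """
    air_like = ct_slice < threshold                # lung + surrounding air
    body = ndimage.binary_fill_holes(~air_like)    # solid body silhouette
    lung = body & air_like                         # low-density voxels inside the body
    # Morphological hole filling on the lung mask itself closes small
    # dense structures (e.g., vessels) left inside the lung region.
    return ndimage.binary_fill_holes(lung)
```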

Having extracted the initial region of interest (i.e., the lung regions), the next step is the automatic segmentation of the nodule. We employ an active contour (AC)29 scheme. The approach assumes that the image plane $\Omega \subset \mathbb{R}^2$ is partitioned into two regions by a curve $\Upsilon$. The foreground region is defined as $\Omega_1$ and the background region as $\Omega_2$. In other words, the image plane comprises the union of the region of interest, the background, and the evolving contour ($\Omega = \Omega_1 \cup \Omega_2 \cup \Upsilon$). In simplified form, the energy functional of an edge-based AC is defined as:

$E = \alpha E_1 + \beta E_2$, (1)

where $E_2$ refers to the internal forces that maintain the integrity and elasticity of the contour, and the image force $E_1$ is defined as:

$E_1 = \int_{\Upsilon} g(v(c))\,dc$, (2)

where $c = (x, y)$ corresponds to a voxel in the 2D image plane, $v(c)$ is the intensity value of the voxel $c$, and $g(v(c))$ is defined as:

$g(v(c)) = \dfrac{1}{1 + \psi(v(c))}$. (3)

The gradient function $\psi(v(c))$ is often calculated from the gray-level gradient. In this work, however, instead of employing the gray-level gradient, we use a tensor gradient function derived from the spectral embedding representation. Agner et al.23 showed that the spectral embedding-based tensor gradient yielded better region- and boundary-based statistics and stronger gradients. Figure 3(a) illustrates the intensity gradient of a CT slice. The tensor gradients derived from the spectral embedding representation of the images are shown in Fig. 3(b). Note that the edges of the nodule in Fig. 3(b), obtained from spectral embedding, appear more pronounced than those shown in Fig. 3(a).
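The edge-stopping function of Eq. (3) can be sketched directly; here the gradient magnitude from `np.gradient` stands in for $\psi(v(c))$, and the input may be either the raw CT slice or one channel of its spectral-embedding representation (the tensor-gradient computation itself is not reproduced).

```python
import numpy as np

def stopping_map(image):
    """Edge-stopping function g = 1 / (1 + psi) from a gradient magnitude.

    Values near 0 occur at strong edges (halting the contour), values near
    1 in flat regions (letting the contour evolve freely).
    """
    gy, gx = np.gradient(image.astype(float))
    psi = np.hypot(gx, gy)        # gradient magnitude, standing in for psi(v(c))
    return 1.0 / (1.0 + psi)
```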

Figure 3.


(a) Image gradients, (b) tensor gradients obtained from SE.

Unlike23 where the authors employed an edge-based AC, we employed the gradient vector flow (GVF) AC.30 The GVF forces, calculated over the whole image domain, are used to drive the AC. Active contours driven by GVF do not need to be initialized very close to the boundary. The GVF forces are calculated by applying generalized diffusion equations to both components of the gradient of an image edge map. To segment the nodules, SEGvAC is manually initialized by a single point click; the starting and ending slice numbers are also provided to SEGvAC. However, as can be seen in Fig. 3(b), the gradient values corresponding to objects not of interest, such as vessel-like structures and image artifacts, might affect our boundary-based SEGvAC. Therefore, prior to employing SEGvAC, we apply a rule-based classifier to remove the unwanted structures based on their 3D geometric properties. These properties include bounding box measures and the elongation of the 3D structures, defined as the length of the major axis divided by the length of the minor axis. Since lung nodules are usually about 5–30 mm in size, 3D structures that do not fit this size criterion are eliminated via the rule-based classifier. Candidate objects for inclusion or exclusion were examined in terms of the convexity and elongation measures introduced in31, 32 for distinguishing vessel-like structures from more convex, sphere-like objects. This filtering scheme works ideally when the 3D blobs inside the volume of interest are separated from each other. To specifically address the issue of vessel-connected nodules, a set of morphological operations to isolate them was employed: erosion followed by dilation with disk-based structuring elements. To detach nodules from vessels, the erosion kernel was chosen to be slightly larger than the dilation kernel.
Note that a description of the rules and thresholds applied in the segmentation process is provided in the Appendix.
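The asymmetric erosion/dilation step for detaching vessel-connected nodules can be sketched as follows. The specific kernel radii are illustrative assumptions; the paper only states that the erosion element is slightly larger than the dilation element.

```python
import numpy as np
from scipy import ndimage

def detach_vessels(mask, erode_radius=3, dilate_radius=2):
    """Erosion followed by dilation with disk structuring elements to
    detach thin vessel-like attachments from a nodule.

    The erosion radius is deliberately larger than the dilation radius
    (radii here are assumed values), so thin connections are removed while
    the bulk of the nodule is approximately restored.
    """
    def disk(r):
        # Boolean disk-shaped structuring element of radius r.
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (x * x + y * y) <= r * r

    eroded = ndimage.binary_erosion(mask, structure=disk(erode_radius))
    return ndimage.binary_dilation(eroded, structure=disk(dilate_radius))
```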

The segmentation performance of SEGvAC was determined by measuring the overlap between manually and automatically segmented nodules using the Dice similarity coefficient.33 With M and A denoting the manual and automatic nodule segmentations, the overlap is defined as:

$S = \dfrac{2 \times |M \cap A|}{|M| + |A|}$. (4)

Additionally, we measured the over- and under-segmentation errors, quantified respectively as:

$\epsilon_{ov} = \dfrac{|\bar{M} \cap A|}{|M|}$, (5)
$\epsilon_{un} = \dfrac{|M \cap \bar{A}|}{|M|}$. (6)
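Equations (4)–(6) translate directly to code when the segmentations are binary voxel masks and $|\cdot|$ denotes voxel counts:

```python
import numpy as np

def segmentation_metrics(manual, auto):
    """Dice overlap (Eq. 4) plus over- and under-segmentation errors
    (Eqs. 5-6), with set sizes taken as voxel counts of binary masks."""
    M = manual.astype(bool)
    A = auto.astype(bool)
    dice = 2.0 * np.sum(M & A) / (np.sum(M) + np.sum(A))
    over = np.sum(~M & A) / np.sum(M)   # automatic voxels outside the manual mask
    under = np.sum(M & ~A) / np.sum(M)  # manual voxels missed by the automatic mask
    return dice, over, under
```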

2.B. Feature extraction and selection

A set of 24 computer-extracted shape features in 2D and 3D was extracted for nodule characterization. For all the 2D sections comprising the volume of the nodule, features were extracted on a planar basis; the mean and standard deviation of each feature was then computed over the sections spanning the volume of the nodule. Figures 4(a)–4(c) show the CT slice containing a nodule, the 2D segmentations of the nodule obtained via SEGvAC, and the corresponding 3D rendering of the lung nodule from the individual 2D sections, respectively. The 3D rendering of the nodule was created via an isosurface rendering technique.34 Note that the isosurface rendering was employed solely for purposes of illustration; the quality of the features was therefore not affected by the 3D rendering method.
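The per-slice aggregation described above (a planar feature summarized by its mean and standard deviation over the nodule's sections) can be sketched generically; `feature_fn` is a placeholder for any of the 2D measures in Table 2.

```python
import numpy as np

def aggregate_slice_features(nodule_mask_3d, feature_fn):
    """Apply a 2D feature function to every axial section of a segmented
    nodule and summarize it by its mean and standard deviation, as done
    for the 2D members of the 24-feature set. Empty sections are skipped."""
    values = [feature_fn(sl) for sl in nodule_mask_3d if sl.any()]
    return np.mean(values), np.std(values)
```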

Figure 4.


(a) A CT slice containing a section of a nodule, (b) individual contours for the same nodule obtained via the SEGvAC model, and (c) the isosurface rendering of the same nodule.

Since irregularities along the nodule surface may result from internal heterogeneity and differences in growth patterns, a set of shape features capable of capturing and quantifying the differences between adenocarcinomas and granulomas is required. For example, to capture the spiculations around a nodule, the convex hull is computed by defining the smallest convex polygon enclosing a planar tumor region of interest (ROI). The convexity of the nodule is then defined as the ratio of the area of the nodule to the area of its convex hull (Fig. 5(a)).
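A planar convexity measure of this kind can be sketched with `scipy.spatial.ConvexHull`. Note one discretization caveat, flagged in the comments: the pixel-count area and the polygon area of the hull are measured slightly differently, so the ratio is only comparable across shapes, not an exact sub-unity fraction.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convexity_2d(mask):
    """Area of a planar nodule section divided by the area of its convex
    hull; spiculated (irregular) margins push the ratio down relative to
    smooth convex sections.

    Caveat: the region area is a pixel count while the hull area is a
    polygon area over pixel centers, so convex shapes can score slightly
    above 1; the measure is meaningful for comparisons between shapes.
    """
    pts = np.column_stack(np.nonzero(mask)).astype(float)
    region_area = float(mask.sum())
    hull_area = ConvexHull(pts).volume  # for 2D input, .volume is the hull area
    return region_area / hull_area
```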

Figure 5.


Illustration of the process of feature extraction from the automatically segmented lung nodules. Extraction of 2D shape features from individual 2D sections: (a) convex hull and contour points of the nodule of interest, (b) convex hull, contour points, center of mass, and radial distances from the center of mass to the contour points.

In addition to the convex hull, which enables calculation of the convexity ratio of a nodule, we also compute the width, height, depth, perimeter, area, eccentricity, compactness, radial distance, roughness, elongation, equivalent diameter, and 3D sphericity of each automatically segmented nodule. The list of the computed features and their associated descriptions is provided in Table 2.

Table 2.

Extracted shape features and their associated descriptions. fi corresponds to the feature index

Feature | Description
Size ($f_1$, $f_2$, $f_3$) | Height, width, and depth of the bounding box
Area ($f_4$, $f_5$) | Mean and standard deviation of the sectional area
Perimeter ($f_6$, $f_7$) | Mean and standard deviation of the perimeter
Equivalent diameter ($f_8$, $f_9$) | Mean and stddev; diameter of a circle with the same area as the nodule region, $\sqrt{4 A_r / \pi}$
Eccentricity ($f_{10}$, $f_{11}$) | Mean and stddev; ratio of the distance between the foci of the ellipse having the same second moments as the region to its major axis length
Extent ($f_{12}$, $f_{13}$) | Mean and stddev; region area divided by the area of its bounding box, $A_r / A_{box}$
Compactness ($f_{14}$, $f_{15}$) | Mean and stddev; ratio of $4\pi$ times the area to the squared perimeter, $4\pi A_r / P_r^2$
Radial distance ($f_{16}$, $f_{17}$) | Mean and standard deviation of the distances from the center of the nodule in each slice to the corresponding contour points
Roughness ($f_{18}$, $f_{19}$) | Mean and stddev; perimeter of a nodule region divided by the perimeter of its convex hull, $P_r / P_c$
Elongation ($f_{20}$, $f_{21}$) | Mean and stddev; ratio of the minor to the major axis length
Convexity ($f_{22}$, $f_{23}$) | Mean and stddev; ratio of the area of a tumor slice to the area of its convex hull, $A_r / A_c$
Sphericity ($f_{24}$) | 3D compactness, $36\pi V^2 / A^3$, where $V$ is the volume and $A$ the surface area
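Several of the normalized measures in Table 2 reduce to one-line formulas; each is scaled so that a perfect circle (or sphere) scores exactly 1:

```python
import numpy as np

def compactness_2d(area, perimeter):
    """4*pi*Ar / Pr^2 (f14, f15): equals 1 for a circle, < 1 otherwise."""
    return 4.0 * np.pi * area / perimeter ** 2

def sphericity_3d(volume, surface_area):
    """36*pi*V^2 / A^3 (f24, 3D compactness): equals 1 for a sphere."""
    return 36.0 * np.pi * volume ** 2 / surface_area ** 3

def equivalent_diameter(area):
    """Diameter of the circle with the same area as the nodule section (f8, f9)."""
    return np.sqrt(4.0 * area / np.pi)
```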

A total of 24 features (both 2D and 3D) are thus extracted for each nodule, resulting in a 24-dimensional feature vector. To avoid problems arising from differing slice thicknesses, slice thickness and voxel dimensions were taken into account during the calculation of the shape features. To identify the most discriminating of the 24 features, we performed feature ranking using the PCA-based variable importance projection (PCA-VIP) scheme.24 PCA-VIP involves mapping the high-dimensional data via principal component analysis and subsequently ranking features based on their contributions to both the structure of the PCA embedding and the class labels. A PCA-VIP score is then calculated for each feature as follows:

$\pi_j = \sqrt{ \dfrac{ m \sum_{i=1}^{h} b_i^2 \, (t_i^T t_i) \left( p_{ji} / \lVert p_i \rVert \right)^2 }{ \sum_{i=1}^{h} b_i^2 \, (t_i^T t_i) } }$, (7)

where $m$ is the number of features in the original high-dimensional feature space, $h$ is the number of principal components, and $b_i$ refers to the coefficients that solve the regression equation

$y = T B^T$, (8)

which correlates the principal components with the outcome vector $y$; the fraction $p_{ji} / \lVert p_i \rVert$ corresponds to how much the $j$th feature contributes to the $i$th principal component in the low-dimensional embedding. The top features are identified as those that maximally contribute to one or more components of the low-dimensional PCA embedding.
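Equations (7) and (8) can be sketched in numpy. This is a rough illustration, assuming PCA via SVD of the centered data and per-component least-squares coefficients $b_i$; the original PCA-VIP formulation24 may differ in detail.

```python
import numpy as np

def pca_vip_scores(X, y, h=None):
    """Sketch of PCA-VIP feature ranking (Eq. 7).

    X : (n_samples, m) feature matrix; y : (n_samples,) outcome vector.
    Uses PCA scores t_i, loadings p_i, and per-component regression
    coefficients b_i of y on the (orthogonal) scores.
    """
    Xc = X - X.mean(axis=0)
    # PCA via SVD: T = U*S are the component scores, columns of V the loadings.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    h = h or min(X.shape)
    T, P = (U * S)[:, :h], Vt[:h].T                  # scores (n x h), loadings (m x h)
    b = T.T @ y / np.sum(T ** 2, axis=0)             # b_i solving y = T B^T (Eq. 8)
    ss = b ** 2 * np.sum(T ** 2, axis=0)             # b_i^2 (t_i^T t_i)
    m = X.shape[1]
    weights = (P / np.linalg.norm(P, axis=0)) ** 2   # (p_ji / ||p_i||)^2
    return np.sqrt(m * (weights @ ss) / ss.sum())    # Eq. 7
```

A useful sanity property: because the squared normalized loadings of each component sum to 1 over features, the squared scores sum to $m$.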

2.C. Support vector machine (SVM)‐based classification

We adopted an SVM classifier in conjunction with the top-ranked PCA-VIP features to discriminate the two pathologies of interest (adenocarcinomas and granulomas). Classifier training was done solely on the learning set. Since the data may not be linearly separable in the original high-dimensional feature space, the radial basis function (RBF) kernel35 was selected for use with the SVM; the choice of kernel was determined via empirical testing on the training set. The RBF kernel is applied to the training samples, which are in instance-label form $(x_i, y_i)$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$. The RBF kernel is defined as:

$K(x_i, x_j) = \exp(-\gamma \lVert x_i - x_j \rVert^2), \quad \gamma > 0$. (9)

The training procedure was carried out using a threefold cross-validation resampling method. Two SVM classifiers, $C_{Red}$ and $C_{All}$, were trained, corresponding respectively to the SVM trained with just the PCA-VIP-identified features and with all 24 features. The performance of $C_{Red}$ and $C_{All}$ was measured via the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.
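The cross-validated training and independent validation described above can be sketched with scikit-learn. The hyperparameter values (`C`, `gamma`) and the use of Platt-scaled probabilities are illustrative assumptions; the paper does not report its exact settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

def evaluate_svm(X_train, y_train, X_test, y_test, gamma="scale", C=1.0):
    """RBF-kernel SVM evaluated with threefold cross-validation on the
    training set, then refit and applied to an independent test set,
    mirroring the C_Red / C_All setup (hyperparameters are assumed)."""
    svm = SVC(kernel="rbf", gamma=gamma, C=C, probability=True)
    cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    # Out-of-fold probabilities give an unbiased training-set AUC.
    cv_probs = cross_val_predict(svm, X_train, y_train, cv=cv,
                                 method="predict_proba")[:, 1]
    train_auc = roc_auc_score(y_train, cv_probs)
    # Refit on the full training set for independent validation.
    svm.fit(X_train, y_train)
    test_auc = roc_auc_score(y_test, svm.predict_proba(X_test)[:, 1])
    return train_auc, test_auc
```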

2.D. Human versus machine performance comparison

To better evaluate the quality of the segmentation results of SEGvAC, we compared the automated segmentation of the nodule against that of two readers in terms of the Dice measure. In addition to machine-reader agreement, inter-reader segmentation agreement was computed as well. Two board-certified attending radiologists, with 24 and 12 years of experience in thoracic and chest radiology, respectively, were asked to manually segment nodules from 20 studies chosen from D1 (i.e., the validation set). The CT images and the coordinates of the nodule centers were provided to the human readers. The readers were blinded to the pathology of the nodules, and the nodules were size-matched between adenocarcinomas and granulomas.

In addition to segmentation, the classification performance of our shape-based classifier was compared against the diagnoses of two board-certified attending radiologists, with 24 and 12 years of experience in thoracic radiology, and a pulmonologist with 2 years of experience. The readers were blinded to the true histopathologic diagnoses of a total of 20 cases pulled from D1. Readers were asked to assign a score from 1 to 5 to each nodule, with 1 referring to high confidence that the nodule is benign, 2 to a diagnosis of "mostly benign", 3 to "not sure", 4 to "mostly malignant", and 5 to "malignant". The classification performance of the machine and the human readers was then compared on these 20 cases.

3. Experimental results and discussion

3.A. Data

This study included two independent cohorts totaling 149 patients from two different institutions. Cohort 1 (D1) comprised 67 patients from University Hospitals (UH) of Cleveland, while cohort 2 (D2) comprised 82 patients from the Cleveland Clinic (CCF). For every patient study considered, histopathologic confirmation (via surgical wedge resection, CT-guided biopsy, or bronchoscopy) was available to determine whether the nodule was a granuloma or an adenocarcinoma. In total, there were 80 cases with adenocarcinomas and 69 cases with granulomas across D1 and D2. The number of slices per scan ranged from 126 to 385, and the slice thickness of the CT scans ranged from 1 to 6 mm. The slice thickness (Slc) distribution of the CT scans in both datasets is as follows: 63 cases with Slc ⩽ 1 mm, 41 cases with 1 < Slc ⩽ 2 mm, 21 cases with 2 < Slc ⩽ 4 mm, and 24 cases with 4 < Slc ⩽ 6 mm. The CT images were obtained with a Siemens scanner at a peak kilovoltage of 120–140 kVp, with a tube current–exposure product varying from 25 to 40 mAs depending on patient conditions. Each slice had an XY planar resolution of 512 × 512 pixels with 16-bit gray-scale resolution in Hounsfield Units (HU). Table 3 provides the details corresponding to D1 and D2.

Table 3.

Class distribution of the two categories of nodules in the learning and validation sets

Data set | Adenocarcinomas | Granulomas | Total
D1 (UH) | 34 | 33 | 67
D2 (CCF) | 46 | 36 | 82
Total | 80 | 69 | 149

3.B. Segmentation accuracy

Our segmentation approach was evaluated on a cohort of 149 chest CT studies including solid, part-solid, and ground glass opacity (GGO) nodules. The Dice coefficient for nodules segmented by SEGvAC compared to the readers' ground-truth segmentations was 0.84 ± 0.04. The ground-truth segmentations were obtained manually by readers using the 3D Slicer imaging software;36 the readers were naive to the pathologic diagnosis of the nodules. Figures 6(a)–6(c) show slices of a CT image including part of a solid nodule with the corresponding manual initialization for the SEGvAC model. Figures 6(d)–6(f) show the final segmentation results of a typical active contour model driven by image intensity gradients, while Figs. 6(g)–6(i) show the final segmentation results of SEGvAC. As can be appreciated by comparing Figs. 6(d)–6(f) and 6(g)–6(i), SEGvAC appears better able to capture the spiculations on the nodule than the standard active contour approach employing just the image gradients alone (as opposed to the tensor gradients). Table 4 summarizes the quantitative results of the segmentation performance of the SEGvAC model.

Figure 6.


Comparison of the nodule segmentation performance between SEGvAC and a typical AC model on nodules corresponding to three different slices of a lung CT scan. (a)–(c) Initialization of the active contour model for three different cross-sectional slices of a nodule and manual segmentation of the nodules by a reader, (d)–(f) AC segmentation driven by image intensity gradients, (g)–(i) SEGvAC segmentation results driven by tensor gradients.

Table 4.

Summary statistics of segmentation performance for the SEGvAC and standard AC models on a total of 149 patient studies from two different institutions. The ground truth used for quantitative evaluation was via a human reader's annotation of the nodule using 3D slicer

Nodule type | SEGvAC: S | SEGvAC: ϵov | SEGvAC: ϵun | AC: S | AC: ϵov | AC: ϵun
Adeno | 0.85 ± 0.04 | 0.25 ± 0.05 | 0.08 ± 0.04 | 0.71 ± 0.06 | 0.34 ± 0.06 | 0.13 ± 0.05
Granu | 0.84 ± 0.05 | 0.19 ± 0.07 | 0.09 ± 0.05 | 0.74 ± 0.05 | 0.33 ± 0.05 | 0.13 ± 0.09
All | 0.84 ± 0.04 | 0.22 ± 0.06 | 0.08 ± 0.04 | 0.72 ± 0.05 | 0.33 ± 0.05 | 0.07 ± 0.04

3.C. Identifying most predictive features

In parallel to PCA‐VIP, hierarchical clustering of the shape features revealed clusters with distinct radiomic attributes across the various samples in the learning set (D2). The heat map shown in Fig. 7 represents the samples of the training cohort arranged along the rows and the emerging feature clusters across the columns. As may be observed from Fig. 7, most of the features (i.e., columns) have almost the same values (i.e., colors) for samples (i.e., rows) belonging to both the adenocarcinoma and granuloma classes. However, a feature cluster involving features 14, 18, 22, 12, 10, and 20 appears to be differentially expressed between samples of the two classes. These features corresponded to compactness, roughness, extent, convexity, eccentricity, and elongation.
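The clustered heat map described above can be sketched with SciPy's hierarchical clustering. The data matrix below is a random stand‐in for the real feature matrix (samples × 24 shape features, normalized to [0, 1]); only the clustering and column‐reordering mechanics are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

# Hypothetical stand-in for the learning set D2: rows are nodules,
# columns are the 24 shape features, normalized to [0, 1].
rng = np.random.default_rng(0)
X = rng.random((40, 24))

# Cluster the feature columns (Ward linkage on the transposed matrix),
# then reorder the columns by the dendrogram leaf order, as a clustered
# heat map such as Fig. 7 would display them.
Z = linkage(X.T, method="ward")
col_order = leaves_list(Z)   # permutation of the 24 feature columns
heatmap = X[:, col_order]    # samples x reordered features
print(heatmap.shape)         # (40, 24)
```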

Figure 7.

The X axis (columns) corresponds to the emerging feature clusters and the Y axis (rows) corresponds to the samples. Colors/intensities of the map represent feature values normalized to [0,1]. The discriminating feature cluster (identified via a parenthesis) represents the set of features found to be differentially expressed between the two classes of nodules. [Color figure can be viewed at wileyonlinelibrary.com]

Figure 8 shows the AUC values, averaged across 50 cross‐validation runs, for the top PCA‐VIP selected features on D2 (the training set only). Note that the classifier attains a maximum AUC value (0.76) with the six top PCA‐VIP features identified on D2, but performance starts to fall off subsequently. This may possibly be on account of the curse of dimensionality (the number of features exceeding the number of training exemplars) and possibly because some of the less important features begin to adversely affect the performance of the classifier. All AUC values shown on this plot were obtained on the learning set alone (D2). The six features with the highest PCA‐VIP scores were identified as the means of Extent, Convexity, Eccentricity, Compactness, Roughness, and Elongation. The PCA‐VIP selected features match the results of the unsupervised feature clustering, which yielded the visually discriminating feature cluster in Fig. 7. Note that the top six features learned from the training set D2 were also used in conjunction with a machine learning classifier on the validation set D1. Figure 9 illustrates the 3D rendering of four adenocarcinomas and four granulomas. The corresponding feature vector of each nodule, in the form of a bar graph, is illustrated in the bottom left of each individual panel. Note that the columns of the bar graph represent the top six features and their heights reflect the feature values. As may be seen from the figure, Convexitymean (f22) appears to significantly overexpress for the adenocarcinomas and appears to be largely underexpressed for the granulomas.
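The feature‐count sweep described above can be sketched as follows. PCA‐VIP itself is not reproduced here: the columns are simply assumed to be pre‐ranked, a nearest‐mean linear score stands in for the SVM, and resubstitution replaces the 50 cross‐validation runs. All data are synthetic.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
# Synthetic stand-in for D2: 82 nodules x 24 rank-ordered shape features,
# with the first six weakly informative (as PCA-VIP found on the real data).
y = rng.integers(0, 2, 82)
X = rng.standard_normal((82, 24))
X[y == 1, :6] += 1.0

aucs = []
for k in range(1, 25):
    # Nearest-mean linear score: a lightweight stand-in for the linear SVM
    w = X[y == 1, :k].mean(axis=0) - X[y == 0, :k].mean(axis=0)
    aucs.append(auc(X[:, :k] @ w, y))
best_k = int(np.argmax(aucs)) + 1  # number of features at peak AUC
```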

Figure 8.

AUC values for the SVM classifier when invoking between 1 and 24 features. The features shown on the X axis are rank‐ordered by their PCA‐VIP scores. [Color figure can be viewed at wileyonlinelibrary.com]

Figure 9.

3D rendering of four adenocarcinomas (a–d) and four granulomas (e–h). Corresponding feature vector of each nodule in the form of a bar graph is illustrated in the bottom left of each individual panel. The corresponding height of each column is a reflection of feature fi's value. [Color figure can be viewed at wileyonlinelibrary.com]

3.D. Classification accuracy

The classification results for both manually and automatically segmented nodules are provided in Table 5. The classifier was trained with D2 and tested independently on D1, using either the reduced or the original set of shape features. While the AUC for CManualRed (i.e., the SVM classifier trained and validated with manually segmented nodules) was 0.72, the classifier trained and validated with automatically segmented nodules (CAutoRed) yielded an AUC of 0.64, the difference between CManualRed and CAutoRed being statistically significant (P = 0.0014). However, it is worth noting that even the results of a fully automated segmentation‐based classifier are comparable with a reader's classification on this cohort. More importantly, the results of CAutoRed and CManualRed clearly suggest that shape‐based features have a role to play in distinguishing granulomas from adenocarcinomas. Corresponding AUC values for the top ranked features are shown in Fig. 10.

Table 5.

Independent validation results: classification is performed with an SVM classifier trained with D2 and tested on D1. The same top six features that were learned on D2 were extracted for D1 and employed in conjunction with the CAutoRed, CManualRed, and CManual,2Red classifiers (the superscript Red denotes the reduced, top‐six feature set; All denotes the original, full feature set)

Classifier      Description                                                                                  AUC
CManualRed      Trained with manually segmented nodules, validated on manually segmented nodules             0.72
CManual,2Red    Trained with manually segmented nodules, validated on automatically segmented nodules        0.65
CManualAll      Trained with manually segmented nodules, validated on automatically segmented nodules        0.58
CAutoRed        Trained with automatically segmented nodules, validated on automatically segmented nodules   0.64
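The train‐on‐D2, test‐on‐D1 protocol of Table 5 can be sketched as below, with synthetic stand‐ins for both cohorts and a simple linear decision rule in place of the SVM; the AUC is computed on the held‐out set via the Mann‐Whitney rank identity.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_cohort(n, rng):
    """Hypothetical cohort: six top shape features per nodule, binary label."""
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, 6))
    X[y == 1] += 0.8  # shift the malignant class in feature space
    return X, y

X_tr, y_tr = make_cohort(82, rng)   # stand-in for the training set D2
X_te, y_te = make_cohort(67, rng)   # stand-in for the validation set D1

# Linear decision rule fit on D2 only (a stand-in for the linear SVM)
mu1 = X_tr[y_tr == 1].mean(axis=0)
mu0 = X_tr[y_tr == 0].mean(axis=0)
w = mu1 - mu0
scores = X_te @ w  # malignancy scores on the held-out D1

# Rank-based AUC (Mann-Whitney identity), computed on D1 alone
order = np.argsort(scores)
ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
pos = y_te == 1
auc = (ranks[pos].sum() - pos.sum() * (pos.sum() + 1) / 2) / (pos.sum() * (~pos).sum())
```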

Figure 10.

Classification performance (AUC) for each individual top‐ranked feature selected by PCA‐VIP. The AUC values were averaged across 50 cross‐validation runs with the SVM. [Color figure can be viewed at wileyonlinelibrary.com]

In order to evaluate the effect of automatic segmentation on the subsequent classification of the nodules, we computed shape features from the nodule contours obtained via automated segmentation. These features were then used to train a machine‐based classifier to predict the probability that the nodule was malignant. As shown in Table 5, the classification AUC for the features corresponding to the automatically segmented nodules on the independent dataset was CAutoRed = 0.64. The performance of CAutoRed was found to be 8% lower than that of CManualRed; this difference may be attributable to nodule segmentation errors on account of SEGvAC. More specifically, the classification performance is linked to the shape features derived from the boundaries. For instance, during manual segmentation, human readers may have tended to emphasize and capture specific spiculations in the margin boundary which the SEGvAC model may have smoothed out. It is possible that some of the discriminative information lies in these finer, more granular attributes of the margin boundary.
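As an illustration of how boundary shape feeds the classifier, the sketch below computes a simple area‐based convexity for a 2D contour. Note this is a stand‐in definition: the paper uses the probabilistic convexity measure of ref. 31, not this ratio.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convexity_2d(contour):
    """Area-based convexity of a closed 2D contour (N x 2 vertex array).

    Defined here as polygon area / convex-hull area: 1 for a convex
    outline, smaller for a spiculated one. Illustrative stand-in only.
    """
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the polygon area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return area / ConvexHull(contour).volume  # .volume is the area in 2D

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

# A spiculated, star-like outline: vertices alternate between two radii
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
r = np.where(np.arange(12) % 2 == 0, 1.0, 0.4)
star = np.c_[r * np.cos(t), r * np.sin(t)]

print(round(convexity_2d(square), 6))  # 1.0 (convex shape)
print(convexity_2d(star) < 0.9)        # True: spiculation lowers convexity
```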

3.E. Effect of manual and automated segmentation on feature selection

The most consistent features across the manually and automatically segmented nodules for the D1 dataset were the means of Extent, Roughness, Convexity, and Sphericity. The correlation between the scores of CManualRed and CAutoRed was found to be R = 0.80, P < 0.0001.

For D2, the features that were found to be consistent across manual and automatic segmentation were Height, Width, Depth, Surface Area, Equivalent diameter (stdev), and the means of Extent, Roughness, Convexity, and Sphericity. The correlation between the scores of CManualRed and CAutoRed was found to be R = 0.95, P < 0.0001. The intersection of the features identified across D1 and D2 comprised the means of Roughness, Convexity, Extent, and Sphericity.

3.F. Reader versus machine performance comparison results

3.F.1. Segmentation performance comparison against readers

The Dice overlap between SEGvAC and the two readers was found to be 0.84 ± 0.07 and 0.83 ± 0.11, respectively, whereas the inter‐reader agreement was 0.79 ± 0.12. All agreement values were computed only from those slices identified as containing a lesion by both readers. The machine‐reader and inter‐reader segmentation agreements in terms of Dice (S) and over‐ and under‐segmentation errors (ϵov, ϵun) are delineated in Table 6. Furthermore, Fig. 11 illustrates inter‐reader and machine‐reader specific Dice values for 20 studies. Note that the Dice values between the machine (i.e., SEGvAC) and the two readers are averaged and then plotted in Fig. 11. The inter‐reader Dice score is slightly lower than the machine‐reader score. This discord might be on account of the following reasons. First, since the human experts were tasked with both initially identifying and segmenting the nodules (although they were provided the approximate location of the nodule of interest from the pathology reports), for some of the nodules the starting and ending slice numbers differed between the readers. In other words, the readers differed on where the nodule began and where it ended: although the readers were segmenting the same nodule, the number of slices that the nodule traversed could have differed. Predictably, when assessing the Dice coefficient in 3D, this resulted in inter‐reader disagreement. On the other hand, the SEGvAC algorithm was run on a slice‐by‐slice basis only on those specific slices on which the human readers had identified the presence of a nodule. Under these conditions, the Dice value for the two readers was 0.66 ± 0.18. Interestingly, when the Dice value was re‐calculated for only those slices where both readers had segmented the nodule (i.e., the intersection of the nodule volumes defined by the human readers), the Dice value increased from 0.66 ± 0.18 to 0.79 ± 0.12. Furthermore, SEGvAC was manually initialized within the nodule. Hence the Dice coefficient for SEGvAC was better aligned with the segmentations of the individual readers than the agreement between the readers themselves.
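The effect of restricting Dice to commonly segmented slices can be illustrated with a toy example: two readers who agree perfectly in‐plane but disagree on the axial extent of a nodule score well below 1 in full‐volume Dice, yet 1.0 when only the shared slices are compared.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary arrays."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def dice_common_slices(vol_a, vol_b):
    """3D Dice restricted to slices where both readers marked the nodule."""
    common = [z for z in range(vol_a.shape[0])
              if vol_a[z].any() and vol_b[z].any()]
    return dice(vol_a[common], vol_b[common])

# Toy volumes: reader A marks slices 1-4, reader B marks slices 2-5,
# with identical in-plane contours, so disagreement is purely axial.
a = np.zeros((7, 5, 5), dtype=bool); a[1:5, 1:4, 1:4] = True
b = np.zeros((7, 5, 5), dtype=bool); b[2:6, 1:4, 1:4] = True
print(round(dice(a, b), 2))        # 0.75 over the full volumes
print(dice_common_slices(a, b))    # 1.0 on the common slices only
```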

Table 6.

Machine‐reader and inter‐reader segmentation agreement computed by Dice measure (S) and corresponding segmentation errors (ϵov, ϵun) on a subset of 20 studies

Comparison            S            ϵov          ϵun
Machine vs reader1    0.84 ± 0.07  0.26 ± 0.10  0.08 ± 0.03
Machine vs reader2    0.83 ± 0.11  0.27 ± 0.12  0.08 ± 0.04
Reader1 vs reader2    0.79 ± 0.12  0.19 ± 0.16  0.18 ± 0.08
Figure 11.

Dice values for SEGvAC vs. human readers and reader1 vs. reader2 for 20 independent test cases. Note that segmentation agreement values between SEGvAC and the two readers are averaged across 20 independent test cases. [Color figure can be viewed at wileyonlinelibrary.com]

3.F.2. Classification performance comparison of machine against readers

The comparison of classification performance between the machine and the human readers on the 20 cases selected from D1 is shown in Fig. 12. An overall AUC of 0.76 was obtained using CManualRed. On the same holdout set, the AUCs for the two radiologists and one pulmonologist (denoted readers 1, 2, and 3) were found to be 0.82, 0.68, and 0.58, respectively. Note that the human‐versus‐machine study was conducted on a small validation set and its primary goal was to show that the shape‐based features appear to hold their own in terms of diagnostic performance relative to the human readers. Interestingly, on this small dataset the machine‐based classifier outperformed two of the three human readers. However, we note that this was a proof‐of‐concept demonstration, and conclusions drawn from these preliminary results need to be interpreted in light of the limited sample size. Moreover, the nodules were size‐matched between the two classes. The AUC associated with the SVM classifier trained with features relating to nodule size (i.e., height, width, and depth of the 3D bounding box associated with each nodule) was found to be 0.54 ± 0.02. This AUC value, only marginally better than random guessing, suggests that features relating to nodule size are unable to appreciably discriminate between granulomas and adenocarcinomas.

Figure 12.

Classification performance comparison for CManualRed (machine) and the three human readers on a set of 20 cases selected from D1. An overall AUC of 0.76 was obtained using CManualRed. The AUCs for the human readers were found to be 0.82, 0.68, and 0.58, respectively. [Color figure can be viewed at wileyonlinelibrary.com]

4. Concluding remarks

Due to their similar visual appearance, distinguishing between benign granulomas and adenocarcinomas is difficult on routine CT scans. The inability to discriminate between benign granulomas and adenocarcinomas has tended to result in a large number of patients with benign pathologies undergoing unnecessary surgical wedge resections or biopsies, an issue that will be exacerbated by the recent decision of the Centers for Medicare and Medicaid Services (CMS) to reimburse annual lung cancer screening via low‐dose CT for heavy smokers and people at high risk for lung cancer. The goal of this work was to evaluate the role of shape attributes in distinguishing between granulomas and adenocarcinomas. To evaluate this hypothesis, we developed an integrated framework for segmentation‐ and shape‐based characterization of the nodules on chest CT scans.

Our framework comprised a nodule segmentation approach that integrated spectral embedding, active contours, and a rule‐based classifier to automatically separate lung nodules from the adjoining lung parenchyma and vessels, and to extract the nodule boundary. A set of 2D and 3D shape features was extracted from the surface of the nodule and used to train a machine‐learning classifier to distinguish granulomas and adenocarcinomas on the training set. The classifier was then independently validated on a separate set of cases. The major findings of our study were that (a) both manual and automated segmentation approaches yielded a similar set of shape features for discriminating granulomas and adenocarcinomas; (b) our automated segmentation approach (SEGvAC) yielded very good concordance against manual segmentations, although future work will be necessary to ensure that the automatic segmentation provides a nodule boundary that is more effective for classification; and (c) the performance of the shape‐based classifier on independent validation, for both automated and manual segmentation, clearly seems to suggest that shape is an important attribute for discriminating granulomas and adenocarcinomas.

Conflicts of interest

The authors have no relevant conflicts of interest to disclose.

Acknowledgments

Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under award numbers 1U24CA199374‐01, R01CA202752‐01A1, R21CA179327‐01, and R21CA195152‐01; the National Institute of Diabetes and Digestive and Kidney Diseases under award number R01DK098503‐02; the DOD Prostate Cancer Synergistic Idea Development Award (PC120857); the DOD Lung Cancer Idea Development New Investigator Award (LC130463); the DOD Prostate Cancer Idea Development Award; the Case Comprehensive Cancer Center Pilot Grant; the VelaSano Grant from the Cleveland Clinic; and the Wallace H. Coulter Foundation Program in the Department of Biomedical Engineering at Case Western Reserve University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Appendix 1.

Rules and thresholds applied in segmentation

In this appendix, we describe the rules and thresholds that were applied in the segmentation process. As mentioned in Section 2.A, the segmentation process comprises two steps: lung region segmentation and nodule segmentation.

Rules for lung region segmentation

The procedure for 3D lung region segmentation is as follows. First, a threshold of −700 Hounsfield units is applied to the input CT images to generate the initial lung mask (Mi). Next, the body mask (Mb) is generated by applying a morphological hole‐filling algorithm to the inverted initial lung mask (¬Mi). A 3D connected‐component labeling algorithm is then used to obtain the connected components of the CT image; by choosing the largest component of the resultant image, which corresponds to the body voxels, we obtain the body mask. The secondary lung mask is then obtained as Ms = ¬Mi ∧ Mb, where ¬Mi is the inverted initial lung mask, Mb is the body mask, and ∧ is the logical "AND" operator. The final lung mask (Mf) is then generated by applying the hole‐filling algorithm to Ms. Finally, the segmented lung image is obtained by superimposing Mf on the input image, which in turn forms the basis for the initial region of interest in the next step of the segmentation component.

Rules for nodule segmentation

Prior to invoking SEGvAC, a rule‐based classifier is used to eliminate vessels and retain nodule‐like structures. First, a threshold of −650 Hounsfield units is applied to the lung regions to eliminate lung tissue. Then, morphological opening followed by a 3D connected‐component labeling algorithm is used to partition the lung regions. Next, a set of 3D geometric properties of the resulting structures is extracted; these properties include bounding box measures and elongation, defined as the length of the major axis divided by the length of the minor axis. Since lung nodules are usually about 5–30 mm in size, 3D structures that fall outside this range are eliminated. Candidate objects were also examined for inclusion or exclusion in terms of convexity and elongation measures. The remaining objects are then converted to the spectral embedding space using a Gaussian kernel and the four dominant eigenvectors. Finally, the single‐click SEGvAC is invoked on the retained structures to obtain the final nodule segmentation.
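These candidate‐filtering rules can be sketched as follows. The elongation cutoff (3.0) and the bounding‐box approximation of the major/minor axes are assumptions, since the text does not give the exact values; the toy volume contains one nodule‐like cube and one vessel‐like tube.

```python
import numpy as np
from scipy import ndimage

def nodule_candidates(lung_hu, spacing_mm=(1.0, 1.0, 1.0),
                      min_mm=5.0, max_mm=30.0, max_elongation=3.0):
    """Rule-based vessel/nodule candidate filter sketched from the text.

    Keeps connected high-attenuation structures whose longest bounding-box
    extent falls in the 5-30 mm range and which are not too elongated.
    The elongation threshold and bounding-box axes are assumptions.
    """
    dense = lung_hu > -650                 # drop aerated lung tissue
    dense = ndimage.binary_opening(dense)  # detach/remove thin vessels
    labels, n = ndimage.label(dense)
    keep = np.zeros_like(dense)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is None:
            continue
        extents = [(s.stop - s.start) * d for s, d in zip(sl, spacing_mm)]
        longest, shortest = max(extents), min(extents)
        if min_mm <= longest <= max_mm and longest / shortest <= max_elongation:
            keep |= labels == i
    return keep

# Toy volume at 1 mm isotropic spacing: one 8 mm cube ("nodule") and
# one 2 x 2 x 25 mm tube ("vessel"), both dense, in aerated lung.
vol = np.full((30, 30, 30), -900.0)
vol[5:13, 5:13, 5:13] = 50.0       # nodule-like cube
vol[20:22, 20:22, 2:27] = 50.0     # vessel-like tube
cand = nodule_candidates(vol)
print(bool(cand[9, 9, 9]), bool(cand[21, 21, 10]))  # True False
```

The morphological opening removes the thin tube entirely, while the cube survives and passes the size and elongation checks.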

References

  • 1. Travis WD, Brambilla E, Muller HK, Harris CC. Pathology and Genetics of Tumours of the Lung, Pleura, Thymus and Heart. France: IARC Press; 2004. [Google Scholar]
  • 2. Swensen SJ, Brown LR, Colby TV, Weaver AL. Pulmonary nodules: CT evaluation of enhancement with iodinated contrast material. Radiology. 1995;194:393–398. [DOI] [PubMed] [Google Scholar]
  • 3. Mukhopadhyay S, Gal AA. Granulomatous lung disease: an approach to the differential diagnosis. Archives Pathol Lab Med. 2010;134:667–690. [DOI] [PubMed] [Google Scholar]
  • 4. Boskovic T, Stojanovic M, Stanic J, et al. Pneumothorax after transbronchial needle biopsy. J Thorac Dis. 2014;6:S427–S434. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Rusu M, Yang M, Rajiah P, et al. Histology‐CT fusion facilitates the characterization of suspicious lung lesions with no, minimal, and significant invasion on CT. Lab Invest. 2015;95:401A–401A. [Google Scholar]
  • 6. Hasegawa M, Sone S, Takashima S, et al. Growth rate of small lung cancers detected on mass CT screening. Br J Radiol. 2000;73:1252–1259. [DOI] [PubMed] [Google Scholar]
  • 7. Wang H, Schabath MB, Liu Y, et al. Semiquantitative computed tomography characteristics for lung adenocarcinoma and their association with lung cancer survival. Clin Lung Cancer. 2015;16:e141–e163. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Brandman S, Ko JP. Pulmonary nodule detection, characterization, and management with multidetector computed tomography. J Thorac Imaging. 2011;26:90–105. [DOI] [PubMed] [Google Scholar]
  • 9. Gimenez A, Franquet T, Prats R, Estrada P, Villalba J, Bague S. Unusual primary lung tumors: a radiologic‐pathologic overview 1. Radiographics. 2002;22:601–619. [DOI] [PubMed] [Google Scholar]
  • 10. Suzuki C, Jacobsson H, Hatschek T, et al. Radiologic measurements of tumor response to treatment: practical approaches and limitations 1. Radiographics. 2008;28:329–344. [DOI] [PubMed] [Google Scholar]
  • 11. Christensen E, Hunter L, Stingo F, Klawikowski S. TU‐C‐103‐08: determination of CT texture variability among several CT scanners. Med Phys. 2013;40:438–438. [Google Scholar]
  • 12. Zhao B, Tan Y, Tsai WY, Schwartz LH, Lu L. Exploring variability in CT characterization of tumors: a preliminary phantom study. Translat Oncol. 2014;7:88–93. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Aerts HJWL, Velazquez ER, Leijenaar RT, et al. Decoding tumor phenotype by noninvasive imaging using a quantitative radiomics approach. Nature commun. 2014;5:4006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Grove O, Berglund AE, Schabath MB, et al. Quantitative computed tomographic descriptors associate tumor shape complexity and intratumor heterogeneity with prognosis in lung adenocarcinoma. PloS One. 2015;10:e0118261. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Tan Y, Schwartz LH, Zhao B. Segmentation of lung lesions on CT scans using watershed, active contours, and Markov random field. Med Phys. 2013;40:043502. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Dehmeshki J, Amin D, Valdivieso M, Ye X. Segmentation of pulmonary nodules in thoracic CT scans: a region growing approach. IEEE Trans Med Imaging. 2008;27:467–480. [DOI] [PubMed] [Google Scholar]
  • 17. Wang J, Engelmann R, Li Q. Segmentation of pulmonary nodules in three‐dimensional CT images by use of a spiral‐scanning technique. Med Phys. 2007;34:4678–4689. [DOI] [PubMed] [Google Scholar]
  • 18. Way TW, Hadjiiski LM, Sahiner B, et al. Computer‐aided diagnosis of pulmonary nodules on CT scans: segmentation and classification using 3D active contours. Med Phys. 2006;33:2323–2337. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Lee G, Rodriguez C, Madabhushi A. Investigating the efficacy of nonlinear dimensionality reduction schemes in classifying gene and protein expression studies. IEEE Trans Comput Biol Bioinform. 2008;5:368–384. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Jamieson AR, Giger ML, Drukker K, Li H, Yuan Y, Bhooshan N. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t‐SNE. Med Phys. 2010;37:339–351. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Weiss Y. Segmentation using eigenvectors: a unifying view. In: IEEE International Conference On Computer Vision. Kerkyra ; 1999:975–982. [Google Scholar]
  • 22. Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell. 2000;22:888–905. [Google Scholar]
  • 23. Agner SH, Xu J, Madabhushi A. Spectral embedding based active contour (SEAC) for lesion segmentation on breast dynamic contrast enhanced magnetic resonance imaging. Med Phys. 2013;40:032305. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Ginsburg SH, Tiwari P, Kurhanewicz J, Madabhushi A. Variable ranking with PCA: finding multiparametric MR imaging markers for prostate cancer diagnosis and grading. In: Madabhushi A, Dowling J, Huisman H, Barratt D, eds. Prostate Cancer Imaging. Image Analysis and Image‐Guided Interventions. Toronto, Canada: Springer Science & Business Media; 2011:146–157. [Google Scholar]
  • 25. Ginsburg SH, Viswanath SE, Bloch BN, et al. Novel PCA‐VIP scheme for ranking MRI protocols and identifying computer‐extracted MRI measurements associated with central gland and peripheral zone prostate tumors. J Magn Reson Imaging. 2015;41:1383–1393. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Jolliffe I. Principal Component Analysis. Wiley StatsRef: Statistics Reference Online; 2014. http://onlinelibrary.wiley.com/doi/10.1002/9781118445112.stat06472/abstract;jsessionid=53219DF2D2F5F4A60FC3BFAE86ADA996.f02t01 [Google Scholar]
  • 27. Hu SH, Hoffman EA, Reinhardt JM. Automatic lung segmentation for accurate quantitation of volumetric X‐ray CT images. IEEE Trans Med Imaging. 2001;20:490–498. [DOI] [PubMed] [Google Scholar]
  • 28. Leader J K, Zheng B, Rogers RM, et al. Automated lung segmentation in X‐ray computed tomography: development and evaluation of a heuristic threshold‐based scheme1. Academic Radiol. 2003;10:1224–1236. [DOI] [PubMed] [Google Scholar]
  • 29. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vision. 1988;1:321–331. [Google Scholar]
  • 30. Xu CH, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans Image Process. 1998;7:359–369. [DOI] [PubMed] [Google Scholar]
  • 31. Rahtu E, Salo M, Heikkila J. A new convexity measure based on a probabilistic interpretation of images. IEEE Trans Pattern Anal Mach Intell. 2006;28:1501–1512. [DOI] [PubMed] [Google Scholar]
  • 32. Stojmenovic M, Zunic J. Measuring elongation from shape boundary. J Math Imaging Vis. 2008;30:73–85. [Google Scholar]
  • 33. Zijdenbos AP, Dawant BM, Margolin RA, Palmer AC. Morphometric analysis of white matter lesions in MR images: method and validation. IEEE Trans Med Imaging. 1994;13:716–724. [DOI] [PubMed] [Google Scholar]
  • 34. Thevenaz PH, Unser M. Precision isosurface rendering of 3D image data. IEEE Trans Image Process. 2003;12:764–775. [DOI] [PubMed] [Google Scholar]
  • 35. Burges CJC. A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov. 1998;2:121–167. [Google Scholar]
  • 36. Fedorov A, Beichel R, Kalpathy‐Cramer J, et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30:1323–1341. [DOI] [PMC free article] [PubMed] [Google Scholar]
