Abstract
Purpose:
The authors propose a computer-aided diagnosis (CAD) system for prostate cancer to aid in improving the accuracy, reproducibility, and standardization of multiparametric magnetic resonance imaging (MRI).
Methods:
The proposed system utilizes two MRI sequences [T2-weighted (T2W) MRI and high-b-value (b = 2000 s/mm²) diffusion-weighted imaging (DWI)] and texture features based on local binary patterns. A three-stage feature selection method is employed to provide the most discriminative features. The authors included a total of 244 patients. The CAD system was trained on 108 patients (78 MR-positive prostate cancers and 105 MR-positive benign lesions), and two validation studies were retrospectively performed on 136 patients (68 MR-positive prostate cancers, 111 MR-positive benign lesions, and 117 MR-negative benign lesions).
Results:
In distinguishing cancer from MR-positive benign lesions, an area under the receiver operating characteristic curve (AUC) of 0.83 [95% confidence interval (CI): 0.76–0.89] was achieved. For cancer vs MR-positive or MR-negative benign lesions, the authors obtained an AUC of 0.89 (95% CI: 0.84–0.93). The performance of the CAD system was not dependent on the specific regions of the prostate, e.g., the peripheral zone or transition zone. Moreover, the CAD system outperformed other combinations of MRI sequences: T2W MRI, high-b-value DWI, and the standard apparent diffusion coefficient (ADC) map of DWI.
Conclusions:
The novel CAD system is able to detect the discriminative texture features for cancer detection and localization and is a promising tool for improving the quality and efficiency of prostate cancer diagnosis.
Keywords: prostate cancer, multiparametric MRI, CAD, texture analysis, feature selection
1. INTRODUCTION
Prostate cancer is the second leading cause of death from cancer in men and nearly 30 000 deaths are expected in the United States in 2014.1 Earlier and more accurate detection and localization of prostate cancer is critical to provide appropriate treatment. Diagnosis of prostate cancer requires a biopsy. The current standard of care is to obtain 10–14 cores randomly from the prostate using ultrasound (US) imaging to guide the needle into standard anatomic locations. As a result, random biopsies lead to an overdiagnosis of incidental, nonlethal microscopic tumors and an underdiagnosis of clinically significant lesions located outside the typical biopsy template.2,3
Recent studies have shown that magnetic resonance imaging (MRI) can visualize the more aggressive lesions in the prostate and significantly improve the detection rate of clinically significant prostate cancers, especially when a multiparametric MRI approach is employed.4,5 This approach incorporates several MRI sequences such as T2-weighted (T2W) MRI, diffusion-weighted (DW) MRI, dynamic contrast enhanced (DCE) MRI, and, less commonly, MR spectroscopy. The identified lesions can be fused and superimposed on real-time US imaging to enable targeted biopsy via software-based registration6 or visual registration. MRI-US guided fusion biopsy almost doubles the significant cancer detection rate compared to a standard 12-core transrectal US (TRUS) biopsy on a per-core basis and specifically lowers the detection of inconsequential, low-grade tumors.7 However, examining multiparametric imaging is a complex and time consuming process that requires specific training and expertise. Readers must rapidly integrate a large amount of visual information, mentally register the different sequences, and resolve sometimes contradictory findings. This process can be especially challenging for less experienced readers. Computer-aided diagnosis (CAD) systems can assist in processing multiparametric MRI by extracting and drawing attention to meaningful information contained within the images, thus potentially facilitating or improving decision-making.
Several CAD systems adopting a multiparametric approach have been developed. T2W MRI was combined with DCE MRI,8,9 diffusion-weighted imaging (DWI),10 or MR spectroscopy.11,12 Some incorporated T2W MRI, DCE MRI, and DWI together.13–20 Intensity8,10,13–16,19 and texture9–11,16,18 features were commonly used to characterize suspicious lesions. Texture features included first-order statistics,9,16 co-occurrence matrix,10,16 gradient operators,16,18 local binary pattern,18 local phase quantization,9 and wavelet transform.11 In addition, graph embedding,12,21 random walk,17 locally linear embedding,15 and principal component analysis11 were used to reduce data dimension and/or to improve data representation. The majority of these systems focused only on the peripheral zone of the prostate10,13,15,16,18 because it is where >70% of the prostate cancers arise.22,23 Cancer detection in the transition zone or central gland is more challenging in part because cancers can be masked by adjacent benign prostatic hyperplasia (BPH).9,23
T2W MRI, DCE MRI, and a map of the apparent diffusion coefficient (ADC) from DWI have been the mainstay of multiparametric MRI in previous studies. In addition to these, high-b-value DWI has recently gained much attention and shown a capability for tissue and tumor characterization and detection in the brain,24 breast,25 colon,26 gallbladder,27 liver,28 pancreas,29 and prostate.30–32 In particular, high-b-value DWI (b = 2000 s/mm²) combined with T2W MRI substantially improved prostate cancer detection.30,32
Herein, we propose an MRI CAD system for detecting prostate cancer utilizing features derived from T2W MRI and high-b-value DWI (Fig. 1). This system extracts texture information using the local binary pattern (LBP)33 and its variants. A three-stage feature selection method selects the features most discriminative of cancer. In the first stage, frequent pattern mining34 discovers single texture patterns or combinations of texture patterns that can represent either cancer or benign lesions. In the second stage, the Wilcoxon rank-sum test identifies the texture patterns that differ significantly between class labels (cancer and benign). In the third stage, the texture patterns that minimize redundancy among the patterns and maximize relevance between the patterns and class labels are selected by applying a mutual information-based criterion. The selected patterns are designated as the most discriminative texture features and used to build a classification model, a support vector machine (SVM),35 providing a diagnostic cancer prediction map for the whole prostate.
FIG. 1.

Pipeline of the MRI CAD system. Multiparametric MRI is processed using three texture operators. Frequent and discriminative texture features are selected and used to build a classification model. Finally, the integrated cancer prediction map is produced.
The purpose of our study is to assess the ability of a texture feature extraction and selection scheme as well as the performance of a classification model in detecting and localizing prostate cancer based on T2W MRI and high-b-value DWI.
2. MATERIALS AND METHODS
2.A. Patient population
This study was conducted as part of an ongoing institutional review board (IRB)-approved clinical trial of MRI-US fusion prostate biopsy. Eligible patients had a history of elevated PSA or clinical suspicion of prostate cancer and had at least one suspicious lesion visualized on multiparametric MRI. The commercial fusion platform used for this study was the UroNav system (In vivo, Philips Healthcare, Gainesville, FL). Patients had standard of care 12-core TRUS guided extended sextant biopsies and two targeted MRI-US fusion guided biopsies (axial and sagittal planes) per MRI-identified lesion. The study population consisted of 508 consecutive patients from January 2013 to May 2014. Of these, we excluded 264 patients. The exclusion criteria were (1) previous record of treatment (focal laser ablation, hormone therapy, and cryotherapy): 17 patients; (2) absence of one or more MRI sequences: 37 patients; (3) poor image quality, deformation, or patient motion: 75 patients; and (4) no unequivocal MR-identified (cancer or benign) lesion or no benign sextant biopsy >30 mm away from the MR-identified lesions: 135 patients. Lesions were considered unequivocal only if both the axial and sagittal biopsies agreed on cancer or benign. Benign lesions containing atrophy, high-grade prostatic intraepithelial neoplasia, or inflammation were also excluded. The distance between benign sextant biopsies and MR-identified lesions was calculated using MR coordinate information. The characteristics of the remaining 244 patients are presented in Table I.
TABLE I.
Characteristics of patient cohort.
| | | All datasets | Calibration | Validation |
|---|---|---|---|---|
| Patients | | 244 | 108 | 136 |
| Age, mean (SD^a) | | 63.32 (7.63) | 63.11 (7.78) | 63.49 (7.53) |
| PSA, mean (SD) | | 9.71 (10.23) | 9.40 (9.45) | 9.95 (10.84) |
| MR-identified lesions | | 362 | 183 | 179 |
| Location | Peripheral zone, n^b (%) | 243 (67.13) | 131 (71.58) | 112 (62.57) |
| | Central gland, n (%) | 119 (32.87) | 52 (28.42) | 67 (37.43) |
| | Right, n (%) | 151 (41.71) | 69 (37.70) | 82 (45.81) |
| | Midline, n (%) | 29 (8.01) | 16 (8.74) | 13 (7.26) |
| | Left, n (%) | 182 (50.28) | 98 (53.55) | 84 (46.93) |
| | Apex, n (%) | 190 (34.85) | 102 (55.74) | 88 (49.16) |
| | Mid, n (%) | 142 (25.45) | 65 (35.52) | 77 (43.02) |
| | Base, n (%) | 30 (7.88) | 16 (8.74) | 14 (7.82) |
| Suspicion level | High, n (%) | 62 (17.13) | 38 (20.77) | 24 (13.41) |
| | Moderate, n (%) | 274 (75.69) | 132 (72.13) | 142 (79.33) |
| | Low, n (%) | 26 (7.18) | 13 (7.10) | 13 (7.26) |
| Gleason score^c | 7, n (%) | 73 (50.00) | 38 (48.72) | 35 (51.47) |
| | 8, n (%) | 49 (33.56) | 29 (37.18) | 20 (29.41) |
| | 9, n (%) | 20 (13.70) | 8 (10.26) | 12 (17.65) |
| | 10, n (%) | 4 (2.74) | 3 (3.85) | 1 (1.47) |
^a Standard deviation.
^b Number of cases.
^c From the biopsy samples obtained in axial plane.
The 244 patients were divided into calibration (January 2013 to September 2013) and validation (October 2013 to May 2014) datasets. Biopsy-proven MR-identified point targets were used to provide ground truth labels. The calibration dataset was composed of 183 MR-positive lesions, consisting of 78 cancers and 105 MR-positive benign lesions, derived from MRIs of 108 patients; it was used to select the most discriminative features and to train the CAD system. The validation dataset from 136 patients was composed of 179 MR-positive lesions, consisting of 68 cancers and 111 MR-positive benign lesions, as well as 117 MR-negative regions sampled by routine biopsy and confirmed to be normal. Two validation studies were performed: (1) cancer vs benign in MR-positive lesions and (2) cancer vs benign in both MR-positive and MR-negative regions. The first validation study tested whether the CAD system was capable of distinguishing cancerous lesions from benign lesions that were determined to be "lesions" by experienced radiologists (68 cancer and 111 benign lesions). The second study examined whether the CAD system could detect cancer lesions among benign lesions regardless of whether they were positive on MRI (68 cancers, 111 MR-positive benign, and 117 MR-negative benign lesions).
2.B. MRI protocol
Multiparametric MRI of the prostate was performed on a 3-T MR scanner (Achieva-TX, Philips Medical Systems, Best, NL) using the anterior half of a 32-channel SENSE cardiac coil (In vivo, Philips Healthcare, Gainesville, FL) and an endorectal coil (BPX-30, Medrad, Indianola, PA). No pre-examination bowel preparation was required. The balloon of each endorectal coil was distended with approximately 45 ml of perfluorocarbon (Fluorinert FC-770, 3M, St. Paul, MN) to reduce imaging artifacts related to air-induced susceptibility. T2W MRI, DW MRI, and DCE MRI were acquired. The standard DWI was acquired with five evenly spaced b-values (0–750 s/mm²), and high-b-value DWI was acquired with b = 2000 s/mm². Multiparametric MRI was independently evaluated by two experienced genitourinary radiologists (BT, PLC, with 6 and 13 yr of experience, respectively). The locations of the identified suspicious lesions were recorded in an MRI coordinate system and imported into the UroNav (In vivo, Philips Healthcare, Gainesville, FL) fusion biopsy system. The criteria for a positive lesion on multiparametric MRI have been previously described.36,37 The targets defined by multiparametric analysis were marked on T2W MRI and displayed on triplanar (axial, sagittal, and coronal) images as biopsy targets. In addition, the whole prostate was manually or semiautomatically segmented by the radiologists.
2.C. Prostate biopsy and review
All patients underwent MRI-US fusion targeted biopsy using the UroNav system. Briefly, during the biopsy, an electromagnetic (EM) field generator was placed above the pelvis, and a 2D end-fire TRUS probe (Philips C9-5ec) with detachable EM tracking sensors was positioned in the rectum. This enabled real-time tracking of the US transducer (and thus the biopsy guide and needle path) during the procedure. The operator scanned the prostate from its base to its apex with the tracked probe, and a fan-shaped 3D volumetric US image was reconstructed, segmented, and spatially (rigidly) registered with the prebiopsy T2W MRI, which was annotated with targets, in a semiautomatic fashion. Then, the live US image (iU22, Philips Healthcare, Andover, MA) was fused with the MR images in real time. Image registration was based on EM tracking (In vivo, Gainesville, FL, Philips Interventions, formerly Traxtal, Inc., Ontario, Canada and Northern Digital, Inc., Waterloo, CA), and technical details have been described in previous reports.6 An experienced prostate pathologist reviewed the biopsy samples, obtained by MRI-US fusion guided biopsy and a standard 12-core systematic biopsy, and reported the tissue characteristics and malignancy.
2.D. Description of CAD system
2.D.1. Postprocessing
T2W MRI and high-b-value DWI are first normalized. We identify potential outliers as the voxels whose intensities fall below the 1st percentile or above the 99th percentile of the voxel intensities in the prostate. Excluding these outlier voxels, we compute the median and standard deviation of the voxel intensities within the whole-prostate segmentation (generated semiautomatically, then confirmed and manually adjusted by the radiologists) and divide the intensity of each voxel by median + 2 × standard deviation. Then, the different MRI modalities (MRI-to-MRI) are rigidly registered using MR coordinate information.18 The image normalization and registration are performed per MR slice.
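As a rough illustration of this step, the sketch below (a minimal Python/NumPy version, not the authors' code) applies the percentile-clipped median + 2 × SD scaling to one slice; `image` and `prostate_mask` are hypothetical inputs holding the slice and its whole-prostate segmentation.

```python
import numpy as np

def normalize_slice(image, prostate_mask):
    """Normalize one MR slice by median + 2*SD of non-outlier prostate voxels.

    Sketch of the normalization described in Sec. 2.D.1; `image` is a 2D
    float/int array and `prostate_mask` a boolean array of the same shape.
    """
    voxels = image[prostate_mask]
    lo, hi = np.percentile(voxels, [1, 99])            # outlier thresholds
    inliers = voxels[(voxels >= lo) & (voxels <= hi)]  # exclude outlier voxels
    scale = np.median(inliers) + 2.0 * np.std(inliers)
    return image / scale
```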
2.D.2. Feature extraction
Three texture operators are incorporated to extract texture information in the prostate: (1) LBP,33 (2) local directional derivative pattern (LDDP),38 and (3) the variance measure operator (VAR)33 (Fig. 1). LBP is a popular local texture descriptor due to its low computational complexity, gray-scale and rotation invariance, and robustness to illumination changes. LBP compares the gray level of a pixel with that of its local neighborhood and generates a (binary) pattern code, often represented as a decimal number. A local neighborhood is defined as a set of evenly distributed pixels on a circle, and the radius of the circle determines the spatial resolution of LBP. LDDP is an extension of LBP that encodes higher order derivative information of texture. VAR measures the local variance of texture; since VAR is continuous, it is discretized by equal-depth binning to provide a pattern code. Finally, the pattern codes are summarized into a histogram, where each bin corresponds to a unique pattern code.
2.D.2.a. Local binary pattern and its variants.
Given a (center) pixel c in an image, LBP examines its P neighboring pixels p (p = 0, …, P − 1) on a circle of radius R and generates a binary pattern code as follows:

$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p, \qquad s(x) = \begin{cases} 1, & x \geq 0\\ 0, & x < 0 \end{cases} \tag{1}$$

where $g_c$ and $g_p$ represent the gray level of the center pixel and its neighborhood pixels, respectively. The coordinates of the neighborhood pixels are computed as $(x_c + R\cos(2\pi p/P),\; y_c - R\sin(2\pi p/P))$, and their gray levels are estimated by interpolation. Since LBP depends exclusively on the sign of the gray level differences, the pattern code is invariant to scaling of the gray scale. Moreover, when the image is rotated, the gray levels of the neighborhood pixels rotate around the center pixel; the rotation results in a different binary pattern code but only amounts to a bitwise shift of the original pattern code. Hence, a rotation-invariant pattern code is computed as

$$\mathrm{LBP}_{P,R}^{\,ri} = \min\{\mathrm{ROR}(\mathrm{LBP}_{P,R},\, i) \mid i = 0, 1, \ldots, P-1\} \tag{2}$$

where $\mathrm{ROR}(x, i)$ is a circular bitwise shift operator on x by i bits.
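For concreteness, a minimal sketch of Eqs. (1) and (2) at a single pixel follows; it is not the authors' implementation. The names `img`, `row`, and `col` are hypothetical, and neighbor gray levels are obtained by bilinear interpolation via SciPy.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lbp_ri(img, row, col, P=8, R=1.0):
    """Rotation-invariant LBP code [Eqs. (1)-(2)] at pixel (row, col)."""
    angles = 2.0 * np.pi * np.arange(P) / P
    rows = row - R * np.sin(angles)                    # y_c - R sin(2*pi*p/P)
    cols = col + R * np.cos(angles)                    # x_c + R cos(2*pi*p/P)
    g_p = map_coordinates(img.astype(float), [rows, cols], order=1)
    s = (g_p >= img[row, col]).astype(int)             # s(g_p - g_c)
    code = int(np.sum(s * (2 ** np.arange(P))))        # Eq. (1)
    # Eq. (2): minimum over all circular bitwise rotations of the P-bit code
    rotations = [((code >> i) | (code << (P - i))) & ((1 << P) - 1)
                 for i in range(P)]
    return min(rotations)
```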
Higher order derivative information is computed using LDDP (Ref. 38) to provide more detailed texture information. Second order LDDP along the p direction is computed as follows:

$$\mathrm{LDDP}_{P,R_1,R_2}^{2} = \sum_{p=0}^{P-1} s\!\left(\nabla^{2} g_p\right) 2^p \tag{3}$$

$$\nabla^{2} g_p = \left(g_{R_2,p} - g_{R_1,p}\right) - \left(g_{R_1,p} - g_c\right) \tag{4}$$

where $g_{R_1,p}$ and $g_{R_2,p}$ denote the gray level of a neighborhood pixel p on circles of radius $R_1$ and $R_2$, respectively, and $s(\cdot)$ is the sign function of Eq. (1).
Since LBP and LDDP lack contrast information, variance of the local contrast is also measured as follows:

$$\mathrm{VAR}_{P,R} = \frac{1}{P} \sum_{p=0}^{P-1} \left(g_p - \mu\right)^2, \qquad \text{where} \quad \mu = \frac{1}{P} \sum_{p=0}^{P-1} g_p. \tag{5}$$
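A companion sketch of Eqs. (3)-(5), under the same assumptions as the LBP sketch above (the circle-sampling helper and all argument names are hypothetical):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_circle(img, row, col, P, R):
    """Interpolated gray levels of P evenly spaced neighbors on a circle of radius R."""
    angles = 2.0 * np.pi * np.arange(P) / P
    return map_coordinates(img.astype(float),
                           [row - R * np.sin(angles), col + R * np.cos(angles)],
                           order=1)

def lddp2(img, row, col, P=8, R1=1.0, R2=2.0):
    """Second-order LDDP code [Eqs. (3)-(4)]: sign of (g_{R2,p}-g_{R1,p})-(g_{R1,p}-g_c)."""
    g1 = sample_circle(img, row, col, P, R1)
    g2 = sample_circle(img, row, col, P, R2)
    d2 = (g2 - g1) - (g1 - img[row, col])
    return int(np.sum((d2 >= 0).astype(int) * (2 ** np.arange(P))))

def var_op(img, row, col, P=8, R=1.0):
    """Local variance VAR [Eq. (5)]; discretized later by equal-depth binning."""
    g = sample_circle(img, row, col, P, R)
    return float(np.mean((g - g.mean()) ** 2))
```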
2.D.3. Feature selection
When multiple LBPs with various radii and/or LBP variants (e.g., LDDP and VAR) are incorporated, a multidimensional histogram outperforms a combination of single histograms. However, the multidimensional histogram is not time- and space-efficient due to the exponential growth of the feature space.49 Furthermore, many noisy bins adversely affect the texture analysis since their density estimates are unreliable.33 "Uniform" patterns, which limit the number of bit transitions (from 0 to 1, or vice versa), are often utilized but lead to a substantial loss of texture information.39 Instead, we find the most informative histogram bins (or features) that are frequent and discriminative, i.e., we improve discriminative power as well as reduce noisy bins.
First, we find frequent pattern codes by adopting a data mining approach called frequent pattern mining.34 Whether a pattern code is frequent or not is determined by a user-specified threshold. Frequent pattern mining discovers any combination of the pattern codes that is frequent, i.e., it simultaneously examines not only the individual operators (single-dimensional histograms) but also any combinations of the operators (multidimensional histograms). Hence, "frequent pattern codes" include bins from single- and multiple-operator histograms. Second, the occurrences of the frequent pattern codes between cancers and benign tissue are compared using the Wilcoxon rank-sum test, and only the significant pattern codes (p-value <0.05) are retained. Third, the significant pattern codes are ordered via the mRMR (minimum redundancy maximum relevance)40 criterion. Following the mRMR order, forward feature selection sequentially adds one new pattern code at a time and measures the discriminative power of the pattern codes selected so far. The set of pattern codes with the highest classification performance is chosen as the most discriminative pattern set. Using K-fold cross validation, the classification performance is measured as the ratio of the number of correctly predicted cancer and benign cases to the total number of cases. K-fold cross validation divides the training dataset into K disjoint partitions, learns classification models on K − 1 partitions, and tests the models on the remaining partition; this is repeated K times with different choices of the testing partition. We set K = 5. The frequencies of the most discriminative pattern codes form the texture features for the CAD system.
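The third stage (forward selection over the mRMR ordering with 5-fold cross validation) might look roughly like the sketch below. It assumes a feature matrix whose columns are already sorted by the mRMR criterion and, as an assumption not stated in the paper, uses an RBF-kernel SVM from scikit-learn as the inner cross-validation classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def forward_select(X_mrmr_ordered, y, k_folds=5):
    """Forward selection over mRMR-ordered columns of X_mrmr_ordered.

    Adds one feature at a time (following the column order) and keeps the
    prefix with the highest K-fold cross-validated accuracy.
    """
    best_n, best_acc = 1, -np.inf
    for n in range(1, X_mrmr_ordered.shape[1] + 1):
        acc = cross_val_score(SVC(kernel="rbf", gamma=1.0),
                              X_mrmr_ordered[:, :n], y,
                              cv=k_folds, scoring="accuracy").mean()
        if acc > best_acc:
            best_n, best_acc = n, acc
    return best_n, best_acc
```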
2.D.3.a. Frequent pattern mining.
Suppose a dataset D = {d1, d2, …, dn} has NA categorical attributes and a class label Y = {y1, y2, …, yn} with NC classes, where yi is the label associated with data di. Each attribute can take a number of values, and each pair of an attribute A and a value v, (A, v), is mapped to a distinct item in Q = {a1, a2, …, am}. Then, each data point di is represented as a set of items in Q. In the dataset, frequent patterns are the item sets which occur no less than a user-specified threshold. In other words, a k-item set α, consisting of k items from Q, is frequent if α occurs no less than θ × n times in the dataset, where θ is a user-specified minimum support (MinSup) threshold, n is the total number of data, and the support of a pattern is the number of data containing the pattern (MinSup = 1%). "FP-growth,"41 which generates the complete set of frequent patterns without candidate generation, is used to mine frequent patterns.
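FP-growth itself is too long to sketch here, but the notion of frequent pattern codes can be illustrated with a brute-force, Apriori-style enumeration over the same item representation. The helper below is only a stand-in for the FP-growth algorithm actually used; each transaction is assumed to be the set of (operator, pattern-code) items observed at one pixel, and the threshold is given as an absolute count (θ × n).

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(transactions, min_support):
    """Breadth-first enumeration of frequent item sets (Apriori-style sketch).

    `transactions` is an iterable of sets of hashable items; `min_support`
    is the minimum absolute support count. Returns {itemset: support}.
    """
    frequent = {}
    n_items = 1
    candidates = {frozenset([item]) for t in transactions for item in t}
    while candidates:
        counts = Counter()
        for t in transactions:
            for c in candidates:
                if c <= t:                      # candidate contained in transaction
                    counts[c] += 1
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # grow candidates by joining frequent sets from this level
        n_items += 1
        keys = list(level)
        candidates = {a | b for a, b in combinations(keys, 2)
                      if len(a | b) == n_items}
    return frequent
```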
2.D.3.b. mRMR.
mRMR (Ref. 40) is a feature selection method that attempts not only to maximize the relevance between the features and class labels but also to minimize the redundancy among the features. Both the relevance and redundancy are characterized in terms of mutual information as follows:
$$\max D(S, Y), \qquad D = \frac{1}{|S|} \sum_{x_i \in S} I(x_i; Y) \tag{6}$$

$$\min R(S), \qquad R = \frac{1}{|S|^2} \sum_{x_i, x_j \in S} I(x_i; x_j) \tag{7}$$

where $I(x; y)$ represents the mutual information of two variables x and y, S is a feature set, and Y is a class label. To optimize the above two conditions simultaneously, the simple mRMR criterion, $\max(D - R)$, is invoked. mRMR starts from the feature with the highest relevance and then selects, among the remaining features, the one that is most correlated with the class labels while being least redundant with the features selected so far. Thus, it generates an ordering of the features according to the mRMR criterion.
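A minimal sketch of this greedy ordering follows. It is a simplification of Ref. 40: the features are assumed to have been discretized beforehand so that mutual information can be estimated with scikit-learn's `mutual_info_score`.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_order(X_discrete, y):
    """Greedy mRMR ordering of (already discretized) feature columns.

    The next feature maximizes I(x; Y) minus its mean mutual information
    with the features selected so far [criterion max(D - R)].
    """
    n_features = X_discrete.shape[1]
    relevance = np.array([mutual_info_score(y, X_discrete[:, j])
                          for j in range(n_features)])
    selected = [int(np.argmax(relevance))]        # start from the most relevant feature
    remaining = [j for j in range(n_features) if j != selected[0]]
    while remaining:
        scores = []
        for j in remaining:
            redundancy = np.mean([mutual_info_score(X_discrete[:, j], X_discrete[:, s])
                                  for s in selected])
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```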
2.D.4. Classification
For each MRI sequence, the three texture operators (LBP, LDDP, and VAR) are applied using two neighboring topologies, i.e., six different pattern codes are generated for each pixel. Collecting pattern codes in a rectangular window (7 × 7 mm) centered at the MRI-identified target point, the discriminative texture features are selected and computed using the three-stage feature selection method (frequent pattern mining, Wilcoxon rank-sum test, and mRMR criterion). SVM (Ref. 35) [LIBSVM (Ref. 42) implementation in MATLAB] is used to distinguish cancer (+1) from benign (−1) lesions. As a kernel function, a radial basis function, $K(\mathbf{x}, \mathbf{x}') = \exp(-\gamma \lVert \mathbf{x} - \mathbf{x}' \rVert^2)$ with γ = 1, is adopted. The classification results are summarized into a receiver operating characteristic (ROC) curve. The area under the ROC curve (AUC) and a 95% confidence interval (CI) are computed with the trapezoidal rule. Sensitivity (the rate of correctly identified cancer lesions given true cancer lesions) and specificity (the rate of correctly identified benign lesions given true benign lesions) are also computed using zero as the cutoff value.
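The classification and evaluation step can be sketched as follows; scikit-learn's `SVC` is used here as a stand-in for the LIBSVM/MATLAB implementation reported above, with the stated RBF kernel (γ = 1), a zero decision cutoff, and a trapezoidal-rule AUC.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

def train_and_evaluate(X_train, y_train, X_test, y_test):
    """Train an RBF-kernel SVM (gamma = 1) and report AUC, sensitivity, specificity.

    Labels are +1 (cancer) / -1 (benign); `auc` applies the trapezoidal rule
    to the ROC curve, and the zero decision-function cutoff gives the
    sensitivity/specificity operating point.
    """
    y_test = np.asarray(y_test)
    clf = SVC(kernel="rbf", gamma=1.0).fit(X_train, y_train)
    scores = clf.decision_function(X_test)

    fpr, tpr, _ = roc_curve(y_test, scores)
    area = auc(fpr, tpr)                              # trapezoidal-rule AUC

    pred = np.where(scores >= 0, 1, -1)               # zero cutoff
    sensitivity = np.mean(pred[y_test == 1] == 1)
    specificity = np.mean(pred[y_test == -1] == -1)
    return area, sensitivity, specificity
```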
2.E. Statistical analysis
Data analysis was performed using R software version 2.15.2 (GNU General Public License). Statistical significance of frequent patterns in discriminating cancer lesions from benign lesions is determined by the Wilcoxon rank-sum test. Bootstrap resampling with 2000 repetitions is adopted to assess the 95% CI of AUCs and the statistical significance of the differences between the AUCs of two ROC curves.43
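A sketch of a percentile-bootstrap CI for the AUC is shown below; the paper used the pROC package in R, so this Python version only illustrates the resampling idea with 2000 repetitions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC.

    Resamples cases with replacement n_boot times and returns the
    (alpha/2, 1 - alpha/2) percentiles of the resampled AUCs.
    """
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # skip draws with one class only
            continue
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    return np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```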
3. RESULTS
3.A. T2W MRI and high-b-value DWI distinguished cancer from benign lesions
We trained our CAD system using the discriminative features from T2W MRI and high-b-value DWI [calibration performance: AUC of 0.97 (95% CI: 0.94–0.99)]. Then, the two validation studies were performed. In the first validation study (cancer vs MR-positive benign), our CAD system achieved an AUC of 0.83 (95% CI: 0.76–0.89) [Fig. 2(b)]. An AUC of 0.89 (95% CI: 0.84–0.93) was obtained in the second validation study [cancer vs benign (MR-positive or MR-negative)] [Fig. 3(b)]. Moreover, the cancer prediction of our CAD system was not dependent on the specific regions of the prostate [peripheral zone: AUC of 0.83 (95% CI: 0.73–0.91); transition zone: AUC of 0.83 (95% CI: 0.72–0.93)]. The CAD system predicted the presence of cancer for the whole prostate. The predicted cancer areas corresponded to the MR suspicious lesions that were proved to be cancer by MRI-US fusion targeted biopsy and pathology review [Fig. 4(a)]. The MR-positive benign areas were predicted as benign [Fig. 4(b)].
FIG. 2.

ROC curves for cancer versus MR-positive benign. (a) Single-modal and (b) bi- and trimodal combinations.
FIG. 3.
ROC curves for cancer versus benign. (a) Single-modal and (b) bi- and trimodal combinations.
FIG. 4.

Cancer prediction for the whole prostate from 12 different patients. First, second, and third columns show a cancer prediction map, T2W MRI, and high-b-value DWI, respectively. MR-positive lesions (red circles) were proven to be (a) cancer and (b) benign. GS denotes Gleason score.
We also assessed mispredicted cases (false negatives and false positives) by our CAD system. False negatives, i.e., missed cancer lesions, mainly included smaller or subcapsular lesions [rows 1, 3, 4, and 6 in Fig. 5(a)]. Due to the window-based feature computation scheme, insufficient information may have been available, complicating the prediction and leading to a lower likelihood of cancer. In some cases, although the targeted voxel was missed, its neighboring voxels were correctly identified as cancer [rows 2 and 5 in Fig. 5(a)]. Moreover, false positives, i.e., benign lesions that were predicted as cancer, were often observed where cancer-like imaging signatures appear: dark on T2W MRI and bright on high-b-value DWI [Fig. 5(b)]. BPH [row 3 in Fig. 5(a) and rows 1, 5, and 6 in Fig. 5(b)] and dark anterior areas [row 5 in Fig. 5(b)] were other causes of false positives.
FIG. 5.

False prediction from 12 different patients. (a) False negatives, missed cancer lesions and (b) false positives, benign lesions that were predicted as cancer are shown. First, second, and third columns show a cancer prediction map, T2W MRI, and high-b-value DWI, respectively. Red circles are MR-positive lesions. GS denotes Gleason score.
3.B. T2W MRI and high-b-value DWI outperformed other combinations of MRI modalities
We repeated the two validation experiments using each of the two MRI modalities (T2W MRI and high-b-value DWI) and the ADC map of the standard DWI. For the discrimination of cancer and MR-positive benign lesions, T2W MRI, high-b-value DWI, and the ADC map alone showed an AUC of 0.78 (95% CI: 0.70–0.85), 0.65 (95% CI: 0.56–0.73), and 0.67 (95% CI: 0.59–0.75), respectively [Fig. 2(a)]. For cancer vs benign, T2W MRI achieved an AUC of 0.84 (95% CI: 0.78–0.90), high-b-value DWI achieved an AUC of 0.71 (95% CI: 0.64–0.78), and the ADC map achieved an AUC of 0.69 (95% CI: 0.60–0.76) [Fig. 3(a)]. There were significant differences between our CAD system and these single-modal predictions.49
Additionally, bi- and trimodal combinations were tested. The combination of T2W MRI and the ADC map showed the AUCs of 0.78 (95% CI: 0.71–0.85) and 0.85 (95% CI: 0.78–0.90) for cancer vs MR-positive benign and for cancer vs benign, respectively. The combination of high-b-value DWI and the ADC map produced an AUC of 0.69 (95% CI: 0.60–0.77) for cancer vs MR-positive benign and an AUC of 0.75 (95% CI: 0.68–0.81) for cancer vs benign. The differences between these two bimodal combinations and our CAD system were also statistically significant.49 Interestingly, the trimodal combination, combining T2W MRI, high-b-value DWI, and the ADC map, did not improve upon our CAD system utilizing T2W MRI and high-b-value DWI. There was no statistically significant difference between them.49
3.C. Discriminative features
Using the three-stage feature selection scheme, 124 discriminative features were obtained. The features were from 48 different combinations of the three texture operators and two neighboring topologies.49 The length of the pattern codes forming the discriminative features ranged from 1 to 6. Hence, the feature selection method, in fact, explored a variety of combinations and selected the features that best describe the texture information of cancer and benign lesions.
4. DISCUSSION
The results of our study demonstrated that the MRI CAD system combining T2W MRI and high-b-value DWI could identify cancer for the whole prostate and the three-stage feature selection scheme could find the most discriminative texture features for distinguishing cancer and benign lesions. The discriminative features were selected from a variety of combinations of texture patterns, which are infeasible with the conventional approach of constructing multiple single- or multidimensional histograms; for instance, a 6D histogram would require >20 × 10⁶ bins with the same setting. This CAD system could distinguish cancer from MR-positive benign lesions that were preselected by expert radiologists, suggesting that the CAD system could reduce the number of negative biopsies.
We compared our CAD system to previous CAD systems (Table II). In terms of cancer detection performance, our CAD system was comparable to the previous systems. The majority of them reported cancer detection performance from 0.82 to 0.89 AUC, and two achieved over 0.95 AUC. The direct comparison may be misleading, however, since the performance is subject to several factors including patient population and characteristics, dataset size, ground truth and region of interest generation, validation scheme, etc. In particular, most CAD systems have been evaluated on smaller datasets ranging from 15 to 54 patients, and it is highly likely that the patient characteristics significantly differ from each other. Moreover, a leave-one-out cross-validation (LOOCV) scheme has mainly been adopted for validation. Since the uniqueness of the CAD systems is not guaranteed in a LOOCV scheme,44 the performance of such CAD systems in the clinic remains questionable. In contrast, our CAD system was independently trained and tested on a large-scale dataset including 244 patients. The training and validation datasets contained a diverse population of at-risk patients with differing frequencies of cancer and benign lesions (Table I). We also note that the patient population in this study overlaps with that of Ref. 45, where 186 patients were employed and 80% sensitivity was achieved at 70% specificity in distinguishing high risk lesions from low risk and benign lesions. At 70% specificity, our CAD system showed 72% and 88% sensitivity for cancer vs MR-positive benign and cancer vs benign, respectively. In addition, we have developed another CAD system46 utilizing T2W MRI, the ADC map of DWI, and DCE MRI on an overlapping patient population. A random forest classifier was built using first/second order statistics, texture, and shape features. Instead of point targets, contours of targeted lesions were identified by expert radiologists and used to train (40 patients) and test (21 patients) the classifier, achieving an AUC of 0.928 (CI: 0.927–0.928). Adding DCE MRI thus improved the performance of cancer detection, but that system was evaluated on a patient cohort of limited size.
TABLE II.
Details of the previous CAD systems.
| CAD system | Performance | Data size | Validation | Region | Imaging modality |
|---|---|---|---|---|---|
| Chan et al. (Ref. 10) | AUC = 0.839 ± 0.064 | 15 | LOO CV | PZ | T2W, ADC |
| Vos et al. (Ref. 8) | AUC = 0.89 (CI: 0.81–0.95) | 29 | LOO CV | PZ | T2W, DCE |
| Liu and Yetik (Ref. 14) | AUC = 0.89 | 20 | LOO CV | WP | T2W, DCE, ADC |
| Tiwari et al. (Ref. 11) | AUC = 0.89 ± 0.02 | 36 | 3-fold CV | WP | T2W, MR spectroscopy |
| Shah et al. (Ref. 15) | F-measure = 0.89 | 24 | LOO CV | PZ | T2W, DCE, ADC |
| Niaf et al. (Ref. 16) | AUC = 0.89 (CI: 0.81–0.84), cancer vs benign; AUC = 0.82 (CI: 0.73–0.9), cancer vs MR-positive benign | 30 | LOO CV | PZ | T2W, DCE, ADC |
| Moradi et al. (Ref. 47) | AUC = 0.96 | 29 | LOO CV | WP | DTI^a, DCE |
| Tiwari et al. (Ref. 12) | AUC = 0.89 ± 0.09 | 29 | LOO CV, 3-fold CV | WP | T2W, MR spectroscopy |
| Peng et al. (Ref. 19) | AUC = 0.95 ± 0.02 | 48 | LOO CV | WP | T2W, DCE, ADC |
| Liu et al. (Ref. 18) | AUC = 0.82 (CI: 0.71–0.93) | 54 | 36 training, 18 testing | WP | T2W, DCE, ADC |
| Niaf et al. (Ref. 20) | AUC = 0.89 | 49 | LOO CV | WP | T2W, DCE, ADC |
| Litjens et al. (Ref. 48) | AUC = 0.889 | 347 | LOO CV | WP | T2W, PDW^b, DCE, ADC |

^a Diffusion tensor imaging.
^b Proton density-weighted imaging.
It was remarkable that the addition of the ADC map to our CAD system did not improve the prediction performance; it reduced sensitivity with only slightly improved specificity.49 Similarly, adding the ADC map to the other single-modal predictions yielded relatively small improvements compared with those achieved by adding T2W MRI or high-b-value DWI. The ADC map and high-b-value DWI may carry similar underlying biological and physiological information, but the ADC calculation may introduce errors due to misregistration of different b-value images or the intensity thresholds applied to the images. For the purpose of cancer detection, there might be a larger synergy between T2W MRI and high-b-value DWI than between the ADC map and the other sequences.
We used biopsy-proven point targets to provide ground truth labels and to compute texture features. The point targets were required to be unequivocally cancer or benign in both the axial and sagittal planes, ensuring the reliability of the ground truth. This may have biased our CAD system toward larger tumor volumes. Finer and more reliable ground truth may be available through whole mount prostate tissue specimens. The cancer and benign lesions identified on the whole mount tissue image can be mapped onto the corresponding MRI slice and used to train and test the CAD system, but this approach still suffers from the complex registration error between the whole mount image and MRI, despite matching techniques such as patient-specific molds.14
There are several limitations to this study. First, we only used MR coordinates to register different MRI sequences, and MRIs with substantial patient motion or deformation were excluded from this study to minimize their effect on the classification model. The exclusion was mainly due to patient motion, not deformation. Since discriminative features were computed from a rectangular window of 7 × 7 mm around a targeted voxel, a slight patient movement, even a few millimeters, during image acquisition could have a significant impact on feature computation. However, local deformations or displacements of the prostate may occur in the clinic. Image registration algorithms may be able to correct for such motion or deformation, enabling better registration of MRI sequences and improved performance of the CAD system; that is, the exclusion may not limit the applicability of our CAD system in the clinic. Nevertheless, if such algorithms are applied, their effect on the CAD system should be further investigated, since the correction of motion or deformation is not trivial. Second, cancer prediction for the whole prostate was only evaluated based on biopsy. This may have preselected for large and image-able lesions. A validation study using whole mount tissue specimens could further ensure the reliability of our CAD system. Third, we computed texture features from the rectangular ROI window around an MRI-identified point target. Even though this approach minimizes user input, it may not fully exploit the local characteristics of tissue owing to the possible inclusion of benign tissue for a cancerous target point, or vice versa. Fourth, we only incorporated three texture operators of one particular type in our CAD system. Combined with other intensity- or texture-based features, the CAD system may be able to provide a more accurate and reliable prediction. Fifth, DCE MRI and MR spectroscopy were not considered in this CAD system. Defining a methodology to incorporate these imaging modalities might further improve the CAD system. However, they require extra time and cost for the patient and the system: DCE MRI involves a gadolinium-based contrast injection, which is not without cost or risk, and the acquisition and processing of MR spectroscopy need special expertise, specific equipment, and substantial time. Last, the additive value of our CAD system to current clinical practice is not yet clear. Further study will prospectively use the CAD system to facilitate biopsy in specific clinical settings and determine whether its cancer prediction helps identify clinically significant prostate cancers and improves the diagnostic yield, workflow, and throughput of prostate biopsy.
5. CONCLUSIONS
We have presented an automated CAD system utilizing T2W MRI and high-b-value DWI for localizing prostate cancer lesions. The performance of the CAD system is sufficiently promising to warrant retrospective and prospective testing in larger cohorts. By assisting readers of prostate MRI with cancer prediction maps, the system may improve the diagnostic yield of prostate biopsy, aid surgical or therapeutic planning of prostate cancer, and help nonspecialists interpret prostate MRI, making the method available to a wider population while potentially reducing the learning curve of interpretation training.
ACKNOWLEDGMENTS
This work was supported by the Intramural Research Program of the National Institutes of Health. This work utilized the high-performance computational capabilities of the Biowulf Linux cluster at the National Institutes of Health, Bethesda, MD (http://biowulf.nih.gov).
REFERENCES
- 1. Siegel R., Ma J. M., Zou Z. H., and Jemal A., "Cancer statistics, 2014," Ca-Cancer J. Clin. 64, 9–29 (2014). 10.3322/caac.21208
- 2. Presti J. C., G. J. O'Dowd Jr., Miller M. C., Mattu R., and Veltri R. W., "Extended peripheral zone biopsy schemes increase cancer detection rates and minimize variance in prostate specific antigen and age related cancer rates: Results of a community multi-practice study," J. Urol. 169, 125–129 (2003). 10.1016/S0022-5347(05)64051-7
- 3. Eskicorapci S. Y., Baydar D. E., Akbal C., Sofikerim M., Gunay M., Ekici S., and Ozen H., "An extended 10-core transrectal ultrasonography guided prostate biopsy protocol improves the detection of prostate cancer," Eur. Urol. 45, 444–448 (2004), discussion 448-449. 10.1016/j.eururo.2003.11.024
- 4. Turkbey B., Mani H., Shah V., Rastinehad A. R., Bernardo M., Pohida T., Pang Y. X., Daar D., Benjamin C., McKinney Y. L., Trivedi H., Chua C., Bratslavsky G., Shih J. H., Linehan W. M., Merino M. J., Choyke P. L., and Pinto P. A., "Multiparametric 3 T prostate magnetic resonance imaging to detect cancer: Histopathological correlation using prostatectomy specimens processed in customized magnetic resonance imaging based molds," J. Urol. 186, 1818–1824 (2011). 10.1016/j.juro.2011.07.013
- 5. Habchi H., Bratan F., Paye A., Pagnoux G., Sanzalone T., Mege-Lechevallier F., Crouzet S., Colombel M., Rabilloud M., and Rouviere O., "Value of prostate multiparametric magnetic resonance imaging for predicting biopsy results in first or repeat biopsy," Clin. Radiol. 69, e120–e128 (2014). 10.1016/j.crad.2013.10.018
- 6. Xu S., Kruecker J., Turkbey B., Glossop N., Singh A. K., Choyke P., Pinto P., and Wood B. J., "Real-time MRI-TRUS fusion for guidance of targeted prostate biopsies," Comput. Aided Surg. 13, 255–264 (2008). 10.3109/10929080802364645
- 7. Pinto P. A., Chung P. H., Rastinehad A. R., Baccala A. A., J. Kruecker Jr., Benjamin C. J., Xu S., Yan P., Kadoury S., Chua C., Locklin J. K., Turkbey B., Shih J. H., Gates S. P., Buckner C., Bratslavsky G., Linehan W. M., Glossop N. D., Choyke P. L., and Wood B. J., "Magnetic resonance imaging/ultrasound fusion guided prostate biopsy improves cancer detection following transrectal ultrasound biopsy and correlates with multiparametric magnetic resonance imaging," J. Urol. 186, 1281–1285 (2011). 10.1016/j.juro.2011.05.078
- 8. Vos P. C., Hambrock T., Barenstz J. O., and Huisman H. J., "Computer-assisted analysis of peripheral zone prostate lesions using T2-weighted and dynamic contrast enhanced T1-weighted MRI," Phys. Med. Biol. 55, 1719–1734 (2010). 10.1088/0031-9155/55/6/012
- 9. Garcia Molina J. F., Zheng L., Sertdemir M., Dinter D. J., Schonberg S., and Radle M., "Incremental learning with SVM for multimodal classification of prostatic adenocarcinoma," PLoS One 9, e93600 (2014). 10.1371/journal.pone.0093600
- 10. Chan I., Wells W., Mulkern R. V., Haker S., Zhang J. Q., Zou K. H., Maier S. E., and Tempany C. M. C., "Detection of prostate cancer by integration of line-scan diffusion, T2-mapping and T2-weighted magnetic resonance imaging; a multichannel statistical classifier," Med. Phys. 30, 2390–2398 (2003). 10.1118/1.1593633
- 11. Tiwari P., Viswanath S., Kurhanewicz J., Sridhar A., and Madabhushi A., "Multimodal wavelet embedding representation for data combination (MaWERiC): Integrating magnetic resonance imaging and spectroscopy for prostate cancer detection," NMR Biomed. 25, 607–619 (2012). 10.1002/nbm.1777
- 12. Tiwari P., Kurhanewicz J., and Madabhushi A., "Multi-kernel graph embedding for detection, Gleason grading of prostate cancer via MRI/MRS," Med. Image Anal. 17, 219–235 (2013). 10.1016/j.media.2012.10.004
- 13. Ozer S., Langer D. L., Liu X., Haider M. A., van der Kwast T. H., Evans A. J., Yang Y. Y., Wernick M. N., and Yetik I. S., "Supervised and unsupervised methods for prostate cancer segmentation with multispectral MRI," Med. Phys. 37, 1873–1883 (2010). 10.1118/1.3359459
- 14. Liu X. and Yetik I. S., "Automated prostate cancer localization without the need for peripheral zone extraction using multiparametric MRI," Med. Phys. 38, 2986–2994 (2011). 10.1118/1.3589134
- 15. Shah V., Turkbey B., Mani H., Pang Y., Pohida T., Merino M. J., Pinto P. A., Choyke P. L., and Bernardo M., "Decision support system for localizing prostate cancer based on multiparametric magnetic resonance imaging," Med. Phys. 39, 4093–4103 (2012). 10.1118/1.4722753
- 16. Niaf E., Rouviere O., Mege-Lechevallier F., Bratan F., and Lartizien C., "Computer-aided diagnosis of prostate cancer in the peripheral zone using multiparametric MRI," Phys. Med. Biol. 57, 3833–3851 (2012). 10.1088/0031-9155/57/12/3833
- 17. Artan Y. and Yetik I. S., "Prostate cancer localization using multiparametric MRI based on semi-supervised techniques with automated seed initialization," IEEE Trans. Inf. Technol. Biomed. 16, 1313–1323 (2012). 10.1109/TITB.2012.2201731
- 18. Liu P., Wang S., Turkbey B., Grant K., Pinto P., Choyke P., Wood B. J., and Summers R. M., "A prostate cancer computer-aided diagnosis system using multimodal magnetic resonance imaging and targeted biopsy labels," Proc. SPIE 8670, 86701G–86706G (2013). 10.1117/12.2007927
- 19. Peng Y., Jiang Y., Yang C., Brown J. B., Antic T., Sethi I., Schmid-Tannwald C., Giger M. L., Eggener S. E., and Oto A., "Quantitative analysis of multiparametric prostate MR images: Differentiation between prostate cancer and normal tissue and correlation with Gleason score–a computer-aided diagnosis development study," Radiology 267, 787–796 (2013). 10.1148/radiol.13121454
- 20. Niaf E., Flamary R., Rouviere O., Lartizien C., and Canu S., "Kernel-based learning from both qualitative and quantitative labels: Application to prostate cancer diagnosis based on multiparametric MR imaging," IEEE Trans. Image Process. 23, 979–991 (2014). 10.1109/tip.2013.2295759
- 21. Tiwari P., Kurhanewicz J., Rosen M., and Madabhushi A., "Semi supervised multi kernel (SeSMiK) graph embedding: Identifying aggressive prostate cancer via magnetic resonance imaging and spectroscopy," Med. Image Comput. Comput. Assisted Interv. 13, 666–673 (2010). 10.1007/978-3-642-15711-0_83
- 22. Jiang Q. and Xia S. J., "Zonal differences in prostate diseases," Chin. Med. J. (Engl.) 125(9), 1523–1528 (2012).
- 23. Futterer J. J., Heijmink S. W., Scheenen T. W., Veltman J., Huisman H. J., Vos P., Hulsbergen-Van de Kaa C. A., Witjes J. A., Krabbe P. F., Heerschap A., and Barentsz J. O., "Prostate cancer localization with dynamic contrast-enhanced MR imaging and proton MR spectroscopic imaging," Radiology 241, 449–458 (2006). 10.1148/radiol.2412051866
- 24. DeLano M. C., Cooper T. G., Siebert J. E., Potchen M. J., and Kuppusamy K., "High-b-value diffusion-weighted MR imaging of adult brain: Image contrast and apparent diffusion coefficient map features," AJNR, Am. J. Neuroradiol. 21, 1830–1836 (2000).
- 25. Tamura T., Murakami S., Naito K., Yamada T., Fujimoto T., and Kikkawa T., "Investigation of the optimal b-value to detect breast tumors with diffusion weighted imaging by 1.5-T MRI," Cancer Imaging 14, 11–19 (2014). 10.1186/1470-7330-14-11
- 26. Roth Y., Tichler T., Kostenich G., Ruiz-Cabello J., Maier S. E., Cohen J. S., Orenstein A., and Mardor Y., "High-b-value diffusion-weighted MR imaging for pretreatment prediction and early monitoring of tumor response to therapy in mice," Radiology 232, 685–692 (2004). 10.1148/radiol.2322030778
- 27. Ogawa T., Horaguchi J., Fujita N., Noda Y., Kobayashi G., Ito K., Koshita S., Kanno Y., Masu K., and Sugita R., "High b-value diffusion-weighted magnetic resonance imaging for gallbladder lesions: Differentiation between benignity and malignancy," J. Gastroenterol. 47, 1352–1360 (2012). 10.1007/s00535-012-0604-1
- 28. Muhi A., Ichikawa T., Motosugi U., Sano K., Matsuda M., Kitamura T., Nakazawa T., and Araki T., "High-b-value diffusion-weighted MR imaging of hepatocellular lesions: Estimation of grade of malignancy of hepatocellular carcinoma," J. Magn. Reson. Imaging 30, 1005–1011 (2009). 10.1002/jmri.21931
- 29. Ichikawa T., Erturk S. M., Motosugi U., Sou H., Iino H., Araki T., and Fujii H., "High-b value diffusion-weighted MRI for detecting pancreatic adenocarcinoma: Preliminary results," AJR, Am. J. Roentgenol. 188, 409–414 (2007). 10.2214/ajr.05.1918
- 30. Katahira K., Takahara T., Kwee T. C., Oda S., Suzuki Y., Morishita S., Kitani K., Hamada Y., Kitaoka M., and Yamashita Y., "Ultra-high-b-value diffusion-weighted MR imaging for the detection of prostate cancer: Evaluation in 201 cases with histopathological correlation," Eur. Radiol. 21, 188–196 (2011). 10.1007/s00330-010-1883-7
- 31. Kitajima K., Takahashi S., Ueno Y., Yoshikawa T., Ohno Y., Obara M., Miyake H., Fujisawa M., and Sugimura K., "Clinical utility of apparent diffusion coefficient values obtained using high b-value when diagnosing prostate cancer using 3 tesla MRI: Comparison between ultra-high b-value (2000 s/mm(2)) and standard high b-value (1000 s/mm(2))," J. Magn. Reson. Imaging 36, 198–205 (2012). 10.1002/jmri.23627
- 32. Ueno Y., Kitajima K., Sugimura K., Kawakami F., Miyake H., Obara M., and Takahashi S., "Ultra-high b-value diffusion-weighted MRI for the detection of prostate cancer with 3-T MRI," J. Magn. Reson. Imaging 38, 154–160 (2013). 10.1002/jmri.23953
- 33. Ojala T., Pietikainen M., and Maenpaa T., "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell. 24, 971–987 (2002). 10.1109/tpami.2002.1017623
- 34. Han J., Cheng H., Xin D., and Yan X., "Frequent pattern mining: Current status and future directions," Data Min. Knowl. Discovery 15, 55–86 (2007). 10.1007/s10618-006-0059-1
- 35. Vapnik V., The Nature of Statistical Learning Theory (Springer, New York, NY, 2000).
- 36. Heijmink S. W., Futterer J. J., Hambrock T., Takahashi S., Scheenen T. W., Huisman H. J., Hulsbergen-Van de Kaa C. A., Knipscheer B. C., Kiemeney L. A., Witjes J. A., and Barentsz J. O., "Prostate cancer: Body-array versus endorectal coil MR imaging at 3 T–comparison of image quality, localization, and staging performance," Radiology 244, 184–195 (2007). 10.1148/radiol.2441060425
- 37. Turkbey B., Pinto P. A., Mani H., Bernardo M., Pang Y., McKinney Y. L., Khurana K., Ravizzini G. C., Albert P. S., Merino M. J., and Choyke P. L., "Prostate cancer: Value of multiparametric MR imaging at 3 T for detection–histopathologic correlation," Radiology 255, 89–99 (2010). 10.1148/radiol.09090475
- 38. Guo Z. H., Li Q., You J., Zhang D., and Liu W. H., "Local directional derivative pattern for rotation invariant texture classification," Neural Comput. Appl. 21, 1893–1904 (2012). 10.1007/s00521-011-0586-6
- 39. Zhou H., Wang R. S., and Wang C., "A novel extended local-binary-pattern operator for texture analysis," Inf. Sci. 178, 4314–4325 (2008). 10.1016/j.ins.2008.07.015
- 40. Peng H. C., Long F. H., and Ding C., "Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy," IEEE Trans. Pattern Anal. Mach. Intell. 27, 1226–1238 (2005). 10.1109/tpami.2005.159
- 41. Han J. W., Pei J., and Yin Y. W., "Mining frequent patterns without candidate generation," Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (ACM, New York, NY, 2000), Vol. 29, pp. 1–12.
- 42. Chang C. C. and Lin C. J., "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol. 2, 1–27 (2011). 10.1145/1961189.1961199
- 43. Robin X., Turck N., Hainard A., Tiberti N., Lisacek F., Sanchez J. C., and Muller M., "pROC: An open-source package for R and S plus to analyze and compare ROC curves," BMC Bioinf. 12, 77–84 (2011). 10.1186/1471-2105-12-77
- 44. Li Q., "Improvement of bias and generalizability for computer-aided diagnostic schemes," Comput. Med. Imaging Graphics 31, 338–345 (2007). 10.1016/j.compmedimag.2007.02.004
- 45. Summers R. M., Wang S., Liu P., Turkbey E., Pint P., Choyke P., Wood B. J., and Turkbey B., "Computer-aided diagnosis of prostate cancer using multimodal magnetic resonance imaging," in SAR Annual Scientific Meeting and Educational Course Abdom Imaging (Springer, New York, NY, 2014), Vol. 39, p. 66.
- 46. Wang S., Turkbey B., Burtt K. E., Fraychineaud T., Boehm K. M., Choyke P., Kwak J. T., Xu S., Wood B. J., Petrick N., Sahiner B., Pinto P., and Summers R. M., "Computer-aided diagnosis of prostate cancer on multiparametric MRI using random forest," J. Med. Imaging (unpublished).
- 47. Moradi M., Salcudean S. E., Chang S. D., Jones E. C., Buchan N., Casey R. G., Goldenberg S. L., and Kozlowski P., "Multiparametric MRI maps for detection and grading of dominant prostate tumors," JMRI, J. Magn. Reson. Imaging 35, 1403–1413 (2012). 10.1002/jmri.23540
- 48. Litjens G., Debats O., Barentsz J., Karssemeijer N., and Huisman H., "Computer-aided detection of prostate cancer in MRI," IEEE Trans. Med. Imaging 33, 1083–1092 (2014). 10.1109/tmi.2014.2303821
- 49. See supplementary material at http://dx.doi.org/10.1118/1.4918318E-MPHYA6-42-035505 for a figure and tables demonstrating the exponential growth of feature space, classification results, and discriminative features.