Med Phys. Author manuscript; available in PMC 2020 Aug 17.
Published in final edited form as: Med Phys. 2020 May 11;47(8):3415–3422. doi: 10.1002/mp.14196

Pelvic multi-organ segmentation on cone-beam CT for prostate adaptive radiotherapy

Yabo Fu 1, Yang Lei 1, Tonghe Wang 1, Sibo Tian 1, Pretesh Patel 1, Ashesh B Jani 1, Walter J Curran 1, Tian Liu 1, Xiaofeng Yang 1,a)
PMCID: PMC7429321  NIHMSID: NIHMS1602748  PMID: 32323330

Abstract

Background and purpose:

The purpose of this study is to develop a deep learning-based approach to simultaneously segment five pelvic organs including prostate, bladder, rectum, left and right femoral heads on cone-beam CT (CBCT), as required elements for prostate adaptive radiotherapy planning.

Materials and methods:

We propose to utilize both CBCT and CBCT-based synthetic MRI (sMRI) for the segmentation of soft tissue and bony structures, as they provide complementary information for pelvic organ segmentation: CBCT images have superior bony structure contrast, and sMRIs have superior soft tissue contrast. Prior to segmentation, sMRI was generated using a cycle-consistent generative adversarial network (CycleGAN) trained on paired CBCT-MR images. To combine the advantages of both CBCT and sMRI, we developed a cross-modality attention pyramid network with late feature fusion. Our method processes the CBCT and sMRI inputs separately to extract CBCT-specific and sMRI-specific features before combining them in a late-fusion network for final segmentation. The network was trained and tested using 100 patients' datasets, each including the CBCT and manual physician contours. For comparison, we trained two additional networks with different inputs and architectures. The segmentation results were compared to manual contours for evaluation.

Results:

For the proposed method, the Dice similarity coefficients and mean surface distances between the segmentation results and the ground truth were 0.96±0.03 and 0.65±0.67 mm for the bladder, 0.91±0.08 and 0.93±0.96 mm for the prostate, 0.93±0.04 and 0.72±0.61 mm for the rectum, 0.95±0.05 and 1.05±1.40 mm for the left femoral head, and 0.95±0.05 and 1.08±1.48 mm for the right femoral head. Compared with the other two competing methods, our method showed superior segmentation accuracy.

Conclusion:

We developed a deep learning-based segmentation method to rapidly and accurately segment five pelvic organs simultaneously from daily CBCTs. The proposed method could be used in the clinic to support rapid target and organs-at-risk contouring for prostate adaptive radiation therapy.

Keywords: adaptive radiotherapy, cone-beam CT, deep learning, multi-organ segmentation, synthetic MRI

1. INTRODUCTION

Intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) have been used to deliver conformal radiotherapy dose to the target volume while sparing surrounding normal tissues.1–4 However, geometric uncertainties, such as patient setup errors, anatomical changes, and target volume deformation are inevitable during the treatment course.5,6 Generous margins around the treatment target are used to account for these geometric uncertainties. Large margins lead to the inadvertent dosing of adjacent healthy tissue, compromising the intended benefits of IMRT and VMAT. To address these issues, adaptive radiation therapy (ART) has been proposed to improve clinical outcomes by accounting for these geometric uncertainties.7 Cone-beam CT (CBCT) integrated in medical linear accelerators plays an important role in ART by providing anatomical information prior to radiation delivery.8–10 CBCTs can be used to support ART through multiple applications, including patient setup,6 image registration,11 appropriate PTV-CTV margin,12 fractional dose calculation,13 dose accumulation, and dose monitoring.14,15

Many steps in ART depend on fast and accurate organs-at-risk (OAR) and target delineation on CBCT.16–18 For example, rapid target and OAR contouring provides important information that, when used for image registration, can guide couch translation and rotation to reduce patient setup errors.19 Compared to whole-image rigid registration, organ mask-aided nonrigid image registration is more accurate for guiding couch movement because it focuses on clinically relevant regions such as the target and OARs.20 Multi-organ segmentation could help generate personalized CTV-to-PTV margins by providing average contours from previous fractions. Nijkamp et al. reported that CTV-to-PTV margins were reduced from 10 mm to 7 mm in personalized prostate radiotherapy, leading to a 29% decrease in PTV volume and a 19% decrease in the rectal volume receiving greater than 65 Gy.12 Target and OAR contouring on daily CBCT could reveal interfractional anatomical changes to facilitate online multileaf collimator (MLC) movement modifications and online treatment plan re-optimization.21 Fu et al. demonstrated the superior performance of online MLC modification and re-planning over bone-based and prostate-based rigid couch translation correction in terms of prostate, seminal vesicle, rectum, bladder, and femoral head dose-volume histograms (DVHs).5 However, their IMRT re-optimization process required hours for contouring, re-planning, and verification. Therefore, rapid automatic multi-organ segmentation is needed to expedite the contouring process of adaptive IMRT planning. In addition, target and OAR contours are also required for accurate DVH calculations, treatment planning, and dose monitoring.

Image registration allows the propagation of contours initially defined on planning CTs to CBCTs.22–25 Thor et al. attempted to propagate CT-defined prostate, bladder, and rectum contours to CBCTs using deformable image registration.26 However, due to large anatomical differences between the two volumes and the difficulty of accurately registering the CT to the CBCT, the propagated contours were often inaccurate. Boydev et al. proposed an automatic prostate segmentation method using rigid registration.27 They found that prostate localization was heavily affected by air in the rectum. Li et al. investigated 10 deformable image registration algorithms for contour propagation between CT and CBCT in head-and-neck radiotherapy.28 They concluded that none of the 10 deformable registration algorithms performed better than rigid registration and that performance varied considerably across regions of interest. To eliminate registration-related errors, it is necessary to develop an automatic method that relies solely on the CBCT as input for organ segmentation.

However, segmentation on CBCT is challenging due to low soft tissue contrast and image artifacts caused by scatter. As a result, there are very few papers on CBCT-based soft tissue segmentation.29,30 Lutgendorf-Caucig et al. studied the feasibility of CBCT-based target and normal structure delineation for prostate cancer radiotherapy.31 They concluded that CBCT-based contouring has greater interobserver variability than its CT-based and MRI-based counterparts. To improve segmentation robustness, Chai et al. and van de Schoot et al. developed patient-specific bladder shape models for bladder segmentation on CBCT images.32,33 To circumvent the poor soft tissue contrast of CBCT and increase segmentation accuracy, Mason et al. used ultrasound images to aid uterine segmentation on CBCT images.34 Lei et al. proposed a method to segment the prostate, bladder, and rectum on CBCT images; instead of CBCTs, CBCT-generated synthetic MRIs (sMRIs) were used as the input.35 They showed that sMRIs could provide superior soft tissue contrast to improve prostate, bladder, and rectum segmentation. However, sMRI is inferior to CBCT for bony structure segmentation. In this study, we propose a new method to segment five pelvic organs, including the prostate, bladder, rectum, left femoral head (LFH), and right femoral head (RFH), on CBCT images for adaptive prostate radiotherapy. Instead of abandoning the original CBCT, we propose to use both the original CBCT and the CBCT-generated sMRI as inputs for pelvic multi-organ segmentation. The purpose is to combine the superior bony structure contrast of CBCTs and the superior soft tissue contrast of sMRIs for improved segmentation performance. In this study, we present a deep learning-based segmentation method that, for the first time, accurately segments five pelvic organs on CBCT simultaneously.

2. MATERIALS AND METHODS

The schematic flow chart of the proposed method is shown in Fig. 1. First, sMRIs were predicted from CBCT images using a pretrained cycle-consistent generative adversarial network (CycleGAN). The CycleGAN incorporates both the CBCT-to-sMRI mapping and the inverse sMRI-to-CBCT mapping to improve the image quality of the sMRI. Second, the original CBCT was processed by a UNet-like network,36 called CBCT-Net in Fig. 1, for multi-organ segmentation. Meanwhile, the sMRI was processed by a separate UNet-like network, called sMRI-Net in Fig. 1. Finally, a deep fusion net was introduced to combine the features learned from corresponding levels of CBCT-Net and sMRI-Net via attention gates. These attention gates were used because they can retrieve the most relevant features of organ boundaries.37 The CBCT-Net, sMRI-Net, and deep fusion net were together termed the cross-modality attention pyramid network (CMAPN). The network was trained by minimizing the sum of three segmentation losses, namely the CBCT-only loss, the sMRI-only loss, and the late-fusion loss. Detailed input and output feature sizes of each layer are shown next to the layer in Fig. 1.
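As a high-level illustration, the three-step pipeline can be expressed as a function that chains the networks together. This is a minimal sketch, not the authors' implementation; the callable arguments are hypothetical stand-ins for the trained models.

```python
from typing import Callable, List
import numpy as np

def segment_pelvic_organs(
    cbct: np.ndarray,
    cyclegan: Callable[[np.ndarray], np.ndarray],
    cbct_net: Callable[[np.ndarray], List[np.ndarray]],
    smri_net: Callable[[np.ndarray], List[np.ndarray]],
    fusion_net: Callable[[List[np.ndarray], List[np.ndarray]], np.ndarray],
) -> np.ndarray:
    """Sketch of Fig. 1: (1) synthesize the sMRI from the CBCT, (2) extract
    modality-specific pyramid features, (3) fuse them into a 5-organ label map."""
    smri = cyclegan(cbct)                               # step 1: CBCT -> sMRI
    return fusion_net(cbct_net(cbct), smri_net(smri))   # steps 2-3
```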

FIG. 1. The schematic flow diagram of the proposed method, which contains a CBCT-Net, a sMRI-Net, and a deep fusion net.

2.A. Data acquisition

We retrospectively collected data from 100 patients with prostate cancer treated with external beam radiotherapy. Each dataset includes a CT simulation, a diagnostic MRI, and at least one set of CBCTs acquired during treatment. A Varian On-Board Imager CBCT system, with an image spacing of 0.908 × 0.908 × 2.0 mm³, was used for CBCT acquisition. A Siemens MRI scanner was used to acquire T2-weighted images, specifically a 3D T2-SPACE sequence with a 1.0 × 1.0 × 2.0 mm³ voxel size (TR/TE: 1000/123 ms, flip angle: 95°). Institutional review board approval was obtained; no informed consent was required for this HIPAA-compliant retrospective analysis. Ground truth contours were needed for segmentation network training. The five pelvic organs were first manually contoured on the MRIs by physicians. The contours were then propagated to the CBCTs using image registration software, Velocity AI 3.2.1 (Varian Medical Systems, Palo Alto, CA, USA). The propagated contours were then modified and approved by the physicians to obtain the ground truth for training and testing.

2.B. Synthetic MRI generation

A 3D CycleGAN was used to perform the CBCT-to-sMRI mapping. To prepare the paired CBCT-MRI training images, the MR and CBCT images were deformably registered using Velocity AI 3.2.1 (Varian Medical Systems, Palo Alto, CA, USA). Due to the fundamentally different imaging principles of CBCT and MRI, it is difficult to obtain a one-to-one intensity mapping between the two. The CycleGAN utilized both the targeted transformation (CBCT-to-MRI) and the inverse transformation (MRI-to-CBCT) to impose an additional intensity mapping constraint that improves the image quality of the generated sMRI. The CycleGAN was trained using image patches of size 64 × 64 × 64 on an NVIDIA Tesla V100 GPU with 32 GB of memory. Each 3D patch was extracted from the paired CBCT and MRIs by sliding the window with overlap between neighboring patches.44 This overlap ensures that a continuous whole-image output can be obtained and increases the amount of training data for the network. With data augmentation that generated 72 times more training datasets, a total of 6 million 3D image patches were extracted. Once the network was trained, sMRI generation could be performed in 2–5 min, depending on image size. The CycleGAN has a total of 141,763,714 trainable parameters. Detailed information on the network implementation was previously described by Lei et al.16
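As a concrete illustration of the overlapping sliding-window extraction described above, the following is a minimal NumPy sketch; the stride (i.e., the amount of overlap between neighboring patches) is an assumed value, as it is not stated in the text.

```python
import numpy as np

def extract_patches_3d(volume: np.ndarray, patch: int = 64, stride: int = 32) -> np.ndarray:
    """Slide a patch^3 window over a 3D volume so that neighboring patches
    overlap; stride=32 (50% overlap) is an assumption for illustration."""
    nz, ny, nx = volume.shape
    patches = [
        volume[z:z + patch, y:y + patch, x:x + patch]
        for z in range(0, nz - patch + 1, stride)
        for y in range(0, ny - patch + 1, stride)
        for x in range(0, nx - patch + 1, stride)
    ]
    return np.stack(patches)  # (n_patches, 64, 64, 64)
```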

The same 100 patients' datasets, including CBCT and MRI, were used for 3D CycleGAN training. Inaccurate registration could degrade the quality of the CBCT-to-MRI image pairs, which could in turn result in inaccurate sMRI generation; the inaccurate sMRI could then degrade the segmentation accuracy of our proposed method. To mitigate this problem, we carefully reviewed each training CBCT-to-MRI image pair to ensure image alignment for CycleGAN training. The training dataset size for the CycleGAN was reduced from 100 to 50 after inaccurate image pairs were excluded. We generated 50 sMRIs using the five CycleGAN models trained in fivefold cross-validation, with 10 datasets in each group. To generate sMRIs for the excluded 50 patients' CBCTs, we trained another CycleGAN model using the 50 well-aligned patients' datasets. In this way, we avoided training and testing on the same patient data.

2.C. Cross-modality attention pyramid network

The CMAPN consists of a CBCT-Net, a sMRI-Net, and a deep fusion net. The CBCT-Net and sMRI-Net share similar UNet-like network architectures, each with an encoding path and a decoding path. Both the encoding path and the decoding path have five pyramid levels. As shown in Fig. 1, the image feature sizes were reduced through the encoding path, which consisted of five convolutional layers. The decoding path, which consisted of five deconvolutional layers, was used to restore the image sizes for end-to-end network training and image segmentation. The CBCT-Net and sMRI-Net could independently learn relevant features from the CBCT and sMRI, respectively, for multi-organ segmentation. The features from corresponding pyramid levels were then concatenated using additive attention gates in the final deep fusion net for further processing, as sketched below. Except for the different feature sizes and the attention gates involved, the deep fusion net was the same as the decoding path of the CBCT-Net/sMRI-Net. The deep fusion process was designed to combine the advantages of CBCT-specific features in bony structure segmentation and sMRI-specific features in soft tissue segmentation.
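A minimal TensorFlow/Keras sketch of one fusion level is given below. The additive-attention form is a common formulation consistent with the attention-gate literature the authors cite; the filter counts, gating signal, and resampling details of the actual CMAPN are not specified here and should be read as assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_gate(x, g, inter_ch: int):
    """Additive attention gate (sketch): re-weight skip features x using
    coefficients computed from x and a gating signal g, assumed to be
    already resampled to the spatial size of x."""
    f = layers.Activation("relu")(
        layers.add([layers.Conv3D(inter_ch, 1)(x), layers.Conv3D(inter_ch, 1)(g)])
    )
    alpha = layers.Conv3D(1, 1, activation="sigmoid")(f)  # voxel-wise weights in [0, 1]
    return x * alpha  # broadcast the single-channel weights over all channels

def fuse_level(f_cbct, f_smri, g, inter_ch: int):
    """One pyramid level of late fusion: gate each modality's features,
    then concatenate them for the deep fusion net's decoder."""
    return layers.concatenate(
        [attention_gate(f_cbct, g, inter_ch), attention_gate(f_smri, g, inter_ch)]
    )
```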

In this study, we chose to combine the CBCT-specific and sMRI-specific features in the deep fusion net prior to the final prediction. The process of combining the separately learned features of two input images at deep stages of a network is often called "late fusion."38 To justify the late-fusion network design, we trained a separate network that concatenated the CBCT and sMRI into a two-channel input image prior to network training. The process of combining multiple input images at the beginning as a multi-channel input is often called "early fusion."38 To justify the usage of sMRI, we also trained and tested another CBCT-Net using only the CBCT as input. As shown in Fig. 1, the CBCT-Net is a subnetwork of the proposed method. For ease of description, we use "CBCT-only" to denote the competing method that takes only the CBCT as input, "CBCT + sMRI-(Early)" to denote the competing method that takes the concatenated CBCT and sMRI as input, and "CBCT + sMRI-(Late)" to denote the proposed method. The total numbers of trainable parameters for CBCT-only, CBCT + sMRI-(Early), and CBCT + sMRI-(Late) are 35,056,341, 35,057,205, and 45,258,767, respectively.
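The distinction between the two strategies reduces to where the modalities are combined. A schematic sketch of the early-fusion input is shown below (the late-fusion counterpart is the fuse_level sketch above):

```python
import tensorflow as tf

def early_fusion_input(cbct: tf.Tensor, smri: tf.Tensor) -> tf.Tensor:
    """Early fusion: stack the two modalities as channels of one input
    volume, so a single shared encoder must learn from both at once.
    Assumes both tensors are shaped (D, H, W, 1)."""
    return tf.concat([cbct, smri], axis=-1)  # (D, H, W, 2) two-channel input
```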

2.D. Training and testing

To preserve global anatomical information, the proposed network was trained on full-sized 3D images with a volume size of 512 × 512 × 256. We used fivefold cross-validation for network training and testing. The 100 patients' datasets were divided into five groups, with 20 datasets in each group. One group was used as the testing dataset while the remaining 80 datasets were used for training. This process was repeated five times so that all 100 datasets were tested once. Data augmentation was used to enlarge the variety of the training datasets. The augmentation included rotation (0°, 30°, 60°, 90°), flipping (no flip, flip), scaling (1, 1.25, 1.5), and affine warping (original + 2 affine warps), resulting in 72 times more augmented datasets as additional training data. The CBCT-only loss, sMRI-only loss, and late-fusion loss were summed as the final Dice similarity coefficient (DSC) loss function. The code was implemented using TensorFlow in Python. A batch size of 20 was used to fit the 32 GB memory of the NVIDIA Tesla V100 GPU. The Adam optimizer with a learning rate of 2e-4 was used during training. Typically, a fixed learning rate for a neural network with normalized inputs ranges from 1e-6 to 1.39 The learning rate used in this study was determined by trial and error: a learning rate that is too small makes training converge very slowly, whereas one that is too large may overshoot the loss function and never reach convergence. Learning rate annealing, which gradually reduces the learning rate, could also be used. The network was trained for 200 epochs. Training required approximately 2 h. Once trained, the network could predict the five pelvic organ contours within 30 s.
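The combined objective can be written compactly. Below is a minimal TensorFlow sketch of a soft (differentiable) Dice loss applied to the three prediction heads; the smoothing constant and the equal weighting of the three terms are assumptions consistent with, but not quoted from, the text.

```python
import tensorflow as tf

def soft_dice_loss(y_true, y_pred, eps: float = 1e-6):
    """Differentiable Dice loss; tensors shaped (batch, D, H, W, n_organs)
    with one-hot ground truth and softmax network outputs."""
    axes = (1, 2, 3)  # spatial axes
    inter = tf.reduce_sum(y_true * y_pred, axis=axes)
    denom = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
    return 1.0 - tf.reduce_mean((2.0 * inter + eps) / (denom + eps))

def cmapn_loss(y_true, pred_cbct, pred_smri, pred_fused):
    """Sum of the CBCT-only, sMRI-only, and late-fusion segmentation losses."""
    return (soft_dice_loss(y_true, pred_cbct)
            + soft_dice_loss(y_true, pred_smri)
            + soft_dice_loss(y_true, pred_fused))

optimizer = tf.keras.optimizers.Adam(learning_rate=2e-4)  # as stated in the text
```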

2.E. Evaluations

Multiple metrics, including the DSC, 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), center of mass distance (CMD), and volume difference (VD), were calculated to evaluate the segmentation accuracy. The DSC measures the volume overlap ratio between the ground truth and the prediction and is defined as

$$\mathrm{DSC} = \frac{2\,|X \cap Y|}{|X| + |Y|} \tag{1}$$

where X and Y represent the volumes of manual and predicted masks, respectively.

HD95 and MSD were calculated to measure the surface distance between the ground truth and the prediction. The HD95 quantifies the 95th percentile of the distances between points on the manual contour X and the predicted contour Y. HD95 is calculated as

$$\mathrm{HD95} = \max\big\{ K_{95\%}\!\left[d(X, Y)\right],\; K_{95\%}\!\left[d(Y, X)\right] \big\} \tag{2}$$

where $K_{95\%}[\cdot]$ denotes the 95th percentile of the enclosed set of directed point-to-surface distances.

MSD quantifies the mean distance between the manual and predicted contours and can be calculated as

$$\mathrm{MSD} = \frac{1}{|X| + |Y|} \left( \sum_{x \in X} d(x, Y) + \sum_{y \in Y} d(y, X) \right) \tag{3}$$

where $d(x, Y) = \min_{y \in Y} \lVert x - y \rVert_2$ is the minimum Euclidean distance from a point $x$ on one contour surface to the other surface $Y$. Volumetric discrepancies between the ground truth and predicted masks were measured using the CMD and VD.
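For reference, a minimal NumPy/SciPy sketch of DSC, HD95, and MSD (Eqs. (1)-(3)) on binary masks is given below. The default voxel spacing mirrors the CBCT spacing in Section 2.A; the boundary-extraction details are implementation assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def dsc(x: np.ndarray, y: np.ndarray) -> float:
    """Eq. (1): volume overlap ratio between two binary masks."""
    x, y = x.astype(bool), y.astype(bool)
    return 2.0 * np.logical_and(x, y).sum() / (x.sum() + y.sum())

def surface_points(mask: np.ndarray, spacing) -> np.ndarray:
    """Boundary voxels of a binary mask, in physical (mm) coordinates."""
    mask = mask.astype(bool)
    boundary = mask & ~ndimage.binary_erosion(mask)
    return np.argwhere(boundary) * np.asarray(spacing)

def _directed_distances(x, y, spacing):
    """d(x, Y) for every surface point x, and d(y, X) for every y."""
    px, py = surface_points(x, spacing), surface_points(y, spacing)
    return cKDTree(py).query(px)[0], cKDTree(px).query(py)[0], len(px), len(py)

def msd(x, y, spacing=(0.908, 0.908, 2.0)) -> float:
    """Eq. (3): symmetric mean of point-to-surface distances."""
    d_xy, d_yx, nx, ny = _directed_distances(x, y, spacing)
    return (d_xy.sum() + d_yx.sum()) / (nx + ny)

def hd95(x, y, spacing=(0.908, 0.908, 2.0)) -> float:
    """Eq. (2): max of the two 95th-percentile directed surface distances."""
    d_xy, d_yx, _, _ = _directed_distances(x, y, spacing)
    return max(np.percentile(d_xy, 95), np.percentile(d_yx, 95))
```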

3. RESULTS

The 3D volumes corresponding to the ground truth, the CBCT-only method, the CBCT + sMRI-(Early) method, and the CBCT + sMRI-(Late) method for the five pelvic organs are shown in Fig. 2. For case 2, all three methods accurately segmented the five pelvic organs. The black arrows in case 1 highlight three geometrical discrepancies for the CBCT-only method, demonstrating the superior performance of the CBCT + sMRI-(Late) method. Figure 3 shows two cases in which the ground truth contours were overlaid on the CBCT and the predicted contours were overlaid on both the CBCT and sMRI images. The red arrows in Fig. 3 highlight regions where the CBCT and sMRI provide complementary information for organ segmentation. For instance, the bladder, prostate, and rectum have low contrast with their surrounding tissues on the CBCT image, which was markedly improved on the sMRI. In contrast, while compact bone is difficult to identify on the sMRI, it has high image contrast on the CBCT image.

FIG. 2. Three-dimensional volumes of the segmented pelvic organs for two cases. From left to right are the ground truth, the CBCT-only segmentation result, the CBCT + sMRI-(Early) segmentation result, and the CBCT + sMRI-(Late) segmentation result. Black arrows highlight the discrepancies between the ground truth and the CBCT-only method prediction.

FIG. 3. Contours of the segmented organs for two cases. The ground truth contours were overlaid on the CBCT. The predicted contours of the proposed method were overlaid on the CBCT and sMRI. Red arrows highlight regions where the CBCT and sMRI provide complementary information for bony structure and soft tissue segmentation.

Segmentation discrepancies between the ground truth and the CBCT-only and CBCT + sMRI-(Late) methods are shown in Fig. 4. Overall, the shapes of the segmented pelvic organs were preserved, as most discrepancies were near the organ surfaces. The accurate segmentation results are partially accounted for by the full-sized 3D volume training of the network, which helps to preserve the structural information of the segmented organs. In Fig. 4 (A1 and B1), part of the bladder was missed by the CBCT-only method but successfully segmented by the CBCT + sMRI-(Late) method. As highlighted by the arrows in Fig. 4, a similar phenomenon can also be observed for compact bone, bone marrow, and the prostate surface on some slices.

FIG. 4. Panels A1–A4 show the discrepancies between the ground truth and the CBCT-only method prediction. Panels B1–B4 show the discrepancies between the ground truth and the CBCT + sMRI-(Late) method prediction, for corresponding slices.

Table I shows that our method generated the best results among the three methods. The CBCT-only method had average DSCs of 0.94, 0.87, 0.91, 0.93, and 0.93 for the bladder, prostate, rectum, LFH, and RFH, respectively. These DSC values increased to 0.96, 0.91, 0.93, 0.95, and 0.95 for the CBCT + sMRI-(Late) method. We found that the CBCT + sMRI-(Early) method outperformed the CBCT-only method but failed to outperform the proposed method in terms of the evaluation metrics used in this study. The fact that the early-fusion method outperformed the CBCT-only method confirms that the sMRI provides additional useful information for segmentation. A Wilcoxon signed-rank test was performed to assess the statistical significance of the improvement over the CBCT-only method for all five metrics. Except for the HD95, which is sensitive to noise in the segmentation results, almost all other metrics improved with statistical significance for the CBCT + sMRI-(Late) method. In contrast, the improvement of the CBCT + sMRI-(Early) method over the CBCT-only method was not statistically significant for the CMD on the LFH and RFH or for the VD on the LFH (Table I), which demonstrates the superiority of late fusion over early fusion.

TABLE I.

Numerical comparisons among CBCT-only, CBCT + sMRI-(Early), and CBCT + sMRI-(Late). Best mean values are shown in bold. P-values greater than 0.05 are underlined.

Organ Method DSC HD95 (mm) MSD (mm) CMD (mm) VD (cc)
Bladder ①: CBCT-only 0.94±0.03 3.24±2.13 0.97±0.57 1.23±1.10 9.08±8.32
②: CBCT+sMRI-(Early) 0.95±0.03 3.23±4.86 0.79±0.73 0.71±0.75 3.75±4.34
③: CBCT+sMRI-(Late) 0.96±0.03 2.46±2.17 0.65±0.67 0.65±1.19 3.28±6.18
p-value (① vs. ②) <.01 <.01 <.01 <.01 <.01
p-value (① vs. ③) <.01 <.01 <.01 <.01 <.01
p-value (② vs. ③) <.01 <.01 <.01 <.01 0.01
Prostate ①: CBCT-only 0.87±0.08 3.89±2.84 1.37±0.91 2.05±2.07 2.82±2.37
②: CBCT+sMRI-(Early) 0.89±0.08 3.22±2.60 1.08±0.88 1.45±1.77 1.73±2.30
③: CBCT+sMRI-(Late) 0.91±0.08 2.97±2.62 0.93±0.96 1.25±1.82 2.51±3.31
p-value (① vs. ②) <.01 <.01 <.01 <.01 0.02
p-value (① vs. ③) <.01 <.01 <.01 <.01 0.03
p-value (② vs. ③) <.01 <.01 <.01 <.01 <.01
Rectum ①: CBCT-only 0.91±0.04 2.87±1.47 0.94±0.46 1.53±1.39 3.12±2.33
②: CBCT+sMRI-(Early) 0.92±0.04 2.59±1.74 0.85±0.77 1.12±1.67 1.78±2.39
③: CBCT+sMRI-(Late) 0.93±0.04 2.69±2.79 0.72±0.61 1.07±1.58 2.80±3.60
p-value (① vs. ②) <.01 <.01 <.01 <.01 <.01
p-value (① vs. ③) <.01 <.01 <.01 <.01 0.11
p-value (② vs. ③) <.01 0.22 <.01 0.92 <.01
LFH ①: CBCT-only 0.93±0.05 6.81±9.77 1.30±1.37 2.66±3.89 10.50±12.59
②: CBCT+sMRI-(Early) 0.94±0.05 6.82±10.38 1.22±1.46 2.65±4.30 9.94±15.83
③: CBCT+sMRI-(Late) 0.95±0.05 6.33±10.09 1.05±1.40 2.41±4.18 8.39±14.98
p-value (① vs. ②) <.01 <.01 <.01 0.07 0.20
p-value (① vs. ③) <.01 <.01 <.01 <.01 <.01
p-value (② vs. ③) <.01 <.01 <.01 <.01 <.01
RFH ①: CBCT-only 0.93±0.05 6.79±10.34 1.32±1.46 2.64±4.20 10.62±13.26
②: CBCT+sMRI-(Early) 0.94±0.05 6.97±10.87 1.23±1.50 2.67±4.35 9.84±14.65
③: CBCT+sMRI-(Late) 0.95±0.05 6.47±10.77 1.08±1.48 2.40±4.36 8.86±15.40
p-value (① vs. ②) <.01 <.01 <.01 0.12 <.01
p-value (① vs. ③) <.01 <.01 <.01 <.01 <.01
p-value (② vs. ③) <.01 <.01 <.01 <.01 0.02

CBCT, cone-beam CT; sMRI, synthetic MRI; LFH, left femoral head; RFH, right femoral head.

4. DISCUSSION

Target and OAR delineation is a foundational task for cancer radiotherapy treatment planning. The contouring accuracy of the target and OARs could affect the plan quality of highly modulated and conformal planning methods, such as IMRT and stereotactic body radiotherapy (SBRT). Rapid and accurate contouring on CBCT images could support patient setup, plan adaptation, dose monitoring, and related tasks. Online manual contouring is labor intensive and observer-dependent, which limits treatment throughput and increases treatment uncertainty. Therefore, it is highly desirable to develop an automatic method for rapid and accurate segmentation of multiple organs on CBCT images, which would expedite the contouring process for fast adaptive radiotherapy. For the first time, we have proposed an automatic CBCT-based method for pelvic multi-organ segmentation that includes the prostate, bladder, rectum, and femoral heads.

We achieved excellent segmentation accuracy, with average DSCs of 0.96 for the bladder, 0.91 for the prostate, and 0.93 for the rectum. For comparison, Thor et al.26 achieved average DSCs of 0.73 for the bladder, 0.80 for the prostate, and 0.77 for the rectum, which are inferior to our results. Even without the sMRI, our CBCT-only method still outperformed Thor et al.'s method26 in terms of DSC. Brion et al. reported using a CNN to segment the prostate on 48 CBCT datasets and achieved a DSC of 0.84, which outperformed RayStation's deformable image registration-based propagation of contours from the planning CT to the daily CBCT, with a DSC of 0.74.40 These methods mostly segmented only one organ, and at most three organs, whereas five organs were simultaneously segmented by our method. Beyond segmentation accuracy, our method could provide more consistent results than manually produced contours. Choi et al.41 reported a mean prostate contouring CMD of 1.73 mm among three observers on ten patients, which is greater than the 1.25 mm of our method.

Since the sMRI is generated from the CBCT, the additional information it provides may not be as complementary as that of an original MRI. Nevertheless, the sMRI can introduce additional structural information that is helpful to the segmentation task.17 Table I shows that the segmentation performance was improved by including the sMRI in the network inputs. It is important to note that no registration was required in the multi-organ segmentation step. If paired CBCT-MRI images or a pretrained CycleGAN are unavailable, the CBCT-only method could be used as an alternative; despite being inferior to our proposed method, it can also perform accurate segmentation, as shown in Table I. Another limitation of our method is that the ground truth contours are subject to interobserver variability. To alleviate this problem, we could include more patients in our model so that cases with large interobserver variability can be excluded from the training datasets. CBCT-based dose calculation methods have been proposed.16,42–45 Both the CycleGAN and segmentation models were trained using a relatively small number of datasets, which may limit the generalizability of the models when dealing with data inconsistency and anatomical variations. In the future, we plan to include more cases with wider anatomical changes to increase the networks' generalizability. In addition, we plan to study the dosimetric effect of the CBCT segmentation accuracy to investigate its impact on plan quality for prostate radiotherapy.

5. CONCLUSIONS

We developed a novel segmentation method for pelvic multi-organ segmentation on CBCT images. The proposed method robustly, rapidly, and accurately contours the prostate, bladder, rectum, LFH, and RFH in a single network prediction. The method is a promising tool to support treatment planning for prostate external beam adaptive radiotherapy.

ACKNOWLEDGMENTS

This research was supported in part by the National Cancer Institute of the National Institutes of Health under Award Numbers R01CA215718 and R01EB028324; the Department of Defense (DoD) Prostate Cancer Research Program (PCRP) Awards W81XWH-17-1-0438 and W81XWH-17-1-0439; a Varian AI Research Grant; and the Dunwoody Golf Club Prostate Cancer Research Award, a philanthropic award provided by the Winship Cancer Institute of Emory University.

Footnotes

CONFLICT OF INTEREST

The authors declare no conflict of interest.

REFERENCES

  • 1. Posiewnik M, Piotrowski T. A review of cone-beam CT applications for adaptive radiotherapy of prostate cancer. Phys Med. 2019;59:13–21.
  • 2. Wu QJ, Thongphiew D, Wang Z, et al. On-line re-optimization of prostate IMRT plans for adaptive radiation therapy. Phys Med Biol. 2008;53:673–691.
  • 3. Thongphiew D, Wu QJ, Lee WR, et al. Comparison of online IGRT techniques for prostate IMRT treatment: adaptive vs repositioning correction. Med Phys. 2009;36:1651–1662.
  • 4. Men C, Romeijn HE, Jia X, Jiang SB. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT). Med Phys. 2010;37:5787–5791.
  • 5. Fu W, Yang Y, Yue NJ, Heron DE, Huq MS. A cone beam CT-guided online plan modification technique to correct interfractional anatomic changes for prostate cancer IMRT treatment. Phys Med Biol. 2009;54:1691–1703.
  • 6. Barney BM, Lee RJ, Handrahan D, Welsh KT, Cook JT, Sause WT. Image-guided radiotherapy (IGRT) for prostate cancer comparing kV imaging of fiducial markers with cone beam computed tomography (CBCT). Int J Radiat Oncol Biol Phys. 2011;80:301–305.
  • 7. Li T, Zhu X, Thongphiew D, et al. On-line adaptive radiation therapy: feasibility and clinical study. J Oncol. 2010;2010:407236.
  • 8. Belshaw L, Agnew CE, Irvine DM, Rooney KP, McGarry CK. Adaptive radiotherapy for head and neck cancer reduces the requirement for rescans during treatment due to spinal cord dose. Radiat Oncol. 2019;14:189.
  • 9. Castelli J, Simon A, Lafond C, et al. Adaptive radiotherapy for head and neck cancer. Acta Oncol. 2018;57:1284–1292.
  • 10. Ghilezan M, Yan D, Martinez A. Adaptive radiation therapy for prostate cancer. Semin Radiat Oncol. 2010;20:130–137.
  • 11. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. Deep learning in medical image registration: a review. Phys Med Biol. 2020. [Epub ahead of print] doi: 10.1088/1361-6560/ab843e
  • 12. Nijkamp J, Pos FJ, Nuver TT, et al. Adaptive radiotherapy for prostate cancer using kilovoltage cone-beam computed tomography: first clinical results. Int J Radiat Oncol Biol Phys. 2008;70:75–82.
  • 13. Fotina I, Hopfgartner J, Stock M, Steininger T, Lutgendorf-Caucig C, Georg D. Feasibility of CBCT-based dose calculation: comparative analysis of HU adjustment techniques. Radiother Oncol. 2012;104:249–256.
  • 14. Qin A, Gersten D, Liang J, et al. A clinical 3D/4D CBCT-based treatment dose monitoring system. J Appl Clin Med Phys. 2018;19:166–176.
  • 15. Zhang X. Validation of planning CT to CBCT deformable image registration-based dose calculation for prostate cancer adaptive radiotherapy. Med Phys. 2019;46:E384.
  • 16. Lei Y, Harms J, Wang T, et al. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys. 2019;46:3565–3581.
  • 17. Lei Y, Wang T, Harms J, et al. CBCT-based synthetic MRI generation for CBCT-guided adaptive radiotherapy. Paper presented at: Artificial Intelligence in Radiation Therapy; 2019; Cham.
  • 18. Yang X, Lei Y, Wang T, et al. CBCT-guided prostate adaptive radiotherapy with CBCT-based synthetic MRI and CT. Int J Radiat Oncol Biol Phys. 2019;105:S250.
  • 19. Yao L, Zhu L, Wang J, et al. Positioning accuracy during VMAT of gynecologic malignancies and the resulting dosimetric impact by a 6-degree-of-freedom couch in combination with daily kilovoltage cone beam computed tomography. Radiat Oncol. 2015;10:104.
  • 20. Hering A, Kuckertz S, Heldmann S, Heinrich MP. Enhancing label-driven deep deformable image registration with local distance metrics for state-of-the-art cardiac motion tracking. 2019.
  • 21. Sonke JJ, Aznar M, Rasch C. Adaptive radiotherapy for anatomical changes. Semin Radiat Oncol. 2019;29:245–257.
  • 22. Chao M, Xie Y, Xing L. Auto-propagation of contours for adaptive prostate radiation therapy. Phys Med Biol. 2008;53:4533–4542.
  • 23. Hou J, Guerrero M, Chen W, D'Souza WD. Deformable planning CT to cone-beam CT image registration in head-and-neck cancer. Med Phys. 2011;38:2088–2094.
  • 24. Xie Y, Chao M, Lee P, Xing L. Feature-based rectal contour propagation from planning CT to cone beam CT. Med Phys. 2008;35:4450–4459.
  • 25. Fu Y, Liu S, Li H, Yang D. Automatic and hierarchical segmentation of the human skeleton in CT images. Phys Med Biol. 2017;62:2812–2833.
  • 26. Thor M, Petersen JB, Bentzen L, Hoyer M, Muren LP. Deformable image registration for contour propagation from CT to cone-beam CT scans in radiotherapy of prostate cancer. Acta Oncol. 2011;50:918–925.
  • 27. Boydev C, Pasquier D, Derraz F, Peyrodie L, Taleb-Ahmed A, Thiran J. Automatic prostate segmentation in cone-beam computed tomography images using rigid registration. Paper presented at: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 3–7 July 2013.
  • 28. Li X, Zhang Y, Shi Y, et al. Comprehensive evaluation of ten deformable image registration algorithms for contour propagation between CT and cone-beam CT images in adaptive head & neck radiotherapy. PLoS ONE. 2017;12:e0175906.
  • 29. Lei Y, Fu Y, Wang T, et al. Deep learning in multi-organ segmentation. arXiv. 2020;abs/2001.10619.
  • 30. Sharma N, Aggarwal LM. Automated medical image segmentation techniques. J Med Phys. 2010;35:3–14.
  • 31. Lutgendorf-Caucig C, Fotina I, Stock M, Potter R, Goldner G, Georg D. Feasibility of CBCT-based target and normal structure delineation in prostate cancer radiotherapy: multi-observer and image multi-modality study. Radiother Oncol. 2011;98:154–161.
  • 32. Chai X, van Herk M, Betgen A, Hulshof M, Bel A. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model. Phys Med Biol. 2012;57:3945–3962.
  • 33. van de Schoot AJ, Schooneveldt G, Wognum S, et al. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model. Med Phys. 2014;41:031707.
  • 34. Mason SA, White IM, O'Shea T, et al. Combined ultrasound and cone beam CT improves target segmentation for image guided radiation therapy in uterine cervix cancer. Int J Radiat Oncol Biol Phys. 2019;104:685–693.
  • 35. Lei Y, Wang T, Tian S, et al. Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI. Phys Med Biol. 2019;65:035013.
  • 36. Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans Med Imaging. 2019. doi: 10.1109/TMI.2019.2959609
  • 37. Lei Y, Dong X, Tian Z, et al. CT prostate segmentation based on synthetic MRI-aided deep attention fully convolution network. Med Phys. 2020;47:530–540.
  • 38. Zhou T, Ruan S, Canu S. A review: deep learning for medical image segmentation using multi-modality fusion. Array. 2019;3–4:100004.
  • 39. Bengio Y. Practical recommendations for gradient-based training of deep architectures. In: Montavon G, Orr GB, Müller K-R, eds. Neural Networks: Tricks of the Trade. 2nd ed. Berlin, Heidelberg: Springer; 2012:437–478.
  • 40. Brion E, Léger J, Javaid U, Lee J, De Vleeschouwer C, Macq B. Using planning CTs to enhance CNN-based bladder segmentation on cone beam CT. Proc SPIE. 2019;10951.
  • 41. Choi HJ, Kim YS, Lee SH, et al. Inter- and intra-observer variability in contouring of the prostate gland on planning computed tomography and cone beam computed tomography. Acta Oncol. 2011;50:539–546.
  • 42. Wang T, Lei Y, Manohar N, et al. Dosimetric study on learning-based cone-beam CT correction in adaptive radiation therapy. Med Dosim. 2019;44:e71–e79.
  • 43. Marchant TE, Joshi KD, Moore CJ. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods. Phys Med Biol. 2018;63:065003.
  • 44. Yang X, Liu T, Dong X, et al. A patch-based CBCT scatter artifact correction using prior CT. Proc SPIE. 2017;10132.
  • 45. Almatani T, Hugtenburg RP, Lewis RD, Barley SE, Edwards MA. Automated algorithm for CBCT-based dose calculations of prostate radiotherapy with bilateral hip prostheses. Br J Radiol. 2016;89:20160443.
