NIHPA Author Manuscript; available in PMC: 2021 Apr 29.
Published in final edited form as: Phys Med Biol. 2020 Sep 30;65(19):195007. doi: 10.1088/1361-6560/aba165

Verification of the machine delivery parameters of a treatment plan via deep learning

Jiawei Fan 1,2, Lei Xing 1, Ming Ma 1, Weigang Hu 2, Yong Yang 1
PMCID: PMC8084707  NIHMSID: NIHMS1692529  PMID: 32604082

Abstract

We developed a generative adversarial network (GAN)-based deep learning approach to estimate the multileaf collimator (MLC) aperture and corresponding monitor units (MUs) from a given 3D dose distribution. The proposed design of the adversarial network, which integrates a residual block into pix2pix framework, jointly trains a ‘U-Net’-like architecture as the generator and a convolutional ‘PatchGAN’ classifier as the discriminator. 199 patients, including nasopharyngeal, lung and rectum, treated with intensity-modulated radiotherapy and volumetric-modulated arc therapy techniques were utilized to train the network. An additional 47 patients were used to test the prediction accuracy of the proposed deep learning model. The Dice similarity coefficient (DSC) was calculated to evaluate the similarity between the MLC aperture shapes obtained from the treatment planning system (TPS) and the deep learning prediction. The average and standard deviation of the bias between the TPS-generated MUs and predicted MUs was calculated to evaluate the MU prediction accuracy. In addition, the differences between TPS and deep learning-predicted MLC leaf positions were compared. The average and standard deviation of DSC was 0.94 ± 0.043 for 47 testing patients. The average deviation of predicted MUs from the planned MUs normalized to each beam or arc was within 2% for all the testing patients. The average deviation of the predicted MLC leaf positions was around one pixel for all the testing patients. Our results demonstrated the feasibility and reliability of the proposed approach. The proposed technique has strong potential to improve the efficiency and accuracy of the patient plan quality assurance process.

Keywords: patient-specific plan QA, MU/MLC shapes calculation, deep learning, plan second-check

1. Introduction

Modern radiation therapy (RT) techniques, such as intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc therapy (VMAT) or, more generally, station parameter optimized radiation therapy (SPORT) (Li and Xing 2013, Dong et al 2016), are widely used for cancer treatment. To ensure the safety and accuracy of treatment, patient-specific plan validation must be carried out prior to actual treatment. Traditionally, this is done by checking the dose at one or more points in a patient-mimicking phantom, or by using an independent dose calculation algorithm, which can range from a simple Clarkson point-dose calculation to an independent full 3D dose calculation with the patient CT and treatment parameters exported from the treatment planning system (TPS) (Xing et al 2000, Yang et al 2003). For a complicated IMRT/VMAT/SPORT treatment, a dosimetric measurement using a surrogate phantom or an electronic portal imaging device is generally used for plan validation. While practically useful, such a measurement is performed in a surrogate phantom rather than in the actual patient. It therefore does not fully reflect the clinical treatment, and patient-specific dose errors may go undetected (Nelms et al 2013, Stojadinovic et al 2015, Templeton et al 2015).

An IMRT/VMAT beam consists of a large number of irregularly shaped field segments designed on a patient-specific basis. In practice, plan verification can proceed in different ways for a given treatment plan. One is to verify the monitor units (MUs) and the fluence maps or multileaf collimator (MLC) apertures of all nodes or control points. The challenge of this approach is that, unlike 3D conformal radiation therapy (3DCRT), a manual MU check procedure cannot be followed to validate the plan. An alternative approach is to verify the dose distribution directly by measurement or calculation. The gamma index, which combines the point-by-point percent dose difference and the distance-to-agreement, is commonly used to assess agreement with the planned dose distribution (Bedford et al 2009, Qian et al 2011, Wang et al 2012). Recently, machine learning-based models that take a set of treatment plan parameters, such as the MLC apertures, gantry/collimator angles and couch positions, as input to predict the dosimetric gamma passing rate have been proposed (Gilmer et al 2017, Seiji et al 2018). This approach eliminates the potential inaccuracy associated with the use of an unrealistic surrogate phantom and/or measuring instruments. A potential drawback is that a quality assurance (QA) decision based purely on the gamma passing rate may not be interpretable or robust enough for practical applications, because of the inherent complexity of IMRT/VMAT dose distributions and the fact that the passing rate for given gamma index criteria depends heavily on the selected region of interest (Zhen et al 2011).
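As background for the discussion above, the gamma index can be illustrated with a minimal sketch. The following is a simplified 1D global-gamma calculation with an exhaustive search over evaluated points; the function names and the 3%/3 mm defaults are illustrative, and clinical tools use optimized 2D/3D search schemes:

```python
import numpy as np

def gamma_index_1d(ref, evl, coords, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma: for each reference point, search all
    evaluated points for the minimum combined dose-difference /
    distance-to-agreement metric. dose_tol is fractional (3%),
    dist_tol is in mm."""
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    coords = np.asarray(coords, dtype=float)
    norm = ref.max()                               # global normalization dose
    gammas = np.empty_like(ref)
    for i, (x_r, d_r) in enumerate(zip(coords, ref)):
        dd = (evl - d_r) / (dose_tol * norm)       # dose-difference term
        dta = (coords - x_r) / dist_tol            # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dta ** 2).min()
    return gammas

def passing_rate(gammas):
    """Fraction of reference points with gamma <= 1."""
    return float((gammas <= 1.0).mean())
```

For identical reference and evaluated distributions every gamma value is zero and the passing rate is 100%, which is the sanity check usually applied to such an implementation.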

Physically, the MU/MLC shapes and the dose distribution represent related but different facets of an RT treatment plan. To validate a treatment plan, all that is needed is to ensure the correctness of the MU/MLC shapes; indeed, 3DCRT plans have traditionally been checked in this way. In current practice, verification of a treatment plan is commonly done in the dose domain, in which a phantom measurement or forward dose calculation is performed to examine the dosimetric accuracy and MU settings of a given plan. However, it is important to emphasize that validation in the dose domain is not a direct validation of the treatment plan, as it involves an extra layer of operation in using the MU/MLC settings to derive the dose distribution. While it is desirable to verify the MU/MLC shapes directly, a computational framework for obtaining them from a treatment plan with known dose distribution has yet to be developed.

This work presents a deep learning strategy to independently calculate the MU/MLC shapes from a given IMRT or VMAT dose distribution. The dose at a point depends on the MUs and MLC shapes of all the station points (or control points); this relationship forms the basis of the proposed deep learning-based verification technique, and we train a deep learning model to map it. Since the MU/MLC shapes are the fundamental machine delivery parameters, this approach allows us to verify the plan at the machine parameter level in a more intuitive and accurate way.

2. Methods and materials

The workflow of this study is presented in figure 1. A total of 246 previously treated patient data sets were retrieved for this study. A deep learning neural network, with the 3D dose distributions and CT images as input, was trained to predict the MU/MLC shapes. Similarity index and statistical judgement were used to quantitatively evaluate the performance of our predictive model.

Figure 1.

Workflow of the proposed deep learning-based MU/MLC shapes prediction framework.

2.1. Patient data collection and preprocessing

The treatment planning of these patients was done using the RayStation TPS (RaySearch Laboratories, Stockholm, Sweden). The plans were randomly separated into two groups: 199 plans for model training and 47 plans for performance testing. The training group includes 175 nasopharyngeal, lung and rectum IMRT cases and 24 rectum VMAT cases. Typically, an IMRT treatment plan consists of multiple intensity-modulated beams from different directions, with each beam consisting of multiple segments (Brahme 1988). In VMAT, a plan is characterized by a series of control (or state) points, each at a different gantry angle (Yu and Tang 2011). The intensity and aperture of these segments are modulated to achieve an optimal dose distribution (Li and Xing 2013).

The resliced volumetric dose of each segment and the CT image data sets were labeled by the corresponding MU/MLC maps. A Varian linac with a 120-leaf Millennium MLC (the central 40 pairs have a leaf width of 5 mm and the outer 20 leaves a width of 10 mm) was used in this study. Within the MLC aperture, the segment MU value was assigned as the pixel value of the image. Figure 2 shows an example of the segmental resliced dose image passing through the isocenter and the corresponding labeled MU/MLC map. All preprocessed images were resampled to 256 × 256 with a resolution of 2.5 mm for computational efficiency during network training.
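The labeling step described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: given per-leaf tip positions, it rasterizes a Millennium-120 aperture onto the 256 × 256, 2.5 mm grid (isocenter at the grid center) and fills the open region with the segment MU:

```python
import numpy as np

# Millennium 120: central 40 pairs are 5 mm wide, the outer 10 + 10 pairs
# 10 mm, spanning 400 mm; leaf travel is along X.
LEAF_WIDTHS = [10.0] * 10 + [5.0] * 40 + [10.0] * 10   # mm, 60 pairs
GRID = 256
RES = 2.5                                              # mm per pixel

def mu_mlc_map(left_edges, right_edges, mu):
    """Rasterize one segment's aperture and assign the segment MU as the
    in-aperture pixel value (hypothetical encoding matching the text)."""
    img = np.zeros((GRID, GRID), dtype=np.float32)
    xs = (np.arange(GRID) - GRID / 2 + 0.5) * RES      # pixel centers, X (mm)
    ys = (np.arange(GRID) - GRID / 2 + 0.5) * RES      # pixel centers, Y (mm)
    y = -sum(LEAF_WIDTHS) / 2.0                        # top edge of first pair
    for width, xl, xr in zip(LEAF_WIDTHS, left_edges, right_edges):
        rows = (ys >= y) & (ys < y + width)            # rows under this pair
        cols = (xs >= xl) & (xs < xr)                  # gap between leaf tips
        img[np.ix_(rows, cols)] = mu
        y += width
    return img
```

For a 10 × 10 cm field (leaves open from −50 to +50 mm over the central pairs, closed elsewhere) the open region covers a 40 × 40-pixel square at this resolution.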

Figure 2.

Dose distribution overlaid on a patient’s CT image plane passing through the isocenter and the corresponding labeled MU/MLC map. Gantry angle is zero and the MU value is 28.24.

2.2. Model architecture

Deep learning, which has dramatically improved the state of the art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics, allows inference models composed of multiple layers to learn representations of data with multiple levels of abstraction (Yann et al 2015). It has been applied to many problems in medical physics and radiation therapy (Xing et al 2020), such as medical imaging (Shen et al 2019), organ segmentation (Ibragimov and Xing 2017), and dose calculation and prediction (Korani et al 2016, Fan et al 2019, Ma et al 2019, Dong and Xing 2020). Our predictive model is constructed based on the architecture of a generative adversarial network (GAN) named pix2pix (Isola et al 2016). A GAN is a framework in which two models are trained sequentially and iteratively in a competing manner: a generative model G captures the features of the data distribution, and a discriminative model D estimates the probability that a sample came from the training data rather than from G. The goal of a GAN is to generate synthetic samples that cannot be differentiated from real samples. The network learns not only the mapping from input image to output image, but also a loss that classifies whether the output image is real or fake. This advantage makes it possible to apply the same generic approach to problems that would otherwise require very different loss formulations (Dong and Xing 2020).

Pix2pix, which applies GANs in a conditional setting, is a general-purpose solution to image-to-image translation problems. Unlike standard GANs, the pix2pix generator G adopts a 'U-Net'-based architecture (Seo et al 2019), and its discriminator D is a convolutional 'PatchGAN' classifier, which only penalizes structure at the scale of image patches (Dong and Xing 2020). Specifically, the generator, which functions similarly to an autoencoder, is made up of a series of convolution layers followed by a series of deconvolution layers. In this work, we implemented several residual blocks in the encoder section of the generator, yielding Res-pix2pix, to make the network much deeper. In the decoder section, skip connections, which encourage the combination of high-frequency information (such as texture) and low-frequency information (such as structure) in representing an image patch, were used to concatenate features from equal-sized convolution and deconvolution layers. The discriminator D resembles the encoder section of the generator but works somewhat differently: instead of classifying the whole image as real or fake, the 'PatchGAN' structure takes an N × N image patch and classifies every pixel in the patch as real or fake. In our implementation, the output is a 16 × 16 image, with each pixel value (0 or 1) representing whether the corresponding section of the unknown image is real or fake. Because every pixel has a label, Res-pix2pix produces a sharp image with rich detail. In addition, the Leaky ReLU (Maas 2013), which mitigates the vanishing gradient problem by introducing a small nonzero gradient for negative inputs, is employed to speed up network training.
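The two building blocks named above, the residual shortcut and the Leaky ReLU, can be illustrated with a minimal NumPy sketch. The actual network uses Keras layers; the naive single-channel 3 × 3 convolution below is for exposition only:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """Leaky ReLU: a small negative slope keeps a nonzero gradient for
    negative inputs, mitigating vanishing gradients."""
    return np.where(x >= 0, x, alpha * x)

def conv2d(x, w):
    """Naive 'same'-padded k x k convolution of a single-channel image
    (illustration only; Keras Conv2D does this efficiently)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    h, wd = x.shape
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def residual_block(x, w1, w2, alpha=0.2):
    """y = x + F(x): the identity shortcut lets the signal (and, during
    training, the gradient) bypass the convolutions, which is what makes
    very deep encoders trainable."""
    h = leaky_relu(conv2d(x, w1), alpha)
    h = conv2d(h, w2)
    return leaky_relu(x + h, alpha)
```

If the residual branch outputs zero (here, a zero second kernel), the block reduces to the identity on non-negative inputs, which is the property that motivates residual learning.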

The maximum number of filters in the convolutional layers of the generator and discriminator was set to 1024 to maintain reasonable GPU usage. The mean squared error (MSE) was used as the discriminator loss; the generator loss was computed as the sum of the MSE and the mean absolute error (MAE). An Adam optimizer was applied to minimize these two losses. The batch size and learning rate were set to 2 and 0.0001, respectively. Data augmentation, randomly shifting the input and output images along the X or Y axis, was implemented to prevent overfitting. The network was implemented with the Keras Python toolbox (Chollet 2015) and trained on an NVIDIA 2080 Ti GPU. The entire network architecture is presented in figure 3.
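A sketch of the loss formulation and augmentation described above, with NumPy stand-ins for the Keras losses. The unit weighting of the adversarial and MAE terms, and the use of `np.roll` for the shift, are assumptions for illustration:

```python
import numpy as np

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def discriminator_loss(d_out_on_real, d_out_on_fake):
    """LSGAN-style MSE loss: push real 16x16 patch outputs toward 1 and
    fake outputs toward 0."""
    return 0.5 * (mse(np.ones_like(d_out_on_real), d_out_on_real)
                  + mse(np.zeros_like(d_out_on_fake), d_out_on_fake))

def generator_loss(d_out_on_fake, y_true, y_pred):
    """Adversarial MSE term (fool the discriminator on its 16x16 patch
    output) plus an MAE image term, per the text; equal weighting here
    is an assumption."""
    adv = mse(np.ones_like(d_out_on_fake), d_out_on_fake)
    return adv + mae(y_true, y_pred)

def random_shift(img, max_shift, rng):
    """Augmentation used in the paper: random shift along X or Y
    (cyclic np.roll here for simplicity)."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

A perfectly fooled discriminator and a pixel-perfect generator both drive their respective losses to zero, which is a quick way to check the sign conventions.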

Figure 3.

Architecture of the proposed Res-pix2pix deep learning framework. (a) Illustration of workflow of the Res-pix2pix network, (b) structure of the generator network, (c) structure of the discriminator network and (d) illustration of different neural network layers or blocks by using different color bars.

2.3. Performance evaluation

The model performance was first evaluated on five simple fields, four square fields and one irregular field. 37 IMRT and 10 VMAT patients, who were not included in the training data set, were then used as the testing data to evaluate the model. Two main predicted results, the MLC aperture and the corresponding MU value, are presented. As stated above, the MU value is encoded as the pixel value of the MU/MLC maps. The predicted pixel values fluctuate slightly owing to the statistical nature of the algorithm; the median pixel value was therefore taken as the predicted MU of the segment and compared with the planned value. The Dice similarity coefficient (DSC) was calculated to quantitatively evaluate the similarity between the predicted and planned MLC apertures; for a perfect match, the DSC equals one (Taha and Hanbury 2015). In addition, the differences in pixel values between the predicted and planned MLC leaf positions were calculated and compared.
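Both evaluation quantities can be computed directly from a predicted MU/MLC map. A minimal sketch, in which the zero threshold used to define "in-aperture" pixels is an assumption:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary aperture masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for a perfect match."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def predicted_mu(mu_mlc_map, threshold=0.0):
    """Median of the in-aperture pixel values, taken as the segment MU;
    the median smooths the pixel-to-pixel fluctuation of the network
    output."""
    in_aperture = mu_mlc_map[mu_mlc_map > threshold]
    return float(np.median(in_aperture))
```

For two 6 × 6 squares offset by one pixel, for example, the DSC is 2·25/72 ≈ 0.69, and a single outlier pixel does not move the median MU.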

3. Results

3.1. Simple field performance testing

The test results for four simple fields, three square fields (5 × 5 cm, 10 × 10 cm and 15 × 15 cm) and an irregular field, all with a planned MU of 20, are shown in figure 4. The left, middle and right columns show the planned MU/MLC maps, the deep learning-derived maps, and the difference between the two, respectively. The predictive accuracy of the MLC leaf positions in each case is clearly seen from the difference maps. The DSC is 0.99, 0.99, 0.99 and 0.96, respectively, for the four fields, and the predicted MUs are 19.4, 19.7, 20.4 and 20.4. Table 1 summarizes the MU predictions for the five simple fields at additional target MU values. Three of the five fields (2 × 2, 5 × 5 and 10 × 10 cm) agree with the planned MUs to within 5% at all MU levels. The predicted MUs of the 15 × 15 cm field and the irregular field are slightly higher than the planned values, primarily because of the lack of sufficient information for large fields in the training data. We noticed that the prediction accuracy for these fields improved significantly when the training data set was enlarged to include fields larger than 15 × 15 cm.

Figure 4.

Comparison between the planned and predicted MLC apertures for 5 × 5 cm, 10 × 10 cm and 15 × 15 cm square fields and an irregular field.

Table 1.

Summary of the predictive model performance for five simple fields with different target MU values shown in the first column. The shapes of 5 × 5, 10 × 10 and 15 × 15 are the same as in figure 4. The irregularly-shaped field is shown in the subplot (j,k) in figure 4.

Target MU    2 × 2    5 × 5    10 × 10    15 × 15    Irregular
10           9.9      10.1     10.1       10.2       10.2
20           20.1     19.4     19.7       20.4       20.4
40           39.0     39.1     40.2       41.9       41.2
50           48.5     49.1     51.0       53.5       52.2
60           58.3     59.4     62.1       65.1       63.7

3.2. Prediction of MLC aperture and MU value

Figure 5 shows the planned (left panel), deep learning-derived MU/MLC maps (middle panel), and the pixel-wise differences between the two (right panel) of four segments from a nasopharyngeal IMRT case and a rectum VMAT case. The DSCs for the four segments are 0.97, 0.98, 0.95 and 0.97, respectively. The planned and predicted MUs of the segments are (14.90, 14.58), (8.36, 8.23), (5.30, 5.43) and (3.83, 3.73), respectively. Figure 6 displays similar results for four segments of a lung IMRT case and a rectum IMRT case. The corresponding DSCs are 0.95, 0.97, 0.99 and 0.99, respectively. The planned and predicted MUs are (29.13, 28.53), (22.28, 22.54), (12.32, 12.27) and (10.51, 10.67), respectively. DSC indices of all segmented fields for the above four patients are plotted in figure 7. The results of the planned and predicted MUs for all the segmented fields of the above four patients are displayed in figure 8. Generally, the shapes of the planned and predicted MLC apertures are very close and the predicted MU values agree with the planned ones to within 5% for these four patients.

Figure 5.

Comparison between the planned and predicted MLC apertures for four segments in a nasopharyngeal IMRT and a rectum VMAT patient.

Figure 6.

Comparison between the planned and predicted MLC apertures for four segments from a lung IMRT and a rectum IMRT case.

Figure 7.

DSC indices for all the segmented fields of the four patients.

Figure 8.

Planned and predicted MU values for all the segments in the four patients.

3.3. Statistics of MLC aperture and MU predicted results

For each of the 47 testing patients, we calculated the average DSC index, average relative error of the predicted MUs (normalized to the total MUs of the individual beam or arc) and average error of the predicted MLC leaf positions. The average errors of the predicted MUs and MLC leaf positions for each beam (or arc in VMAT) are presented in figures 9 and 10. For each category of the cases, the average values and standard deviations together with the t-test results are summarized in table 2. It is noted that, for any category, the average relative error in predicted MUs is within 2%, and the average error in predicted MLC leaf positions is around one pixel.
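The per-beam statistics described above might be computed as follows. This sketch uses a normal approximation for the two-sided one-sample t-test p-value (`scipy.stats.ttest_1samp` would give the exact value), and the normalization of each beam's MU error to its planned total is our reading of the text:

```python
import math
import numpy as np

def mu_bias_stats(planned, predicted):
    """Mean and (sample) standard deviation of the per-beam relative MU
    error, normalized to each beam's total planned MU, plus an
    approximate two-sided p-value for the hypothesis 'mean bias = 0'."""
    planned = np.asarray(planned, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rel = (predicted - planned) / planned          # per-beam relative error
    n = rel.size
    mean, sd = rel.mean(), rel.std(ddof=1)
    t = mean / (sd / math.sqrt(n))                 # one-sample t statistic
    p = math.erfc(abs(t) / math.sqrt(2.0))         # normal approximation
    return mean, sd, p
```

A p-value near 1 (as in table 2) indicates no statistically significant systematic bias between the planned and predicted MUs; symmetric errors drive the t statistic to zero.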

Figure 9.

Average relative error and standard deviations in predicted MUs (normalized to each beam or arc) for each beam (or arc in VMAT) in the nasopharyngeal, VMAT, lung and rectum patients.

Figure 10.

Average error and standard deviations in predicted MLC leaf positions (in pixels) for each beam (or arc in VMAT) in the nasopharyngeal, VMAT, lung and rectum patients.

Table 2.

The MU t-test p-value, average values and standard deviations for Dice index, relative error of the predicted MUs and error of the predicted MLC leaf positions.

Category                      Nasopharyngeal     VMAT               Lung              Rectum
MU relative error             −0.082% ± 1.71%    0.016% ± 0.028%    −0.57% ± 1.34%    0.049% ± 0.58%
MU t-test p-value             0.84               0.57               0.58              0.97
Dice index                    0.94 ± 0.047       0.93 ± 0.043       0.96 ± 0.038      0.97 ± 0.025
MLC leaf position (pixels)    −0.12 ± 1.15       0.035 ± 1.81       −0.13 ± 1.21      −0.10 ± 0.89

4. Discussion

We have established a Res-pix2pix deep neural network for the prediction of the MU/MLC shapes of a given IMRT/VMAT treatment plan. To the best of our knowledge, this is the first work to directly predict machine delivery parameters from a dose distribution using deep learning. The results show that the proposed strategy is highly promising and may find valuable application in verifying treatment plans at the machine parameter level. The prediction of the MLC apertures from a known dose distribution was the main challenge. Convolutional neural networks with a pre-specified L2 (MSE) loss function are widely used in image processing; however, we found that this formulation generally leads to blurred images without sharp detail. In general, L2 assumes the data distribution to be Gaussian and can blend multimodal distributions into a unimodal one. The L1 (MAE) loss, which is commonly used to produce images with sharp edges (Choi et al 2010), was also examined but failed to provide satisfactory results. In this work, we applied a GAN, which learns a loss adapted to the task and data, to solve this problem: the discriminator, which classifies whether the output image is real or fake, also provides a loss function for training the generator. As can be seen from figures 3–10 and tables 1 and 2, the proposed Res-pix2pix network can predict the MU/MLC shapes efficiently and simultaneously.

For a given patient, the predicted MU/MLC shapes are obtained by importing the planned dose and CT images into the model. The model was trained from scratch, with randomly initialized weights, using a large amount of clinical data from three different cancer sites and two different treatment techniques. It is important to point out that, in principle, the deep learning model here is not limited to a single TPS platform. The model input includes only the CT and dose distribution, so a model trained using data from one TPS should be applicable to verifying treatment plans from a different TPS, provided that the dose calculation accuracies of the two systems are comparable. Considering that discrepancies between any two TPSs (e.g. RayStation and Eclipse) may well exist, it is practically sensible to train a TPS-specific model to ensure a smooth clinical workflow. This should not add much work, since it involves only retraining the model, without any change to the architecture of the neural network.

The test results from the five simple fields and 47 patients with different cancer sites and treatment techniques indicate that the proposed model can derive the machine delivery parameters with acceptable accuracy. We found that the prediction accuracy of the leaf positions is around one pixel and the corresponding MU prediction accuracy is within 2%. As noted, the relative error for the nasopharyngeal and VMAT cases is slightly higher than for the lung and rectum cases, because nasopharyngeal and VMAT plans have a higher degree of modulation arising from the inherent complexity of the treatment plan or delivery technique. In nasopharyngeal radiotherapy, several target volumes and a number of adjacent organs at risk (OARs) are involved; in VMAT, multiple beams with an angular interval of 2–3 degrees are used. Both increase the complexity of dose delivery and require more irregular MLC-shaped field segments. It is remarkable that the proposed method performed well even for these challenging cases, as the relatively larger errors of the predicted MU/MLC shapes remain within the clinically acceptable range according to current practice guidelines.

Although it takes about a week to train the model, executing the trained model takes only several seconds. The treatment plan verification strategy may also find useful application in adaptive therapy QA. Despite its significant promise, widespread realization of adaptive therapy in clinical settings has yet to be accomplished, and robust, efficient QA of an adaptively modified treatment plan is one of the major challenges in clinical implementation of the technique. The proposed data-driven plan QA approach may help to alleviate this bottleneck and pave the way for clinically sensible adaptive therapy. This work can also be extended so that the deep learning-derived MU/MLC shapes are used as input to a dose calculation engine (which can itself be data driven), providing a cycle-consistency check of the proposed deep learning model. Implementation of this idea is beyond the scope of this study and will be the focus of our future investigation.

While the proposed algorithm works well, there is room for improvement. Owing to GPU memory limitations, the MLC aperture image size is 256 × 256 with a resolution of 2.5 mm, which limits the maximum prediction accuracy of the leaf positions to 2.5 mm. We emphasize, however, that the network learns correlations between image pixels without relying on the physical pixel size, so the prediction accuracy in pixels is not affected by the pixel size. The absolute prediction accuracy could therefore be improved by reducing the pixel size as GPU memory increases. Such improved resolution may be important for predicting the small fields in stereotactic body radiation therapy or stereotactic radiosurgery plans.

5. Conclusion

A novel deep learning-based approach was developed to calculate the MU/MLC shapes involved in delivering a given dose distribution. The proposed algorithm relies on learning a mapping from 3D dose and CT images to MU/MLC maps. Extensive testing of the model has been performed and its success demonstrated. This work represents the first attempt to use deep learning to predict fundamental machine delivery parameters, and may provide the radiation oncology community with a useful tool for improving the efficiency and accuracy of patient-specific QA and the plan second-check process.

Acknowledgments

This work was partially supported by NIH (Grant Nos. R01CA227713 and R01CA223667), a Google Faculty Research Award (LX) and the National Natural Science Foundation of China (Grant No. 11805039) (JF).

Footnotes

Conflict of interest

The authors declare that there is no conflict of interest.

References

  1. Bedford JL, Lee YK, Wai P, South CP and Warrington AP 2009. Evaluation of the Delta4 phantom for IMRT and VMAT verification Phys. Med. Biol 54 167–76 [DOI] [PubMed] [Google Scholar]
  2. Brahme A 1988. Optimization of stationary and moving beam radiation therapy techniques Radiother. Oncol 12 129–40 [DOI] [PubMed] [Google Scholar]
  3. Choi K, Wang J, Zhu L, Suh TS, Boyd S and Xing L 2010. Compressed sensing based cone beam computed tomography with first-order method Med. Phys 37 5113–25 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Chollet F 2015. Keras (https://github.com/fchollet/keras) [Google Scholar]
  5. Dong P, Ungun B, Boyd S and Xing L 2016. Optimization of rotational arc station parameter optimized radiation therapy Med. Phys 43 4973. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Dong P and Xing L 2020. Deep DoseNet: a deep neural network for accurate dosimetric transformation between different spatial resolutions and/or different dose calculation algorithms for precision radiation therapy Phys. Med. Biol 65 035010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Fan JW, Wang JZ, Chen Z, Hu C, Zhang Z and Hu W 2019. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique Med. Phys 46 370–81 [DOI] [PubMed] [Google Scholar]
  8. Gilmer V et al. 2017. IMRT QA using machine learning: A multi-institutional validation Med. Phys 18 279–84 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Ibragimov B and Xing L 2017. Deep learning for segmentation of organs-at-risks in head and neck CT images Med. Phys 44 547–57 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Isola P et al. 2016. Image-to-image translation with conditional adversarial networks 2017 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (Piscataway, NJ: IEEE; ) pp 1063–6919 [Google Scholar]
  11. Korani MM, Dong P and Xing L 2016. Deep-learning based prediction of achievable dose for personalizing inverse treatment planning Med. Phys 43 3724 [Google Scholar]
  12. Li R and Xing L 2013. An adaptive planning strategy for station parameter optimized radiation therapy (SPORT): segmentally boosted VMAT Med. Phys 40 501–51 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Ma M, Kavolchuk N, Buyounoski M, Xing L and Yang Y 2019. Incorporating dosimetric features into the prediction of 3D VMAT dose distributions using deep convolutional neural network Phys. Med. Biol 64 125017. [DOI] [PubMed] [Google Scholar]
  14. Maas AL 2013. Rectifier nonlinearities improve neural network acoustic models Proc. ICML 30 3 [Google Scholar]
  15. Nelms BE, Chan MF, Jarry G, Lemire M, Lowden J, Hampton C and Feygelman V 2013. Evaluating IMRT and VMAT dose accuracy: practical examples of failure to detect systematic errors when applying a commonly used metric and action levels Med. Phys 40 711–22 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Qian J, Xing L and Luxton G 2011. Dose verification for respiratory-gated volumetric modulated arc therapy Phys. Med. Biol 56 4827–38 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Seiji T, Noriyuki K, Yoshiki T, Tomohiro K, Shima K, Narazaki K and Jingu K 2018. A deep learning-based prediction model for gamma evaluation in patient-specific quality assurance Med. Phys 45 4055–65 [DOI] [PubMed] [Google Scholar]
  18. Seo H, Huang C, Bassenne M and Xing L 2019. Modified U-Net (mU-Net) with incorporation of object-dependent high level features for improved liver and liver-tumor segmentation in CT images IEEE Trans. Med. Imaging 39 1316–25 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Shen L, Zhao W and Xing L 2019. Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning Nat. Biomed. Eng 3 880–8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Stojadinovic S, Ouyang L, Gu X, Pompos A, Bao Q and Solberg TD 2015. Breaking bad IMRT QA practice J. Appl. Clin. Med. Phys 16 5242–54 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Taha AA and Hanbury A 2015. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool BMC Med. Imaging 15 29. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Templeton AK, Chu JCH and Turian JV 2015. The sensitivity of ArcCHECK-based gamma analysis to manufactured errors in helical tomotherapy radiation delivery J. Appl. Clin. Med. Phys 16 32–9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Wang L, Kielar KN, Mok E, Hsu A, Dieterich S and Xing L 2012. An end-to-end examination of geometric accuracy of IGRT using a new digital accelerator equipped with onboard imaging system Phys. Med. Biol 57 757–69 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Xing L, Chen Y, Luxton G, Li JG and Boyer AL 2000. Monitor unit calculation for an intensity modulated photon field by a simple scatter-summation algorithm Phys. Med. Biol 45 1–7 [DOI] [PubMed] [Google Scholar]
  25. Xing L, Giger ML and Min JK 2020. Artificial Intelligence in Medicine: Technical Basis and Clinical Applications (Amsterdam: Elsevier; ) [Google Scholar]
  26. Yang Y, Xing L, Li JG, Palta J, Chen Y, Luxton G and Boyer A 2003. Independent dosimetric calculation with inclusion of head scatter and MLC transmission for IMRT Med. Phys 30 2937–47 [DOI] [PubMed] [Google Scholar]
  27. Yann LC, Yoshua B and Geoffrey H 2015. Deep learning Nature 521 436–44 [DOI] [PubMed] [Google Scholar]
  28. Yu C and Tang G 2011. Intensity-modulated arc therapy: principles, technologies and clinical implementation Phys. Med. Biol 56 31–54 [DOI] [PubMed] [Google Scholar]
  29. Zhen H, Nelms BE and Tome WA 2011. Moving from gamma passing rates to patient DVH-based QA metrics in pretreatment dose QA Med. Phys 38 5477–89 [DOI] [PubMed] [Google Scholar]
