Abstract
Purpose
To design a convolutional recurrent neural network (CRNN) that calculates 3D positions of lung tumors from continuously acquired cone beam CT (CBCT) projections, and facilitates the sorting and reconstruction of 4D-CBCT images.
Method
Under an IRB-approved clinical lung protocol, kilovoltage (kV) projections of the setup CBCT were collected during free breathing. Concurrently, an electromagnetic-signal-guided system recorded the motion traces of three transponders implanted in or near the tumor. CRNN was designed to utilize a convolutional neural network for extracting relevant features of the kV projections around the tumor, followed by a recurrent neural network for analyzing the temporal patterns of the moving features. CRNN was trained on the simultaneously collected kV projections and motion traces, and subsequently used to calculate motion traces solely from the continuous feed of kV projections. To enhance performance, CRNN was also aided by frequent calibrations (e.g., at 10° gantry-rotation intervals) derived from cross-correlation-based registrations between kV projections and templates created from the planning 4DCT. CRNN was validated with a leave-one-out strategy using data from 11 lung patients, including 5500 kV images. The root-mean-square error between the CRNN output and the motion traces was calculated to evaluate the localization accuracy.
Result
The 3D displacement from the simulation position, as shown in the Calypso traces, was 3.4±1.7mm. Using the Calypso motion traces as ground truth, the 3D localization error of CRNN with calibrations was 1.3±1.4mm. CRNN had a success rate of 86±8% in determining whether the motion was within a 3D displacement window of 2mm. The latency was 20ms when CRNN ran on a high-performance computer cluster.
Conclusion
CRNN is able to provide accurate localization of lung tumors with aid from frequent recalibrations using the conventional cross-correlation-based registration approach, and has the potential to remove reliance on the implanted fiducials.
Keywords: deep learning, cone beam CT, intrafractional motion management, lung cancer
I. INTRODUCTION
Cone beam CT (CBCT) has been routinely integrated into the clinical workflow of image-guided radiotherapy.1 It serves as an indispensable tool for the daily setup of patients, monitoring anatomical changes of tumors over the treatment course, and reconstructing the delivered radiation dose. All these clinical tasks require that CBCT has a high image quality and fidelity to the planning CT. Image quality of CBCT has improved tremendously over the years under the rapid advancement of technologies, both in imaging hardware and reconstruction software. However, respiratory motion remains a major challenge for producing CBCT images with minimal blur, especially in thoracic and abdominal disease sites.2,3 Using an external respiratory signal such as the patient’s abdominal surface provides a means of sorting or gating the CBCT acquisitions and subsequently reconstructing gated or 4D CBCT scans. However, disagreement between the external surface and actual tumor motion frequently results in inappropriate binning and thereafter noticeable imaging artifacts. In this paper, we investigate the feasibility of direct localization of tumor motions from each CBCT projection to better facilitate the sorting and reconstructions of CBCT.
Robust localization of tumors can be achieved with help from implanted radiopaque fiducials or electromagnetic transponders.4 However, the invasive implantation remains a daunting barrier to wide acceptance in the clinic. A markerless approach is highly desirable and has been investigated intensively.5–20 With highly customized image pre-processing and enhancement maneuvers, localization through registration based on cross-correlation is successful when the image contrast between the tumor and its background is reasonable, but often fails at angles where the tumor is obscured by the mediastinum or ribs.18 A potential solution is a markerless tracking scenario that explores the large body of data collected in the marker-based setting, builds a framework to extract underlying non-marker features inside the tumor among a population of lung patients, and uses this information as a prior to help detect and track the tumor in subsequent patients. Deep learning algorithms are well-suited for this task. Zhao et al. demonstrated that accurate markerless localization of prostatic 21 and pancreatic 22 tumors is achievable via a convolutional neural network (CNN). While localization was formulated as a registration problem in these studies, the temporal trajectory of the tumor in a series of kilovoltage (kV) projections is an extra relevant source of information that should be explored. Recurrent neural networks (RNN) are specialized in processing time-series images.23 Such networks have been combined with CNNs to track 2D motions of markers implanted in the liver using ultrasound images,24 and recently to track lung tumor changes in response to radiotherapy at our institution.25 In this paper, we present a similar algorithm, a convolutional recurrent neural network (CRNN), for real-time 3D localization of mobile lung tumors from CBCT projections.
The CRNN is designed to process both static features embedded in a particular kV projection, as well as the temporal evolution of such features in a stream of projections. We evaluated the localization accuracy of CRNN by comparing to the actual patient traces recorded by an electromagnetic-signal-guided (Calypso™) system.
II. METHOD AND MATERIALS
Figure 1 shows the workflow of the CRNN framework designed for acquiring CBCT in a radiotherapy setting. There are three main components: data acquisition and preprocessing, a convolutional neural network (CNN) for feature extraction and enhancement, and a recurrent neural network (RNN) for tumor location tracking. The real-time localization results can be used as feedback to guide CBCT acquisition and reconstruction.
Figure 1.
Schematic illustration of CRNN applied during CBCT acquisition.
2.1. Data Acquisition and preprocessing
Under an IRB-approved clinical protocol, lung patients were imaged and treated under the guidance of three electromagnetic transponders implanted in or near the primary tumor.18 3D motion traces of the monitored tumor were recorded throughout the entire session by the Calypso™ system (Varian Medical Systems, Palo Alto, CA) at a frequency of 25Hz. For patient setup, a full-fan kV CBCT (TrueBeam version 2.5) was acquired while the patient breathed normally, using a titanium filter, 125 kV, 100 mA, and a 150 cm source-to-detector distance. The kV projections of the setup CBCT were collected in real-time via an iToolsCapture workstation (software version 2.2, Varian), with an image sampling rate of 15 frames per second, a pixel resolution of 0.38mm at isocenter, and an image size of 1024×768. The time stamp of each kV projection was extracted from the image header, and synchronized to the corresponding Calypso record by matching the action sequence in the treatment record and reviewing system (ARIA Offline-Review). Subsequently, and independent of the imaging system, each kV projection was assigned a 3D position with a temporal uncertainty of up to 20ms, and a positional uncertainty of up to 0.3mm along the anterior-posterior (AP), lateral (LAT), and superior-inferior (SI) directions.
To minimize the variances of image intensity among patients and gantry rotations, all the kV projections were first equalized as:

Îj = (Ij − MEAN({Ij})) / STDEV({Ij})
where Ij is the intensity of the j-th pixel, MEAN({Ij}) is the mean of the set of pixels {Ij} in the projection, and STDEV({Ij}) is the standard deviation. To focus on the region containing the tumor, a 10×10cm image patch around the isocenter (placed inside the tumor) was selected as a region of interest (ROI), ensuring that all the studied tumors could be seen completely in the ROI from all projection angles while minimizing the amount of surrounding background. Subsequently, a delta image (ΔI) was formed as the difference between the current and previous projections to highlight the moving features inside the ROI and serve as the primary input to the neural network. ΔI also contained features caused by the rotating gantry, which advanced 0.4° between two consecutive projections. Therefore, the gantry angle θ of each kV projection, as well as the incremental Δθ and the time stamp t relative to the start of CBCT acquisition, were extracted from the projection header and additionally input to the CRNN.
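The preprocessing steps above can be sketched as follows; the array sizes, the ROI half-width, and the isocenter position are placeholders rather than the clinical values.

```python
import numpy as np

def equalize(projection):
    """Z-score equalization: subtract the mean pixel intensity of the
    projection and divide by its standard deviation."""
    return (projection - projection.mean()) / projection.std()

def crop_roi(projection, center, half_width):
    """Extract a square ROI around the projected isocenter `center` = (row, col);
    10x10 cm corresponds to roughly 263x263 pixels at 0.38 mm/pixel."""
    r, c = center
    return projection[r - half_width:r + half_width, c - half_width:c + half_width]

def delta_image(current, previous):
    """Difference of consecutive equalized ROIs, highlighting moving features."""
    return current - previous

# Synthetic stand-ins for two consecutive kV projections
rng = np.random.default_rng(0)
p0 = equalize(rng.normal(5.0, 2.0, size=(256, 256)))
p1 = equalize(rng.normal(5.0, 2.0, size=(256, 256)))
roi0 = crop_roi(p0, (128, 128), 64)
roi1 = crop_roi(p1, (128, 128), 64)
dI = delta_image(roi1, roi0)
```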
2.2. CNN feature extraction
A 6-level deep CNN was constructed as shown in Figure 2.a to extract representative motion-related image features from the delta image. It included 4 convolutional modules and 2 attention modules. Inside the convolutional modules, both convolutional layers and max pooling layers were used to extract the relevant global and local features for defining tumor locations. We set the stride and padding to 2 and 1, respectively, for the convolutional layers, with a 4×4 filter for the first two layers and a 3×3 filter for the remaining layers. In the first three modules, the convolutional layer was directly followed by a batch normalization layer and a max pooling layer. The batch normalization layer was used to reduce inter-patient variance, speed up the learning convergence, and increase the stability of the network.26 The max pooling layer was used to extract the most representative local features by filtering the global features with a 2×2 kernel. Attention maps were generated by self-attention modules,27 as introduced in self-attention generative adversarial networks, to enhance the tumor region and suppress the surrounding background. The output of the CNN was a representative image feature vector of size 512. Meanwhile, the gantry and time-stamp information were combined to form an embedding vector28 of the same size (512). The CNN output vector and the embedding vector were concatenated to form a motion feature vector of size 512×2. Three such motion feature vectors were created for the three orthogonal motion components, which were subsequently input to the RNN for calculating 3D tumor locations.
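A minimal PyTorch sketch of this feature-extraction stage is given below. The stride/padding/kernel settings and the 512-dimensional image and embedding vectors follow the text; the channel counts, the final pooling, and the omission of the self-attention modules are simplifying assumptions.

```python
import torch
import torch.nn as nn

class FeatureCNN(nn.Module):
    """Sketch of the feature-extraction CNN (attention modules omitted)."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, k, pool):
            # conv with stride 2 and padding 1, per the text
            layers = [nn.Conv2d(cin, cout, k, stride=2, padding=1)]
            if pool:  # batch norm + 2x2 max pooling in the first three modules
                layers += [nn.BatchNorm2d(cout), nn.MaxPool2d(2)]
            return nn.Sequential(*layers)
        self.features = nn.Sequential(
            block(1, 16, 4, True),    # 4x4 filters in the first two layers
            block(16, 32, 4, True),
            block(32, 64, 3, True),   # 3x3 filters in the remaining layers
            block(64, 128, 3, False),
        )
        self.to_vec = nn.Sequential(nn.AdaptiveAvgPool2d(2), nn.Flatten(),
                                    nn.Linear(128 * 4, 512))
        # embedding of gantry angle, angle increment, and time stamp
        self.embed = nn.Linear(3, 512)

    def forward(self, delta_img, theta_dtheta_t):
        v_img = self.to_vec(self.features(delta_img))   # (B, 512) image features
        v_geo = self.embed(theta_dtheta_t)              # (B, 512) embedding
        return torch.cat([v_img, v_geo], dim=1)         # (B, 512x2) motion features

net = FeatureCNN()
out = net(torch.randn(2, 1, 256, 256), torch.randn(2, 3))
```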
Figure 2.
Detailed design of CRNN for 3D localization of tumor.
2.3. RNN localization
The RNN illustrated in Figure 2.b was used to calculate 3D tumor locations by parsing the feature vectors extracted from a series of consecutive CBCT projections. The length of the series determined the length of memory analyzed by the CRNN. We set the length to 10 projections, simulating a 4° short arc, which is often used for 3D localization in the context of digital tomosynthesis.29 The RNN was constructed with gated recurrent units (GRU, described in the first block of Figure 2.b) to achieve faster speed than the more commonly used long short-term memory units. At each time point, three GRU units were used to reconstruct the 3D tumor location by processing the imaging feature vector and the hidden features. As depicted in the detailed diagram (Figure 2.b), the hidden features were the filtered features memorized by the previous GRU unit, with irrelevant features removed from the chain. We processed the motion components in the order of SI, AP, and LAT, following the descending order of motion amplitude along these three directions. Finally, we used the generators shown in Figure 2.c to reconstruct the 3D location of the tumor from the output features of the RNN. Each generator was constructed from three linear transformations, with input feature sizes of 512, 256, and 128, and output feature sizes of 256, 128, and 1, respectively. The ultimate output of the CRNN was a 3D trace of tumor motion calculated by three separate generators.
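The GRU-plus-generator stage described above can be sketched as follows; the generator sizes (512→256→128→1) follow the text, while the GRU hidden size of 512, the ReLU activations, and reading the position from the last time step are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Three linear transformations mapping an RNN feature to one scalar
    motion component (sizes per the text: 512 -> 256 -> 128 -> 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                                 nn.Linear(256, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, h):
        return self.net(h)

class Localizer(nn.Module):
    """One GRU + generator per motion axis, processed in the order SI, AP, LAT."""
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.grus = nn.ModuleList(nn.GRU(feat_dim, 512, batch_first=True)
                                  for _ in range(3))
        self.gens = nn.ModuleList(Generator() for _ in range(3))

    def forward(self, feats):               # feats: (B, 10, feat_dim) sequence
        coords = []
        for gru, gen in zip(self.grus, self.gens):
            h, _ = gru(feats)               # hidden features over the sequence
            coords.append(gen(h[:, -1]))    # position at the latest time point
        return torch.cat(coords, dim=1)     # (B, 3): SI, AP, LAT

loc = Localizer()
xyz = loc(torch.randn(4, 10, 1024))         # 10-frame sequences, batch of 4
```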
2.4. Facilitation of external calibration
Since cross-correlation-based registrations can yield accurate localizations when the tumor is visible on the kV projection, especially in the SI direction, we introduced a flexible calibration mechanism that incorporated registration results into the training and operation of CRNN, as illustrated in Figure 1. In parallel and at a fixed frequency (e.g., at 10° gantry-rotation intervals, i.e., every 25th projection), rigid registrations were performed between the kV projections and templates of the gross tumor volume (GTV) created from the planning 4DCT, which yielded the SI component of the tumor motion.18,29 Subsequently, the performance of CRNN was improved by calibrating the SI component of the CRNN output with corrections relative to the SI motion derived from each corresponding registration. We simulated calibration frequencies of 10°, 20°, and 30° gantry-rotation intervals to investigate potential applications under various clinical scenarios.
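One possible realization of this calibration is sketched below, under the assumption that the SI correction measured at a calibration projection is held constant until the next calibration; the traces are synthetic.

```python
import numpy as np

def apply_si_calibration(si_pred, si_reg, cal_indices):
    """Shift the CRNN SI output so it matches the registration-derived SI
    position at each calibration projection (e.g., every 25th projection,
    ~10 deg of gantry rotation); hold the correction until the next one."""
    si_cal = si_pred.copy()
    correction = 0.0
    for i in range(len(si_pred)):
        if i in cal_indices:
            correction = si_reg[i] - si_pred[i]
        si_cal[i] = si_pred[i] + correction
    return si_cal

# Synthetic SI trace (mm) with a slow drift added to mimic CRNN error
n = 100
truth = 5.0 * np.sin(np.linspace(0, 4 * np.pi, n))
drifted = truth + np.linspace(0, 3, n)
cal = set(range(0, n, 25))                     # every 25th projection
corrected = apply_si_calibration(drifted, truth, cal)
```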
2.5. Implementation
CRNN was implemented using PyTorch within the OpenNMT toolbox.30 We used a 2-layer GRU network with 500-dimensional hidden units, a learning rate of 10−4, and a batch size of 2. To minimize the effect of outliers, the Huber loss was selected as the loss function to evaluate the differences between the system output and the actual Calypso traces, and to train the entire system, including the CNN, RNN, and generators, taking the calibration signal into account when the calibration mechanism was applied. Adaptive Moment Estimation (Adam) was selected as the optimizer to minimize the loss, with a dropout rate of 0.5. An early stopping rule was applied in the training process; otherwise, the maximum number of iterations was 300. CRNN was validated with a leave-one-out strategy using kV-CBCT scans from 13 lung patients comprising a total of 6500 kV projections. In a particular run, data were partitioned on a patient basis to avoid leakage between training and testing. The training, validation, and testing sets comprised approximately 4000 (8 patients), 2000 (4 patients), and 500 (1 patient) 10-frame-long sequences, respectively. This process was repeated 13 times independently, once per patient, to gain statistics for evaluation.
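A minimal sketch of this training configuration is shown below; `model` is a small stand-in for the full CRNN (CNN + GRU + generators), and the tensors are random placeholders, while the Huber loss, Adam optimizer, learning rate, dropout rate, and batch size follow the text.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in network: motion feature vector (512x2) -> 3D position
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(512, 3))
criterion = nn.HuberLoss()                       # robust to outlier traces
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

feats = torch.randn(2, 1024)                     # batch of 2, as in the text
calypso = torch.randn(2, 3)                      # ground-truth 3D positions (mm)

model.train()
for _ in range(10):                              # stand-in for up to 300 iterations
    optimizer.zero_grad()
    loss = criterion(model(feats), calypso)
    loss.backward()
    optimizer.step()

model.eval()
pred = model(feats)                              # 3D positions for the batch
```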
The root-mean-square error between the CRNN and Calypso traces was calculated in SI, AP, LAT, and 3D to evaluate the localization accuracy. In addition, we calculated the rate of agreement between CRNN and Calypso in determining whether the motion was within a 3D displacement window of 2mm around expiration. This agreement rate can be used to evaluate the use of CRNN for gating the acquisition of CBCT.
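The two evaluation metrics above can be computed as follows; measuring displacements relative to a fixed expiration/reference position is an assumption of this sketch, and the toy traces are illustrative only.

```python
import numpy as np

def rmse(pred, truth):
    """Per-axis and 3D root-mean-square error between two (N, 3) traces in mm."""
    diff = pred - truth
    per_axis = np.sqrt((diff ** 2).mean(axis=0))           # SI, AP, LAT
    r3d = np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean())
    return per_axis, r3d

def gating_agreement(pred, truth, window_mm=2.0):
    """Fraction of time points where the two traces agree on whether the 3D
    displacement (relative to the reference position) is inside the window."""
    in_pred = np.linalg.norm(pred, axis=1) <= window_mm
    in_truth = np.linalg.norm(truth, axis=1) <= window_mm
    return (in_pred == in_truth).mean()

# Toy traces (mm): displacements measured from the expiration position
truth = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0], [1.0, 1.0, 1.0]])
per_axis, r3d = rmse(truth, truth)              # identical traces -> zero error
agree = gating_agreement(truth + np.array([[3.0, 0, 0], [0, 0, 0], [0, 0, 0]]),
                         truth)                 # disagree at the first time point
```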
III. RESULTS
Although training of CRNN on a high-performance computer cluster with one GeForce GTX1080Ti GPU (12 GB memory) took 6 hours, testing of one projection on the cluster took only 20 ms. The training and validation losses converged at approximately 0.05 and 0.08, respectively. As illustrated in Table 1, without recalibration from the registration method, the performance of CRNN was poor, yielding a tracking accuracy of 2.9±2.7mm along the SI direction, given an average motion amplitude of 10mm. The tracking accuracy improved with increasing frequency of recalibration, eventually reaching the submillimeter level. The overall 3D difference, calculated as the RMSE between the 3D motion trace derived from CRNN and the Calypso trace, was 1.3±1.4mm. In addition, CRNN had a success rate of 86±8% in determining whether the motion was within a 3D displacement window of 2mm. A typical tracking example is shown in Figure 3, where the motion traces from Calypso, CRNN calibrated every 10°, and CRNN without calibration are shown in solid blue, dashed red, and dotted black, respectively, along the SI, AP, and LAT directions. When the size of the ROI was enlarged to 15×15cm, the 3D localization accuracy with calibration was 1.5±1.4mm, worse than with the 10×10cm setting.
Table 1.
Comparison of tracking performance of CRNN
| RMSE (mm) | Calibration-10° | Calibration-20° | Calibration-30° | No calibration |
|---|---|---|---|---|
| LAT | 0.7±0.9 | 1.7±2.0 | 1.8±2.1 | 2.7±3.5 |
| SI | 0.6±1.2 | 1.3±1.0 | 1.4±1.1 | 2.9±2.7 |
| AP | 0.5±0.8 | 1.4±1.5 | 1.5±1.9 | 2.4±3.1 |
| 3D | 1.3±1.4 | 2.8±2.3 | 3.2±2.5 | 5.3±4.5 |
Figure 3.
Illustrations of motion traces derived from CRNN.
IV. DISCUSSION
Our findings indicate that CRNN, built upon deep learning algorithms, can provide a real-time solution for tumor tracking in the challenging clinical setting of lung radiotherapy. The localization accuracy of CRNN is similar to what has been reported for prostate 21 and pancreas, 22 although the motion amplitude of lung tumors is larger. CRNN has potential for use in verifying breath-hold levels, or in gating the image acquisition and treatment delivery with a clinically reasonable tolerance of 2–3mm. In contrast to conventional registration-based approaches that only calculate tumor positions in the beam's-eye view, CRNN can extract large amounts of relevant image features and calculate 3D tumor locations in room coordinates, thereby offering more accurate information for image-guided procedures. Although CRNN does not explicitly build a model of respiratory motion, the variations in lung tumor motion have been learned and stored in the weights of the RNN units. Therefore, CRNN can serve as a Bayesian estimator 31–33 or Kalman filter 34 and determine the current tumor position based on previous observations from both individual patient and population data. Although there may be a fraction of kV projections where the tumor is not visible, which may cause the registration based on cross-correlation to be unreliable, this will probably not cause CRNN to fail in estimating tumor motion traces. To our knowledge, our study is distinctive in that CRNN is trained and validated using intratreatment imaging data that include tumor motion traces recorded electromagnetically (by the Calypso system), resulting in a more credible system. As more data are accrued from this protocol, we expect further improvement in the accuracy and robustness of CRNN.
Although the implanted transponders are substantially smaller than the tumor and CRNN utilizes an ROI that encompasses the entire GTV, the transponders may nevertheless influence the training and learning of CRNN to some extent. To more closely simulate a markerless tracking scenario, one may remove the transponders from the kV projections by replacing them with averaged pixel intensities from the surrounding regions; this will be investigated in a future study. In addition, training and validation studies using 3D-printed phantoms with programmed motions 35 will also be conducted to determine the characteristics of CRNN under a true markerless setting.
V. Conclusion
A retrospective study of markerless tracking with intratreatment planar kV images from eleven patients indicates that CRNN is capable of providing real-time 3D localization of lung tumors. It does so without frequent interruptions caused by poor image contrast that can occur in a conventional cross-correlation-based image registration approach, and has the potential to remove the reliance on implanted fiducials.
Acknowledgment
Memorial Sloan Kettering Cancer Center has a research agreement with Varian Medical Systems. This research was partially supported by the MSK Cancer Center Support Grant/Core Grant (NIH/NCI P30 CA008748).
Footnotes
Submitted to: Medical Physics as a technical note.
References
- 1. Dawson LA, Jaffray DA. Advances in image-guided radiation therapy. J Clin Oncol. 2007;25(8):938–46.
- 2. Dzyubak O, Kincaid R, Hertanto A, et al. Evaluation of tumor localization in respiration motion-corrected cone-beam CT: prospective study in lung. Med Phys. 2014;41:101918.
- 3. Kincaid RE Jr, Hertanto AE, Hu YC, Wu AJ, Goodman KA, Pham HD, Yorke ED, Zhang Q, Chen Q, Mageras GS. Evaluation of respiratory motion-corrected cone-beam CT at end expiration in abdominal radiotherapy sites: a prospective study. Acta Oncol. 2018;57(8):1017–1024.
- 4. Keall P, Nguyen D, O'Brien R, Zhang P, Happersett L, Bertholet J, Poulsen P. Review of real-time 3D IGRT on standard-equipped cancer radiotherapy systems: are we at the tipping point for the era of real-time radiotherapy? Int J Radiat Oncol Biol Phys. 2018;102(4):922–931.
- 5. Richter A, Wilbert J, Baier K, et al. Feasibility study for markerless tracking of lung tumors in stereotactic body radiotherapy. Int J Radiat Oncol Biol Phys. 2010;78:618–27.
- 6. Rottmann J, Aristophanous M, Chen A, et al. A multi-region algorithm for markerless beam's-eye view lung tumor tracking. Phys Med Biol. 2010;55:5585–98.
- 7. Cho B, Poulsen PR, Sawant A, et al. Real-time target position estimation using stereoscopic kilovoltage/megavoltage imaging and external respiratory monitoring for dynamic multileaf collimator tracking. Int J Radiat Oncol Biol Phys. 2011;79:269–278.
- 8. Cho B, Poulsen P, Ruan D, et al. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring. Phys Med Biol. 2012;57:7395–7407.
- 9. Yan H, Li H, Liu Z, et al. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose. Phys Med Biol. 2012;57:8455–69.
- 10. Vergalasova I, Cai J, Yin FF. A novel technique for markerless, self-sorted 4D-CBCT: feasibility study. Med Phys. 2012;39:1442–51.
- 11. Rottmann J, Keall P, Berbeco R. Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors. Phys Med Biol. 2013;58:4195–4204.
- 12. van Sörnsen de Koste JR, Dahele M, Mostafavi H, et al. Digital tomosynthesis (DTS) for verification of target position in early stage lung cancer patients. Med Phys. 2013;40:011904.
- 13. Yip SS, Rottmann J, Berbeco R. Beam's-eye-view imaging during non-coplanar lung SBRT. Med Phys. 2015;42:6776–83.
- 14. Patel R, Panfil J, Campana M, Block AM, Harkenrider MM, Surucu M, Roeske JC. Markerless motion tracking of lung tumors using dual-energy fluoroscopy. Med Phys. 2015;42(1):254–62.
- 15. van Sörnsen de Koste JR, Dahele M, Mostafavi H, et al. Markerless tracking of small lung tumors for stereotactic radiotherapy. Med Phys. 2015;42:1640–52.
- 16. Zhang X, Homma N, Ichiji K, Takai Y, Yoshizawa M. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study. Med Phys. 2015;42(5):2510–23.
- 17. Hazelaar C, van der Weide L, Mostafavi H, Slotman BJ, Verbakel WFAR, Dahele M. Feasibility of markerless 3D position monitoring of the central airways using kilovoltage projection images: Managing the risks of central lung stereotactic radiotherapy. Radiother Oncol. 2018;129(2):234–241.
- 18. Zhang P, Hunt M, Telles AB, Pham H, Lovelock M, Yorke E, Li G, Happersett L, Rimner A, Mageras G. Design and validation of a MV/kV imaging-based markerless tracking system for assessing real-time lung tumor motion. Med Phys. 2018;45(12):5555–5563.
- 19. BhusalChhatkuli R, Demachi K, Uesaka M, Nakagawa K, Haga A. Development of a markerless tumor-tracking algorithm using prior four-dimensional cone-beam computed tomography. J Radiat Res. 2019;60(1):109–115.
- 20. Hindley N, Keall P, Booth J, Shieh CC. Real-time direct diaphragm tracking using kV imaging on a standard linear accelerator. Med Phys. 2019. In press.
- 21. Zhao W, Han B, Yang Y, Buyyounouski M, Hancock SL, Bagshaw H, Xing L. Incorporating imaging information from deep neural network layers into image guided radiation therapy (IGRT). Radiother Oncol. 2019;140:167–174.
- 22. Zhao W, Shen L, Han B, Yang Y, Cheng K, Toesca DAS, Koong AC, Chang DT, Xing L. Markerless pancreatic tumor target localization enabled by deep learning. Int J Radiat Oncol Biol Phys. 2019.
- 23. Graves A, Liwicki M, Fernandez S, Bertolami R, Bunke H, Schmidhuber J. A novel connectionist system for improved unconstrained handwriting recognition. IEEE Trans Pattern Anal Mach Intell. 2009;31(5):855–868.
- 24. Huang P, Yu G, Lu H, Liu D, Xing L, Yin Y, Kovalchuk N, Xing L, Li D. Attention-aware fully convolutional neural network with convolutional long short-term memory network for ultrasound-based motion tracking. Med Phys. 2019;46(5):2275–2285.
- 25. Wang C, Rimner A, Hu Y, Tyagi N, Jiang J, Yorke E, Riyahi S, Mageras G, Deasy J, Zhang P. Towards predicting the evolution of lung tumors during radiotherapy observed on a longitudinal MR imaging study via a deep learning algorithm. Med Phys. 2019;46(10):4699–4707.
- 26. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence; 2017.
- 27. Zhang H, Goodfellow I, Metaxas D, Odena A. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318; 2018.
- 28. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems; 2013. p. 3111–3119.
- 29. Hunt M, Sonnick M, Pham H, Regmi R, Xiong J, Morf D, Mageras G, Zelefsky M, Zhang P. Simultaneous MV-kV imaging for intra-fractional motion management during Volume Modulated Arc Therapy delivery. J Appl Clin Med Phys. 2016;17:473–486.
- 30. Klein G, Kim Y, Deng Y, Senellart J, Rush AM. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810; 2017.
- 31. Lewis JH, Li R, Jia X, Watkins WT, Lou Y, Song WY, Jiang SB. Mitigation of motion artifacts in CBCT of lung tumors based on tracked tumor motion during CBCT acquisition. Phys Med Biol. 2011;56(17):5485–502.
- 32. Li R, Fahimian BP, Xing L. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery. Med Phys. 2011;38:4205–4214.
- 33. Shieh CC, Caillet V, Dunbar M, et al. A Bayesian approach for three-dimensional markerless tumor tracking using kV imaging during lung radiotherapy. Phys Med Biol. 2017;62:3065.
- 34. McNamara JE, Regmi R, Lovelock DM, Yorke ED, Goodman KA, Rimner A, Mostafavi H, Mageras GS. Toward correcting drift in target position during radiotherapy via computer-controlled couch adjustments on a programmable Linac. Med Phys. 2013;40(5):051719.
- 35. Hazelaar C, van Eijnatten M, Dahele M, et al. Using 3D printing techniques to create an anthropomorphic thorax phantom for medical imaging purposes. Med Phys. 2018;45:92–100.



