Abstract
Purpose:
For shoot-through proton treatments, like FLASH radiotherapy, there will be protons exiting the patient which can be used for proton portal imaging (PPI), revealing valuable information for the validation of tumor location in the beam’s-eye-view at native gantry angles. However, PPI has poor inherent contrast and spatial resolution. To deal with this issue, we propose a deep-learning-based method to use kV digitally reconstructed radiographs (DRR) to improve PPI image quality.
Method:
We used a residual generative adversarial network (GAN) framework to learn the nonlinear mapping between PPIs and DRRs. Residual blocks were used to force the model to focus on the structural differences between DRR and PPI. To assess the accuracy of our method, we used 149 images for training and 30 images for testing. PPIs were acquired using a double-scattered proton beam. The DRRs acquired from CT acted as learning targets in the training process and were used to evaluate results from the proposed method using a six-fold cross-validation scheme.
Results:
Qualitatively, the corrected PPIs showed enhanced spatial resolution and captured fine details present in the DRRs that are missed in the PPIs. The quantitative results for corrected PPIs show average normalized mean error (NME), normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index of −0.1%, 0.3%, 39.14 dB, and 0.987, respectively.
Conclusion:
The results indicate that the proposed method can generate high-quality corrected PPIs, demonstrating the potential of a deep-learning model to make PPI available in proton radiotherapy. This would allow for beam’s-eye-view (BEV) imaging with the particle used for treatment, providing a valuable alternative to orthogonal x-rays or cone-beam CT for patient position verification.
Keywords: proton portal imaging, generative adversarial network, deep learning, beam’s-eye-view
1. Introduction
Fueled by the expansion and success of photon radiotherapy (RT), on-board image guidance (OBI) technologies, including x-ray planar imaging and cone-beam computed tomography (CBCT), have been implemented in proton therapy as well [1-3]. However, with some exceptions, there is still no widely available solution for beam’s-eye-view (BEV) imaging, which can provide complementary information for patient alignment and direct validation that the target is in the treatment field at native gantry angles.
Portal imaging has been shown to be feasible in proton therapy by detecting high-energy protons, the same particles used for treatment [4-7]. However, the spatial and contrast resolution of proton images is inferior to that of x-ray imaging [6, 8, 9]. Spatial resolution in proton imaging is limited by multiple Coulomb scattering (MCS), i.e. small-angle deflections of protons due to their interaction with the Coulomb field of the nuclei of the traversed medium. The spatial resolution also depends on the initial energy and angular distribution of the protons as well as the thickness of the object to be imaged. Among the numerous proton imaging techniques developed by several groups [9], the best spatial resolution, ~0.3 lp mm−1, is achieved by single proton tracking methods, in which pairs of proton tracking detectors are placed in front of and behind the patient to register single proton events [10-13]. However, the equipment complexity, size, and acquisition speed hinder applicability in a clinical setting. Another approach is proton integrating, i.e. detecting the signal of an undetermined number of incident protons with a single pixelated imager [14, 15]. This technique is more clinically implementable due to its simplicity. However, without the ability to track individual protons, the spatial resolution is further limited by MCS; it has been quantified as ~7 mm at one standard deviation in water for 250 MeV protons, meaning the spatial resolution cannot be better than 7 mm regardless of the detector technology [8, 16]. In reality, with heterogeneities in the region of interest (ROI), especially in proximity to boundaries between different materials, protons transmit through different materials with mixed energies or scattering angles, resulting in different residual ranges. This effect, termed range mixing, stands as a persistent challenge to identifying fine details in integration-based proton imaging and hinders the implementation of proton portal imaging (PPI; we use this abbreviation for both the technique and the resulting proton portal image) for patient alignment and target position verification [17].
Verifying the target location during treatment is essential to avoid damaging critical structures or missing the target. In proton therapy, the current practice is to acquire x-ray radiographs or CBCT of the patient in the treatment position and to compare these images with digitally reconstructed radiographs (DRRs) from the CT data used for treatment planning. DRRs generated from the planning CT typically inherit the fine image quality of the CT. An ideal PPI would display image quality comparable to kV-DRRs, allowing for complementary image guidance in the BEV. In this study, we propose a deep-learning-based method that uses kV-DRRs to improve PPI quality. The proposed method synthesizes a DRR-like image from a PPI, creating a synthetic PPI (sPPI) with image quality comparable to that of a routine DRR. We used a residual generative adversarial network (GAN) framework to learn the nonlinear mapping between PPIs and DRRs. PPIs were acquired using a double scattering system and DRRs were generated from CT. To assess the accuracy of our method, we used a head phantom with 179 PPI-DRR pairs from non-redundant angle projections. The DRRs acted as learning targets in the training process and were used to evaluate results from the proposed method using a six-fold cross-validation scheme. The proposed method could allow for BEV imaging with the particle used for treatment, providing a valuable alternative to x-ray-based imaging for patient position verification.
2. Materials and methods
2.1. Image acquisition
For the acquisition of training PPIs, the setup of our system is similar to those presented in previous studies [18, 19] and is shown in figure 1. The imager (PaxScan® 4030CB, Varian Medical Systems, Inc., Palo Alto, CA) was placed downstream from the phantom in order to measure the time-dependent detector response from a proton beam (224.2 MeV) modulated by a modulator wheel (MW). This time-resolved detector response function is termed the dose rate function (DRF). The imager was set up perpendicular to the beamline and placed 215 cm from the base of the MW. The MW was designed to reduce the range in water by approximately 2 mm per step. The imager (physical dimensions: W × L × H = 36.6 cm × 46.6 cm × 3.8 cm) has an active area of W × L = 30 cm × 40 cm with 1024 × 768 pixels (pitch of 0.388 mm). For these acquisitions, the detector was run in 2 × 2 binned mode, increasing the area of each pixel to 0.776 × 0.776 mm2. The detector readout rate was 30 frames per second. The rotation of the MW was controlled by a stepper motor set to 12 revolutions per minute (rpm), yielding 150 frames per revolution. The signal recorded per detector pixel over one revolution of the MW is the DRF for that pixel, and the PPI was generated by summing the DRF acquired over one revolution of the MW. The head phantom was placed on a rotary stage controlled by a stepper motor. DRFs were acquired for the phantom every 2° from 0 to 358°, 179 image projections in total. Of those 179 images, 149 were used for training and 30 for testing. The phantom was then scanned axially on a clinical CT simulator (Discovery RT, GE Healthcare, Chicago, IL) with a routine procedure (140 kVp and 2.5 mm slice thickness).
Figure 1.
(a) The experimental set-up for proton portal imaging (PPI) with the flat-panel imager. The head phantom was placed on a rotary stage. PPIs were acquired at projection angles from 0 to 358° with an increment of 2° by rotating the phantom.
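As a concrete illustration of this acquisition scheme, the sketch below integrates the per-pixel DRF over one MW revolution to form a PPI. The frame-stack layout, array contents, and function name are hypothetical; only the frame rate, wheel speed, and binned detector geometry come from the description above.

```python
import numpy as np

# From section 2.1: 30 frames per second readout and 12 rpm wheel rotation,
# so one modulator wheel (MW) revolution spans 150 frames.
FRAMES_PER_REVOLUTION = 150  # 30 fps * (60 s / 12 rev)

def ppi_from_drf(frames: np.ndarray) -> np.ndarray:
    """Form a PPI by summing the per-pixel dose rate function (DRF)
    over one MW revolution. `frames` is a hypothetical stack with shape
    (n_frames, rows, cols) of flat-panel readouts."""
    if frames.shape[0] < FRAMES_PER_REVOLUTION:
        raise ValueError("need at least one full MW revolution of frames")
    return frames[:FRAMES_PER_REVOLUTION].sum(axis=0)

# 2x2-binned panel: 1024 x 768 native pixels -> 512 x 384 binned pixels.
frames = np.random.rand(FRAMES_PER_REVOLUTION, 384, 512).astype(np.float32)
ppi = ppi_from_drf(frames)
print(ppi.shape)  # (384, 512)
```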
2.2. Synthetic PPI generation
2.2.1. Workflow
Figure 2 outlines the schematic workflow of the proposed method. A deep learning-based model was trained with pairs of PPIs and DRRs, where the DRR was used as the learning target for the PPI. The input image size was [384, 256]. Data augmentation, including flipping, rotation, and rigid warping, was used to enlarge the variation present in the training data. For training, a PPI was first fed into the deep learning-based model to map it to a synthesized image with a pixel intensity level similar to the ground truth (training) DRR. This component of the network is called the generator, and we call the synthesized image a synthetic PPI (sPPI). The sPPI was then fed into another network, called the discriminator, which judges whether an image is real (DRR) or synthetic (sPPI). The ground truth DRR was also fed into the discriminator to guide the network in differentiating an sPPI from a DRR. During training, the generator and the discriminator were trained alternately in each iteration to enhance the prediction accuracy of the network. We used a fixed number of iterations rather than a validation set to monitor a residual; thus, the test set in this work is a fully independent dataset, and the learnable parameters of the network were trained by six-fold cross-validation. The six-fold cross-validation was performed as follows: we first equally and randomly separated all data into six subgroups; then, for each experiment, one subgroup was used as test data and the remaining five subgroups were used as training data. The network was trained with these training data, and the test subgroup was kept independent from the training data. The experiment was repeated six times so that each subgroup was used as test data exactly once (a minimal sketch of this partitioning follows figure 2). The hyperparameters for the loss function and the network, set empirically, were fixed during each experiment. We stopped training after 500 iterations, beyond which the loss change was not significant.
Figure 2.
Workflow for the synthetic PPI-generating GAN.
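The cross-validation protocol described above can be made concrete with a short sketch. This is a minimal illustration assuming a simple random partition with NumPy; the paper does not publish its partitioning code.

```python
import numpy as np

def six_fold_splits(n_images: int = 179, seed: int = 0):
    """Randomly split image indices into six near-equal subgroups; each
    subgroup serves as the independent test set exactly once (section 2.2.1)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_images), 6)
    for k in range(6):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(6) if j != k])
        yield train, test

for fold, (train_idx, test_idx) in enumerate(six_fold_splits()):
    # With 179 projections this yields 149-150 training and 29-30 test images.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```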
2.2.2. Network architecture
Figure 3 shows the generator and discriminator network architectures used in the proposed network. The discriminator is a traditional fully convolutional network (FCN) [20]. The generator is an end-to-end residual FCN comprising both encoding and decoding paths [21, 22]. The encoding path is composed of one convolution layer with a stride size of one followed by five convolution layers with a stride size of two to reduce the size of the feature maps. The decoding path is composed of five deconvolution layers to obtain an end-to-end mapping, followed by a last convolution layer to combine the feature maps extracted from previous layers. Finally, a tanh layer was used to translate the feature maps of the last convolution layer to a regression output. In order to learn the deep distribution representation between the training PPI and DRR images, nine residual blocks were used as short skip connections between the encoding and decoding paths. Each residual block was implemented as two convolution layers within a residual connection and an element-wise sum operator, as implemented for other deep learning image-generation tasks [23].
Figure 3.
Architecture of the generator and discriminator networks.
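The generator layout just described can be sketched with Keras as follows. Kernel sizes, filter counts, and intermediate activations are assumptions not specified in the text; only the layer sequence (one stride-1 convolution, five stride-2 convolutions, nine residual blocks, five deconvolutions, a final convolution, and tanh) follows the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=512):
    """Two convolution layers inside a residual connection with an
    element-wise sum, as described in section 2.2.2."""
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([x, y])

def build_generator(input_shape=(384, 256, 1)):
    inputs = layers.Input(shape=input_shape)
    # Encoding path: one stride-1 convolution, then five stride-2 convolutions.
    x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(inputs)
    for f in [64, 128, 256, 512, 512]:
        x = layers.Conv2D(f, 3, strides=2, padding="same", activation="relu")(x)
    # Nine residual blocks acting as short skip connections at the bottleneck.
    for _ in range(9):
        x = residual_block(x)
    # Decoding path: five deconvolution (transposed convolution) layers.
    for f in [512, 256, 128, 64, 32]:
        x = layers.Conv2DTranspose(f, 3, strides=2, padding="same", activation="relu")(x)
    # Final convolution combines feature maps; tanh gives the regression output.
    outputs = layers.Conv2D(1, 3, padding="same", activation="tanh")(x)
    return tf.keras.Model(inputs, outputs, name="generator")

generator = build_generator()
generator.summary()
```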
2.2.3. Loss function
Learnable parameters of the generator and discriminator were optimized iteratively in an alternating manner [24]. The accuracy of both networks relies on the design of their corresponding loss functions. The generator loss consists of an adversarial loss and an image difference loss. The goal of the generator is to produce synthetic images that can fool the discriminator by minimizing the adversarial loss, which relies on the output of the discriminator, i.e., the result of feeding the sPPI (generated by generator G) into the DRR discriminator [25]. The adversarial loss of the generator is defined as:
$$L_{\mathrm{adv}} = \mathrm{SCE}\left(D\left(G\left(I_{\mathrm{PPI}}\right)\right), \mathbf{1}\right) \tag{1}$$

where $I_{\mathrm{PPI}}$ denotes the input PPI and $G(I_{\mathrm{PPI}})$ is the output sPPI of generator $G$. $D(\cdot)$ is the output of discriminator $D$, which is designed to return a binary map indicating whether each pixel is real (from DRR) or fake (from sPPI). The function $\mathrm{SCE}(\cdot,\cdot)$ is the sigmoid cross entropy between the discriminator map of the sPPI and a unit map $\mathbf{1}$ of the same size [26].
In this work, normalized mean absolute error (NMAE) and gradient difference error (GDE) were used as components of the image difference loss for the generator [20]. The NMAE loss forces the generator to synthesize an sPPI with pixel intensities similar to the ground truth DRR image. The GDE loss forces the sPPI’s edge structure to match that of the ground truth DRR image. The image difference loss is given by:
$$L_{\mathrm{img}} = L_{\mathrm{NMAE}} + \lambda\, L_{\mathrm{GDE}} \tag{2}$$

where $\lambda$ is a parameter which balances the $L_{\mathrm{NMAE}}$ and $L_{\mathrm{GDE}}$ losses. Finally, the optimizations of the generator and discriminator are guided by the following equations:

$$G = \arg\min_{G}\left\{L_{\mathrm{adv}} + L_{\mathrm{img}}\right\} \tag{3}$$

$$D = \arg\min_{D}\left\{\mathrm{SCE}\left(D\left(I_{\mathrm{DRR}}\right), \mathbf{1}\right) + \mathrm{SCE}\left(D\left(G\left(I_{\mathrm{PPI}}\right)\right), \mathbf{0}\right)\right\} \tag{4}$$

where $\mathbf{0}$ denotes a zero map, i.e. the discriminator is trained to label DRR pixels as real and sPPI pixels as fake.
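A TensorFlow sketch of these losses follows. The SCE terms mirror equations (1) and (4); the specific NMAE and GDE forms and the weight `lam` are assumptions consistent with the descriptions above, since the exact expressions are not reproduced here.

```python
import tensorflow as tf

def sce(logits, target):
    """Sigmoid cross entropy between a discriminator map and a constant map
    of the same size (the SCE of equations (1) and (4))."""
    labels = tf.ones_like(logits) * target
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))

def gde(y_true, y_pred):
    """Gradient difference error: penalizes mismatched edge strength along both
    image axes. Expects (batch, H, W, C) tensors; one common form of the GDE."""
    dy_t, dx_t = tf.image.image_gradients(y_true)
    dy_p, dx_p = tf.image.image_gradients(y_pred)
    return tf.reduce_mean(tf.abs(tf.abs(dy_t) - tf.abs(dy_p))
                          + tf.abs(tf.abs(dx_t) - tf.abs(dx_p)))

def generator_loss(d_fake_logits, sppi, drr, lam=1.0):
    """Adversarial term (equation (1)) plus the image difference loss
    (equation (2)); `lam` is the balancing parameter lambda."""
    adversarial = sce(d_fake_logits, 1.0)
    nmae = tf.reduce_mean(tf.abs(drr - sppi))  # images assumed normalized to [0, 1]
    return adversarial + nmae + lam * gde(drr, sppi)

def discriminator_loss(d_real_logits, d_fake_logits):
    """Equation (4): DRR pixels labeled real (1), sPPI pixels labeled fake (0)."""
    return sce(d_real_logits, 1.0) + sce(d_fake_logits, 0.0)
```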
2.3. Evaluation
For quantitative comparisons, sPPIs are compared to DRRs, which are taken as the ground truth, for calculation of NMAE, normalized mean error (NME), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. All comparison metrics are calculated within the phantom body for each projection.
NME and NMAE are the difference and the magnitude of the difference, respectively, between the ground truth image and the evaluated image and are given by:
$$\mathrm{NME} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i\right) \tag{5}$$

$$\mathrm{NMAE} = \frac{1}{n}\left\|x - y\right\|_1 \tag{6}$$

where $x$ is the vector of pixels in the ground truth DRR, $y$ is the vector of pixels in the sPPI, $n$ is the total number of pixels in the calculation, and $\left\|\cdot\right\|_1$ indicates the $\ell_1$-norm.
PSNR is the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation and is calculated as follows:
$$\mathrm{PSNR} = 10\log_{10}\left(\frac{n\,\mathrm{MAX}^2}{\left\|x - y\right\|_2^2}\right) \tag{7}$$

where $\mathrm{MAX}$ is the maximum pixel intensity in $x$ and $y$, and $\left\|\cdot\right\|_2$ indicates the $\ell_2$-norm.
SSIM is an image quality metric that assesses the visual impact of three characteristics of an image: luminance, contrast, and structure. The overall index is a multiplicative combination of the three terms [27] and is defined as:

$$\mathrm{SSIM}(x, y) = \frac{\left(2\mu_x\mu_y + c_1\right)\left(2\sigma_{xy} + c_2\right)}{\left(\mu_x^2 + \mu_y^2 + c_1\right)\left(\sigma_x^2 + \sigma_y^2 + c_2\right)} \tag{8}$$

where $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\sigma_{xy}$ are the local means, standard deviations, and cross-covariance for images $x$ and $y$. $c_1$ and $c_2$ are defined as:

$$c_1 = \left(k_1 L\right)^2 \tag{9}$$

$$c_2 = \left(k_2 L\right)^2 \tag{10}$$

with $L$ being the dynamic range of the pixel values and $k_1 = 0.01$, $k_2 = 0.03$ by default [27].
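All four metrics can be computed directly with NumPy, as sketched below. This is a simplified illustration: the paper evaluates the metrics within the phantom body, and SSIM uses local statistics, whereas this sketch applies a single global window.

```python
import numpy as np

def nme(x: np.ndarray, y: np.ndarray) -> float:
    """Normalized mean error (equation (5)); x is the ground-truth DRR,
    y the sPPI, both assumed normalized to [0, 1]."""
    return float(np.mean(y - x))

def nmae(x: np.ndarray, y: np.ndarray) -> float:
    """Normalized mean absolute error (equation (6)): per-pixel l1 difference."""
    return float(np.mean(np.abs(x - y)))

def psnr(x: np.ndarray, y: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB (equation (7))."""
    mse = np.mean((x - y) ** 2)
    peak = max(x.max(), y.max())
    return float(10.0 * np.log10(peak ** 2 / mse))

def ssim_global(x, y, dynamic_range=1.0, k1=0.01, k2=0.03):
    """SSIM (equations (8)-(10)) computed over one global window;
    a simplification of the locally windowed form of [27]."""
    c1, c2 = (k1 * dynamic_range) ** 2, (k2 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```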
3. Results
Figure 4 illustrates the qualitative results of the proposed PPI correction method on two projections, 0° (first two rows) and 104° (last two rows). Ground truth DRRs (column 1) and sPPIs (column 3) have similar contrast and are scarcely distinguishable without detailed analysis. sPPIs show enhanced spatial resolution and capture fine details present in the DRRs that are missed in the PPIs. Rows 2 and 4 are magnified images of the regions of interest outlined by the yellow rectangles in figures 4(a) and (g), respectively. It can be seen that the algorithm can synthesize the finely detailed image structures of the ground truth DRR. Line profiles in figure 5 (taken along the red lines in figures 4(c) and (i)) show that the deep-learning-based method closely matches the DRR in both shape and pixel value across heterogeneous regions, while the input PPI fails to capture these fine features.
Figure 4.
Representative examples of the sPPIs (last column) obtained by our method as compared to ground-truth DRRs (first column) and PPIs (second column) at projection angles 0° and 104°. All images are normalized and shown on the window [0 1]. Rows 2 and 4 are the magnified images of the regions of interest indicated by yellow rectangle selections in (a) and (g) respectively.
Figure 5.
Normalized line profiles of 0° (top, corresponding to the red line drawn in figure 4(c)) and 104° (bottom, corresponding to the red line drawn in figure 4(i)) projections.
Figures 6(a)-(c) show the ground truth DRR, the sPPI, and the element-wise subtraction image between the two for the 270° projection. Figure 6(d) is a checkerboard overlay of the DRR and sPPI. Checkerboard displays are a commonly used qualitative visualization tool for detecting registration error in the clinical setting and are particularly effective in identifying mismatches between corresponding structures at high-contrast tissue interfaces [28]. Figures 6(e) and (f) show the ground truth DRR with an sPPI rotated by 7° to demonstrate that the proposed method can be used for position verification. Figure 6(e) is a blended overlay of the DRR and sPPI. Figure 6(f) shows the DRR and sPPI overlaid in different color bands. Gray regions in the composite image show where the two images have the same intensities; magenta and green regions highlight intensity differences.
Figure 6.
(a) DRR at 270° projection. (b) sPPI at 270° projection. (c) Elementwise subtraction image of (a) and (b). (d) checkerboard overlay of (a) and (b). (e) blend overlay of (a) and (b) with 7° CCW offset applied to (b). (f) Composite image of (a) and (b) overlaid in different color bands. The DRRs, sPPIs, and the subtraction image are shown on a window of [0 1] normalized DRR pixel value.
The quantitative results for all 149 projections of the head phantom are summarized in table 1. The mean ± standard deviation (SD) of NME, NMAE, PSNR and SSIM for sPPIs are −0.0009 ± 0.0004, 0.003 ± 0.0007, 39.14 ± 1.47 dB, and 0.987 ± 0.0099, respectively. Figure 7 shows NME, NMAE, and SSIM with respect to projection angle. Red dashed lines, (a) and (b), correspond to the projections at 0° and 104° (shown in figure 4).
Table 1.
Statistics of sPPIs for the NME, NMAE, PSNR and SSIM values for the head phantom for all projections.
| | Mean ± SD | Median | Min | Max |
|---|---|---|---|---|
| NME | −0.0009 ± 0.0004 | −0.00088 | −0.0018 | −3.48 × 10−5 |
| NMAE | 0.003 ± 0.0007 | 0.003 | 0.0026 | 0.0083 |
| PSNR (dB) | 39.14 ± 1.47 | 39.33 | 31.17 | 41.61 |
| SSIM | 0.987 ± 0.0099 | 0.989 | 0.956 | 0.999 |
Figure 7.
NME (top), NMAE (middle) and SSIM (bottom) over different projection angles. Dashed red lines, (a) and (b), indicate projections at 0° and 104° respectively.
4. Discussion
In this study, we proposed a machine-learning-based method to synthesize a DRR-like image from a PPI for potential use in proton RT, with the aim of achieving precise patient setup for treatment. We evaluated the accuracy of the sPPIs generated by our method using a head phantom. The proposed method successfully generated sPPIs from PPIs with an average NMAE of approximately 0.3% and a structural similarity of 98.7%. PPIs are true beam’s-eye-view (BEV) images, as opposed to x-ray radiographs, which are conical projections from a given point source. Another advantage of proton imaging is that, for proton tracking systems, dose can be reduced by a factor of 50–100 compared to x-ray imaging [8]. To our knowledge, this is the first study to demonstrate improving the visual quality of BEV PPIs using a deep-learning model.
Although portal imaging with the therapeutic beam offers the most direct imaging in the treatment fields in photon RT, no such technique is available in conventional proton RT simply because the beam typically ranges out in the patient. Recently, techniques where proton beams shoot through the patient from different angles and irradiate the tumor with the plateau region, namely ‘shoot-through’ proton FLASH RT, have been proposed [29-31]. For shoot-through proton FLASH RT, there will be protons exiting the patient which can be used for portal imaging, revealing valuable information for the validation of tumor location. The implementation of such a portal imaging technique for the direct verification of shoot-through proton FLASH RT is especially relevant to the treatment of moving targets, such as lung tumors [32, 33]. Image quality of these portal images can be improved by our proposed method and used for further target verification.
Pencil beam scanning may be a preferred delivery method for shoot-through FLASH RT to maximize the dose rate. Although PPIs in this study were acquired using a double-scattering beamline, the major conclusion can be extended to PPIs acquired using pencil beam scanning. Recently we showed a proof-of-concept of electronic portal imaging for shoot-through protons using pencil beam scanning to acquire BEV PPIs.
In this study we applied our method on 2D PPI projections. Enhanced image quality of these projections shows promise that, in the future, proton CT (pCT) could benefit from our method as well. With further validation of this approach for producing high quality PPIs, extension to pCT reconstruction using sPPIs could produce x-ray CT quality images that could be used for identification and delineation of malignancies without exposing a patient to additional x-ray dose.
In this study, we tested our method in the head-and-neck region. Head-and-neck patients feature high anatomical complexity and variability between patients. Tumor shape, size, and location can vary greatly among patients, and it is common for the tumor to change the exterior body shape, which is challenging for learning-based methods. Nevertheless, this is a single-phantom study with a very limited dataset, which is not ideal for learning-based methods. Without a feasible method for acquiring clinical PPIs, the inclusion of real patient data in this kind of study is unrealistic. Consequently, future studies should involve a comprehensive evaluation with different datasets, perhaps pelvis and thorax phantoms, to further reduce bias during model training. This is a proof-of-concept study, and future work should include simulated PPIs from various patient CTs paired with DRRs to demonstrate the feasibility of the proposed method on realistic datasets. Since there are difficulties associated with the stability of the rotary stage for large phantoms, we are also investigating a setup in which the flat-panel imager rotates to acquire the projections. Testing and training datasets from different institutions would also be valuable for evaluating the clinical utility of our method.
As with other machine learning methods, the computational cost of training a model is a challenge for our method. We implemented the proposed algorithm in-house with Python 3.7 and TensorFlow on an NVIDIA Tesla V100 GPU with 32 GB of memory. The Adam gradient optimizer with a learning rate of 2 × 10−4 was used for optimization. In the present study, the training stage required ~16 GB of memory and ~2 h for a training dataset of 149 projections, and testing took ~1 s per projection.
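For reference, one alternating generator/discriminator update with the stated Adam settings might look like the following sketch, reusing the `generator_loss` and `discriminator_loss` sketches from section 2.2.3; batching, augmentation, and the 500-iteration loop are omitted.

```python
import tensorflow as tf

# Learning rate from the text above; other optimizer settings are defaults.
gen_opt = tf.keras.optimizers.Adam(learning_rate=2e-4)
disc_opt = tf.keras.optimizers.Adam(learning_rate=2e-4)

@tf.function
def train_step(generator, discriminator, ppi, drr):
    """One alternating generator/discriminator update (section 2.2.1).
    `generator_loss` and `discriminator_loss` are the hedged sketches
    from section 2.2.3."""
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        sppi = generator(ppi, training=True)
        d_real = discriminator(drr, training=True)
        d_fake = discriminator(sppi, training=True)
        g_loss = generator_loss(d_fake, sppi, drr)
        d_loss = discriminator_loss(d_real, d_fake)
    gen_opt.apply_gradients(
        zip(g_tape.gradient(g_loss, generator.trainable_variables),
            generator.trainable_variables))
    disc_opt.apply_gradients(
        zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
    return g_loss, d_loss
```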
5. Conclusions
We applied a novel deep learning-based approach that integrates residual blocks into a GAN framework to synthesize sPPIs from PPIs for potential use in shoot-through proton RT, with the aim of achieving precise patient setup during the treatment course. Against ground truth images, the proposed method reliably generated synthetic images with a high level of agreement across several qualitative and quantitative metrics.
Acknowledgments
This research is supported in part by the National Cancer Institute of the National Institutes of Health under Award Number R01CA215718 and an Emory Winship Pilot Grant.
References
- [1]. Hua C, Yao W, Kidani T, Tomida K, Ozawa S, Nishimura T, Fujisawa T, Shinagawa R and Merchant TE 2017 A robotic C-arm cone beam CT system for image-guided proton therapy: design and performance Br. J. Radiol. 90 20170266
- [2]. Veiga C et al 2016 First clinical investigation of cone beam computed tomography and deformable registration for adaptive proton therapy for lung cancer Int. J. Radiat. Oncol. Biol. Phys. 95 549–59
- [3]. Chen GT, Sharp GC and Mori S 2009 A review of image-guided radiotherapy Radiol. Phys. Technol. 2 1–12
- [4]. Wilson RR 1946 Radiological use of fast protons Radiology 47 487–91
- [5]. Steward VW and Koehler AM 1973 Proton beam radiography in tumor detection Science 179 913–4
- [6]. Schneider U and Pedroni E 1995 Proton radiography as a tool for quality control in proton therapy Med. Phys. 22 353–63
- [7]. Doolan P, Bentefour E, Testa M, Cascio E, Royle G, Gottschalk B and Lu H 2015 WE-EF-303-10: single-detector proton radiography as a portal imaging equivalent for proton therapy Med. Phys. 42 3680
- [8]. Schneider U, Besserer J, Pemler P, Dellert M, Moosburger M, Pedroni E and Kaser-Hotz B 2004 First proton radiography of an animal patient Med. Phys. 31 1046–51
- [9]. Krah N, Khellaf F, Létang JM, Rit S and Rinaldi I 2018 A comprehensive theoretical comparison of proton imaging set-ups in terms of spatial resolution Phys. Med. Biol. 63 135013
- [10]. Schulte R et al 2004 Conceptual design of a proton computed tomography system for applications in proton radiation therapy IEEE Trans. Nucl. Sci. 51 866–72
- [11]. Penfold SN, Rosenfeld AB, Schulte RW and Sadrozinksi HFW 2011 Geometrical optimization of a particle tracking system for proton computed tomography Radiat. Meas. 46 2069–72
- [12]. Civinini C et al 2013 Recent results on the development of a proton computed tomography system Nucl. Instrum. Methods Phys. Res. A 732 573–6
- [13]. Taylor JT et al 2015 Proton tracking for medical imaging and dosimetry J. Instrum. 10 C02015
- [14]. Seco J and Depauw N 2011 Proof of principle study of the use of a CMOS active pixel sensor for proton radiography Med. Phys. 38 622–3
- [15]. Zygmanski P, Gall KP, Rabin MS and Rosenthal SJ 2000 The measurement of proton stopping power using proton-cone-beam computed tomography Phys. Med. Biol. 45 511–28
- [16]. Poludniowski G, Allinson NM and Evans PM 2015 Proton radiography and tomography with application to proton therapy Br. J. Radiol. 88 20150134
- [17]. Priegnitz M, Helmbrecht S, Janssens G, Perali I, Smeets J, Vander Stappen F, Sterpin E and Fiedler F 2016 Detection of mixed-range proton pencil beams with a prompt gamma slit camera Phys. Med. Biol. 61 855–71
- [18]. Jee KW, Zhang R, Bentefour EH, Doolan PJ, Cascio E, Sharp G, Flanz J and Lu HM 2017 Investigation of time-resolved proton radiography using x-ray flat-panel imaging system Phys. Med. Biol. 62 1905–19
- [19]. Zhang R, Sharp GC, Jee KW, Cascio E, Harms J, Flanz JB and Lu HM 2019 Iterative optimization of relative stopping power by single detector based multi-projection proton radiography Phys. Med. Biol. 64 065022
- [20]. Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, Curran WJ, Mao H, Liu T and Yang X 2019 MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks Med. Phys. 46 3565–81
- [21]. Wang T, Lei Y, Tian Z, Dong X, Liu Y, Jiang X, Curran WJ, Liu T, Shu HK and Yang X 2019 Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy J. Med. Imaging 6 043504
- [22]. Dong X, Wang T, Lei Y, Higgins K, Liu T, Curran WJ, Mao H, Nye JA and Yang X 2019 Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging Phys. Med. Biol. 64 215016
- [23]. Harms J, Lei Y, Wang T, Zhang R, Zhou J, Tang X, Curran WJ, Liu T and Yang X 2019 Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography Med. Phys. 46 3998–4009
- [24]. Yang X, Lei Y, Dong X, Wang T, Higgins K, Liu T, Shim H, Curran WJ, Mao H and Nye JA 2019 Attenuation and scatter correction for whole-body PET using 3D generative adversarial networks J. Nucl. Med. 60 174 (http://jnm.snmjournals.org/content/60/supplement_1/174)
- [25]. Zhu J, Park T, Isola P and Efros AA 2017 Unpaired image-to-image translation using cycle-consistent adversarial networks 2017 IEEE Int. Conf. on Computer Vision (ICCV) pp 2242–51
- [26]. Yang X, Lei Y, Wang T, Dong X, Higgins K, Curran WJ, Mao H and Nye JA 2019 Whole-body PET estimation from ultra-short scan durations using 3D cycle-consistent generative adversarial networks J. Nucl. Med. 60 247 (http://jnm.snmjournals.org/content/60/supplement_1/247.short)
- [27]. Zhou W, Bovik AC, Sheikh HR and Simoncelli EP 2004 Image quality assessment: from error visibility to structural similarity IEEE Trans. Image Process. 13 600–12
- [28]. Brock KK, Mutic S, McNutt TR, Li H and Kessler ML 2017 Use of image registration and fusion algorithms and techniques in radiotherapy: report of the AAPM Radiation Therapy Committee Task Group No. 132 Med. Phys. 44 e43–76
- [29]. Katsis A, Busold S, Mascia A, Heese J, Marshall A, Smith C, Magliari A, Parry R and Abel E 2019 Treatment planning and dose monitoring for small animal proton FLASH irradiations Med. Phys. 46 e380
- [30]. Eley J, Abel E, Zodda A, Katsis A, Marshall A, Girdhani S, Parry R, Vujaskovic Z and Jackson I 2019 Experimental platform for ultra-high-dose-rate FLASH proton therapy Med. Phys. 46 e665
- [31]. Diffenderfer ES et al 2020 Design, implementation, and in vivo validation of a novel proton FLASH radiation therapy system Int. J. Radiat. Oncol. Biol. Phys. 106 440–8
- [32]. Maxim PG, Keall P and Cai J 2019 FLASH radiotherapy: newsflash or flash in the pan? Med. Phys. 46 4287–90
- [33]. Favaudon V et al 2014 Ultrahigh dose-rate FLASH irradiation increases the differential response between normal and tumor tissue in mice Sci. Transl. Med. 6 245ra93