Author manuscript; available in PMC: 2022 Nov 1.
Published in final edited form as: Comput Biol Med. 2021 Oct 4;138:104917. doi: 10.1016/j.compbiomed.2021.104917

Synthetic Digital Reconstructed Radiographs for MR-only Robotic Stereotactic Radiation Therapy: A Proof of Concept

Gregory Szalkowski 1, Dong Nie 2, Tong Zhu 1, Pew-Thian Yap 2, Jun Lian 1,*
PMCID: PMC8627784  NIHMSID: NIHMS1750670  PMID: 34688037

Abstract

Purpose:

To create synthetic CTs and DRRs from MR images that allow for fiducial visualization and accurate dose calculation for MR-only radiosurgery.

Methods:

We developed a machine learning model to create synthetic CTs from pelvic MRs for prostate treatments. This model has previously been shown to generate synthetic CTs with accuracy on par with or better than alternative methods, such as atlas-based registration. Our dataset consisted of 11 paired CT and conventional T2-weighted MR images used for previous CyberKnife (Accuray, Inc.) radiotherapy treatments. The MR images were pre-processed to mimic the appearance of fiducial-enhancing images. To identify the optimal patch size and number of image pairs for training, two models were trained for each parameter combination using a subset of the available image pairs, with the remaining images set aside for testing and validation. Four models were then trained using the identified parameters and used to generate synthetic CTs, which in turn were used to generate DRRs at angles of 45 and 315 degrees, as would be used for a CyberKnife treatment. The synthetic CTs and DRRs were compared against the ground-truth images, both visually and using the mean squared error and peak signal-to-noise ratio, to evaluate their similarity.

Results:

The synthetic CTs, as well as the DRRs generated from them, gave visualization of the fiducial markers in the prostate similar to that of their true counterparts. No significant difference was found in fiducial localization for either the CTs or the DRRs. Across the 8 DRRs analyzed, the mean MSE between the normalized true and synthetic DRRs was 0.66 ± 0.42% and the mean PSNR for this region was 22.88 ± 3.74 dB. For the full CTs, the mean MAE was 72.9 ± 88.1 HU and the mean PSNR was 31.23 ± 2.16 dB.

Conclusions:

Our machine learning-based method provides a proof of concept of a way to generate synthetic CTs and DRRs for accurate dose calculation and fiducial localization for use in radiation treatment of the prostate.

Keywords: Synthetic CT, Robotic Radiotherapy, Deep Learning

1. INTRODUCTION

In the field of radiation therapy, magnetic resonance imaging (MRI) has long been an important imaging modality due to its superior soft-tissue contrast compared to computed tomography (CT) images. This is particularly important in brain and abdominal treatments, where the tumor volume and important organs at risk (OARs) are surrounded by soft tissue with similar electron density, making the delineation of these structures on a CT image difficult to impossible. For prostate stereotactic body radiation therapy (SBRT), the segmentation accuracy required for the target and OARs is even higher than in conventional radiation therapy since the dose per fraction is higher. Additionally, the urethra, an important structure to spare, is visible on MR but not on CT. However, conventional simulation CT scans are generally still an integral part of the radiation therapy workflow, as the electron density information is necessary for dose calculation, especially for heterogeneity corrections. Registering an MR image to the treatment planning CT allows target and normal tissue contours to be created on the MR and transferred to the CT, but this requires additional imaging procedures for the patient and introduces geometric uncertainties on the order of 2–3 mm from the imperfect registration process1–3.

There has been extensive research on converting MR images to CT to allow an MR-only workflow for radiation therapy. While MR images can be more prone to geometric distortions than CTs, previous work has shown that the effect of these distortions on dose calculation accuracy is minimal4,5. There are several techniques to create synthetic CTs from MRIs, including conventional methods like contour-based density assignment6,7, deformable atlas methods8–10, and tissue classification11–13. In recent years, a variety of machine learning algorithms have proven effective for image synthesis14–18. The advantage of the machine learning approach is a more realistic synthetic image, as it does not rely on assigning HU values based on contours. Our group developed such a method in 2016, in one of the first publications on this topic16. However, all publications so far have focused on the planning side, and there are few reports on the application of synthesized images for treatment delivery. CyberKnife (Accuray Inc.) is a robotic radiosurgery machine with a compact accelerator mounted on a six-degree-of-freedom robotic arm. It is capable of delivering precise, non-isocentric stereotactic radiation therapy to targets in both the brain and body. Through the use of intra-fraction planar images, the CyberKnife can track targets and adjust the aim of the beam to correct for motion of the target. Unfortunately, CyberKnife has no in-room 3D volumetric scanning capability. The accurate delivery of pelvic treatments often demands adequate visualization of implanted fiducials on two orthogonal 2D X-ray images. Historically, fiducials could not be easily visualized in conventional clinical T1 or T2 MR images; however, recently developed sequences have shown promising results in enhancing the contrast of implanted fiducials19,20.

Based on this, we have developed the first machine learning models that are optimally tuned for pelvic CyberKnife treatments. The models can create synthetic CTs and DRRs that include fiducial information while also achieving similar, or superior, HU accuracy in the rest of the image compared to alternate synthetic CT techniques, demonstrating the potential for an MR-only workflow for fiducial-based treatments.

2. MATERIALS AND METHODS

2.A. Image acquisition

Our prostate image data set consists of 11 subjects, each with MR and CT images. This data set is a subset of the pelvic data set used in our previous publication (Nie et al, 201816), which also includes a full description of the initial image data and processing methods. In brief, the MR and CT images for each subject were first rigidly registered with the stand-alone FLIRT toolbox21, then deformably registered using a diffeomorphic algorithm (ANTs SyN), guided by manual prostate, bladder, and rectum contours. The aligned images were resampled to a spacing of 1 × 1 × 1 mm3 and cropped to a size of 153 × 193 × 50 voxels.

As we do not currently acquire fiducial-enhancing MR images for clinical use at our institution, as a proof-of-principle study we processed existing clinical T2 MRs used in previous CyberKnife patient treatment planning to enhance the fiducial contrast. The simulated MR dataset has an appearance, intensity distribution, and local fiducial contrast similar to those of sample MR images acquired with a fiducial-enhancing sequence. We believe the use of pre-processed images does not affect the structure of our machine learning model or the model training process; this is discussed further in the Discussion section.

2.B. Image padding for dose calculation

As the images were cropped due to computational constraints on the model, we created larger images to compare the calculated dose on the real and synthetic images. To do this, we repeated the last 10 slices on each side of the image 5 times in the superior, inferior, left, and right directions, 8 times anteriorly, and 2 times posteriorly to increase the dimensions of the image from 19.3×15.3×5 cm to 29.3×25.3×15 cm. The larger image size resembles the dimensions of a typical patient. More slices were added anteriorly than posteriorly to give a more realistic position of the prostate within the “patient.” The image was then padded with air to meet the 512×512×256 voxel minimum image size requirement in the CyberKnife treatment planning system (Precision, Accuray Inc., Sunnyvale, CA). Since the padding process is the same for both the real and synthetic images and no voxel values were adjusted, this should not add any additional error in the dose calculation comparison.
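The padding scheme above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' actual code: the function names, the slab-tiling helper, and the assumed air value of −1000 HU are ours.

```python
import numpy as np

AIR_HU = -1000  # assumed air value; the actual padding value depends on the TPS

def enlarge_for_dose_calc(img, slab=10):
    """Enlarge a cropped CT by tiling its edge slabs, then pad with air.

    Simplified sketch of the padding in Sec. 2.B: the outermost `slab`
    slices on each face are repeated a fixed number of times to grow the
    volume, then the result is embedded in an air-filled 512x512x256 array.
    """
    def tile_axis(a, axis, reps_lo, reps_hi):
        # Repeat the first/last `slab` slices reps_lo/reps_hi times.
        lo = np.take(a, range(slab), axis=axis)
        hi = np.take(a, range(a.shape[axis] - slab, a.shape[axis]), axis=axis)
        return np.concatenate([lo] * reps_lo + [a] + [hi] * reps_hi, axis=axis)

    img = tile_axis(img, 0, 5, 5)   # left/right: 153 -> 253 voxels
    img = tile_axis(img, 1, 8, 2)   # anterior (more) / posterior (less): 193 -> 293
    img = tile_axis(img, 2, 5, 5)   # superior/inferior: 50 -> 150

    # Embed centrally in an air-filled volume meeting the TPS minimum size.
    out = np.full((512, 512, 256), AIR_HU, dtype=np.int16)
    ox, oy, oz = [(t - s) // 2 for t, s in zip(out.shape, img.shape)]
    out[ox:ox + img.shape[0], oy:oy + img.shape[1], oz:oz + img.shape[2]] = img
    return out
```

Because the tiled slabs come from the image itself and no voxel values are modified, the same routine applied to the real and synthetic CTs introduces no additional difference between them.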

2.C. Model structure

The full network structure can be found in our previous paper, Nie et al, 201816. In brief, we used a supervised deep convolutional adversarial framework comprising a generator, which creates the synthetic images, and a discriminator, which estimates the probability that an input image is real versus synthetic (Fig 2). During the testing stage, the input source images are split into overlapping patches and the generator, a 3D fully convolutional network (FCN), estimates a corresponding target for each patch. The generated patches are then merged back into a single image, averaging the intensities of all overlapping regions. The generator incorporates the image gradient difference in its loss function to preserve the sharpness of the image, and does not use pooling, to avoid a potential loss of resolution. The output of the generator is fed to the discriminator, a convolutional neural network (CNN), along with real images, which it labels as "real" or "synthetic." Effectively, the two networks are trained simultaneously: the discriminator attempts to classify images correctly as real or synthetic, while the generator attempts to create images realistic enough to fool the discriminator. During the GAN training process, training of the two sub-networks alternates; the generator is trained with a mini-batch of source and target image data to produce a set of synthetic data, and this synthetic data is then used along with the corresponding real data to train the discriminator. This continues until training is complete.
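The patch-based inference described above (split into overlapping patches, synthesize each, merge by averaging the overlaps) can be illustrated with a simplified 2D NumPy sketch. The real model operates on 3D patches, and these helper names are hypothetical:

```python
import numpy as np

def split_patches(img, patch, stride):
    """Yield (corner index, patch) over a 2D image; stride < patch gives overlap."""
    H, W = img.shape
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            yield (i, j), img[i:i + patch, j:j + patch]

def merge_patches(shape, patches, patch):
    """Merge (generated) patches back into one image, averaging overlaps."""
    acc = np.zeros(shape)   # running sum of patch intensities
    cnt = np.zeros(shape)   # number of patches covering each pixel
    for (i, j), p in patches:
        acc[i:i + patch, j:j + patch] += p
        cnt[i:i + patch, j:j + patch] += 1
    return acc / np.maximum(cnt, 1)
```

In the full pipeline, each extracted patch would pass through the generator before merging; with an identity "generator," merging exactly reproduces the covered region of the input, which is a convenient sanity check on the split/merge bookkeeping.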

FIG 2:

Illustration of the network design used in the generative adversarial network for image generation. For both the generator and discriminator networks, the initial layers contain convolution (Conv), batch normalization (BN) and ReLU operations. ReLU is used as the activation function to reduce the computational burden of the model and increase the training speed, as is common in current deep learning models. Since the final layer of the discriminator network outputs the probability that the input data is drawn from the real target image, it uses a sigmoid activation function instead (as it must output between 0 and 1).

2.D. Model training for synthetic CT and DRR

Besides the architecture of the model itself, the training parameters are important to the model's performance. We carefully investigated their effect on the quality of the synthetic CT and fiducials for CyberKnife treatment. These studies include: 1) the effect of patch size and 2) the effect of training sample number.

For each case, we trained multiple models using different subsets of our dataset to evaluate the robustness of the model. For the initial investigation of the training parameters, two models were trained for each condition. Once we identified the optimal combination, we trained a total of four models to produce images for comparison against the ground truth data.

The quality of the image is quantified by the mean absolute error (MAE), mean squared error (MSE), and peak signal to noise ratio (PSNR), where the definition of MAE is

$$\mathrm{MAE} = \frac{1}{mno}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\sum_{k=0}^{o-1}\left|S(i,j,k)-R(i,j,k)\right|$$

and where the definition of MSE is

$$\mathrm{MSE} = \frac{1}{mno}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\sum_{k=0}^{o-1}\left(S(i,j,k)-R(i,j,k)\right)^2,$$

where S(i,j,k) and R(i,j,k) are the pixel values in the synthetic and real images, respectively, i, j, and k are the coordinates in each direction, and the image size is m × n × o.

Here, MAE was used to compare the real and synthetic CTs while MSE was used to compare the DRRs, as these were normalized to the range [0,1] during the generation process.

The definition of PSNR is:

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right)$$

where $\mathrm{MAX}_I$ is the maximum possible pixel intensity in the image.
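The three metrics defined above translate directly into NumPy. These one-liners are a generic sketch, not the authors' evaluation code:

```python
import numpy as np

def mae(S, R):
    """Mean absolute error between synthetic S and real R arrays."""
    return np.mean(np.abs(S - R))

def mse(S, R):
    """Mean squared error between synthetic S and real R arrays."""
    return np.mean((S - R) ** 2)

def psnr(S, R, max_i):
    """Peak signal-to-noise ratio in dB; max_i is the maximum possible intensity."""
    return 10 * np.log10(max_i ** 2 / mse(S, R))
```

As in the paper, MAE is the natural choice for CTs in HU, while MSE suits the DRRs, whose intensities are normalized to [0, 1] (so max_i = 1).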

2.E. CT analysis and comparison

To compare the synthetic and true CT images, we analyzed the mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and the contrast ratio of the image in the area of the fiducials to evaluate how accurately they were reconstructed. Contrast ratio (CR) is defined as the ratio of the contrast in the synthetic image to the real image, where contrast is defined as the ratio of the average fiducial pixel value and the average surrounding tissue pixel value (pixels between 2 to 5 mm of the fiducial edge).
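A minimal sketch of the contrast-ratio computation, assuming the fiducial and the 2–5 mm surrounding-tissue ring are supplied as precomputed boolean masks (the mask construction, e.g. by morphological dilation at the known voxel spacing, is omitted; function names are ours):

```python
import numpy as np

def contrast(img, fid_mask, ring_mask):
    """Contrast = mean fiducial intensity / mean surrounding-tissue intensity.

    `ring_mask` should select the voxels 2-5 mm from the fiducial edge.
    """
    return img[fid_mask].mean() / img[ring_mask].mean()

def contrast_ratio(syn, real, fid_mask, ring_mask):
    """CR of Sec. 2.E: contrast in the synthetic image relative to the real one."""
    return contrast(syn, fid_mask, ring_mask) / contrast(real, fid_mask, ring_mask)
```

A CR of 1 means the synthetic image reproduces the fiducial conspicuity of the real CT exactly; values below 1 indicate reduced fiducial contrast.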

We additionally analyzed how well the model reconstructed tissue via the mean, standard deviation, and uniformity of regions of interest (ROIs) in the bladder and prostate. We assessed uniformity in a manner similar to the test used for CBCT image quality assurance22, where uniformity is defined as

$$\text{Uniformity} = \frac{\text{Max pixel intensity in ROI} - \text{Min pixel intensity in ROI}}{\text{Max pixel intensity in ROI} + \text{Min pixel intensity in ROI}}$$

Regions were chosen specifically to be areas where the true CT was mostly uniform. A visual representation of the cross-section of the 1 cm cylindrical regions can be seen in Figure 3.
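The uniformity metric is a one-liner; this sketch assumes the ROI is passed as a NumPy array of positive pixel intensities (e.g. a CT number scale offset so that air is near 0, consistent with the ~1000 HU-scale means reported in Tables II and V):

```python
import numpy as np

def uniformity(roi):
    """(max - min) / (max + min) over an ROI, as in the CBCT QA-style test."""
    hi, lo = roi.max(), roi.min()
    return (hi - lo) / (hi + lo)
```

Lower values indicate a flatter, more uniform region; identical max and min give a uniformity of 0.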

FIG. 3:

Visualization of the 5 ROIs used to assess the model’s performance in normal tissue

2.F. Dose calculation comparison

To compare the dose calculation on the real and synthetic images, we imported the enlarged images into our CyberKnife treatment planning system. After contouring the bladder, rectum, prostate, and urethra, a treatment plan was created using our in-house standard prostate SBRT protocol (3625 cGy in 5 fractions). The dose distribution for this plan was calculated on both the real and synthetic CT images, and the two were compared using a 3D gamma analysis, a standard dose comparison metric. The gamma analysis, detailed in Low et al23, calculates a quality index, γ, at each point r_t in the test dose distribution, compared against the points r_r in the reference dose distribution, using specified distance-to-agreement (Δd_M) and dose-difference (ΔD_M) criteria. For each point, the γ index is given by

$$\gamma(r_t) = \min\{\Gamma(r_t, r_r)\}\;\forall\,\{r_r\},$$

where

$$\Gamma(r_t, r_r) = \sqrt{\frac{r^2(r_t, r_r)}{\Delta d_M^2} + \frac{\delta^2(r_t, r_r)}{\Delta D_M^2}},$$

$$r(r_t, r_r) = |r_r - r_t|$$

is the distance between points $r_t$ and $r_r$, and

$$\delta(r_t, r_r) = D_t(r_t) - D_r(r_r)$$

is the difference between the dose values from the test and reference dose distributions at points $r_t$ and $r_r$. A point is considered "passing" if $\gamma(r_t) < 1$.

For this work, we used 3%/2mm for the dose-difference and distance-to-agreement criteria, respectively, a low-dose threshold of 10% of the prescription dose and a tolerance level of ≥90% of points passing, as used in our clinical practice and also recommended by AAPM task group report 21824.
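As a concrete illustration of the gamma test, here is a brute-force 1D sketch. The clinical computation is 3D and uses an optimized search; the global normalization of the dose-difference criterion to the maximum reference dose is our assumption:

```python
import numpy as np

def gamma_pass_rate(dose_t, dose_r, spacing, dd=0.03, dta=2.0, thresh=0.1):
    """Brute-force 1D gamma analysis (after Low et al.), for illustration only.

    dose_t, dose_r : test and reference dose profiles on the same grid
    spacing        : grid spacing in mm
    dd             : dose-difference criterion (fraction of the max reference dose)
    dta            : distance-to-agreement criterion in mm
    thresh         : low-dose threshold (fraction of the max reference dose)
    """
    rx = dose_r.max()                              # normalization dose (assumed)
    x = np.arange(len(dose_r)) * spacing           # positions in mm
    gammas = []
    for i, d in enumerate(dose_t):
        if d < thresh * rx:                        # skip the low-dose region
            continue
        dist2 = ((x - x[i]) / dta) ** 2            # squared distance terms
        ddiff2 = ((dose_r - d) / (dd * rx)) ** 2   # squared dose-difference terms
        gammas.append(np.sqrt(np.min(dist2 + ddiff2)))
    return np.mean(np.asarray(gammas) < 1.0)
```

With the paper's 3%/2 mm criteria and 10% threshold, a distribution is acceptable when at least 90% of evaluated points have γ < 1.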

2.G. DRR creation and analysis

For each of the two testing images, two DRRs were created using Siddon's incremental algorithm25 implemented in MATLAB (MathWorks Inc., Natick, MA)26, corresponding to the 45 and 315 degree angles used by the CyberKnife system. We compared the DRRs created from the synthetic CTs against those created from the ground truth CTs via the mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Additionally, we recruited five volunteers from our physics group to complete a fiducial identification task; Figure 4 shows the task interface. During this task, each volunteer was shown a series of 8 images, consisting of 4 synthetic DRRs and their real counterparts in randomized order, and was asked to identify the center of each fiducial they could see. Volunteers were able to manipulate the window/level to better visualize the fiducials. From these responses, we compared both the number and the centroid locations of the identified fiducials between the synthetic and true DRRs for each image pair. We also asked the volunteers to identify the fiducials directly in the real and synthetic CTs imported into a treatment planning system (RayStation, RaySearch Laboratories, Stockholm, Sweden). The fiducial locations on the real and synthetic images were compared using a Wilcoxon signed-rank test to determine whether there was a statistically significant difference between the selected coordinates.
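Siddon's algorithm traces each ray exactly through the voxel grid; as a didactic stand-in, a DRR can be approximated by naive parallel-beam ray sampling over a 2D slice. This simplified sketch is not the cited MATLAB implementation, and its sampling is far cruder:

```python
import numpy as np

def simple_drr(vol_slice, angle_deg, n_rays=None, n_steps=None):
    """Parallel-beam DRR of a 2D slice by nearest-neighbour ray sampling.

    Each ray is sampled at unit steps along the beam direction and the
    attenuation values are summed, then the profile is normalized to [0, 1].
    """
    H, W = vol_slice.shape
    n_rays = n_rays or max(H, W)
    n_steps = n_steps or max(H, W)
    th = np.deg2rad(angle_deg)
    d = np.array([np.cos(th), np.sin(th)])     # ray direction
    p = np.array([-np.sin(th), np.cos(th)])    # detector (lateral) axis
    c = np.array([(H - 1) / 2, (W - 1) / 2])   # slice centre
    drr = np.zeros(n_rays)
    for r in range(n_rays):
        offset = r - (n_rays - 1) / 2          # lateral ray offset
        for s in range(n_steps):
            t = s - (n_steps - 1) / 2          # position along the ray
            y, x = c + offset * p + t * d
            yi, xi = int(round(y)), int(round(x))
            if 0 <= yi < H and 0 <= xi < W:    # accumulate inside the slice
                drr[r] += vol_slice[yi, xi]
    return drr / drr.max() if drr.max() > 0 else drr
```

Calling this at 45 and 315 degrees mimics the two oblique CyberKnife imaging angles; extending it to a full 3D volume would yield a 2D projection image per angle.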

FIG. 4:

Illustration of the fiducial identification task presented to volunteers to compare the visualization of fiducials between the real and synthetic DRRs

3. RESULTS

3.A. Effect of patch size on model performance

The size of the patch used in model training affects the image quality of the synthetic CT and also significantly contributes to the efficiency of model training. We trained models using three different patch sizes: 32, 64, and 128. As shown qualitatively in Figure 5 and quantitatively in Table I, the model with patch size 128 achieved better contrast for the fiducials than the other patch sizes. Table II shows that the uniformity and accuracy of HU values were also closer to those of the true image using a patch size of 128 compared to the other patch sizes. Table III shows that this patch size also yields the synthetic image closest overall to the true image, as measured by MAE and PSNR. The training time did increase with patch size, from 2 days for patch size 32 to 5 days for patch size 128. As training is a one-time cost, we chose the 128 patch size since it generated the results with the best image quality. On analysis of the images, the high variance of the MAE was found to be caused mostly by differences in rectal filling between the CT and MR images in our dataset. While the outline of the rectum was similar because of the deformable registration, changes in the location of air pockets inside the rectum contour between the CT and MR images led to areas where the MAE was close to 900 HU (the difference between tissue and air). In common radiation therapy practice, the rectal filling often differs in this manner between simulation and the patient's treatment fractions, and this difference has been found not to be clinically relevant27. Online re-planning to adapt the treatment plan based on the daily anatomy can further improve dosimetric accuracy; however, it has been implemented clinically in only a few centers and has not yet become common practice due to the large increase in workload and time needed to treat patients28.

FIG. 5:

Comparison of the real CT and the synthetic CTs created using models trained with 32, 64, and 128 pixel patch sizes

TABLE I:

Comparison of fiducials in true and synthetic images for different patch sizes

Patch Size Mean MAE (HU) Mean PSNR (dB) Contrast Ratio
32 109.8 ± 20.2 29.80 ± 1.41 0.871 ± 0.100
64 110.3 ± 21.26 29.90 ± 1.30 0.890 ± 0.048
128 61.0 ± 23.7 33.45 ± 2.17 0.92 ± 0.056

TABLE II:

Comparison of uniformity in the bladder/prostate in true and synthetic images for different patch sizes

Patch Size Mean (HU) Uniformity
True Image 1038.0 ± 11.0 0.036 ± 0.007
32 984.1 ± 11.0 0.032 ± 0.019
64 954.38 ± 11.4 0.031 ± 0.020
128 1036.1 ± 10.1 0.028 ± 0.014

TABLE III:

Comparison of overall CT quality for different patch sizes

Patch Size Mean MAE (HU) Mean PSNR (dB)
32 113.8 ± 95.1 28.77 ± 0.27
64 128.1 ± 89.5 28.40 ± 0.73
128 72.9 ± 88.1 31.23 ± 2.16

3.B. Effect of training sample on model performance

Next, we analyzed the effect of the number of training samples on the quality of the synthetic CT. Based on the previous results, a patch size of 128 was used to train all of these models. Table IV gives a quantitative measurement of the fiducial appearance as a function of training set size, and Table V shows the pixel value and uniformity. While the fiducial appearance does not show a significant change as the training set size is reduced, the normal tissue pixel values become marginally more accurate as the training set increases. Figure 6 shows a qualitative comparison of the synthetic CTs. The fiducial appearance does not appreciably change as the training set is increased from 5 to 9, but the soft tissue boundaries and uniformity approach those of the real CT (Tables IV and V). The overall accuracy of the CT, as determined by the MAE and PSNR, also increased gradually as the training set was increased (Table VI). Our model's PSNR for the synthetic CT is in the range of 27–31 dB, which agrees with other published results14–18, which range from 24.5 ± 1.31 dB18 to 34.0 ± 1.0 dB16. The mean absolute error (MAE) of our images, 72.9 ± 88.1 HU, is also similar to other studies14–18, which range from 39.0 ± 4.6 HU16 to 84.8 ± 17.3 HU14. Based on this experiment, we chose to use all 9 available training samples, which was sufficient to generate images acceptable for radiotherapy use.

TABLE IV:

Comparison of fiducials in true and synthetic images for different training set size

Image Number Mean MAE (HU) Mean PSNR (dB) Contrast Ratio
5 133.4 ± 81.1 29.17 ± 4.39 0.88 ± 0.084
7 132.3 ± 101.8 29.92 ± 5.81 0.88 ± 0.055
9 61.0 ± 23.7 33.45 ± 2.17 0.92 ± 0.056

TABLE V:

Comparison of uniformity in the bladder/prostate in true and synthetic images for different training set size

Image number Mean (HU) Uniformity
True Image 1038.0 ± 11.0 0.036 ± 0.007
5 983.4 ± 30.0 0.13 ± 0.022
7 983.2 ± 22.5 0.12 ± 0.028
9 1036.1 ± 10.1 0.028 ± 0.014

FIG. 6:

Comparison of the real CT and the synthetic CTs created using models trained with 5, 7, and 9 patient image sets in the training set.

TABLE VI:

Comparison of overall CT quality for different training set size

Image number Mean MAE (HU) Mean PSNR (dB)
5 169.5 ± 131.0 25.79 ± 3.14
7 142.7 ± 114.0 27.20 ± 3.90
9 72.9 ± 88.1 31.23 ± 2.16

3.D. DRR comparison and analysis

The mean MSE of the DRRs across the 4 testing sets (8 images) was 0.66 ± 0.42%, and the mean PSNR was 22.88 ± 3.73 dB. Figure 7 shows a qualitative comparison between the DRRs generated from synthetic and real CT images, while Table VII gives a quantitative breakdown of the mean MSE and PSNR. The MSE and PSNR are reported both for the full region of the DRR in which tissue can be seen and for small regions of interest surrounding the fiducials.

FIG. 7:

Comparison of the real and synthetic DRRs for the two angles investigated for a pelvic case. Red arrows indicate the location of the four fiducials present in the scan

TABLE VII:

Mean MSE and PSNR for the synthetic pelvic DRRs

DRR Angle Full DRR Mean MSE Full DRR Mean PSNR (dB) Fiducial Mean MSE Fiducial Mean PSNR (dB)
45 degree 0.57 ± 0.41% 23.89 ± 4.82 0.30% ± 0.31% 29.18 ± 7.26
315 degree 0.74 ± 0.46% 21.88 ± 2.59 0.74% ± 0.57% 23.69 ± 3.92

The fiducial identification task did not show a statistically significant difference in the number of fiducials found between the real and synthetic DRR images (p = 0.72, mean difference of −0.05 ± 0.88 fiducials). The mean difference of the centroid locations between the true and synthetic DRR images was 1.4 ± 0.96 mm, which was found to be insignificant (p = 0.90) and is approximately the size of a single pixel (1 × 1 mm). On the 3D scans, the mean difference of the identified fiducial centroid locations between the real and synthetic CTs (0.5 ± 0.4 mm) was also not significant (p = 0.25), and less than a single voxel (1 × 1 × 1 mm). There was no difference in the number of fiducials identified in this case.
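The paired statistical comparison described in Sec. 2.G can be reproduced with SciPy's signed-rank test. The coordinates below are hypothetical stand-ins for the volunteers' selections, not data from this study:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical centroid coordinates (mm) identified on the real images,
# and the matched selections on the synthetic images with small offsets.
real = np.array([12.1, 33.4, 25.0, 41.2, 18.7, 29.9, 36.5, 22.3])
syn = real + np.array([0.4, -0.3, 0.2, -0.5, 0.1, 0.6, -0.2, 0.3])

# Two-sided Wilcoxon signed-rank test on the paired coordinates:
# a large p-value means no detectable systematic shift between them.
stat, p = wilcoxon(real, syn)
```

In the study, the same test applied per coordinate to the real vs. synthetic selections yielded no significant differences.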

3.E. Verification with dose calculation

Three-dimensional dose distributions calculated on the real and synthetic CTs show 97.7 ± 0.7% agreement using the 3%/2 mm gamma criteria, indicating that our synthetic CT is accurate enough for dose calculation24. A sample two-dimensional dose distribution is shown in Figure 8. The DVHs of the two calculations show no noticeable difference (figure not shown).

4. DISCUSSION

In this study, we evaluated the accuracy of our proposed synthetic CT generation method in terms of the visualization of the landmarks needed for target tracking in robotic radiosurgery cases.

The model training shows the importance of the selection of sample size and patch size. To reach a balance of accuracy and efficiency, we found that 9 patient samples and a 128×128 patch size are appropriate for MR-CT synthesis of the pelvis. This combination allows for accurate reconstruction of the tissue, as assessed by the uniformity, PSNR, and MAE compared with the ground truth images, and preserves fine details in the scan, such as fiducials, as assessed by the contrast ratio. As the goal of this study was only to correlate images between two specific scanners, each using a single specific protocol, we did not need many samples to build an accurate model. While 9 patients were sufficient for this purpose, if more scanners or protocols were used, a larger data set would likely be necessary to obtain similar results.

Using the parameters we found to be optimal, a patch size of 128 and 9 image pairs in the training set, our model produced images sufficiently similar to the ground truth images for clinical purposes. The synthetic CTs created by our model gave an MAE of 72.9 ± 88.1 HU and a PSNR of 31.23 ± 2.16 dB, comparable to other deep learning synthetic CT methods16–18 (39.0 ± 4.6 HU16 to 84.8 ± 17.3 HU14 for MAE and 24.5 ± 1.31 dB18 to 34.0 ± 1.0 dB16 for PSNR). The MSE for the DRRs derived from the synthetic CTs was 0.66% and the PSNR was 22.88 dB. Based on the observations of our volunteers on the CTs, there was no difference in the number of fiducials observed, and the difference in identified centroid locations was insignificant. Similar results were obtained when the identification was done on the DRRs: there was no statistically significant difference in the number of fiducials that could be seen in the synthetic and true DRRs, and the difference in the identified centers was within 1.5 mm, approximately one pixel. While artifacts on MR images may pose some difficulty in identifying the fiducials even on a fiducial-enhanced image, verification can be performed with 3D ultrasound scanning to check the number and location of fiducials. In this retrospective study, the ultrasound images were not saved during the fiducial implant procedure, so we did not perform this verification step.

Due to computational limitations, we were limited as to the image size that we could process, so we could not include the full external surface and directly test the dose calculation accuracy for the pelvic cases. Based on the small variation in pixel values found between our real and synthetic CTs, we do not believe the dose calculation would be any less accurate using a synthetic CT produced using this method. Our comparison of the dose distributions calculated using the same plan on the real and synthetic CTs confirmed this, with a gamma pass rate of 97.7%. While MR images can contain some distortions, particularly around the edge of the patient, these distortions have not been found to lead to clinically meaningful differences in the dose calculation for pelvic treatments.4,5 Future investigation will look into methods to reduce the computational burden of processing large image sets and/or conduct training on more powerful hardware.

The fiducial-enhancing sequence (T1-w VIBE Dixon) is new and not yet well established at our institution; we have only a small number of sample images, and their results are inconsistent. The training MR images selected for this study are typical clinical conventional MRs for treatment planning, pre-processed to simulate fiducial-enhancing MRI (fe-MRI). Other fiducial-enhancing sequences, such as three-dimensional T2*-weighted imaging (T2*3D), have also been shown to provide good fiducial contrast and could potentially be used without pre-processing19,20.

As a proof of principle, we are confident that our model design, model training, and experimental results are valid. The model can be easily applied to fe-MRI when it becomes more available, with only small modifications (such as image intensity normalization). Further work will investigate ways to expand the image size that can be processed, to include a complete external contour for true treatment planning, and to acquire a data set of T1-w VIBE Dixon or T2*3D images for training and testing the model.

5. CONCLUSIONS

We developed a machine learning method to produce synthetic CTs and synthetic DRRs that accurately replicate the fiducial appearance for robotic radiosurgery tracking. This proof of principle study suggests it may be feasible to plan SBRT treatments with a single MRI image modality and also use synthetic DRR as the reference for tumor tracking.

FIG. 1:

Planning CT (a) and processed MRs (b). The red arrow marks the fiducial location in each of the images, bright on CT and dark on MR

FIG 8:

Comparison of the dose calculated using the same plan on the synthetic image (a) and real image (b).

  • Synthetic CTs from MRs allow for treatment planning of radiotherapy without another scan

  • Fiducials do not appear on standard MR images, but do using specialized sequences

  • Robotic radiosurgery treatment needs fiducials in DRRs for localization and tracking

  • Deep learning model can produce CT images with fiducials for radiotherapy planning

  • Model provides DRR and fiducial localization for robotic radiosurgery delivery

ACKNOWLEDGMENTS

This project is in part supported by NIH 1R01CA206100.

Footnotes

COI statement: None declared

REFERENCES

1. Roy S, Carass A, Jog A, Prince JL, Lee J. MR to CT registration of brains using image synthesis. Proc SPIE Int Soc Opt Eng, 9034, 2014.
2. Opposits G, Kis SA, Tron L, Berenyi E, Takacs E, Dobai JG, Bognar L, Szucs B, Emri M. Population based ranking of frameless CT-MRI registration methods. Z Med Phys, 25(4):353–367, 2015.
3. Ulin K, Urie MM, Cherlow JM. Results of a multi-institutional benchmark test for cranial CT/MR image registration. Int J Radiat Oncol Biol Phys, 77(5):1584–9, 2010.
4. Tyagi N, Fontenla S, Zhang J, et al. Dosimetric and workflow evaluation of first commercial synthetic CT software for clinical use in pelvis. Phys Med Biol, 62:2961–2975, 2017.
5. Gustafsson C, Nordstrom F, Persson E, Brynolfsson J, Olsson LE. Assessment of dosimetric impact of system specific geometric distortion in an MRI only based radiotherapy workflow for prostate. Phys Med Biol, 62:2976–2989, 2017.
6. Johansson A, Karlsson M, Nyholm T. Treatment planning using MRI data: an analysis of the dose calculation accuracy for different treatment regions. Radiat Oncol, 5:62, 2010.
7. Lambert J, Greer PB, Menk F, et al. MRI-guided prostate radiation therapy planning: investigation of dosimetric accuracy of MRI-based dose planning. Radiother Oncol, 98(3):330–334, 2011.
8. Guerreiro F, Burgos N, Dunlop A, et al. Evaluation of a multi-atlas CT synthesis approach for MRI-only radiotherapy treatment planning. Phys Med, 35:7–17, 2017.
9. Chen S, Quan H, Qin A, Yee S, Yan D. MR image-based synthetic CT for IMRT prostate treatment planning and CBCT image-guided localization. J Appl Clin Med Phys, 17(3):236–245, 2016.
10. Farjam R, Tyagi N, Deasy JO, Hunt MA. Dosimetric evaluation of an atlas-based synthetic CT generation approach for MR-only radiotherapy of pelvis anatomy. J Appl Clin Med Phys, 20(1):101–109, 2019.
11. Hsu SH, DuPre P, Peng Q, Tome WA. A technique to generate synthetic CT from MRI for abdominal radiotherapy. J Appl Clin Med Phys, 21(2):136–143, 2020.
12. Bredfeldt J, Liu L, Feng M, Cao Y, Balter J. Synthetic CT for MRI-based liver stereotactic body radiotherapy treatment planning. Phys Med Biol, 62:2922, 2017.
13. Johansson A, Karlsson M, Nyholm T. CT substitute derived from MRI sequences with ultrashort echo time. Med Phys, 38:2708–2714, 2011.
14. Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys, 44(4):1408–1419, 2017. doi: 10.1002/mp.12155.
15. Kim J, Glide-Hurst C, Doemer A, Wen N, Movsas B, Chetty IJ. Implementation of a novel algorithm for generating synthetic CT images from magnetic resonance imaging data sets for prostate cancer radiation therapy. Int J Radiat Oncol Biol Phys, 91(1):39–47, 2015.
16. Nie D, Trullo R, Lian J, et al. Medical image synthesis with deep convolutional adversarial networks. IEEE Trans Biomed Eng, 65(12):2720–2730, 2018. doi: 10.1109/TBME.2018.2814538.
17. Lei Y, Harms J, Wang T, et al. MRI-based synthetic CT generation using semantic random forest with iterative refinement. Phys Med Biol, 64(8), 2019. doi: 10.1088/1361-6560/ab0b66.
18. Lei Y, Harms J, Wang T, et al. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys, 46(8):3565–3581, 2019. doi: 10.1002/mp.13617.
  • 19.Pathmanathan A, Schmidt M, Brand D, Kouski E, van As N, Tree A. Improving fiducial and prostate capsule visualization for radiotherapy planning using MRI. J Appl Clin Med Phys, 20(3):27–36, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Tanaka O, Komeda H, Hattori M, Hirose S, Yama E, Matsuo M. Comparison of mri sequences in ideal fiducial maker-based radiotherapy for prostate cancer. Rep Pract Oncol Radiother., 22(6):502–506, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Fischer B, Modersitzki J. Flirt: A flexible image registration toolbox. In Lecture Notes in Computer Science, volume 2717, pages 261–270, WBIR 2003, 2003. Springer, Berlin, Heidelberg. [Google Scholar]
  • 22.Bissonnette JP, Moseley DJ, Jaffray DA. A qulity assurance program for image quality of cone-beam CT guidance in radiation therapy. Med Phys 2008;35:1807–15. [DOI] [PubMed] [Google Scholar]
  • 23.Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. Med Phys. 1998;25(5):656–661. doi: 10.1118/1.598248 [DOI] [PubMed] [Google Scholar]
  • 24.Miften M, Olch A, Mihailidis D et al. Tolerance limits and methodologies for IMRT measurement-based verification QA: recommendations of AAPM Task Group No. 218. Med Phys. 2018;45:e53–e83. [DOI] [PubMed] [Google Scholar]
  • 25.Siddon RL. Fast calculation of the exact radiological path for a three-dimensional ct array. Medical Physics, 12(2):252–255, 1985. [DOI] [PubMed] [Google Scholar]
  • 26.Folkerts M, Jia X, Choi D, Gu X, Majumdar A, Jiang S. Su-e-i-35: A gpu optimized drr algorithm. Medical Physics, 38(6):3403–3403, 2011. [Google Scholar]
  • 27.Byun DJ, Gorovets DJ, Jacobs LM et al. Strict bladder filling and rectal emptying during prostate SBRT: Does it make a dosimetric or clinical difference?. Radiat Oncol 15, 239 (2020). doi: 10.1186/s13014-020-01681-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Glide-Hurst CK, Lee P, Yock AD, et al. Adaptive Radiation Therapy (ART) Strategies and Technical Considerations: A State of the ART Review From NRG Oncology. Int J Radiat Oncol Biol Phys. 2021;109(4):1054–1075. doi: 10.1016/j.ijrobp.2020.10.021 [DOI] [PMC free article] [PubMed] [Google Scholar]