Author manuscript; available in PMC 2024 Jul 1. Published in final edited form as: Med Phys. 2023;50(7):4399–4414. doi: 10.1002/mp.16246

Compensation cycle consistent generative adversarial networks (Comp-GAN) for synthetic CT generation from MR scans with truncated anatomy

Yao Zhao 1,2, He Wang 1,2, Cenji Yu 1,2, Laurence E Court 1, Xin Wang 1,2, Qianxia Wang 1,5, Tinsu Pan 2,3, Yao Ding 1, Jack Phan 4, Jinzhong Yang 1,2
PMCID: PMC10356747  NIHMSID: NIHMS1869325  PMID: 36698291

Abstract

Background:

MR scans used in radiotherapy can be partially truncated due to the limited field of view, affecting dose calculation accuracy in MR-based radiation treatment planning.

Purpose:

We proposed a novel Compensation-cycleGAN (Comp-cycleGAN) by modifying the cycle-consistent generative adversarial network (cycleGAN) to simultaneously create synthetic CT (sCT) images and compensate for the missing anatomy in truncated MR images.

Methods:

CT and T1-weighted MR images with complete anatomy from 79 head-and-neck patients were used for this study. The original MR images were manually cropped 10–25 mm off at the posterior head to simulate clinically truncated MR images. Fifteen patients were randomly chosen for testing, and the rest of the patients were used for model training and validation. Both the truncated and original MR images were used in the Comp-cycleGAN training stage, which enables the model to compensate for the missing anatomy by learning the relationship between the truncation and the known structures. After the model was trained, sCT images with complete anatomy could be generated by feeding only the truncated MR images into the model. In addition, the external body contours acquired from CT images with full anatomy could be used as an optional input for the proposed method, to leverage the additional information of the actual body shape for each test patient. The mean absolute error (MAE) of Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between sCT and real CT images to quantify the overall sCT performance. To further evaluate the shape accuracy, we generated the external body contours for the sCT and the original MR images with full anatomy. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between the body contours of the sCT and original MR images within the truncation region to assess the anatomy compensation accuracy.

Results:

The average MAE, PSNR, and SSIM calculated over the test patients were 93.1 HU/91.3 HU, 26.5 dB/27.4 dB, and 0.94/0.94 for the proposed Comp-cycleGAN models trained without/with body-contour information, respectively. These results were comparable with those obtained from a cycleGAN model trained and tested on full-anatomy MR images, indicating the high quality of the sCT generated from truncated MR images by the proposed method. Within the truncated region, the mean DSC and MSD were 0.85/0.89 and 1.3 mm/0.7 mm for the proposed Comp-cycleGAN models trained without/with body-contour information, demonstrating good performance in compensating for the truncated anatomy.

Conclusions:

We developed a novel Comp-cycleGAN model that can effectively create synthetic CT images with complete anatomy compensation from truncated MR images, which could potentially benefit MRI-based treatment planning.

Keywords: synthetic CT, cycle consistent generative adversarial networks, MR-based treatment planning, deep learning

1. INTRODUCTION

Magnetic resonance (MR) imaging has been widely used in radiation therapy to accurately delineate the tumor target and organs at risk because it offers superior soft-tissue contrast compared to computed tomography (CT) imaging.1,2 In radiation treatment of head-and-neck cancers, several studies have shown that the use of MR images can substantially reduce the inter-observer variability of tumor delineation and improve treatment outcomes.2–5 However, CT images are still required for treatment planning because MR images cannot provide electron density maps for dose calculation.6 Usually, the MR images are fused to the simulation CT through inter-modality registration, and physicians draw contours based on the fused MR view. However, this process has systematic errors and introduces geometric uncertainties into the contours.7–9 Therefore, it is desirable to develop a treatment planning workflow using MR images only. MR-based treatment planning could essentially benefit radiotherapy since it eliminates the inherent CT/MR registration error, reduces unnecessary radiation exposure for patients, and improves the efficiency of the clinical workflow.10 Additionally, the advent of the MR-guided linear accelerator (MR-Linac) further drives the need for an MR-based treatment planning workflow.11,12

Since the electron density information acquired in CT images is necessary for dose calculation, estimating Hounsfield units (HU) and generating synthetic CT (sCT) from MR images is a key step in enabling MR-based treatment planning.6 Many methods have been proposed to address this issue, which can be divided into three main categories: segmentation-based,13,14 atlas-based,15–20 and learning-based methods.21–23 Segmentation-based methods generate sCT images by assigning uniform bulk densities to structures delineated on the MR images. However, these methods rely heavily on the accuracy of organ segmentation and fail to account for heterogeneity within each structure.24 Atlas-based methods, which have also been widely used for sCT generation, depend on deformable image registration to correlate atlas CT images with the real MR images. One limitation of these methods is their heavy computational burden, making them impractical for clinical implementation; their performance is also limited by the underlying deformable image registration algorithms.20

In recent years, learning-based methods, including traditional machine learning and deep learning methods, have gained significant attention for synthetic image generation. These methods exploit self-learning and self-optimizing strategies to learn the MR-to-CT mapping for sCT generation. Among them, deep learning methods using convolutional neural networks (CNNs) have demonstrated more promising performance in sCT generation without the need to extract handcrafted features.25 For deep learning methods, a model is generally trained to establish a nonlinear mapping from the MR to the CT domain based on a large database of MR and CT pairs. Once the deep learning model has been trained, sCT images can be generated in a short amount of time by feeding a new MR image into the model. Han et al.23 first developed a UNet architecture26 to successfully generate two-dimensional (2D) sCT images of brain patients from T1-weighted MR images. To fully utilize the image information in all dimensions, Nie et al.27 proposed a three-dimensional (3D) fully convolutional network (FCN) to learn the complex translation mapping between MR and CT images. While CNN methods28,29 improved the efficiency and quality of sCT generation, their performance is affected by the voxel-wise accuracy of MR-CT registration and might suffer from blurriness during image synthesis.30 To generate high-quality sCT images with less blurriness, the generative adversarial network (GAN),31 which consists of a generator and a discriminator, has been proposed. An adversarial loss function was also introduced to simultaneously optimize the generator and discriminator to improve the sCT image quality. Isola et al.32 further extended the GAN model and proposed the conditional GAN, in which the output sCT image is constrained by the input MR image. Although GAN-based methods have achieved great success in generating synthetic images,33–39 training a GAN model usually requires perfectly co-registered image pairs,40 which is especially challenging for inter-modal (MR-CT) images. In contrast, the cycle-consistent generative adversarial network (cycleGAN) proposed by Zhu et al.41 can be trained to generate synthetic images without requiring spatially aligned image pairs. With the incorporation of cycle-consistency loss, cycleGAN models trained with unpaired CT/MR images can even outperform GAN models trained with paired images in terms of the image quality of the generated sCT.42 Due to this advantage, cycleGAN has been applied to generate sCT images in radiotherapy planning for a variety of anatomical sites.25,37,43–48

However, in MR-guided radiotherapy, the field of view (FOV) of MR scans is often limited to avoid geometric distortion in peripheral regions and to optimize the sequence for acquisition time and image quality.49 Thus, the MR images might be partially truncated in the peripheral regions, e.g., the posterior area of the head for head-and-neck patients, as shown in Figure 1. Moreover, the truncation might also be observed in some cases where the regions of interest are distant from the posterior head region. The truncation in MR images usually does not affect tumor delineation or diagnosis. However, it poses a significant challenge to MR-only treatment planning, since the missing anatomy in the generated sCT images might cause significant dose calculation errors. To the best of our knowledge, existing methods cannot accurately compensate for the missing structures in the truncated region during sCT generation.25

Fig. 1. Illustration of the truncated MR images in the clinic. The truncation is observed in the posterior region of the head.

In this study, we aimed to generate sCT images with complete anatomy from truncated MR images to facilitate the MR-based radiotherapy workflow. We proposed a novel deep learning network named Compensation-cycleGAN (Comp-cycleGAN), based on cycleGAN, to enable anatomy compensation and sCT generation at the same time. Specifically, MR images with complete anatomy were collected and applied as training targets in our approach, enabling our Comp-cycleGAN model to capture the complex relationship between the truncated regions and the given anatomical structures. The cycle-consistency loss in the traditional cycleGAN model was modified accordingly to constrain the process of sCT generation and anatomy compensation. We assessed the effectiveness of our proposed method on head-and-neck cancer patients whose MR images were truncated at the posterior head.

2. MATERIALS AND METHODS

2.1. Overview

The proposed algorithm, Comp-cycleGAN, which can generate sCT images with complete anatomy from truncated MR images, is composed of a training stage and a generation stage. CT and MR images with complete anatomy of head-and-neck patients were collected. The MR images were then manually cropped 10–25 mm off at the posterior head to simulate real truncated MR images in the clinic. For a given MR image, the manually truncated image, the original image, and the paired CT image were all used for training. The CT image was used for learning a complex MR-to-CT mapping, while the original MR image was used as a target for the truncated image to learn the anatomy compensation. Furthermore, full anatomy is usually available on CT images. To leverage this additional information during sCT generation, the body contours with complete anatomy obtained from these CT images could be used to guide the anatomy compensation. Therefore, the body contours were applied as optional inputs during our network development. The workflow schematic of our proposed method is illustrated in Figure 2.

Fig. 2. Schematic flow chart of the proposed model for truncated MR-based synthetic CT generation. The blue part shows the training stage of our proposed method, which consists of two generators and two discriminators; both the truncated MR and original MR images are used for training. The yellow part shows the synthesizing stage, where a new test MR image is fed into the trained generator to create the sCT image.

In the training stage, the truncated MR image and body contours (optional) were fed into the generator (GMR-CT) to be translated into a synthetic CT (sCT). The sCT was trained to be realistic and to compensate for the missing anatomy in the truncated MR image. Then, another generator (GCT-MR) was trained to translate the sCT back into a synthetic MR image, which approximates the original MR image with complete anatomy. To improve training stability, the backward cycle was also trained, translating CT images into synthetic MR images and back into CT images. In addition, two discriminators (DCT, DMR) were trained to distinguish synthetic images from real images.

In the prediction stage, the sCT images with full anatomy can be generated by directly feeding the truncated MR images and body contour (optional) into the trained generator (GMR-CT).

2.2. Data acquisition

MR and CT images of 79 head-and-neck patients who received external photon beam radiation treatment at The University of Texas MD Anderson Cancer Center were included in this study, under an institutional review board-approved protocol and waiver of informed consent (RCR03–0400). These patients were randomly selected without specific restrictions on sex, age, or histology type. The median patient age was 68 yr (range 30–84 yr) at the time of image acquisition; 60 patients (76%) were men and 19 patients (24%) were women. The treatment sites included the oral cavity, pharynx, larynx, paranasal sinuses and nasal cavity, and salivary glands. The CT images were acquired on a Philips scanner (Big Bore) with 120 kVp tube voltage, 434 mA tube current, 887 ms exposure time, 1.1719 × 1.1719 × 1.0 mm³ resolution, and a 512 × 512 reconstruction matrix. The corresponding MR images were acquired using a 1.5 T MR system (Magnetom Aera, Siemens Healthineers) with a pair of large flex 4 coils and built-in spine coils covering the head and neck region. The post-contrast T1-weighted MR imaging protocol included a 3D gradient dual-echo Dixon sequence with repetition time = 7.11 ms, echo time 1 = 2.39 ms, echo time 2 = 4.77 ms, pixel bandwidth = 405 Hz, flip angle = 10°, field of view = 256 × 256 × 240 mm³, and reconstructed voxel size = 1 × 1 × 1 mm³. The CT and MR images of each patient were acquired using the same setup with an interval of less than one week. Since both the original MR (with complete anatomy) and truncated MR images were required in the training stage, all the MR images were carefully selected to contain the complete anatomy.

2.3. Preprocessing

CT images were first rigidly registered to MR images for each patient using the commercial software Velocity AI v3.0.1 (Varian Medical Systems, Atlanta, GA). A binary body mask was generated for each image to remove the immobilization head mask and couch outside the patient body. Voxels outside the body mask were set to 0 for MR images and −1024 HU for CT images. The intensity of each MR image was normalized by Z-score standardization using only voxels within the body mask, and then scaled to a similar numeric range. All the MR and CT images were resampled to the same voxel size of 1.1719 × 1.1719 × 1.0 mm³. Then, each axial slice of the registered MR and CT pairs was cropped to a 256 × 256 2D patch that kept the head and neck regions in the middle of the image. For the shoulder region, four 256 × 256 patches were cropped for each slice with 128 overlapping pixels.
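The following is a minimal sketch of this preprocessing, assuming the images and body masks are available as NumPy arrays; the function names and the output intensity range are illustrative choices, not taken from the authors' code.

```python
import numpy as np

def normalize_mr(mr: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
    """Z-score standardize MR intensities using in-body voxels only,
    then rescale to a common numeric range (here [0, 1], an assumption)."""
    voxels = mr[body_mask > 0]
    mr = (mr - voxels.mean()) / (voxels.std() + 1e-8)
    mr = (mr - mr.min()) / (mr.max() - mr.min())
    mr[body_mask == 0] = 0.0          # background set to 0, per the paper
    return mr

def mask_ct(ct: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
    """Set voxels outside the body mask to air (-1024 HU)."""
    ct = ct.copy()
    ct[body_mask == 0] = -1024.0
    return ct

def crop_patch(img2d: np.ndarray, center_row: int, center_col: int,
               size: int = 256) -> np.ndarray:
    """Crop a size x size patch of an axial slice around a chosen center
    (boundary handling omitted for brevity)."""
    r0, c0 = center_row - size // 2, center_col - size // 2
    return img2d[r0:r0 + size, c0:c0 + size]
```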

In the clinic, the MR images might be partially truncated at the posterior head, with the truncation depth ranging from 5 to 25 mm. To simulate this truncation, the acquired MR images were manually cropped at the posterior head to approximate clinically truncated MR images. Furthermore, to acquire adequate truncated images for network training, all the slices within the head region (above the lower boundary of the mandible) were utilized. Each slice within the head region was randomly cropped 5–25 mm off at the posterior area based on the body mask. Specifically, the most posterior part of the head within each slice was identified from its corresponding body mask. Then, a cropping depth of 5–25 mm was randomly selected for each slice, and all the pixels within that area were assigned a value of 0. This cropping can simulate all possible clinical MR truncations and serves as data augmentation for network training.
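A sketch of this random posterior truncation is shown below; it assumes that axial slices are stored with the posterior direction toward larger row indices (adjust for your data orientation) and uses the in-plane pixel size from this section.

```python
import numpy as np

def truncate_posterior(mr_slice: np.ndarray, body_mask: np.ndarray,
                       depth_range_mm=(5.0, 25.0), pixel_mm: float = 1.1719,
                       rng: np.random.Generator = None) -> np.ndarray:
    """Randomly crop 5-25 mm off the most posterior part of the head,
    identified from the body mask, by setting those pixels to 0."""
    rng = rng or np.random.default_rng()
    rows = np.where(body_mask.any(axis=1))[0]
    if rows.size == 0:                       # empty slice, nothing to crop
        return mr_slice
    posterior_row = rows.max()               # most posterior in-body row
    depth_px = int(round(rng.uniform(*depth_range_mm) / pixel_mm))
    out = mr_slice.copy()
    out[posterior_row - depth_px + 1:posterior_row + 1, :] = 0.0
    return out
```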

2.4. Network architecture

The overall architecture of the Comp-cycleGAN network is shown in Figure 2, consisting of four CNNs: two generators (GMR-CT, GCT-MR) and two discriminators (DCT, DMR). The model was trained on 2D 256 × 256 patches of axial slices. The generators (GMR-CT, GCT-MR) were developed based on a hybrid of the UNet architecture and residual blocks, the residual-UNet. The residual-UNet comprises 9 residual units, and its detailed architecture is shown in Figure 3. Each residual unit contains two sets of a 3 × 3 convolutional layer followed by instance normalization and a rectified linear unit (ReLU) activation. Instead of using a pooling operation to downsample the feature map, a stride of 2 was used for the first convolutional layer in each residual unit during the encoding stage. In the decoding stage, an up-sampling operation was applied to recover the size of the feature map. As in the original UNet architecture, long skip connections were used to copy low-level features to the corresponding high-level features. After the last level of the decoding stage, a 1 × 1 convolutional layer was used to output the generated synthetic image with the same size as the input image. In addition, the generators in our model were modified to take dual-channel inputs: the truncated MR images and the optional body contours, which may be generated from the corresponding CT images with complete anatomy.
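Below is a condensed sketch of such a residual-UNet generator in TensorFlow/Keras (the framework named in the Discussion). It assumes the tensorflow_addons package for instance normalization; the depth, filter counts, shortcut projections, and tanh output activation are illustrative assumptions rather than the authors' exact configuration, which uses 9 residual units.

```python
import tensorflow as tf
import tensorflow_addons as tfa  # assumption: tfa provides InstanceNormalization
from tensorflow.keras import layers

def res_unit(x, filters, downsample=False):
    """Two conv-IN-ReLU sets with a residual shortcut; the first conv uses
    stride 2 when downsampling, replacing pooling as described above."""
    stride = 2 if downsample else 1
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    y = tfa.layers.InstanceNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = tfa.layers.InstanceNormalization()(y)
    shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    return layers.ReLU()(layers.Add()([y, shortcut]))

def build_generator(in_channels=2):  # truncated MR + optional contour channel
    inp = layers.Input(shape=(256, 256, in_channels))
    e1 = res_unit(inp, 64)                          # encoding path
    e2 = res_unit(e1, 128, downsample=True)
    e3 = res_unit(e2, 256, downsample=True)
    b = res_unit(e3, 512, downsample=True)          # bottleneck
    d3 = layers.UpSampling2D()(b)                   # decoding path, long skips
    d3 = res_unit(layers.Concatenate()([d3, e3]), 256)
    d2 = layers.UpSampling2D()(d3)
    d2 = res_unit(layers.Concatenate()([d2, e2]), 128)
    d1 = layers.UpSampling2D()(d2)
    d1 = res_unit(layers.Concatenate()([d1, e1]), 64)
    out = layers.Conv2D(1, 1, activation="tanh")(d1)  # 1x1 conv output layer
    return tf.keras.Model(inp, out)
```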

Fig. 3.

Fig. 3.

The details of the residual-UNet network: a combination of UNet architecture and residual blocks.

The discriminators (DCT, DMR) were trained to distinguish real images from the synthetic images generated by the generators (GMR-CT, GCT-MR), respectively. Each discriminator is built from five successive 4 × 4 convolutional layers with 64, 128, 256, 512, and 1 filters, respectively, to generate a sub-regional estimation of the authenticity of images. Each convolutional layer in the discriminator is followed by a Leaky ReLU activation except for the last layer.
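A sketch of this PatchGAN-style discriminator is given below; the strides and the Leaky ReLU slope are assumptions, since the text specifies only the kernel sizes, filter counts, and activations.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(in_channels=1):
    """Five successive 4x4 convolutions (64/128/256/512/1 filters) producing
    a patch-wise (sub-regional) authenticity map rather than a single score."""
    inp = layers.Input(shape=(256, 256, in_channels))
    x = inp
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)      # slope 0.2 is an assumption
    out = layers.Conv2D(1, 4, padding="same")(x)  # last layer: no activation
    return tf.keras.Model(inp, out)
```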

2.5. Network training

In the training stage, the generators (GMR-CT, GCT-MR) and discriminators (DCT, DMR) were trained simultaneously to achieve an optimal solution by minimizing an adversarial loss. That is, the generators are trained to create realistic synthetic images to "fool" the discriminators, and the discriminators are trained to differentiate between synthetic and real images by decreasing the judging error of the discriminator network. As shown in Figure 2, the generator (GMR-CT) was trained with the input of the truncated MR image and body contour (optional) to generate the sCT. Then, the sCT was fed into the other generator (GCT-MR) to be translated back to a synthetic MR image. To improve training stability, the backward cycle was also trained: CT → generator (GCT-MR) → synthetic MR → generator (GMR-CT) → sCT. Meanwhile, the discriminators DCT and DMR were trained with the inputs of (sCT, real CT) and (synthetic MR, real MR), respectively. Once the model has been trained, sCT images with full anatomy can be generated by feeding the truncated MR image and body contour (optional) into the trained generator (GMR-CT).

In the original cycleGAN, the MR-CT texture translation is based on the constraints of the adversarial loss and the cycle-consistency loss. However, if truncated MR images are used to train a cycleGAN, the model will not be able to generate stable solutions for volume changes due to the missing anatomy in the MR images, which will deteriorate the texture translation for sCT generation. To address these issues, we used three different images for each patient to develop our Comp-cycleGAN: the manually truncated MR image, the original MR image, and the paired CT image. By doing so, we expect to establish a stable solution for GMR-CT in anatomy compensation in the truncation region and achieve MR-CT translation at the same time.

Similar to the traditional cycleGAN, our training loss also includes a cycle-consistency loss ($L_{cycle}$), an adversarial loss ($L_{adv}$), and an identity loss ($L_{identity}$). To build the correlation between synthetic images and input images, the cycle-consistency loss is introduced in cycleGAN to constrain the generators, satisfying $G_{CT\text{-}MR}(G_{MR\text{-}CT}(I_{MR}^{trunc})) \approx I_{MR}^{trunc}$ and $G_{MR\text{-}CT}(G_{CT\text{-}MR}(I_{CT})) \approx I_{CT}$, where $I_{MR}^{trunc}$ and $I_{CT}$ represent the truncated MR image and the paired CT image, respectively. With this cycle-consistency loss, the cycleGAN model is expected to prevent the generators from producing synthetic images that are irrelevant to the input images. However, as mentioned above, this cycle-consistency loss cannot ensure accurate compensation of the truncated regions during sCT generation when truncated MR images are used for model training, as shown in Figure 4(a). Since the cycle-consistency loss forces the reconstructed synthetic MR images (cycledMR) to be identical to their inputs $I_{MR}^{trunc}$, the generated sCT from $G_{MR\text{-}CT}(I_{MR}^{trunc})$ might also be truncated or randomly compensated.

Fig. 4. Illustration of the cycle-consistency loss Lcycle calculation. (a) The cycle-consistency loss Lcycle is calculated between synthetic MR (sMR) and truncated MR images. (b) The cycle-consistency loss Lcycle is calculated between sMR and original MR images. The output of GCT-MR is only sMR, which is compared to the truncated MR (a) or original MR (b) to construct the cycle-consistency loss Lcycle.

To overcome this issue, we modified the cycleGAN, especially the cycle-consistency loss, in our method. Instead of forcing the cycledMR to be identical to the input $I_{MR}^{trunc}$, the original MR image with full anatomy, $I_{MR}^{ori}$, is utilized as the target for the reconstructed cycledMR when penalizing the cycle-consistency loss, as shown in Figure 4(b). The new cycle-consistency loss is defined below:

$$L_{cycle}(G_{MR\text{-}CT}, G_{CT\text{-}MR}) = \left\| G_{CT\text{-}MR}\!\left(G_{MR\text{-}CT}\!\left(I_{MR}^{trunc}, C_{body}\right)\right) - I_{MR}^{ori} \right\|_1 + \left\| G_{MR\text{-}CT}\!\left(G_{CT\text{-}MR}\!\left(I_{CT}\right), C_{body}\right) - I_{CT} \right\|_1 \tag{1}$$

where $\|\cdot\|_1$ denotes the L1-norm distance, and $C_{body}$ denotes the body contour of the patient, which may be acquired from the CT image and is an optional input to our model. By targeting the cycledMR to the original MR image with full anatomy, the generator GMR-CT is trained to simultaneously learn MR-to-CT texture translation and anatomy compensation.
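A sketch of the modified cycle-consistency loss of Eq. (1) in TensorFlow is shown below; tensor and function names are illustrative, and the optional contour channel is simply concatenated with the generator input, an assumption consistent with the dual-channel design in Section 2.4.

```python
import tensorflow as tf

def cycle_loss(g_mr2ct, g_ct2mr, mr_trunc, mr_orig, ct, c_body=None):
    """Eq. (1): the cycled MR is compared against the ORIGINAL full-anatomy
    MR (mr_orig) rather than the truncated input, so the MR->CT generator
    must learn anatomy compensation as well as texture translation."""
    x = mr_trunc if c_body is None else tf.concat([mr_trunc, c_body], axis=-1)
    cycled_mr = g_ct2mr(g_mr2ct(x))          # truncated MR -> sCT -> cycled MR
    smr = g_ct2mr(ct)                        # CT -> synthetic MR
    y = smr if c_body is None else tf.concat([smr, c_body], axis=-1)
    cycled_ct = g_mr2ct(y)                   # synthetic MR -> cycled CT
    return (tf.reduce_mean(tf.abs(cycled_mr - mr_orig))
            + tf.reduce_mean(tf.abs(cycled_ct - ct)))
```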

The adversarial loss Ladv is applied in cycleGAN to guarantee correct domain translation for synthetic images, which is defined as:

$$L_{adv}(G_{MR\text{-}CT}, D_{CT}) = \left(1 - D_{CT}\!\left(G_{MR\text{-}CT}\!\left(I_{MR}^{trunc}, C_{body}\right)\right)\right)^2 + D_{CT}\!\left(I_{CT}\right)^2 \tag{2}$$
$$L_{adv}(G_{CT\text{-}MR}, D_{MR}) = \left(1 - D_{MR}\!\left(G_{CT\text{-}MR}\!\left(I_{CT}\right)\right)\right)^2 + D_{MR}\!\left(I_{MR}^{trunc}\right)^2 \tag{3}$$

In the training process, the generators (GMR-CT, GCT-MR) are optimized to generate synthetic images close to the real ones, and the discriminators (DCT, DMR) are trained to distinguish between the generated synthetic images and the real ones. However, if $I_{MR}^{trunc}$ is applied in the adversarial loss for DMR, the generator GCT-MR will be optimized to generate a truncated cycledMR that is similar to $I_{MR}^{trunc}$, due to the adversarial relationship between the generator GCT-MR and the discriminator DMR. In this situation, $L_{adv}$ will enforce the cycledMR to be truncated, while $L_{cycle}$ will regularize the cycledMR to be identical to the original MR image with full anatomy. Consequently, the conflict between $L_{adv}$ and $L_{cycle}$ will impair the model training and the image quality of the sCT. In this study, we address this issue by modifying the discriminator loss in the adversarial loss $L_{adv}(G_{CT\text{-}MR}, D_{MR})$ as follows:

$$L'_{adv}(G_{CT\text{-}MR}, D_{MR}) = \left(1 - D_{MR}\!\left(G_{CT\text{-}MR}\!\left(I_{CT}\right)\right)\right)^2 + D_{MR}\!\left(I_{MR}^{ori}\right)^2 \tag{4}$$

Thus, the objective of training the discriminator DMR will be to decrease the judging error of the discriminator network and to encourage the generator GCT-MR to produce a synthetic image that has similar features to $I_{MR}^{ori}$.

An additional identity loss Lidentity is also introduced to constrain the generator to an identity mapping if the input images are from the target domain:

$$L_{identity}(G_{CT\text{-}MR}, G_{MR\text{-}CT}) = \left\| G_{MR\text{-}CT}\!\left(I_{CT}, C_{body}\right) - I_{CT} \right\|_1 + \left\| G_{CT\text{-}MR}\!\left(I_{MR}^{ori}\right) - I_{MR}^{ori} \right\|_1 \tag{5}$$

Therefore, the final cost function to be optimized in our method is defined as:

$$\theta_{G,D}\left(G_{CT\text{-}MR}, G_{MR\text{-}CT}, D_{CT}, D_{MR}\right) = \arg\min_{G}\max_{D}\big(\lambda_1 L_{cycle}(G_{MR\text{-}CT}, G_{CT\text{-}MR}) + L_{adv}(G_{MR\text{-}CT}, D_{CT}) + L'_{adv}(G_{CT\text{-}MR}, D_{MR}) + \lambda_2 L_{identity}(G_{CT\text{-}MR}, G_{MR\text{-}CT})\big) \tag{6}$$

where $\theta_{G,D}$ denotes the parameters of the generators and discriminators, and $\lambda_1, \lambda_2$ are hyperparameters that control the relative weights of the losses, which were set to 10 and 5 based on our experiments to balance the variance uncertainty of each task.
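The sketch below assembles Eqs. (2)-(6), written as the separate generator-side and discriminator-side least-squares losses commonly used to realize the min-max objective in practice; this split, and all names, are assumptions rather than the authors' exact implementation. Per Eq. (4), the MR discriminator is trained against the original full-anatomy MR rather than the truncated input.

```python
import tensorflow as tf

mse = lambda pred, target: tf.reduce_mean(tf.square(pred - target))
l1 = lambda a, b: tf.reduce_mean(tf.abs(a - b))

def generator_loss(g_mr2ct, g_ct2mr, d_ct, d_mr,
                   mr_in, mr_orig, ct, c_body, lam1=10.0, lam2=5.0):
    """mr_in: truncated MR with the optional contour channel concatenated."""
    sct = g_mr2ct(mr_in)                                  # MR -> sCT
    smr = g_ct2mr(ct)                                     # CT -> sMR
    adv = mse(d_ct(sct), 1.0) + mse(d_mr(smr), 1.0)       # Eqs. (2)-(3), G side
    cyc = (l1(g_ct2mr(sct), mr_orig)                      # Eq. (1), full-anatomy target
           + l1(g_mr2ct(tf.concat([smr, c_body], axis=-1)), ct))
    idt = (l1(g_mr2ct(tf.concat([ct, c_body], axis=-1)), ct)
           + l1(g_ct2mr(mr_orig), mr_orig))               # Eq. (5)
    return lam1 * cyc + adv + lam2 * idt                  # Eq. (6)

def discriminator_loss(d_ct, d_mr, ct, mr_orig, sct, smr):
    d_ct_loss = mse(d_ct(ct), 1.0) + mse(d_ct(sct), 0.0)
    # Eq. (4): the "real" example for D_MR is the ORIGINAL full-anatomy MR
    d_mr_loss = mse(d_mr(mr_orig), 1.0) + mse(d_mr(smr), 0.0)
    return d_ct_loss, d_mr_loss
```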

2.6. Validation and evaluations

To evaluate the performance of our proposed model, 64 patients (80%) were randomly selected for training and validation, and the remaining 15 patients (20%) were used for model testing. We performed five-fold cross-validation, in which 52 patients and 13 patients were used for training and validation, respectively. The best model from all folds was selected to generate sCT images for the independent test dataset. The MR images of the 15 test patients were manually cropped 25 mm off to simulate the most severe truncation in clinical scenarios.

Note that the body contour is an optional input for our method. To investigate the impact of utilizing body contours, we separately trained two independent models: (1) the modified cycleGAN without body contours (Comp-cycleGAN), and (2) the modified cycleGAN with body contours (Comp-cycleGAN (contour)). The body contours were created from the real CT images and then rigidly registered to the corresponding MR images to be used as inputs for the Comp-cycleGAN (contour) model. For each test patient, two sCT images were therefore generated by feeding the truncated MR image into the two models (Comp-cycleGAN and Comp-cycleGAN (contour)). The paired CT image for each patient was deformably registered to the corresponding MR image using the commercial software Velocity AI v3.0.1 (Varian Medical Systems, Atlanta, GA). The deformed CT image was visually checked in Velocity and then used as the ground-truth image to evaluate the image quality of the sCT images in this study. The mean absolute error (MAE) of Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) between the sCT and ground-truth CT images were calculated within the patient body to quantify the comparison. Furthermore, to demonstrate the effectiveness of our method, we compared it with two additional 2D models: (1) cycleGAN-trunc and (2) cycleGAN-full. The cycleGAN-trunc model is a traditional cycleGAN model trained and tested using truncated MR images, the same data used in our proposed method. The cycleGAN-full model is a cycleGAN model trained and tested using MR images with full anatomy, i.e., the original MR images without any truncation in the head region.

To further evaluate the model performance in terms of anatomy compensation, the external body contours were generated for sCT and original MR images with full anatomy. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between body contours of sCT and original MR images only within the truncated region to assess the shape accuracy. The MAE of HU between the sCT and ground-truth CT image was also calculated only within the truncated region to evaluate the structural accuracy.
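A sketch of these evaluation metrics is given below, assuming NumPy arrays, scikit-image for SSIM, and SciPy for the surface distances; the HU data range and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt
from skimage.metrics import structural_similarity

def mae_hu(sct, ct, mask):
    """Mean absolute HU error inside a mask (body or truncated region)."""
    return np.mean(np.abs(sct[mask] - ct[mask]))

def psnr(sct, ct, mask, data_range=3024.0):     # e.g., -1024 to 2000 HU
    mse = np.mean((sct[mask] - ct[mask]) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim(sct, ct, data_range=3024.0):
    return structural_similarity(sct, ct, data_range=data_range)

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary body masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def mean_surface_distance(mask_a, mask_b, spacing):
    """Symmetric mean distance (mm) between the two contour surfaces."""
    surf_a = mask_a ^ binary_erosion(mask_a)
    surf_b = mask_b ^ binary_erosion(mask_b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())
```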

For the comparisons, we also performed paired two-tailed t-tests between our proposed method and the comparison methods at a significance level of 0.05 (p < 0.05) to evaluate the statistical difference.
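For illustration, a minimal sketch of such a paired test with SciPy is shown below; the per-patient values here are randomly generated stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
mae_proposed = rng.normal(93.1, 11.4, size=15)   # illustrative per-patient MAEs (HU)
mae_baseline = rng.normal(147.6, 14.3, size=15)  # illustrative comparison model

t_stat, p_value = ttest_rel(mae_proposed, mae_baseline)  # paired, two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, significant: {p_value < 0.05}")
```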

3. RESULTS

3.1. Imaging quality evaluation

The image quality of the sCT images generated by the different models was evaluated by comparing them to the ground-truth CT images. Figure 5 shows the visual comparison, which contains the axial views of the truncated MR (a1, b1), the original MR (a2, b2), the corresponding real CT (a3, b3), and the sCT images generated by cycleGAN-trunc (a4, b4), the proposed Comp-cycleGAN (a5, b5), the proposed Comp-cycleGAN with body contours (a6, b6), and cycleGAN-full (a7, b7). Figure 6 shows the visual comparison in the sagittal and coronal views. The external body contours of the original MR images (a2, b2) are shown on the CT and sCT images as the green outlines in Figure 6 for comparison. As shown in Figures 5 and 6, the image quality of the sCT images generated by both Comp-cycleGAN and Comp-cycleGAN (contour) is much better than that of cycleGAN-trunc with respect to anatomy compensation in the truncated regions and overall structural accuracy. Because it was trained with truncated MR images, the cycleGAN-trunc model generates inaccurate and degraded sCT images. Specifically, the shapes of these sCT images (a4, b4) are inconsistent with the actual patient shape (a2, b2) in the truncation region. Even in the untruncated region, the overall image quality is degraded and some generated structures in the sCT are inconsistent with the input MR images. On the other hand, the sCT images generated by both Comp-cycleGAN (a5, b5) and Comp-cycleGAN (contour) (a6, b6) demonstrate superior image quality and improved compensation of the truncated region. Additionally, the sCT images generated from truncated MR images by Comp-cycleGAN (a5, b5) and Comp-cycleGAN (contour) (a6, b6) are close to those generated from full-anatomy MR images by cycleGAN-full (a7, b7).

Fig. 5. Comparison of actual computed tomography (CT) and synthetic CT (sCT) images generated by different models. Two examples (a, b) are included for illustration. (a1, b1) and (a2, b2) are the axial views of the truncated MR images and original MR images with full anatomy. (a3, b3) are the corresponding real CT images for comparison. The sCT images generated by the different models are shown as (a4, b4) for the cycleGAN-trunc model, (a5, b5) for the Comp-cycleGAN model, (a6, b6) for the Comp-cycleGAN (contour) model, and (a7, b7) for the cycleGAN-full model.

Fig. 6. Comparison of actual computed tomography (CT) and synthetic CT (sCT) images generated by different models. Two examples (a, b) are included for illustration. (a1, b1) and (a2, b2) are the sagittal and coronal views of the truncated MR images and original MR images with full anatomy, respectively. (a3, b3) are the corresponding real CT images for comparison. The sCT images generated by the different models are shown as (a4, b4) for the cycleGAN-trunc model, (a5, b5) for the Comp-cycleGAN model, (a6, b6) for the Comp-cycleGAN (contour) model, and (a7, b7) for the cycleGAN-full model. Green outlines are the external body contours of the original MR images and are shown on all CT and sCT images. (This figure is best viewed in the online version.)

We also performed a quantitative comparison of the sCT images generated by the different models. The average MAE, PSNR, and SSIM over the 15 test patients are listed in Table 1. The results indicate the high quality of the sCT images generated by our proposed method. Specifically, our Comp-cycleGAN models trained without/with body contours achieved average MAEs of 93.1 and 91.3 HU, respectively, which are close to that of the cycleGAN-full model (MAE = 90.5 HU). Compared with cycleGAN-full, our Comp-cycleGAN and Comp-cycleGAN (contour) models show comparable image quality of the generated sCT images, with no statistically significant differences (p > 0.05) for all evaluation metrics except the PSNR of the Comp-cycleGAN model (without contours). In contrast, cycleGAN-trunc has the worst quantitative results (MAE = 147.6 HU), and our proposed Comp-cycleGAN models show statistically significant improvements (p < 0.05) over cycleGAN-trunc in all metrics.

Table 1.

Quantitative comparison of image quality of sCT generated by different models.

Methods  MAE (HU)  PSNR (dB)  SSIM
cycleGAN-trunc 147.6±14.3 22.5±1.1 0.86±0.03
Comp-cycleGAN 93.1±11.4 26.5±1.0 0.94±0.02
Comp-cycleGAN(contour) 91.3±10.9 27.4±1.0 0.94±0.01
cycleGAN-full 90.5±10.7 27.9±0.9 0.95±0.01

MAE: mean absolute error; PSNR: peak signal-to-noise ratio; SSIM: structural similarity; HU: Hounsfield units; cycleGAN-trunc: the traditional cycleGAN model trained and tested with truncated MR images; Comp-cycleGAN: the modified cycleGAN model trained and tested with truncated MR images, without body contours as input; Comp-cycleGAN (contour): the modified cycleGAN model trained and tested with truncated MR images, with body contours as input; cycleGAN-full: the traditional cycleGAN model trained and tested using the original MR images with full anatomy.

3.2. Anatomy compensation evaluation

Figure 7 shows a visual inspection of the sCT images in the truncated region. The truncated MR images (a5, c5) and the original MR images (a6, c6) are shown together with the sCT images generated by the cycleGAN-trunc model (a1, c1), Comp-cycleGAN (a2, c2), Comp-cycleGAN (contour) (a3, c3), and cycleGAN-full (a4, c4), respectively. The insets (b1–b4, d1–d4) show zoomed-in images of the truncated regions of the sCT images (a1–a4, c1–c4) indicated by the dashed-line boxes, to highlight the differences in anatomy compensation between the methods. The external body contours were generated from the original MR images (a6, c6) and are shown on all sCT images as the green outlines for comparison. Because cycleGAN-full was trained and tested using original MR images with full anatomy, its generated sCT images (a4, c4) have the same shape as the original MR images (a6, c6).

Fig. 7. Comparison of anatomy compensation in the truncated regions of sCT images generated by different models. The axial views of the truncated MR and original MR images are shown as (a5, c5) and (a6, c6), respectively. The first and second rows show the sCT images generated by cycleGAN-trunc (a1, c1), the Comp-cycleGAN model (a2, c2), the Comp-cycleGAN (contour) model (a3, c3), and the cycleGAN-full model (a4, c4). The insets (b1–b4, d1–d4) show zoomed-in images of the truncation regions in the sCT images (a1–a4, c1–c4), outlined by the red boxes. (This figure is best viewed in the online version.)

With the cycleGAN-trunc model, the generated sCT images cannot accurately compensate for the truncated regions (b1, d1), and there are notable differences compared to the cycleGAN-full model (b4, d4). In contrast, the anatomy compensation quality of the sCT images generated by Comp-cycleGAN (b2, d2) and Comp-cycleGAN (contour) (b3, d3) clearly surpasses that of cycleGAN-trunc (b1, d1), and the compensated structures are similar to those in the truncated regions of the sCT images generated by cycleGAN-full (b4, d4). The sCT images in the truncated region (b2, b3, d2, d3) are close to the real body outlines of the original MR images. In addition, with the use of body contours, the shape and anatomical structures of the sCT images in the truncated regions (b3, d3) are more similar to those generated by cycleGAN-full (b4, d4), a notable improvement over Comp-cycleGAN without contours (b2, d2).

The quantitative comparison of anatomy compensation in the truncated region was also performed over the 15 test patients by calculating the MSD and DSC to quantify the shape accuracy and the MAE to indicate the texture accuracy. The quantitative results are summarized in Table 2. The DSC and MSD were not calculated for cycleGAN-full because the external body outlines of its generated sCTs are identical to those of the original MR images. Among these models, the sCT images generated by cycleGAN-trunc have the worst accuracy, with an average MAE of 219.8 HU, MSD of 3.9 mm, and DSC of 0.62 in the truncated regions. In contrast, both the proposed Comp-cycleGAN and Comp-cycleGAN (contour) achieve statistically significant improvements (p < 0.05) over cycleGAN-trunc in all evaluation metrics. The MAE comparison shows that the sCT images generated by Comp-cycleGAN with/without contours have image quality in the truncated region comparable to those generated by cycleGAN-full, and the differences were not significant (p > 0.05). Furthermore, Comp-cycleGAN (contour) shows superior performance to Comp-cycleGAN in anatomy compensation accuracy, which is consistent with the visual comparison: Comp-cycleGAN (contour) is significantly better than Comp-cycleGAN in terms of MSD and DSC (p < 0.05).

Table 2.

Quantitative comparison between different models for anatomy compensation. All the evaluation metrics are calculated within the truncated regions.

Methods  MAE (HU)  MSD (mm)  DSC
cycleGAN-trunc 219.8±34.3 3.9±1.1 0.62±0.08
Comp-cycleGAN 65.6±15.1 1.3±0.5 0.85±0.03
Comp-cycleGAN(contour) 62.1±13.7 0.7±0.3 0.89±0.02
cycleGAN-full 59.3±9.2 - -

MAE: mean absolute error; MSD: mean surface distance; DSC: Dice similarity coefficient; HU: Hounsfield units.

3.3. Effect of the extent of truncation

The MR images of the 15 test patients were manually cropped 25 mm off within the head region to simulate the most severely truncated cases during the model evaluation. To further evaluate the performance of our proposed method on truncated MR images with various extents of truncation, we also manually cropped the 15 test patients' images 10 mm, 15 mm, 20 mm, and 25 mm off. Our proposed Comp-cycleGAN and Comp-cycleGAN (contour) were tested on these different cases and evaluated by calculating the MAE, MSD, and DSC. Note that the MSD and DSC were calculated in the truncated region only. The quantitative results for Comp-cycleGAN and Comp-cycleGAN (contour) are listed in Tables 3a and 3b, respectively. The results demonstrate that our proposed methods are robust for different truncation cases. Across truncation levels, the MAE results calculated within the whole volume of the patient body demonstrate the consistent image quality of the generated sCT images. As the truncation level decreases from 25 mm to 10 mm, the MSD decreases from 1.3 mm to 0.8 mm and from 0.7 mm to 0.6 mm, and the DSC increases from 0.85 to 0.90 and from 0.90 to 0.92 for the Comp-cycleGAN and Comp-cycleGAN (contour) models, respectively. This indicates that our models perform better in anatomy compensation for cases with smaller truncation.

Table 3a.

Quantitative evaluation of Comp-cycleGAN for different truncation cases: 10 mm, 15 mm, 20 mm, and 25 mm. The metric MAE (Whole) is calculated within the whole volume of the patient body; the other evaluation metrics are calculated only within the corresponding truncated regions.

Truncation (mm)  MAE Whole (HU)  MAE Truncated (HU)  MSD (mm)  DSC
10 93.0±10.6 61.1±12.4 0.8±0.2 0.90±0.02
15 92.7±10.8 61.6±13.2 0.9±0.4 0.86±0.03
20 93.2±11.3 63.7±14.0 1.0±0.5 0.86±0.03
25 93.1±11.4 65.6±15.1 1.3±0.5 0.85±0.03

MAE: mean absolute error; MSD: mean surface distance; DSC: Dice similarity coefficient; HU: Hounsfield units.

Table 3b.

Quantitative evaluation of Comp-cycleGAN (contour) for different truncation cases: 10 mm, 15 mm, 20 mm, and 25 mm. The metric MAE (Whole) is calculated within the whole volume of the patient body; the other evaluation metrics are calculated only within the corresponding truncated regions.

Truncation (mm)  MAE Whole (HU)  MAE Truncated (HU)  MSD (mm)  DSC
10 91.2±10.8 58.8±13.0 0.6±0.2 0.92±0.02
15 90.8±10.5 58.6±12.6 0.6±0.3 0.91±0.02
20 91.4±11.2 61.0±13.2 0.7±0.2 0.90±0.03
25 91.3±10.9 62.1±13.7 0.7±0.3 0.90±0.02

MAE: mean absolute error; MSD: mean surface distance; DSC: Dice similarity coefficient; HU: Hounsfield units.

4. DISCUSSION

In this study, we proposed a novel method, modifying the traditional cycleGAN model, to effectively generate sCT images with complete anatomy from truncated MR images. We aimed to train our Comp-cycleGAN model to effectively capture the relationship between the truncated area and the non-truncated anatomical structures during the sCT generation process. Based on this relationship, our Comp-cycleGAN can predict the truncated area for unseen cases. The proposed method innovatively incorporated both truncated and full-anatomy MR images into the model and adjusted the loss functions to address the truncation issue. Additionally, the residual-UNet was developed as the generator in our method to better capture the connection between the input MR and sCT images. By integrating the UNet with residual blocks, the network can take advantage of the strengths of both UNet features and residual learning: (1) the UNet architecture enables the effective combination of low-level information and high-level features; (2) the residual connections facilitate information propagation and alleviate the degradation problem during network training. In this work, we evaluated our method on 15 independent test patients with great success in both sCT generation and missing-anatomy compensation. All the deep learning models were implemented in TensorFlow (v2.2.0) and were trained on an NVIDIA Tesla V100 GPU with 32 GB of memory. Training took around 120–150 hours for each model. The sCT generation took about 5–10 seconds for each patient, making the model practical for clinical implementation.

In our experiments, the traditional cycleGAN model was not able to generate accurate sCT images with full anatomy from truncated MR images. This is because cycleGAN is built on cycle-consistency constraints, which means the generators are trained to ensure that the reconstructed MR (cycled MR) is identical to the input MR images. Since the input MR images are truncated, the generators in cycleGAN behave unstably, applying random compensation and cropping to the truncated regions to guarantee consistency between the cycled MR and input MR images. Accurate anatomy compensation thus cannot be learned by the generator in the cycleGAN model. Moreover, due to the lack of constraints on compensation/cropping in the cycled process, the generators perform unstably for sCT image generation. This can further substantially decrease the sCT image quality (shown in Table 1) and even lead to anatomical structures in the sCT that are inconsistent with the input MR images (shown in Figures 5 and 6). Consequently, the decreased image quality (an average MAE of 147.6 HU) and the missing anatomy in the sCT images might remarkably affect dose calculation in MR-based radiotherapy planning.

By introducing the original MR images with full anatomy into the model training process, our method significantly outperforms the traditional cycleGAN model in terms of sCT image quality and the accuracy of anatomy compensation in the truncated region. In our work, MR images with full anatomy were collected and used together with the truncated MR images in the training process. The generator was trained to learn the anatomy compensation for the truncated region by targeting the cycled MR to the original MR images with full anatomy. As the datasets used in our study were head-and-neck MR images, the truncated regions at the posterior head share similar shapes and anatomical structures (e.g., skin and skull). During the model training, the original MR images with full anatomy were used as the ground truth for the cycled MR images to constrain the anatomy compensation; for example, the actual posterior head usually has a rounded shape. Our Comp-cycleGAN can not only translate MR images to the CT domain but also compensate for the truncated region based on the known anatomical information in the truncated MR image. However, as observed in Figure 7, the generated sCT image in the truncated region may not be perfectly identical to the actual anatomy in the sCT image from the cycleGAN-full model. This is mainly because the trained Comp-cycleGAN model compensates for the truncated region based on the anatomical features it learned from the training data. Thus, its capability to compensate for the missing anatomy in the sCT might be limited, as anatomical structures can vary substantially from patient to patient. Although the generated sCT images in the truncation region by our method might be imperfect, they were sufficiently close to the real anatomy of the patients based on both quantitative and qualitative evaluation. In addition, one clear advantage of our method is that it does not require complete-anatomy MR images for sCT generation in the prediction stage. The original MR images with full anatomy were only used during the training stage; once the model has been trained, it can directly generate sCT images from truncated MR images.

The body contours are an optional input for our method, and we also trained the Comp-cycleGAN (contour) model to leverage this additional contour information in this work. In the clinic, CT images covering the complete anatomy might be acquired for patients during the treatment simulation process. The body contours of these CT images contain information about the actual shape of each patient. In MR-guided adaptive radiotherapy, daily MR images are acquired for adaptive planning and dose calculation. After rigid registration, the simulation CT and MR images of the same patient should have similar body outlines, with minor variations due to daily setup uncertainty. In this scenario, the acquired body contours can be used as guidance for anatomy compensation during sCT generation from daily MR images, and our Comp-cycleGAN (contour) model was trained to take advantage of this supplementary information. The results in Tables 1 and 2 show that our Comp-cycleGAN (contour) has superior performance compared to Comp-cycleGAN, demonstrating the effectiveness of utilizing the contour information for anatomy compensation. However, if the patient setup is extremely different between the CT and MR images, or if CT images are unavailable for MR-only treatment planning, no body contours will be available for the Comp-cycleGAN (contour) model. In that case, the Comp-cycleGAN (contour) model cannot be used, but the Comp-cycleGAN model can still generate sCT images and compensate for the truncated region because it does not rely on the contour information during training or prediction.

To demonstrate the effectiveness of our method, we also compared it to the cycleGAN-full model, a regular cycleGAN model trained and tested using original MR images with complete anatomy. The sCT images generated by cycleGAN-full had an average MAE of 90.5 HU, PSNR of 27.9 dB, and SSIM of 0.95, which are comparable to the results published in recent deep learning studies of head-and-neck sCT generation.25,28,35,37,45,50 The quantitative results in Tables 1 and 2 demonstrated that the sCT images generated from truncated MR images by our methods were comparable to those generated from complete-anatomy MR images by the cycleGAN-full model. One limitation of our study is that we did not perform a dosimetric evaluation of our sCT generation. However, several previous studies28,35,37,50 have investigated the dosimetric accuracy of sCT images generated for head-and-neck patients. Since our sCT images had similar image quality to those studies based on the quantitative results, the sCT images generated by our Comp-cycleGAN model would be expected to achieve similar dosimetric accuracy. In future studies, we will evaluate the dosimetric accuracy of the sCT images generated by our Comp-cycleGAN model for treatment planning.

Furthermore, in this study, we used the truncated MR images of head-and-neck patients to demonstrate the effectiveness of our method for simultaneous anatomy compensation and sCT generation. However, this method can be applied to any other anatomical site where the MR images might be truncated. Another common case would be pelvic and prostate patients, whose MR images are usually truncated at the peripheral regions because the limited field of view of MR scans is chosen to avoid geometric distortion and to optimize the sequence for acquisition time and image quality. The major challenge for the pelvic and prostate sites is the non-rigid body shape, which might result in inferior performance of the Comp-cycleGAN model in compensating for the missing anatomy. From our observation, Comp-cycleGAN can be trained to predict missing data of a rigid body shape, like the head, very well. Non-rigid body shapes, like the abdomen, are less predictable in general, and more training data would be needed for Comp-cycleGAN to achieve reasonable predictions.

All CT and MR images used to train and test the Comp-cycleGAN model were from our institution. The trained Comp-cycleGAN model might not be directly applicable to MR images obtained from other institutions. The main reason is that the voxel intensities of MR images do not correspond to a specific physical meaning, because they depend on a combination of tissue properties and hardware-specific settings.51 Although Z-score standardization can be used to reduce the intensity variability,52 there is still a lack of consistency in voxel intensity, which prevents the direct application of the trained model across institutions. Transfer learning is usually needed to train a new model for data from a new institution.

One limitation of the proposed method is that the model might not perform robustly if the input MR images are severely truncated (e.g., by more than 3 cm). This is mainly because the truncated regions might contain some critical structures that cannot be inferred from the given anatomy in the truncated images. In this scenario, our proposed method could still generate sCT images and compensate for the missing anatomy based on the anatomical features learned from the patient population in the training dataset, but the accuracy and robustness of our method would be considerably reduced. Another limitation of this work is that our evaluation metrics (MAE, PSNR, and SSIM) are affected by the accuracy of the registration between CT and MR images. Even though training our model does not require registered MR-CT pairs, the misalignment between CT and MR images affects our evaluation accuracy. While deformable image registration was applied to alleviate this issue, inter-modality image registration is still a challenging problem, especially for head-and-neck patients, whose neck flexion can be quite different between scans. Furthermore, the capability of our proposed method to compensate for the missing anatomy is learned from the structural features in the training data. In future studies, we plan to collect more patients, with increased variations in shapes and weights, to train and test our method, thereby improving the diversity of structural features in our training database. This is expected to enhance the performance of our model.

5. CONCLUSIONS

We proposed a novel deep-learning method to generate sCT images with complete anatomy from truncated MR images. Based on cycleGAN, we modified the cycle-consistency loss and innovatively introduced original MR images with complete anatomy into the training process to facilitate anatomy compensation during sCT creation. Extensive experiments demonstrated that our method can generate sCT images with high image quality and reliable anatomy compensation. This technique has great potential as a useful tool to facilitate MR-based radiation treatment planning.

Acknowledgments:

This work was supported in part by a start-up fund from MD Anderson Cancer Center and the National Institutes of Health through Cancer Center Support Grant P30CA016672.

Footnotes

Conflict of interest: The authors have no conflicts to disclose.

REFERENCES

1. Njeh CF. Tumor delineation: The weakest link in the search for accuracy in radiotherapy. J Med Phys. 2008;33(4):136–140. doi: 10.4103/0971-6203.44472
2. Schmidt MA, Payne GS. Radiotherapy Planning using MRI. Phys Med Biol. 2015;60(22):R323–R361. doi: 10.1088/0031-9155/60/22/R323
3. Chung NN, Ting LL, Hsu WC, Lui LT, Wang PM. Impact of magnetic resonance imaging versus CT on nasopharyngeal carcinoma: primary tumor target delineation for radiotherapy. Head Neck. 2004;26(3):241–246. doi: 10.1002/hed.10378
4. Rasch CR, Steenbakkers RJ, Fitton I, et al. Decreased 3D observer variation with matched CT-MRI, for target delineation in Nasopharynx cancer. Radiat Oncol. 2010;5(1):21. doi: 10.1186/1748-717X-5-21
5. Dai YL, King AD. State of the art MRI in head and neck cancer. Clin Radiol. 2018;73(1):45–59.
6. Edmund J, Nyholm T. A review of substitute CT generation for MRI-only radiation therapy. Radiat Oncol. 2017;12. doi: 10.1186/s13014-016-0747-y
7. Burgos N, Guerreiro F, McClelland J, et al. Iterative framework for the joint segmentation and CT synthesis of MR images: application to MRI-only radiotherapy treatment planning. Phys Med Biol. 2017;62(11):4237–4253. doi: 10.1088/1361-6560/aa66bf
8. Nyholm T, Jonsson J. Counterpoint: Opportunities and challenges of a magnetic resonance imaging-only radiotherapy work flow. Semin Radiat Oncol. 2014;24(3):175–180. doi: 10.1016/j.semradonc.2014.02.005
9. Ulin K, Urie MM, Cherlow JM. Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration. Int J Radiat Oncol Biol Phys. 2010;77(5):1584–1589. doi: 10.1016/j.ijrobp.2009.10.017
10. Khoo VS, Joon DL. New developments in MRI for target volume delineation in radiotherapy. Br J Radiol. 2006;79 Spec No 1:S2–15. doi: 10.1259/bjr/41321492
11. Boeke S, Mönnich D, van Timmeren JE, Balermpas P. MR-Guided Radiotherapy for Head and Neck Cancer: Current Developments, Perspectives, and Challenges. Front Oncol. 2021;11:616156. doi: 10.3389/fonc.2021.616156
12. Winkel D, Bol GH, Kroon PS, et al. Adaptive radiotherapy: The Elekta Unity MR-linac concept. Clin Transl Radiat Oncol. 2019;18:54–59. doi: 10.1016/j.ctro.2019.04.001
13. Hsu SH, Cao Y, Huang K, Feng M, Balter JM. Investigation of a method for generating synthetic CT models from MRI scans of the head and neck for radiation therapy. Phys Med Biol. 2013;58(23):8419–8435. doi: 10.1088/0031-9155/58/23/8419
14. Zheng W, Kim JP, Kadbi M, Movsas B, Chetty IJ, Glide-Hurst CK. Magnetic Resonance-Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region. Int J Radiat Oncol Biol Phys. 2015;93(3):497–506. doi: 10.1016/j.ijrobp.2015.07.001
15. Andreasen D, Van Leemput K, Edmund JM. A patch-based pseudo-CT approach for MRI-only radiotherapy in the pelvis. Med Phys. 2016;43(8):4742. doi: 10.1118/1.4958676
16. Arabi H, Zaidi H. One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI. Eur J Nucl Med Mol Imaging. 2016;43(11):2021–2035. doi: 10.1007/s00259-016-3422-5
17. Dowling JA, Burdett N, Greer PB, et al. Automatic Atlas Based Electron Density and Structure Contouring for MRI-based Prostate Radiation Therapy on the Cloud. J Phys Conf Ser. 2014;489:012048. doi: 10.1088/1742-6596/489/1/012048
18. Dowling JA, Sun J, Pichler P, et al. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences. Int J Radiat Oncol Biol Phys. 2015;93(5):1144–1153. doi: 10.1016/j.ijrobp.2015.08.045
19. Gudur MSR, Hara W, Le QT, Wang L, Xing L, Li R. A unifying probabilistic Bayesian approach to derive electron density from MRI for radiation therapy treatment planning. Phys Med Biol. 2014;59(21):6595–6606. doi: 10.1088/0031-9155/59/21/6595
20. Uh J, Merchant TE, Li Y, Li X, Hua C. MRI-based treatment planning with pseudo CT generated through atlas registration. Med Phys. 2014;41(5):051711. doi: 10.1118/1.4873315
21. Huynh T, Gao Y, Kang J, et al. Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging. 2016;35(1):174–183. doi: 10.1109/TMI.2015.2461533
22. Cao X, Yang J, Gao Y, Guo Y, Wu G, Shen D. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Med Image Anal. 2017;41:18–31. doi: 10.1016/j.media.2017.05.004
23. Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44(4):1408–1419. doi: 10.1002/mp.12155
24. Chin AL, Lin A, Anamalayil S, Teo BKK. Feasibility and limitations of bulk density assignment in MRI for head and neck IMRT treatment planning. J Appl Clin Med Phys. 2014;15(5):4851. doi: 10.1120/jacmp.v15i5.4851
25. Boulanger M, Nunes JC, Chourak H, et al. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys Med. 2021;89:265–281. doi: 10.1016/j.ejmp.2021.07.027
26. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science. Springer International Publishing; 2015:234–241. doi: 10.1007/978-3-319-24574-4_28
27. Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Deep Learning and Data Labeling for Medical Applications: LABELS and DLMIA 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 21, 2016, Proceedings. 2016:170–178. doi: 10.1007/978-3-319-46976-8_18
28. Dinkla AM, Florkow MC, Maspero M, et al. Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network. Med Phys. 2019;46(9):4095–4104. doi: 10.1002/mp.13663
29. Dinkla AM, Wolterink JM, Maspero M, et al. MR-Only Brain Radiation Therapy: Dosimetric Evaluation of Synthetic CTs Generated by a Dilated Convolutional Neural Network. Int J Radiat Oncol Biol Phys. 2018;102(4):801–812. doi: 10.1016/j.ijrobp.2018.05.058
30. Wolterink J, Dinkla A, Savenije M, Seevinck P, Berg C, Išgum I. Deep MR to CT Synthesis Using Unpaired Data. In: SASHIMI@MICCAI; 2017. doi: 10.1007/978-3-319-68127-6_2
31. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative Adversarial Nets. In: Advances in Neural Information Processing Systems. Vol 27. Curran Associates, Inc.; 2014. https://proceedings.neurips.cc/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html
32. Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:1125–1134.
33. Cusumano D, Lenkowicz J, Votta C, et al. A deep learning approach to generate synthetic CT in low field MR-guided adaptive radiotherapy for abdominal and pelvic cases. Radiother Oncol. 2020;153:205–212. doi: 10.1016/j.radonc.2020.10.018
34. Emami H, Dong M, Nejad-Davarani SP, Glide-Hurst C. Generating Synthetic CTs from Magnetic Resonance Images using Generative Adversarial Networks. Med Phys. Published online June 14, 2018. doi: 10.1002/mp.13047
35. Klages P, Benslimane I, Riyahi S, et al. Patch-Based Generative Adversarial Neural Network Models for Head and Neck MR-Only Planning. Med Phys. 2020;47(2):626–642. doi: 10.1002/mp.13927
  • 35.Klages P, Benslimane I, Riyahi S, et al. Patch-Based Generative Adversarial Neural Network Models for Head and Neck MR-Only Planning. Med Phys. 2020;47(2):626–642. doi: 10.1002/mp.13927 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Largent A, Barateau A, Nunes JC, et al. Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning. Int J Radiat Oncol Biol Phys. 2019;105(5):1137–1150. doi: 10.1016/j.ijrobp.2019.08.049 [DOI] [PubMed] [Google Scholar]
  • 37.Peng Y, Chen S, Qin A, et al. Magnetic resonance-based synthetic computed tomography images generated using generative adversarial networks for nasopharyngeal carcinoma radiotherapy treatment planning. Radiother Oncol. 2020;150:217–224. doi: 10.1016/j.radonc.2020.06.049 [DOI] [PubMed] [Google Scholar]
  • 38.Kazemifar S, McGuire S, Timmerman R, et al. MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach. Radiother Oncol J Eur Soc Ther Radiol Oncol. 2019;136:56–63. doi: 10.1016/j.radonc.2019.03.026 [DOI] [PubMed] [Google Scholar]
  • 39.Maspero M, Bentvelzen LG, Savenije MH, et al. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother Oncol. 2020;153:197–204. [DOI] [PubMed] [Google Scholar]
  • 40.Kazemifar S, Barragán Montero AM, Souris K, et al. Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors. J Appl Clin Med Phys. 2020;21(5):76–86. doi: 10.1002/acm2.12856 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Zhu JY, Park T, Isola P, Efros AA. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Published online August 24, 2020. Accessed May 26, 2022. http://arxiv.org/abs/1703.10593
  • 42.Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, Berg CAT van den, Isgum I. Deep MR to CT Synthesis using Unpaired Data. ArXiv170801155 Cs. Published online August 3, 2017. Accessed September 28, 2020. http://arxiv.org/abs/1708.01155 [Google Scholar]
  • 43.Lei Y, Harms J, Wang T, et al. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys. 2019;46(8):3565–3581. doi: 10.1002/mp.13617 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Li W, Li Y, Qin W, et al. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant Imaging Med Surg. 2020;10(6):1223–1236. doi: 10.21037/qims-19-885 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Liu Y, Chen A, Shi H, et al. CT synthesis from MRI using multi-cycle GAN for head-and-neck radiation therapy. Comput Med Imaging Graph. 2021;91:101953. doi: 10.1016/j.compmedimag.2021.101953 [DOI] [PubMed] [Google Scholar]
  • 46.Liu Y, Lei Y, Wang T, et al. MRI-based treatment planning for liver stereotactic body radiotherapy: validation of a deep learning-based synthetic CT generation method. Br J Radiol. 2019;92(1100):20190067. doi: 10.1259/bjr.20190067 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Shafai-Erfani G, Lei Y, Liu Y, et al. MRI-Based Proton Treatment Planning for Base of Skull Tumors. Int J Part Ther. 2019;6(2):12–25. doi: 10.14338/IJPT-19-00062.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Yang H, Sun J, Carass A, et al. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN. IEEE Trans Med Imaging. 2020;39(12):4249–4261. doi: 10.1109/TMI.2020.3015379 [DOI] [PubMed] [Google Scholar]
  • 49.Hong C, Lee DH, Han BS. Characteristics of geometric distortion correction with increasing field-of-view in open-configuration MRI. Magn Reson Imaging. 2014;32(6):786–790. doi: 10.1016/j.mri.2014.02.007 [DOI] [PubMed] [Google Scholar]
  • 50.Qi M, Li Y, Wu A, et al. Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy. Med Phys. 2020;47(4):1880–1894. doi: 10.1002/mp.14075 [DOI] [PubMed] [Google Scholar]
  • 51.Bloem JL, Reijnierse M, Huizinga TW, van der Helm-van AH. MR signal intensity: staying on the bright side in MR image interpretation. RMD Open. 2018;4(1):e000728. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Wahid KA, He R, McDonald BA, et al. MRI Intensity Standardization Evaluation Design for Head and Neck Quantitative Imaging Applications. MedRxiv. Published online 2021. [Google Scholar]
