Abstract
Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium can deposit and accumulate in tissues, including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs, which can be useful for volumetric analysis, is rarely investigated in the literature. To tackle pre- and post-contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that enables disentanglement of contrast and image representations via a bi-directional image-to-image translation (I2I) model. Our proposed model can perform both pre-to-post and post-to-pre contrast synthesis, and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of our bi-directional synthesis and show that BICEPS outperforms current methods.
Keywords: Multi-modal MRI Synthesis, Contrast Enhancement, Feature Disentanglement, Multiple Sclerosis
1. Introduction
Gadolinium-based contrast agents (GBCAs) have been widely applied in magnetic resonance imaging (MRI) to enhance tissue contrast [25] and to better identify active lesions in multiple sclerosis (MS) [13] and tumors [22]. Although GBCAs are generally considered safe [12], recent research indicates that gadolinium can accumulate in the brain [3]. Such gadolinium deposition may cause health issues, which raises safety concerns in the use of GBCAs [20]. To avoid the use of GBCAs, deep learning based image synthesis models [1,2,6,10,16] have been proposed to simulate post-contrast MRIs from pre-contrast MRIs without compromising diagnostic ability. The feasibility of such methods is also corroborated by research on predicting lesions from pre-contrast MRIs [14].
To simulate post-contrast MRIs from pre-contrast MRIs, UNets [19] and conditional generative adversarial networks (cGANs) [9] have been proposed as image synthesis models. Taking multiple pre-contrast MRI sequences of the same subject as input, these image-to-image translation (I2I) models generate post-contrast T1-weighted MRIs (post-T1w). Past efforts on simulating post-contrast MRIs have shown promising results, though few works have focused on interpreting the synthesis process. Kleesiek et al. [10] introduced a 3D UNet [4] model to predict a contrast enhancement map along with an uncertainty map. Although the model provides some level of interpretability, the disentanglement of contrast and image signal has not been investigated. Additionally, there has been limited work [21] on synthesizing pre-contrast T1-weighted MRIs (pre-T1w) from post-contrast images. In clinical practice, there are cases where only the post-T1w is acquired due to time and cost constraints. Since most image analysis pipelines [8] are developed for pre-T1w, missing such data limits the use of these algorithms.
In this paper, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) model which can perform both pre-T1w to post-T1w and post-T1w to pre-T1w synthesis using a single model. The bi-directional synthesis is achieved by first disentangling contrast and image features via a feature encoder and generating a corresponding post-contrast MRI and contrast enhancement map (CE-map) via a dual-path decoder. The disentanglement enables a more interpretable synthesis process and better alignment between the contrast and image features. Depending on the target MRI sequence, the synthesis process of BICEPS can be interpreted as a combination of a prediction process and a reconstruction process. When the inputs contain a pre-T1w, BICEPS will try to simulate the post-T1w and recover the input pre-T1w. Similarly, given inputs with post-T1w, BICEPS aims to simulate the pre-T1w and recover the input post-T1w. The network can also take as input other pre-contrast tissue contrasts (e.g., T2w and FLAIR), if they are available, to provide complementary information for a better synthesis. We further train BICEPS with input dropout, so that not all input tissue contrasts are needed during inference. The disentanglement of contrast and image features encourages better alignment between pre-T1w and post-T1w sequences, and provides an interpretable CE-map. The bi-directional synthesis requires multi-task learning of the network and thus improves the robustness and generalizability of our proposed model. To the best of our knowledge, this work is the first to exploit feature disentanglement for bi-directional synthesis between pre- and post-contrast MRIs.
2. Methodology
Network Architecture.
The overall framework of BICEPS is outlined in Fig. 1. Our backbone network is similar to Pix2Pix [9] with 3D convolutional kernels and additional residual blocks [7]. The input to the network consists of 3D patches during training and full volumes during inference. The network takes four input patches from three different tissue contrasts: T1w, T2w, and FLAIR. A zero patch is also used for bi-directional synthesis. For pre-to-post synthesis, the T1w image is the pre-T1w, and for post-to-pre synthesis, the T1w image is the post-T1w. If one or both of the T2w and FLAIR images is missing, then the input can be replaced by zeros. The T1w image (pre- or post-) is always required. The encoder network consists of two down-sampling layers, each implemented as a 3D convolution. Four consecutive residual blocks are applied before each down-sampling layer. The architecture of our residual block is the same as in [2], where the number of channels is reduced to 1/4 of the input channel dimension inside the residual block. The decoder network takes the encoded image feature maps and contrast feature maps from the encoder network and outputs both the post-T1w and the CE-map through two paths. In the image path, contrast features are fused with image features via a contrast-aware synthesis (CAS) block, while in the contrast path, only contrast features are used to generate the CE-map. The two paths encourage the disentanglement of the image and contrast features, which eases the bi-directional synthesis and ensures better alignment between the image and contrast features. To obtain the pre-T1w, the predicted CE-map is subtracted from the post-T1w.
Fig. 1.
Overview of our proposed BICEPS model. The network takes four 3D inputs, the first two of which are always a T1w image (pre- or post-) and a zero image. The remaining two inputs are T2w and FLAIR images, either or both of which can be replaced by zeros if missing. The outputs are the same regardless of the inputs.
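Below is a minimal sketch of the bottleneck residual block described above, assuming a standard PyTorch formulation; the class name, exact layer ordering, and activation placement are our assumptions rather than the authors' released code. It reduces the channel count to 1/4 inside the block and disables BatchNorm running statistics, as stated in the implementation details.

```python
# Sketch (assumed names/ordering): bottleneck residual block with 3x3x3 convs,
# channels reduced to 1/4 inside the block, BatchNorm without running stats,
# and LeakyReLU activations.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        mid = channels // 4  # 1/4 of the input channel dimension
        self.body = nn.Sequential(
            nn.Conv3d(channels, mid, kernel_size=3, padding=1),
            nn.BatchNorm3d(mid, track_running_stats=False),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(mid, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels, track_running_stats=False),
        )
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        # residual connection around the bottleneck body
        return self.act(x + self.body(x))
```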
For bi-directional synthesis, we design two dedicated input channels for the pre-T1w and post-T1w. For pre-to-post synthesis, the post-T1w input channel is set to all zeros. In this case, the image feature is extracted from the input and the contrast feature is synthesized from the multiple input channels (i.e., multiple MRI tissue contrasts such as T2w or FLAIR). The image path in the decoder synthesizes the target post-T1w, and the predicted pre-T1w serves to reconstruct the corresponding input. Similarly, for post-to-pre synthesis, the pre-T1w input channel is set to all zeros. The encoder then extracts both image and contrast features from the input MRIs. The image path recovers the target post-T1w, and the pre-T1w is predicted by subtracting the predicted CE-map. During inference, the network automatically detects the input sequences and performs the corresponding synthesis task.
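To make the channel convention concrete, the following illustrative helper shows one way the four input channels could be assembled for the two directions, with zeros standing in for the unused T1w channel and for any missing T2w/FLAIR sequence. The channel ordering and function name are hypothetical.

```python
# Hypothetical helper: build the 4-channel input (pre-T1w, post-T1w, T2w, FLAIR),
# zeroing the T1w channel of the opposite direction and any missing sequence.
import torch

def assemble_inputs(t1w, t2w=None, flair=None, direction="pre2post"):
    """t1w/t2w/flair: tensors of shape (B, 1, D, H, W); returns (B, 4, D, H, W)."""
    zeros = torch.zeros_like(t1w)
    t2w = t2w if t2w is not None else zeros
    flair = flair if flair is not None else zeros
    if direction == "pre2post":   # pre-T1w given; post-T1w channel zeroed
        pre_t1w, post_t1w = t1w, zeros
    else:                         # "post2pre": post-T1w given; pre-T1w channel zeroed
        pre_t1w, post_t1w = zeros, t1w
    return torch.cat([pre_t1w, post_t1w, t2w, flair], dim=1)
```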
Contrast-Aware Synthesis.
The architecture of our decoder network and CAS block is shown in Fig. 2. The decoder network consists of two up-sampling layers, implemented as 3D transposed convolutions, with multiple residual and CAS blocks. The decoder takes two inputs, the learned contrast and image latent embeddings from the encoder network, and outputs a CE-map as well as a post-T1w image. For the image path, which synthesizes the post-T1w, contrast and image features are fused within the CAS block. The design of the CAS block is inspired by [5] and [15]. Different from [15], which used a fixed conditional input, our CAS block gradually refines a learned contrast embedding and incorporates it into the image features. γ and β are spatial maps with the same dimensions as the input contrast or image feature. Intuitively, γ and β carry both the position and intensity information of the CE-map and can thus guide the synthesis of the post-T1w image. By providing three types of outputs, the CAS blocks enable a more interpretable synthesis process and generate high quality T1w images for both directions. More details are given in Sect. 3.
Fig. 2.
The architecture of our proposed decoder network and Contrast Aware Synthesis (CAS) block. Regardless of encoder inputs, the decoder takes the contrast and image embeddings learned from the encoder and generates CE-maps and post-T1w images.
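The following is a rough sketch of a contrast-aware modulation block in the spirit of the description above: the contrast embedding is projected (here via 1×1×1 convolutions standing in for the MLP layer) into spatial γ and β maps that modulate the normalized image features, SPADE-style [15]. Layer names, the hidden width, and the normalization choice are illustrative assumptions, not the exact BICEPS implementation.

```python
# Sketch of a SPADE-style contrast-aware block (names/widths are assumptions):
# the contrast embedding is refined and mapped to spatial gamma/beta maps that
# modulate the normalized image features.
import torch
import torch.nn as nn

class CASBlock(nn.Module):
    def __init__(self, img_channels: int, contrast_channels: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.BatchNorm3d(img_channels, affine=False, track_running_stats=False)
        self.refine = nn.Sequential(          # stands in for the "MLP layer" (1x1x1 conv)
            nn.Conv3d(contrast_channels, hidden, kernel_size=1),
            nn.LeakyReLU(inplace=True),
        )
        self.to_gamma = nn.Conv3d(hidden, img_channels, kernel_size=1)
        self.to_beta = nn.Conv3d(hidden, img_channels, kernel_size=1)

    def forward(self, img_feat, contrast_feat):
        # contrast_feat and img_feat are assumed to share spatial dimensions
        h = self.refine(contrast_feat)
        gamma, beta = self.to_gamma(h), self.to_beta(h)  # spatial modulation maps
        return self.norm(img_feat) * (1.0 + gamma) + beta
```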
Let $x$ and $y$ denote the real pre-T1w and post-T1w images, respectively, and let $\hat{x}$ and $\hat{y}$ denote the generated pre-T1w and post-T1w images, respectively. We adopt a Mean Absolute Error (MAE) loss as the image loss to guide the training of both syntheses. Although we do not have an explicit loss term for the CE-map, it is implicitly trained by the two image loss terms. For both pre-to-post and post-to-pre contrast synthesis training, we always use the same loss functions. The image loss is defined as $\mathcal{L}_{\text{img}} = \|x - \hat{x}\|_1 + \|y - \hat{y}\|_1$.
Disentanglement.
To better disentangle contrast and image features, we include an additional discriminator network, D, and adopt an adversarial loss to adaptively learn the difference between the paired synthetic images and the groundtruth. D follows the DCGAN [17] architecture where 3D convolutional kernels are used for 3D image input. Let x′ denote other non-T1w input images including T2w and FLAIR images. The adversarial objective is defined as,
$$\min_{\theta_G}\max_{\theta_D}\; \mathcal{L}_{\text{adv}} = \mathbb{E}\big[\log \sigma\big(D(x, y, x')\big)\big] + \mathbb{E}\big[\log\big(1 - \sigma\big(D(\hat{x}, \hat{y}, x')\big)\big)\big],$$

where $\sigma(\cdot)$ is the sigmoid function, $\theta_D$ represents the parameters of the conditional discriminator network $D$, and $\theta_G$ represents the parameters of the bi-directional synthesis network $G$ that produces $\hat{x}$ and $\hat{y}$. For the second term, the input of the synthesis network is either $x$ or $y$ depending on the task. The adversarial loss ensures better matching between the synthesized outputs and the real images and thus implicitly encourages better disentanglement between contrast and image features. The overall loss function for training our model is $\mathcal{L} = \mathcal{L}_{\text{adv}} + \lambda\,\mathcal{L}_{\text{img}}$, where $\lambda$ is a scaling factor.
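A hedged sketch of the combined training objective follows: the bi-directional L1 image loss plus the conditional adversarial term, written here in the LSGAN form used in the implementation details (the formulation above is shown with the sigmoid for generality). We assume λ weights the image term, as in Pix2Pix; function and variable names are ours.

```python
# Illustrative losses (function names ours): L1 image loss on both directions,
# plus a conditional adversarial term in LSGAN form; lam plays the role of
# lambda (assumed to weight the image term, as in Pix2Pix).
import torch
import torch.nn.functional as F

def generator_loss(D, x, y, x_prime, x_hat, y_hat, lam=100.0):
    """x, y: real pre-/post-T1w; x_hat, y_hat: synthesized; x_prime: other inputs."""
    img_loss = F.l1_loss(x_hat, x) + F.l1_loss(y_hat, y)
    fake = D(torch.cat([x_hat, y_hat, x_prime], dim=1))
    adv_loss = F.mse_loss(fake, torch.ones_like(fake))   # LSGAN target = 1 for generator
    return adv_loss + lam * img_loss

def discriminator_loss(D, x, y, x_prime, x_hat, y_hat):
    real = D(torch.cat([x, y, x_prime], dim=1))
    fake = D(torch.cat([x_hat.detach(), y_hat.detach(), x_prime], dim=1))
    return 0.5 * (F.mse_loss(real, torch.ones_like(real))
                  + F.mse_loss(fake, torch.zeros_like(fake)))
```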
Input Dropout.
While BICEPS by default takes three input tissue contrasts, we further boost the flexibility of BICEPS by allowing missing input contrasts during inference. This is achieved by randomly replacing an input channel with a zero image during training, which we call input dropout. During inference, missing MRI sequences are replaced with zero images to maintain the same input dimension. Compared with previous methods [1,2], which handle different numbers of input sequences by training multiple models, a single trained BICEPS model can perform bi-directional synthesis with either full or partial input sequences. Note that in the adversarial loss term, we always include all available MRI sequences regardless of whether they are used as inputs or not.
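A minimal sketch of input dropout during training, under the assumption (one plausible reading of the text) that the T2w and FLAIR channels are dropped independently with probability 1/4 each; the helper name is ours.

```python
# Sketch of input dropout (independent drops with p = 1/4 is our reading):
import torch

def input_dropout(t2w, flair, p=0.25):
    if torch.rand(1).item() < p:
        t2w = torch.zeros_like(t2w)      # drop T2w channel
    if torch.rand(1).item() < p:
        flair = torch.zeros_like(flair)  # drop FLAIR channel
    return t2w, flair
```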
Implementation Details.
The encoder network of BICEPS consists of two down-sampling layers implemented as 3D convolutions with kernel size 3×3×3 and stride 2. Four consecutive residual blocks with 3×3×3 convolutions, batch normalization, and LeakyReLU activations, as in [2], are placed before each down-sampling layer. The last down-sampling layer is followed by 4 additional residual blocks that generate the contrast and image embeddings in the latent space. The decoder network consists of two up-sampling 3D transposed convolutional layers with kernel size 3×3×3 and stride 2. The MLP layer in the CAS block is implemented as a 1×1×1 convolution, and a 1×1×1 convolution is also applied before the output. We disable tracking of the running mean and standard deviation in the batch normalization layers, since the input to the network can contain zero images due to the bi-directional synthesis and input dropout. λ is set to 100. We adopt a three-layer DCGAN [17] discriminator with 3D convolutional kernel size 4×4×4. The LSGAN [11] formulation is used for the adversarial loss to stabilize training. During training, T2w and FLAIR images are randomly replaced with zero images with probability 1/4. We train the network by alternating one pre-to-post iteration with one post-to-pre iteration. We use the AdamW optimizer with (β1, β2) = (0.5, 0.999), weight decay 10−2, and initial learning rate 10−3. The learning rate is decayed by a factor of 0.8 whenever the validation loss has not decreased for 20 epochs. The batch size is 16 and the maximum number of training epochs is 500.
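The optimizer and learning-rate schedule described above can be sketched with standard PyTorch components; we assume ReduceLROnPlateau realizes the "decay by 0.8 after 20 stalled epochs" rule, and the function name is ours.

```python
# Optimizer and schedule as stated above (ReduceLROnPlateau is our assumed
# realization of the decay rule; `model` is the generator to be trained).
import torch

def make_optimizer(model: torch.nn.Module):
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3,
                            betas=(0.5, 0.999), weight_decay=1e-2)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min",
                                                       factor=0.8, patience=20)
    return opt, sched  # call sched.step(val_loss) once per epoch
```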
All MRIs were center cropped to 224×256×224, and randomly extracted 3D patches of size 56×64×56 were used as network inputs during training. Random horizontal flipping was also applied for data augmentation. At inference, whole image volumes are input to the network with no further processing.
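For illustration, the patch sampling and flipping described above might look like the following NumPy sketch; the function name, sampling strategy, and flip axis are assumptions.

```python
# Illustrative patch sampling for co-registered volumes (names/axes assumed):
import numpy as np

def random_patch(vols, patch=(56, 64, 56), rng=None):
    """vols: list of 3D arrays with identical shape (e.g., 224x256x224)."""
    rng = rng or np.random.default_rng()
    D, H, W = vols[0].shape
    d = rng.integers(0, D - patch[0] + 1)
    h = rng.integers(0, H - patch[1] + 1)
    w = rng.integers(0, W - patch[2] + 1)
    crops = [v[d:d + patch[0], h:h + patch[1], w:w + patch[2]] for v in vols]
    if rng.random() < 0.5:  # random horizontal flip for augmentation
        crops = [np.flip(c, axis=-1).copy() for c in crops]
    return crops
```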
3. Experiments and Results
Dataset.
We conducted experiments on an MS dataset containing 59 subjects scanned on a Philips Achieva 3.0T scanner. Each scan contained multiple structural images, including two 3D T1w images (acquired at 1.1×1.1×1.18 mm resolution, TE = 6 ms, TR = 3 s, TI = 840 ms), a 2D FLAIR image (acquired at 0.83×0.83×2.2 mm resolution, TE = 68 ms, TR = 11 s, TI = 2.8 s), and a 2D T2-weighted (T2w) image (acquired at 1.1×1.1×2.2 mm resolution, TE = 12 ms/80 ms, TR = 4.2 s). T1w and T2w images were reconstructed on the scanner with 0.83×0.83 mm in-plane resolution. Three-dimensional T1w images were acquired both before and after contrast administration, with a gadolinium dosage of 0.1 mmol per kg of subject weight. Of the 59 subjects, we randomly selected 40 for training, 6 for validation, and 13 for the test set. All comparisons are performed on the test set.
For preprocessing, images went through inhomogeneity correction using N4 [23]. The 2D acquired images (T2w, FLAIR) were super-resolved and anti-aliased using SMORE [26]. All contrasts for each subject were rigidly registered to the pre-T1w image. After registration, images were gain-corrected by linearly adjusting the intensities to align the white matter histogram peaks [18], since this showed better performance in synthesis tasks than other normalizations.
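As a rough illustration of the white-matter-peak gain correction [18], one could locate the dominant high-intensity histogram mode inside a brain mask and linearly rescale the image so that mode lands at a reference value; this is a simplification of the cited method, not its exact implementation, and all names below are ours.

```python
# Simplified illustration of white-matter-peak gain correction (not the exact
# method of [18]): rescale so the dominant upper-intensity histogram mode of
# the brain lands at a common reference value.
import numpy as np

def wm_peak_normalize(img, brain_mask, reference_peak=1.0, bins=200):
    vals = img[brain_mask > 0]
    hist, edges = np.histogram(vals, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    upper = centers > np.median(vals)             # assume the WM peak sits above the median
    peak = centers[upper][np.argmax(hist[upper])]
    return img * (reference_peak / peak)          # linear gain correction
```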
Result Analysis.
We evaluated the BICEPS model for both pre-to-post and post-to-pre contrast synthesis. For pre-to-post synthesis, we compare BICEPS with state-of-the-art post-contrast synthesis methods including a 2D UNet [19], a 2D cGAN [9,16], and the 3D Gadnet [2]; the 2D networks are applied on axial slices. Quantitative comparisons are reported in the two pre-to-post columns of Table 1, and qualitative comparisons in Fig. 3. SSIM and PSNR are calculated on 3D volumes. From Table 1, one can observe that even though BICEPS was trained to perform synthesis in both directions, it outperforms the other methods in both metrics. As illustrated in Fig. 3, UNet and cGAN fail to capture the detailed contrast enhancement in the synthesized post-T1w. Both Gadnet and BICEPS generate post-T1w images with the correct contrast enhancement, although Gadnet slightly overestimates the enhancement.
Table 1.
Quantitative comparisons between synthesis methods. For the post-to-pre experiments, Dice is calculated between SLANT [8] whole brain segmentations of the original and synthesized pre-T1w. The Dice obtained from the original post-T1w is 88.73 ± 1.25. Note that BICEPS uses a single model for both directions, whereas the other methods require two separately trained models.
| Methods | Pre-to-post SSIM↑ (%) | Pre-to-post PSNR↑ (dB) | Post-to-pre SSIM↑ (%) | Post-to-pre PSNR↑ (dB) | Post-to-pre Dice↑ (%) |
|---|---|---|---|---|---|
| UNet 2D [19] | 83.60 ± 2.50 | 28.96 ± 1.59 | 87.11 ± 2.06 | 30.34 ± 1.39 | 92.04 ± 1.06 |
| cGAN [9,16] | 85.55 ± 2.14 | 27.44 ± 1.15 | 91.19 ± 1.68 | 33.98 ± 1.18 | 93.06 ± 1.27 |
| Gadnet [2] | 87.74 ± 2.56 | 31.40 ± 1.58 | 91.35 ± 1.76 | 35.38 ± 1.40 | 92.38 ± 0.99 |
| BICEPS (Ours) | 89.93 ± 2.08 | 32.59 ± 1.25 | 91.84 ± 2.44 | 32.38 ± 1.68 | 93.41 ± 1.10 |
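The 3D SSIM and PSNR reported in Table 1 could be computed, for example, with scikit-image on the full volumes; this is a standard implementation choice on our part, as the authors' exact metric code is not specified.

```python
# Possible metric computation on full 3D volumes (scikit-image; our choice):
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def volume_metrics(gt: np.ndarray, pred: np.ndarray):
    drange = gt.max() - gt.min()
    ssim = structural_similarity(gt, pred, data_range=drange)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=drange)
    return 100.0 * ssim, psnr  # SSIM in %, PSNR in dB
```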
Fig. 3.
Pre-to-post synthesis: The two rows show, from left to right, the pre-T1w image, the results from four different synthesis methods, and the groundtruth post-T1w image.
Since post-to-pre synthesis has rarely been investigated, we adopted the same methods used for pre-to-post synthesis as baselines to compare with our model. We emphasize that all baseline methods were trained separately for the two tasks, while BICEPS uses the same trained model for both. To validate the clinical usefulness of post-to-pre synthesis when only post-contrast images are available, we ran the SLANT [8] whole brain segmentation model, which was designed for pre-contrast images, on all synthetic pre-T1w. As a reference, we also ran SLANT on both the real pre-T1w and real post-T1w. The Dice coefficient between the real post-T1w segmentation and the real pre-T1w segmentation is 88.73 ± 1.25. From the last three columns of Table 1, BICEPS achieves SSIM and PSNR comparable to cGAN and Gadnet. While all methods achieve considerably better segmentation results than using the real post-T1w, the pre-T1w generated by our method yields segmentations that agree most closely with the real pre-T1w segmentation, which further validates the utility of our model. As shown in Fig. 4, BICEPS clearly leads to more accurate brain segmentation results than the other methods (p < 0.025 versus cGAN, the second best method, using a paired Wilcoxon signed-rank test).
Fig. 4.
Post-to-pre synthesis: The top row shows the images and the second row shows their associated SLANT segmentation. The pre-T1w segmentation is treated as the groundtruth. White circles highlight regions where BICEPS leads to better segmentation than other synthesis methods.
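For reference, the reported Dice overlap between two SLANT label maps can be computed as a mean over shared labels; the label handling below is a generic assumption, not the authors' evaluation script.

```python
# Generic multi-label Dice between two label maps (label handling assumed):
import numpy as np

def mean_dice(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    labels = np.intersect1d(np.unique(seg_a), np.unique(seg_b))
    labels = labels[labels > 0]                  # skip background
    scores = []
    for lab in labels:
        a, b = seg_a == lab, seg_b == lab
        scores.append(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))
    return float(np.mean(scores))
```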
Input Dropout Testing.
To validate our proposed input dropout, we test BICEPS with different input sequences using the same trained model. Compared with previous approaches that train a separate model per input configuration, using a single trained BICEPS model removes the effect of training randomization and thus gives a cleaner picture of how each input sequence contributes to each task. From Table 2, we conclude that in pre-to-post synthesis, all input sequences play an important role. On the other hand, FLAIR contributes more to post-to-pre synthesis than the T2w image.
Table 2.
Analysis of synthesis performance using different input MRI contrasts. All results are from the same BICEPS model trained using input dropout.
| Inputs | Pre-to-post SSIM↑ (%) | Pre-to-post PSNR↑ (dB) | Post-to-pre SSIM↑ (%) | Post-to-pre PSNR↑ (dB) |
|---|---|---|---|---|
| T1w | 77.21 ± 2.32 | 28.27 ± 1.66 | 68.06 ± 3.15 | 28.30 ± 2.37 |
| T1w + T2w | 82.55 ± 3.03 | 30.81 ± 1.63 | 83.15 ± 2.58 | 27.87 ± 1.12 |
| T1w + FLAIR | 83.53 ± 2.73 | 29.35 ± 1.59 | 90.19 ± 2.50 | 32.42 ± 2.32 |
| Full | 89.93 ± 2.08 | 32.59 ± 1.25 | 91.84 ± 2.44 | 32.38 ± 1.68 |
Other Applications.
We also explore the potential of applying BICEPS to other tasks such as dural sinus segmentation. We use maximum intensity projection (MIP) [24] to render the CE-map learned by BICEPS as well as the groundtruth. In Fig. 5, we compare the groundtruth CE-map with the CE-maps predicted by BICEPS. In the post-to-pre direction, our method correctly highlights all dural sinuses, which indicates that it successfully separates contrast and image features. For pre-to-post synthesis, predicting the CE-map is more challenging since the contrast must be estimated from pre-contrast inputs alone. Although the pre-to-post CE-map is visually less close to the groundtruth than the post-to-pre one, major sinuses including the superior sagittal sinus, straight sinus, and transverse sinuses are still clearly visible. Based on these observations, we believe BICEPS can be useful for dural sinus segmentation.
Fig. 5.
MIP rendering visualization of CE-maps, defined as the difference between the post-T1w and pre-T1w. From left to right: our method in pre-to-post synthesis, our method in post-to-pre synthesis, and the groundtruth.
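The MIP rendering itself reduces to a per-ray maximum over the 3D CE-map; a short NumPy sketch follows (the projection axis is chosen for illustration only).

```python
# Maximum intensity projection of a 3D CE-map (projection axis is illustrative):
import numpy as np

def mip(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    return volume.max(axis=axis)
```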
4. Conclusion
In this paper, we propose a novel bi-directional synthesis model for both pre-to-post and post-to-pre MRI synthesis that disentangles image and contrast features from the input. We demonstrate that our proposed BICEPS outperforms current methods on both tasks on an MS dataset. In the future, we plan to incorporate T2*-weighted images and investigate the potential of applying BICEPS to the detection of active MS lesions and to dural sinus segmentation.
Acknowledgements.
This research was in part supported by the Intramural Research Program of the NIH, National Institute on Aging.
References
- 1. Bône, A., et al.: Contrast-enhanced brain MRI synthesis with deep learning: key input modalities and asymptotic performance. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 1159–1163. IEEE (2021)
- 2. Calabrese, E., Rudie, J.D., Rauschecker, A.M., Villanueva-Meyer, J.E., Cha, S.: Feasibility of simulated postcontrast MRI of glioblastomas and lower-grade gliomas by using three-dimensional fully convolutional neural networks. Radiol. Artif. Intell. 3(5), e200276 (2021)
- 3. Choi, J.W., Moon, W.J.: Gadolinium deposition in the brain: current updates. Korean J. Radiol. 20(1), 134–147 (2019)
- 4. Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9901, pp. 424–432. Springer, Cham (2016). 10.1007/978-3-319-46723-8_49
- 5. Dumoulin, V., Shlens, J., Kudlur, M.: A learned representation for artistic style. arXiv preprint arXiv:1610.07629 (2016)
- 6. Gong, E., Pauly, J.M., Wintermark, M., Zaharchuk, G.: Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J. Magn. Reson. Imaging 48(2), 330–340 (2018)
- 7. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
- 8. Huo, Y., et al.: 3D whole brain segmentation using spatially localized atlas network tiles. NeuroImage 194, 105–119 (2019)
- 9. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017)
- 10. Kleesiek, J., et al.: Can virtual contrast enhancement in brain MRI replace gadolinium?: a feasibility study. Invest. Radiol. 54(10), 653–660 (2019)
- 11. Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Smolley, S.P.: Least squares generative adversarial networks. In: ICCV, pp. 2794–2802 (2017)
- 12. Matsumura, T., et al.: Safety of gadopentetate dimeglumine after 120 million administrations over 25 years of clinical use. Magn. Reson. Med. Sci. 12, 297–304 (2013)
- 13. McFarland, H.F., et al.: Using gadolinium-enhanced magnetic resonance imaging lesions to monitor disease activity in multiple sclerosis. Ann. Neurol. 32(6), 758–766 (1992)
- 14. Narayana, P.A., Coronado, I., Sujit, S.J., Wolinsky, J.S., Lublin, F.D., Gabr, R.E.: Deep learning for predicting enhancing lesions in multiple sclerosis from noncontrast MRI. Radiology 294(2), 398–404 (2020)
- 15. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2337–2346 (2019)
- 16. Preetha, C.J., et al.: Deep-learning-based synthesis of post-contrast T1-weighted MRI for tumour response assessment in neuro-oncology: a multicentre, retrospective cohort study. Lancet Digit. Health 3(12), e784–e794 (2021)
- 17. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
- 18. Reinhold, J.C., Dewey, B.E., Carass, A., Prince, J.L.: Evaluating the impact of intensity normalization on MR image synthesis. In: Medical Imaging 2019: Image Processing, vol. 10949, pp. 890–898. SPIE (2019)
- 19. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). 10.1007/978-3-319-24574-4_28
- 20. Semelka, R.C., Ramalho, M., AlObaidy, M., Ramalho, J.: Gadolinium in humans: a family of disorders. Am. J. Roentgenol. 207(2), 229–233 (2016)
- 21. Simona, B., et al.: Homogenization of brain MRI from a clinical data warehouse using contrast-enhanced to non-contrast-enhanced image translation with U-Net derived models. In: Medical Imaging 2022: Image Processing, vol. 12032, pp. 576–582. SPIE (2022)
- 22. Tuncbilek, N., Karakas, H.M., Okten, O.O.: Dynamic contrast enhanced MRI in the differential diagnosis of soft tissue tumors. Eur. J. Radiol. 53(3), 500–505 (2005)
- 23. Tustison, N.J., et al.: N4ITK: improved N3 bias correction. IEEE Trans. Med. Imaging 29(6), 1310–1320 (2010)
- 24. Wallis, J.W., Miller, T.R., Lerner, C.A., Kleerup, E.C.: Three-dimensional display in nuclear medicine. IEEE Trans. Med. Imaging 8(4), 230–297 (1989)
- 25. Yankeelov, T.E., Gore, J.C.: Dynamic contrast enhanced magnetic resonance imaging in oncology: theory, data acquisition, analysis, and examples. Current Medical Imaging 3(2), 91–107 (2007)
- 26. Zhao, C., Dewey, B.E., Pham, D.L., Calabresi, P.A., Reich, D.S., Prince, J.L.: SMORE: a self-supervised anti-aliasing and super-resolution algorithm for MRI using deep learning. IEEE Trans. Med. Imaging 40(3), 805–817 (2020)