Abstract
The rapid tracer kinetics of rubidium-82 (82Rb) and high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames where conventional intensity-based image registration techniques are not applicable. Alternatively, a promising approach utilizes generative methods to handle the tracer distribution changes to assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated our proposed method on a clinical 82Rb PET dataset and found that our TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.
Keywords: frame conversion, cardiac PET, motion correction
1. Introduction
Compared to other non-invasive imaging techniques, dynamic cardiac positron emission tomography (PET) myocardial perfusion imaging increases the accuracy of coronary artery disease detection [20]. After tracer injection, a dynamic frame sequence is acquired over several minutes until the myocardium is well perfused. Time-activity curves (TACs) are collected in the myocardial tissue and left ventricle blood pool (LVBP) using regions of interest (ROIs) derived from the reconstructed frames. The myocardial blood flow (MBF) is then quantified through kinetic modeling using the myocardium and LVBP TACs.
However, inter-frame motion causes spatial misalignment across the dynamic frames, distorting TAC measurements and substantially affecting both ROI-based and voxel-based MBF quantification [12]. The high variation of cross-frame distribution originating from the rapid tracer kinetics of rubidium-82 (82Rb) further complicates inter-frame motion correction, especially for the early frames, when the injected tracer is concentrated in the blood pool and has not yet distributed throughout the myocardium. Most existing motion correction studies and clinical software focus solely on the later frames in the myocardial perfusion phase [3,16,26]. Although deep learning-based dynamic PET motion correction has outperformed conventional techniques [9,10,27], few studies have focused on 82Rb cardiac PET. An automatic motion correction network was proposed for 82Rb cardiac PET under supervised learning with simulated translational motion [22], but the method requires training two separate models to handle the discrepancy between early and late frames, which is inconvenient and computationally expensive.
Alternatively, image synthesis and modality conversion have been proposed to improve optimization in multi-modality image registration [4,15,18], mostly involving magnetic resonance imaging. In FDG dynamic PET, converting early frames to the corresponding late frame using a generative adversarial network (GAN) is a promising way to overcome the barrier of tracer distribution differences and aid motion correction [24,25]. However, this recent method trains a one-to-one mapping for each specific early frame, which is impractical to implement and difficult to generalize to new acquisitions. Moreover, tracer kinetics and the related temporal analysis are not incorporated in model training, which might be a challenge when the method is directly applied to 82Rb cardiac PET.
In this work, we propose a Temporally and Anatomically Informed GAN (TAI-GAN) as an all-to-one mapping to convert all the early frames into the last reference frame. A feature-wise linear modulation (FiLM) layer encodes channel-wise parameters generated from the blood pool TACs and the temporal frame index, providing additional temporal information to the generator. Rough segmentations of the right ventricle blood pool (RVBP), LVBP, and myocardium with local shifts serve as the auxiliary anatomical information. Most current work applying GAN+FiLM models encodes text or semantic information for natural images [1,2,17] and metadata for medical images [6,21], whereas we propose encoding dynamic PET tracer distribution changes. TAI-GAN is the first work incorporating both temporal and anatomical information into a GAN for dynamic cardiac PET frame conversion, with the ability to handle high tracer distribution variability and prevent spatial mismatch.
2. Methods
2.1. Dataset
This study includes 85 clinical 82Rb PET scans (55 rest and 30 regadenoson-induced stress) that were acquired from 59 patients at the Yale New Haven Hospital using a GE Discovery 690 PET/CT scanner and defined by the clinical team to be nearly motion-free, with Yale Institutional Review Board approval. After weight-based 82Rb injection, the list-mode data of each scan for the first 6 min and 10 s were rebinned into 27 dynamic frames (14 × 5 s, 6 × 10 s, 3 × 20 s, 3 × 30 s, 1 × 90 s), resulting in a total of 2210 early-to-late pairs. The details of the imaging and reconstruction protocol are in Supplementary Figure S1. In all the scans, the rough segmentations of RVBP, LVBP, and myocardium were manually labeled with reference to the last dynamic frame for TAC generation and the following MBF quantification.
2.2. Network Architecture
The structure of the proposed network TAI-GAN is shown in Fig. 1. The generator predicts the related late frame using the input early frame, with the backbone structure of a 3-D U-Net [5] (4 encoding and decoding levels), modified to be temporally and anatomically informed. The discriminator analyzes the true and generated late frames and categorizes them as either real or fake, employing the structure of PatchGAN [13] (3 encoding levels, 1 linear output layer).
Fig. 1. The structure of the proposed early-to-late frame conversion network TAI-GAN.
Temporally Informed by Tracer Dynamics and FiLM.
To address the high variation in tracer distribution in the different phases, the temporal information related to tracer dynamics is introduced to the network by concatenating RVBP and LVBP TACs as well as the frame temporal index in one-hot format. A long short-term memory (LSTM) [11] layer encodes the concatenated temporal input, and the following 1-D convolutional layer and linear layer map the LSTM outputs to the channel-wise parameters γ and β. The feature-wise linear modulation (FiLM) [19] layer then manipulates the bottleneck feature map by the generated scaling factor γ and bias β, as in (1),
FiLM(M_i) = γ_i · M_i + β_i,  (1)

where M_i is the i-th channel of the bottleneck feature map, and γ_i and β_i are the scaling factor and the bias of the i-th channel, respectively.
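As a minimal illustration of Eq. (1), the FiLM operation reduces to a channel-wise affine transform of the bottleneck feature map. The sketch below uses NumPy and illustrative tensor shapes, not the actual PyTorch implementation:

```python
import numpy as np

def film_modulate(feature_map, gamma, beta):
    """Channel-wise FiLM modulation: FiLM(M_i) = gamma_i * M_i + beta_i.

    feature_map: (C, D, H, W) bottleneck features (illustrative layout)
    gamma, beta: (C,) channel-wise scale and bias from the conditioning branch
    """
    # Reshape so each channel's scalar broadcasts over its spatial voxels
    return gamma[:, None, None, None] * feature_map + beta[:, None, None, None]

# Toy example: 2 channels of a 4x4x4 bottleneck, all ones
m = np.ones((2, 4, 4, 4))
out = film_modulate(m, gamma=np.array([2.0, 0.5]), beta=np.array([1.0, -1.0]))
print(out[0, 0, 0, 0], out[1, 0, 0, 0])  # 3.0 -0.5
```

In TAI-GAN, γ and β are not learned constants but are predicted from the TAC/frame-index encoding, so the same generator can adapt its bottleneck features to each frame's position in the tracer kinetics.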
Anatomically Informed by Segmentation Locators.
The dual-channel input of the generator is the early frame concatenated with the rough segmentations of RVBP, LVBP, and myocardium. Note that cardiac segmentations are already required for MBF quantification and are thus an essential part of the current clinical workflow. In our work, the labeled masks serve as anatomical locators that inform the generator of the cardiac ROI location and prevent spatial mismatch in frame conversion. This is especially helpful in distinguishing the early RV-phase and LV-phase frames during conversion. Random local shifts of the segmentations are applied during training to improve the robustness of the conversion network to motion between the early frame and the last frame.
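The random local mask shifts can be sketched as follows. This is an illustrative NumPy version; the shift range and the wrap-around behavior of np.roll are assumptions, not the paper's exact implementation:

```python
import numpy as np

def random_shift_mask(mask, max_shift=5, rng=None):
    """Randomly translate a segmentation mask by a few voxels per axis.

    Illustrative augmentation: shifting the anatomical locator masks during
    training makes the conversion network tolerant of small misalignments
    between the early frame and the reference last frame.
    """
    rng = rng if rng is not None else np.random.default_rng()
    shifts = rng.integers(-max_shift, max_shift + 1, size=mask.ndim)
    # np.roll wraps around the volume edges; for masks well inside the FOV
    # this behaves like a local translation
    return np.roll(mask, shift=tuple(shifts), axis=tuple(range(mask.ndim)))

# Toy 2x2x2 "LV" blob inside an 8x8x8 volume
mask = np.zeros((8, 8, 8))
mask[3:5, 3:5, 3:5] = 1
shifted = random_shift_mask(mask, rng=np.random.default_rng(0))
print(int(shifted.sum()))  # 8 (translation preserves mask volume)
```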
Loss Function.
Both an adversarial loss and a voxel-wise mean squared error (MSE) loss are included in the loss function of TAI-GAN, as in (2)–(4),
L_adv = E[log D(F_L)] + E[log(1 − D(G(F_i)))],  (2)

L_mse = (1/V) Σ_{v=1}^{V} (F_L(v) − G(F_i)(v))²,  (3)

L = L_adv + L_mse,  (4)

where L_adv is the adversarial loss, L_mse is the MSE loss, D is the discriminator, G is the generator, F_L is the real last frame, G(F_i) is the generator-mapped last frame from the i-th early frame F_i, and V is the number of voxels in each frame.
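A toy NumPy version of the two loss terms, using the standard binary cross-entropy form of the adversarial objective. The actual training code is in PyTorch with a PatchGAN discriminator, and the relative weighting of the two terms is not reproduced here:

```python
import numpy as np

def gan_losses(d_real, d_fake, real_frame, fake_frame):
    """Adversarial and MSE loss terms on toy arrays.

    d_real, d_fake: discriminator outputs in (0, 1) for real/generated frames
    real_frame, fake_frame: flattened frame intensities (V voxels)
    """
    eps = 1e-12  # numerical guard for log(0)
    # Adversarial term: reward D for scoring real frames high, fakes low
    l_adv = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    # Voxel-wise MSE between the real and generated last frames
    l_mse = np.mean((real_frame - fake_frame) ** 2)
    return l_adv, l_mse

l_adv, l_mse = gan_losses(
    d_real=np.array([0.9]), d_fake=np.array([0.1]),
    real_frame=np.array([0.0, 1.0]), fake_frame=np.array([0.0, 0.5]),
)
print(round(l_mse, 3))  # 0.125
```

The MSE term anchors the generator to voxel-wise fidelity, while the adversarial term pushes generated frames toward the realistic appearance of true late frames.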
2.3. Network Training and Image Conversion Evaluation
All the early frames with LVBP activity higher than 10% of the maximum activity in the TAC are converted to the last frame. Very early frames with lower LVBP activity are not considered, as they do not have a meaningful impact on the image-derived input function and, subsequently, the associated MBF quantification [22]. Prior to model input, all the frames were individually normalized to the intensity range of [−1, 1]. Patch-based training was implemented with data augmentation consisting of random cropping to a size of (64, 64, 32) near the center of the LV inferior wall, random rotation in the xy plane within [−45°, 45°], and 3-D random translation within [−5, 5] voxels.
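The per-frame intensity normalization described above can be sketched as follows (min-max scaling to [−1, 1] is assumed; the frame must not be constant):

```python
import numpy as np

def normalize_frame(frame):
    """Min-max normalize a single frame to [-1, 1], applied per frame
    before model input. Assumes the frame has non-zero intensity range."""
    fmin, fmax = frame.min(), frame.max()
    return 2.0 * (frame - fmin) / (fmax - fmin) - 1.0

f = np.array([0.0, 5.0, 10.0])
print(normalize_frame(f))  # [-1.  0.  1.]
```

Because each frame is normalized independently, the network sees comparable intensity ranges across the early blood-pool frames and the late perfusion frame despite their very different absolute activities.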
Since training a separate one-to-one mapping for every early frame is infeasible, we trained two pairwise mappings using a vanilla GAN (3-D U-Net generator) with solely the adversarial loss as a comparison with the state-of-the-art method by Sundar et al. [24]. The two mappings convert the frames immediately before and after the EQ frame, the first frame in which LVBP activity is equal to or higher than RVBP activity [22], denoted EQ-1 and EQ+1, respectively.
We also implemented the vanilla GAN and the MSE loss GAN as two all-to-one conversion baselines. A preliminary ablation study of the introduced temporal and anatomic information is summarized in Supplementary Figure S2 and Table S1. A comparison of the average training time and memory footprint is included in Supplementary Table S2.
All the deep learning models were developed using PyTorch and trained under 5-fold cross-validation on an NVIDIA A40 GPU using the Adam optimizer (learning rates: G = 2e−4, D = 5e−5). In each fold, 17 scans were randomly selected as the test set and the remaining 68 were used for training. The stopping epoch was 800 for the one-to-one mappings and 100 for all the all-to-one models.
Image conversion evaluations include visualizing the generated last frames against the real last frame and the input frame, with overlaid cardiac segmentations. Quantitatively, the MSE, normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) are computed between the generated and real last frames. Differences between methods were assessed by fold-wise paired two-tailed t-tests (α = 0.05).
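For reference, a minimal NumPy implementation of three of the similarity metrics is sketched below; the normalization convention for NMAE and the peak value used for PSNR are assumptions and may differ from the paper's exact implementation (SSIM is omitted as it needs windowed statistics):

```python
import numpy as np

def similarity_metrics(pred, ref):
    """MSE, NMAE, and PSNR between a generated and a real last frame.

    NMAE here normalizes MAE by the reference intensity range, and PSNR
    uses that range as the peak value; both conventions are assumptions.
    """
    mse = np.mean((pred - ref) ** 2)
    rng = ref.max() - ref.min()
    nmae = np.mean(np.abs(pred - ref)) / rng
    psnr = 10.0 * np.log10(rng ** 2 / mse)
    return mse, nmae, psnr

pred = np.array([0.0, 0.5, 1.0])
ref = np.array([0.0, 1.0, 1.0])
mse, nmae, psnr = similarity_metrics(pred, ref)
print(round(mse, 4), round(nmae, 4))  # 0.0833 0.1667
```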
2.4. Motion Correction and Clinical MBF Quantification
Since all the included scans are categorized as motion-free, we ran a motion simulation test to evaluate the benefit of frame conversion for motion correction using the test set of one random fold (17 cases). On an independent 82Rb cardiac scan cohort identified as having significant motion by the clinical team, we ran non-rigid motion correction in BioImage Suite [14] (BIS) to generate motion fields. We applied the motion field estimates from the late frames, scaled by a factor of 2, to the motion-free test frames as the motion ground-truth. In this way, the simulated motion matches the characteristics of real-patient motion while having significant magnitude. The different image conversion methods were applied to the early frames prior to motion simulation. All original and converted frames with simulated motion were then registered to the last frame using BIS with the settings in [8]. We calculated the mean absolute prediction error to measure motion prediction accuracy,
e = (1/P) Σ_{p=1}^{P} |t̂_p − t_p|,  (5)

where P is the number of transformation control points in a frame, t̂_p is the motion prediction at control point p, and t_p is the corresponding motion ground-truth.
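A minimal sketch of this error metric, treating the motion at each control point as a 3-D displacement vector; the component-wise absolute-error form below is an assumption about how the per-point error is reduced:

```python
import numpy as np

def mean_abs_pred_error(pred_motion, gt_motion):
    """Mean absolute error over P control-point displacements.

    pred_motion, gt_motion: (P, 3) displacement vectors in mm
    (illustrative shape; errors are averaged component-wise here).
    """
    return np.mean(np.abs(pred_motion - gt_motion))

pred = np.array([[1.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
gt = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 2.0]])
print(round(mean_abs_pred_error(pred, gt), 3))  # 0.333
```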
After motion estimation, the predicted motion of each method is applied to the original frames without intensity normalization for kinetic modeling. To estimate the uptake rate K1, the LVBP TAC as the image-derived input function and myocardium TAC were fitted to a 1-tissue compartment model using weighted least squares fitting as in [23]. MBF was then calculated from K1 under the relationship as in [7]. The percentage differences of K1 and MBF were calculated between the motion-free ground-truth and motion-corrected values. The weighted sum-of-squared residuals were computed between the MBF model predictions and the observed TACs. We also included a comparison of LVBP and myocardium TACs in Supplementary Figure S3.
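A schematic of the 1-tissue compartment model underlying the K1 estimation, discretized as a convolution of the input function with an exponential kernel on a uniform time grid. This is an illustrative sketch only: the dataset's frame durations are non-uniform, the actual fitting in [23] uses weighted least squares, and the function names and toy input are invented for the example:

```python
import numpy as np

def one_tissue_tac(k1, k2, input_func, dt):
    """Myocardium TAC from a 1-tissue compartment model (toy discretization).

    C_T(t) = K1 * integral of C_a(tau) * exp(-k2 * (t - tau)) dtau,
    approximated by a rectangle-rule convolution on a uniform grid of step dt.
    """
    t = np.arange(len(input_func)) * dt
    kernel = np.exp(-k2 * t) * dt
    # Full convolution truncated back to the input-function length
    return k1 * np.convolve(input_func, kernel)[: len(input_func)]

# Toy image-derived input function: a bolus followed by decay
ca = np.array([0.0, 10.0, 8.0, 5.0, 3.0, 2.0])
tac = one_tissue_tac(k1=0.5, k2=0.1, input_func=ca, dt=5.0)
print(tac.shape)  # (6,)
```

Fitting k1 (and k2) so that the model TAC matches the measured myocardium TAC yields the uptake rate K1, from which MBF is obtained via the flow-extraction relationship in [7] (not shown here).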
3. Results
3.1. Frame Conversion Performance
Sample early-to-late frame conversion results for each method are visualized in Fig. 2. Although the one-to-one models were trained under the most specific temporal mapping, the predictions were not satisfactory and showed some failure cases, possibly due to the small sample size and the insufficient kinetics information in a single early frame. Among the all-to-one models, the vanilla GAN was able to learn the conversion patterns but with some distortions. After introducing the MSE loss, the GAN generated results with higher visual similarity. After introducing temporal and anatomical information, the proposed TAI-GAN achieved the best visual performance, with the least mismatch and distortion.
Fig. 2. Sample early-to-late frame conversion results of each method with overlaid segmentations of RVBP (red), LVBP (blue), and myocardium (green).
The image similarity evaluation results are summarized in Table 1. Note that the similarity is quantitatively compared between the normalized predicted and real last frames as intermediate conversion results, and does not represent actual tracer concentrations. The one-to-one training pairs did not achieve better results than the all-to-one models, possibly due to the lack of inter-frame tracer dynamic dependencies. TAI-GAN achieved the best result in each metric on each test set. The major improvement of the proposed TAI-GAN was for the pre-EQ frames, where LVBP activity < RVBP activity; these are the most challenging to convert to the late frame due to the large difference between the input and output frames.
Table 1.
Quantitative image similarity evaluation of early-to-late frame conversion (mean ± standard deviation) with the best results marked in bold
| Test set | Metric | Vanilla GAN one-to-one | Vanilla GAN | MSE loss GAN | TAI-GAN (proposed) |
|---|---|---|---|---|---|
| EQ-1 | SSIM | 0.557 ± 0.017* | 0.640 ± 0.021 | 0.633 ± 0.053* | 0.657 ± 0.018 |
| | MSE | 0.057 ± 0.001* | 0.050 ± 0.006* | 0.044 ± 0.011 | 0.040 ± 0.005 |
| | NMAE | 0.068 ± 0.002* | 0.063 ± 0.005* | 0.059 ± 0.009 | 0.057 ± 0.005 |
| | PSNR | 18.678 ± 0.116* | 19.370 ± 0.474* | 19.950 ± 0.949 | 20.335 ± 0.530 |
| EQ+1 | SSIM | 0.669 ± 0.061* | 0.679 ± 0.014 | 0.680 ± 0.011 | 0.691 ± 0.013 |
| | MSE | 0.032 ± 0.014 | 0.034 ± 0.002 | 0.033 ± 0.006 | 0.032 ± 0.002 |
| | NMAE | 0.050 ± 0.011 | 0.053 ± 0.003 | 0.051 ± 0.006 | 0.048 ± 0.003 |
| | PSNR | 21.323 ± 1.800 | 21.014 ± 0.355 | 21.188 ± 0.757 | 21.361 ± 0.205 |
| All pre-EQ frames | SSIM | – | 0.594 ± 0.012* | 0.596 ± 0.047* | 0.627 ± 0.025 |
| | MSE | – | 0.063 ± 0.010* | 0.053 ± 0.016 | 0.046 ± 0.009 |
| | NMAE | – | 0.072 ± 0.007* | 0.066 ± 0.011 | 0.062 ± 0.008 |
| | PSNR | – | 18.507 ± 0.474* | 19.269 ± 1.036* | 19.834 ± 0.738 |
| All frames | SSIM | – | 0.708 ± 0.010* | 0.716 ± 0.007* | 0.733 ± 0.018 |
| | MSE | – | 0.027 ± 0.002* | 0.024 ± 0.003 | 0.021 ± 0.002 |
| | NMAE | – | 0.047 ± 0.004* | 0.044 ± 0.002 | 0.040 ± 0.002 |
| | PSNR | – | 22.803 ± 0.530* | 23.241 ± 0.342* | 23.799 ± 0.466 |
*P < 0.05 between the corresponding method and TAI-GAN (subject-wise paired two-tailed t-test).
3.2. Motion Correction Evaluation
Sample motion simulation and correction results are shown in Fig. 3. The simulated non-rigid motion introduced distortion to the frames, and a mismatch between the motion-affected early frame and the segmentation is observed. After directly registering the original frames, the resliced frame was even more deformed, likely due to the tracer distribution differences within the registration pair. Early-to-late frame conversion can address such challenging registration cases, but additional mismatches may be introduced by conversion errors, as seen in the vanilla and MSE loss GAN results. With minimal local distortion and the highest frame similarity, the conversion result of the proposed TAI-GAN matched the myocardium and ventricle locations of the original early frame, and the registration result demonstrated the best visual alignment.
Fig. 3. Sample motion simulation and correction results with different methods of frame conversion.
Table 2 summarizes the mean absolute motion prediction errors on the original early frames and the converted frames. Generally, earlier acquisition times are associated with higher motion prediction errors. On all the included early frames as well as the EQ-1 and EQ+1 frames, the proposed TAI-GAN achieved the lowest motion prediction error and significantly reduced the average prediction error compared with no conversion and the all-to-one GAN models (p < 0.05). This suggests that proper frame conversion improves motion correction accuracy.
Table 2.
Mean absolute motion prediction errors without and with each conversion method (in mm, mean ± standard deviation) with the best results marked in bold.
| | No conversion | Vanilla GAN one-to-one | Vanilla GAN | MSE loss GAN | TAI-GAN |
|---|---|---|---|---|---|
| All frames | 4.45 ± 0.64* | – | 4.40 ± 0.49* | 4.76 ± 0.48* | 3.48 ± 0.45 |
| EQ-1 | 6.18 ± 1.51* | 5.33 ± 1.34 | 6.03 ± 1.06* | 5.12 ± 0.72* | 5.06 ± 0.78 |
| EQ+1 | 5.12 ± 0.93* | 4.72 ± 0.86 | 4.93 ± 0.80* | 4.81 ± 0.46* | 4.35 ± 0.87 |
*P < 0.05 between the corresponding method and TAI-GAN (paired two-tailed t-test).
3.3. Parametric Fitting and Clinical MBF Quantification
Figure 4 shows scatter plots of the MBF estimates from motion-free frames vs. no motion correction and motion correction after the different conversion approaches. With simulated motion, the MBF estimates were mostly lower than the ground-truth. The fitted line of motion correction with the vanilla GAN was closer to the identity line than that of motion correction without conversion. The fitted line of motion correction with the MSE loss GAN was close to that of no motion correction, showing only a slight correction effect. The fitted line of motion correction with the proposed TAI-GAN was the closest to the identity line, suggesting the greatest improvement in MBF quantification.
Fig. 4. Scatter plots of MBF results estimated from motion-free frames vs. no motion correction (MC) and motion correction after different conversion methods.
Table 3 summarizes the bias of K1 and MBF as well as the parametric fitting error. The fitting error of TAI-GAN+MC was the lowest among all the test classes and did not differ significantly from the motion-free error (p > 0.05).
Table 3.
K1 and MBF quantification results (mean ± standard deviation) with the best results marked in bold.
| | Mean K1 percentage difference (%) | Mean MBF percentage difference (%) | Mean K1 fitting error (×10−5) |
|---|---|---|---|
| Motion-free | – | – | 3.07 ± 1.85 |
| With motion | −25.97 ± 18.05* | −36.99 ± 23.37* | 10.08 ± 8.70† |
| Motion corrected (MC) | −17.76 ± 19.51* | −25.09 ± 31.71* | 22.18 ± 24.24† |
| Vanilla GAN+MC | −11.09 ± 9.79* | −16.93 ± 14.84* | 7.72 ± 6.64† |
| MSE loss GAN+MC | −27.99 ± 18.61* | −39.52 ± 23.89* | 13.05 ± 13.16† |
| TAI-GAN+MC | −5.07 ± 7.68 | −7.95 ± 11.99 | 3.80 ± 3.00 |
The K1 and MBF percentage differences of TAI-GAN+MC were significantly smaller than those of all the other groups.
4. Conclusion
We propose TAI-GAN, a temporally and anatomically informed GAN for early-to-late frame conversion to aid dynamic cardiac PET motion correction. The TAI-GAN can successfully perform early-to-late frame conversion with desired visual results and high quantitative similarity to the real last frames. Frame conversion by TAI-GAN can aid conventional image registration for motion estimation and subsequently achieve accurate motion correction and MBF estimation. Future work includes the evaluation of deep learning motion correction methods and real patient motion as well as the validation of clinical impact using invasive catheterization as the clinical gold standard.
Supplementary Material
Acknowledgements.
This work is supported under National Institutes of Health (NIH) grant R01 CA224140.
Footnotes
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-44689-4_7.
References
- 1. Ak KE, Lim JH, Tham JY, Kassim AA: Semantically consistent text to fashion image synthesis with an enhanced attentional generative adversarial network. Pattern Recogn. Lett. 135, 22–29 (2020)
- 2. Ak KE, Lim JH, Tham JY, Kassim A: Semantically consistent hierarchical text to fashion image synthesis with an enhanced-attentional generative adversarial network. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 3121–3124. IEEE (2019)
- 3. Burckhardt DD: Cardiac positron emission tomography: overview of myocardial perfusion, myocardial blood flow and coronary flow reserve imaging. Mol. Imag. (2009)
- 4. Cao X, Yang J, Gao Y, Wang Q, Shen D: Region-adaptive deformable registration of CT/MRI pelvic images via learning-based image synthesis. IEEE Trans. Image Process. 27(7), 3500–3512 (2018)
- 5. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin S, Joskowicz L, Sabuncu MR, Unal G, Wells W (eds.) MICCAI 2016. LNCS, vol. 9901, pp. 424–432. Springer, Cham (2016). 10.1007/978-3-319-46723-8_49
- 6. Dey N, Ren M, Dalca AV, Gerig G: Generative adversarial registration for improved conditional deformable templates. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3929–3941 (2021)
- 7. Germino M, et al.: Quantification of myocardial blood flow with 82Rb: validation with 15O-water using time-of-flight and point-spread-function modeling. EJNMMI Res. 6, 1–12 (2016)
- 8. Guo X, et al.: Inter-pass motion correction for whole-body dynamic PET and parametric imaging. IEEE Trans. Radiat. Plasma Med. Sci. 7, 344–353 (2022)
- 9. Guo X, Zhou B, Chen X, Liu C, Dvornek NC: MCP-Net: inter-frame motion correction with Patlak regularization for whole-body dynamic PET. In: Wang L, Dou Q, Fletcher PT, Speidel S, Li S (eds.) MICCAI 2022. LNCS, vol. 13434, pp. 163–172. Springer, Cham (2022). 10.1007/978-3-031-16440-8_16
- 10. Guo X, Zhou B, Pigg D, Spottiswoode B, Casey ME, Liu C, Dvornek NC: Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network. Med. Image Anal. 80, 102524 (2022). 10.1016/j.media.2022.102524
- 11. Hochreiter S, Schmidhuber J: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
- 12. Hunter CR, Klein R, Beanlands RS, deKemp RA: Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging. Med. Phys. 43(4), 1829–1840 (2016)
- 13. Isola P, Zhu JY, Zhou T, Efros AA: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017)
- 14. Joshi A, et al.: Unified framework for development, deployment and robust testing of neuroimaging algorithms. Neuroinformatics 9(1), 69–84 (2011)
- 15. Liu X, Jiang D, Wang M, Song Z: Image synthesis-based multi-modal image registration framework by using deep fully convolutional networks. Med. Biol. Eng. Comput. 57, 1037–1048 (2019)
- 16. Lu Y, Liu C: Patient motion correction for dynamic cardiac PET: current status and challenges. J. Nucl. Cardiol. 27, 1999–2002 (2020)
- 17. Mao X, Chen Y, Li Y, Xiong T, He Y, Xue H: Bilinear representation for language-based image editing using conditional generative adversarial networks. In: ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2047–2051. IEEE (2019)
- 18. Maul J, Said S, Ruiter N, Hopp T: X-ray synthesis based on triangular mesh models using GPU-accelerated ray tracing for multi-modal breast image registration. In: Svoboda D, Burgos N, Wolterink JM, Zhao C (eds.) SASHIMI 2021. LNCS, vol. 12965, pp. 87–96. Springer, Cham (2021). 10.1007/978-3-030-87592-3_9
- 19. Perez E, Strub F, De Vries H, Dumoulin V, Courville A: FiLM: visual reasoning with a general conditioning layer. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)
- 20. Prior JO, et al.: Quantification of myocardial blood flow with 82Rb positron emission tomography: clinical validation with 15O-water. Eur. J. Nucl. Med. Mol. Imaging 39, 1037–1047 (2012)
- 21. Rachmadi MF, del C. Valdés-Hernández M, Makin S, Wardlaw JM, Komura T: Predicting the evolution of white matter hyperintensities in brain MRI using generative adversarial networks and irregularity map. In: Shen D, et al. (eds.) MICCAI 2019. LNCS, vol. 11766, pp. 146–154. Springer, Cham (2019). 10.1007/978-3-030-32248-9_17
- 22. Shi L, et al.: Automatic inter-frame patient motion correction for dynamic cardiac PET using deep learning. IEEE Trans. Med. Imaging 40, 3293–3304 (2021)
- 23. Shi L, et al.: Direct list mode parametric reconstruction for dynamic cardiac SPECT. IEEE Trans. Med. Imaging 39(1), 119–128 (2019)
- 24. Sundar LKS, et al.: Conditional generative adversarial networks aided motion correction of dynamic 18F-FDG PET brain studies. J. Nucl. Med. 62(6), 871–879 (2021)
- 25. Sundar LS, et al.: Data-driven motion compensation using cGAN for total-body [18F]FDG-PET imaging (2021)
- 26. Woo J, et al.: Automatic 3D registration of dynamic stress and rest 82Rb and flurpiridaz F 18 myocardial perfusion PET data for patient motion detection and correction. Med. Phys. 38(11), 6313–6326 (2011)
- 27. Zhou B, et al.: Fast-MC-PET: a novel deep learning-aided motion correction and reconstruction framework for accelerated PET. In: Frangi A, de Bruijne M, Wassermann D, Navab N (eds.) IPMI 2023. LNCS, vol. 13939, pp. 523–535. Springer, Cham (2023). 10.1007/978-3-031-34048-2_40