Nucl Med Mol Imaging. 2024 Jul 22;58(6):354–363. doi: 10.1007/s13139-024-00869-y

Accurate Automated Quantification of Dopamine Transporter PET Without MRI Using Deep Learning-based Spatial Normalization

Seung Kwan Kang 1,2, Daewoon Kim 3,4, Seong A Shin 1, Yu Kyeong Kim 5,6, Hongyoon Choi 2,5, Jae Sung Lee 1,2,3,4,5
PMCID: PMC11415331  PMID: 39308485

Abstract

Purpose

Dopamine transporter imaging is crucial for assessing presynaptic dopaminergic neurons in Parkinson’s disease (PD) and related parkinsonian disorders. While 18F-FP-CIT PET offers advantages in spatial resolution and sensitivity over 123I-β-CIT or 123I-FP-CIT SPECT imaging, accurate quantification remains essential. This study presents a novel automatic quantification method for 18F-FP-CIT PET images, utilizing an artificial intelligence (AI)-based robust PET spatial normalization (SN) technology that eliminates the need for anatomical images.

Methods

The proposed SN engine consists of convolutional neural networks, trained using 213 paired datasets of 18F-FP-CIT PET and 3D structural MRI. Remarkably, only PET images are required as input during inference. A cyclic training strategy enables backward deformation from template to individual space. An additional 89 paired 18F-FP-CIT PET and 3D MRI datasets were used to evaluate the accuracy of striatal activity quantification. MRI-based PET quantification using FIRST software was also conducted for comparison. The proposed method was also validated using 135 external datasets.

Results

The proposed AI-based method successfully generated spatially normalized 18F-FP-CIT PET images, obviating the need for CT or MRI. The striatal PET activities determined by the proposed PET-only method and by MRI-based PET quantification using the FIRST algorithm were highly correlated, with R2 values of 0.96–0.99 and regression slopes of 0.98–1.02 in both internal and external datasets.

Conclusion

Our AI-based SN method enables accurate automatic quantification of striatal activity in 18F-FP-CIT brain PET images without MRI support. This approach holds promise for evaluating presynaptic dopaminergic function in PD and related parkinsonian disorders.

Keywords: Dopamine transporter, Parkinson’s disease, Spatial normalization, Deep learning, Quantification

Introduction

Dopamine transporter imaging is a widely used method to assess the function of presynaptic dopaminergic neurons [1–5], and it is particularly useful for differentiating Parkinson's disease (PD) and other degenerative parkinsonian disorders from essential tremor, vascular parkinsonism, and drug-induced parkinsonism. Compared to dopamine transporter single-photon emission computed tomography (SPECT) using 123I-2β-carbomethoxy-3β-(4-iodophenyl)tropane (123I-β-CIT) or 123I-ioflupane (N-ω-fluoropropyl-β-CIT; 123I-FP-CIT), positron emission tomography (PET) imaging with 18F-FP-CIT offers advantages due to PET's superior spatial resolution and sensitivity [6].

Objective and convenient assessment of regional PET activity can be achieved through the quantification of brain PET images using spatial normalization (SN) techniques and predefined brain atlases, as opposed to the manual drawing of regions of interest (ROIs) [7–11]. However, direct SN of 18F-FP-CIT PET images onto the average template is not recommended because it overestimates striatal 18F-FP-CIT binding in PD patients [12]. This is because conventional SN methods perform the SN of input (or source) images by maximizing the similarity between the spatially normalized input and template (target) images. Although an alternative SN method using early-phase 18F-FP-CIT PET images has been proposed [13], this approach has limitations: early-phase images are not always available, and large low-uptake lesions caused by chronic stroke can degrade the accuracy of SN.

Recent studies have demonstrated the potential of deep learning (DL)-based PET SN methods for the automatic quantification of amyloid brain PET images [14–16]. Among these approaches, a DL-based direct SN method has been shown to generate the non-linear deformation fields necessary for SN of input brain PET images [16]. This DL model, trained with a large number of amyloid PET and MRI pairs, enabled fast and accurate SN of amyloid PET images acquired with different radiotracers and outperformed the conventional SN method in terms of regional standardized uptake value ratio quantification accuracy. This advantage was particularly notable in cases with severe hydrocephalus and large low-uptake lesions. Therefore, this method holds promise for standalone brain-dedicated PET scanners, as it does not require computed tomography (CT) or magnetic resonance (MR) images for PET SN.

In this study, we have enhanced this DL-based SN method to address the challenge of applying PET-only SN to 18F-FP-CIT PET images, as previously noted. Furthermore, we assessed the accuracy of our proposed method by comparing the striatal 18F-FP-CIT binding values estimated using our SN method to those obtained through an MRI-parcellation-based PET activity estimation method, which was performed in the original individual space. To evaluate our proposed method, we conducted analyses using both internal and external datasets.

Materials and Methods

Datasets

A total of 302 18F-FP-CIT PET and 3D structural T1 MR images of patients with suspected PD and other movement disorders, acquired at our institution, were used for the development and validation of the proposed method. Patients underwent PET/CT imaging 2 h after the injection of 185 MBq of 18F-FP-CIT. Emission scans were acquired for 10 min using PET/CT scanners (Biograph 40 or mCT, Siemens Healthineers, Knoxville, TN), followed by CT scans for attenuation correction. PET images were reconstructed using an ordered-subsets expectation maximization algorithm (5 iterations, 24 subsets) into a 256 × 256 matrix, and a Gaussian post-reconstruction filter (4 mm kernel) was applied. Of the 302 paired PET and MR image sets, 213 were used to train the deep neural network model; the remaining 89 were used for internal validation of the trained network and for comparison with an MRI-based PET quantification method.

For external validation, we used 135 paired 18F-FP-CIT PET and 3D T1 MR images obtained from a second institution. Patients underwent 10-min PET scans on a Gemini TF-64 PET/CT scanner (Philips Healthcare, Cleveland, OH) 2 h after the injection of 185 MBq of 18F-FP-CIT. After routine corrections for physical effects, images were reconstructed using a 3D row-action maximum-likelihood algorithm into 90 slices of 2-mm thickness in a 128 × 128 matrix.

The retrospective use of the image and demographic data of the patients and waiver of informed consent were approved by the Institutional Review Boards of both hospitals. All experiments were conducted in accordance with the Declaration of Helsinki.

Network Model and Training

The deep neural network model for 18F-FP-CIT PET SN was developed to generate SN parameters (an affine transformation matrix and nonlinear deformation fields) once the PET image is fed into the network. The SN parameters are then applied to the input PET image, resulting in a spatially normalized image in MNI (Montreal Neurological Institute) space. A pre-trained network model from our previous study [16], built from sequentially connected U-Nets and trained on 994 multicenter amyloid PET and corresponding T1 MR images, was fine-tuned using the 213 18F-FP-CIT PET and T1 MRI datasets mentioned previously.
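To make this step concrete, the following minimal sketch (an illustration under assumed tensor shapes and function names, not the authors' implementation) shows how a predicted affine matrix and nonlinear displacement field could be applied to resample a PET volume into template space using PyTorch:

```python
# Illustrative sketch: applying network-predicted SN parameters (affine matrix
# plus nonlinear displacement field) to resample a PET volume onto the template
# grid. Shapes and names are assumptions for illustration, not the authors' code.
import torch
import torch.nn.functional as F

def apply_sn(pet, affine_3x4, displacement):
    """
    pet          : (1, 1, D, H, W) input PET volume
    affine_3x4   : (1, 3, 4) affine transform in normalized coordinates
    displacement : (1, 3, D, H, W) nonlinear displacement field, expressed in
                   grid_sample's normalized coordinates (template grid assumed
                   to have the same size as the input for simplicity)
    """
    # Sampling grid defined by the affine component
    grid = F.affine_grid(affine_3x4, list(pet.shape), align_corners=False)  # (1, D, H, W, 3)
    # Add the nonlinear displacement (channels moved to the last axis)
    grid = grid + displacement.permute(0, 2, 3, 4, 1)
    # Trilinear resampling of the PET volume onto the deformed grid
    return F.grid_sample(pet, grid, mode="bilinear", align_corners=False)
```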

Automatic generation of volumes of interest (VOIs) on individual brains can be achieved through the use of backward deformation fields, which transform VOIs defined in a template space into the individual's own space. We employed a cyclic training strategy, which allows for the simultaneous generation of forward and backward deformations (as illustrated in Fig. 1). As the loss function for network training, we used the local neighborhood cross-correlation between the spatially normalized paired MR images, obtained using the inferred SN parameters, and the MNI T1 template. On-the-fly data augmentation was also applied to prevent overfitting of the network parameters. While MR images were necessary for training the network, only the PET image is required as input for PET SN during the inference stage, once the network model has been trained [16].
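The similarity term used for training can be illustrated with a local (windowed) normalized cross-correlation, as is common in learning-based deformable registration; the window size and exact formulation below are assumptions, since they are not detailed here:

```python
# Minimal sketch of a local (windowed) normalized cross-correlation loss between
# a spatially normalized MR image and the MNI T1 template. Window size and exact
# formulation are assumptions; this is not the authors' training code.
import torch
import torch.nn.functional as F

def local_ncc_loss(warped_mr, template, win=9, eps=1e-5):
    # warped_mr, template: (1, 1, D, H, W) volumes
    kernel = torch.ones(1, 1, win, win, win,
                        dtype=warped_mr.dtype, device=warped_mr.device)
    pad = win // 2
    box_sum = lambda x: F.conv3d(x, kernel, padding=pad)  # sums over local window

    I, J, n = warped_mr, template, win ** 3
    I_sum, J_sum = box_sum(I), box_sum(J)
    cross = box_sum(I * J) - I_sum * J_sum / n
    I_var = box_sum(I * I) - I_sum * I_sum / n
    J_var = box_sum(J * J) - J_sum * J_sum / n

    ncc = cross * cross / (I_var * J_var + eps)
    return -ncc.mean()  # maximizing similarity = minimizing the negative NCC
```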

Fig. 1.

Fig. 1

Proposed deep neural network model with cycle-consistent structure for 18F-FP-CIT PET spatial normalization

Quantification of Striatal Binding

To assess the accuracy of the proposed method, we compared the striatal activity values estimated using the proposed DL-based SN method to those obtained through PET-MRI co-registration and MRI parcellation in individual brain space. Using the FIRST software [17], the putamen and caudate in both hemispheres were segmented in individual space from the 89 internal and 135 external 3D T1 MR images. The FIRST software has shown high performance and reliability in striatal segmentation [18]. The PET and MR images of each individual were co-registered using the SPM12 program (https://www.fil.ion.ucl.ac.uk/spm), which maximizes the mutual information between the co-registered images, and the PET images were resampled to the same dimensions and voxel size as the MR images. The segmented striatal regions from MRI were then applied to the resampled PET images to obtain the PET activity concentrations, which were regarded as the ground truth.
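In practice, this ground-truth extraction reduces to averaging PET voxel values within each FIRST label; a minimal sketch with nibabel is shown below, where the file names and label indices are assumptions that should be verified against actual FIRST output:

```python
# Illustrative sketch of VOI-based activity extraction: mean PET activity within
# FIRST-segmented striatal labels, after the PET image has been co-registered and
# resampled to the MRI grid. File names and label values are assumptions.
import nibabel as nib
import numpy as np

STRIATAL_LABELS = {           # assumed FSL/FIRST subcortical label indices
    "left_caudate": 11, "left_putamen": 12,
    "right_caudate": 50, "right_putamen": 51,
}

def striatal_means(pet_in_mri_space_path, first_labels_path):
    pet = nib.load(pet_in_mri_space_path).get_fdata()
    labels = nib.load(first_labels_path).get_fdata()
    # Mean activity concentration (Bq/ml) within each striatal VOI
    return {name: float(pet[labels == idx].mean())
            for name, idx in STRIATAL_LABELS.items()}

# Hypothetical usage:
# means = striatal_means("pet_resampled_to_t1.nii.gz", "t1_all_fast_firstseg.nii.gz")
```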

The striatal activity was also obtained in MNI space using the proposed method. The striatal regions of the MNI T1 MRI template were segmented using the FIRST software and the segmented striatal regions were applied to the spatially normalized PET images to obtain the PET activity concentration in MNI space.

Statistical Analysis

The correlation between our proposed method (18F-FP-CIT PET quantification without support from MRI) and the FIRST approach (MRI-parcellation-based PET quantification) was evaluated using Pearson’s correlation. To assess the consistency of the quantification results, intraclass correlation coefficients (ICCs) were also calculated. Furthermore, we performed a Bland–Altman analysis on the estimated values.
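For reference, these agreement statistics can be computed as in the following sketch; the paper does not state which ICC model was used, so the ICC(3,1) consistency form here is an assumption:

```python
# Sketch of the agreement statistics: regression slope and R^2, ICC(3,1)
# (assumed form; not specified in the paper), and Bland-Altman limits.
import numpy as np
from scipy import stats

def agreement_stats(reference, proposed):
    x, y = np.asarray(reference, float), np.asarray(proposed, float)

    # Linear regression of proposed vs. reference values
    slope, intercept, r, p, se = stats.linregress(x, y)

    # ICC(3,1): two-way mixed effects, single measurement, consistency
    data = np.stack([x, y], axis=1)
    n, k = data.shape
    subj_means, meas_means, grand = data.mean(1), data.mean(0), data.mean()
    ms_subj = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_err = np.sum((data - subj_means[:, None] - meas_means[None, :] + grand) ** 2) \
             / ((n - 1) * (k - 1))
    icc = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

    # Bland-Altman mean difference and 95% limits of agreement
    diff = y - x
    loa = (diff.mean() - 1.96 * diff.std(ddof=1),
           diff.mean() + 1.96 * diff.std(ddof=1))
    return {"slope": slope, "R2": r ** 2, "ICC": icc,
            "mean_diff": diff.mean(), "loa": loa}
```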

Results

The proposed deep neural network-based method with a cyclic training strategy successfully generated spatially normalized 18F-FP-CIT PET images without the use of CT or MRI (Fig. 2). Activity concentrations in the striatal regions were well preserved. The exemplar cases shown in Fig. 2 were selected from the external dataset. In addition, striatal regions transformed from MNI space using the inverse deformation fields generated by the proposed PET SN matched well with those determined by segmentation of the individual MRIs (Fig. 3).

Fig. 2.

Fig. 2

18F-FP-CIT PET images in individual and standard MNI template space

Fig. 3.

Fig. 3

Backward deformation of striatal regions onto individual brain images

Furthermore, the striatal PET activity concentrations (Bq/ml) obtained with the proposed PET-only SN method were highly correlated with those from MRI-based PET quantification using the FIRST software, as evidenced by high R2 values (0.96–0.99) and linear regression slopes (0.98–1.02) in both internal and external validation (Figs. 4 and 5; Tables 1 and 2). The ICC values, ranging from 0.977 to 0.995, also demonstrated consistency across the validation sets. These results suggest that the proposed SN method for 18F-FP-CIT PET provides highly accurate and consistent quantification.

Fig. 4.

Fig. 4

Internal validation (N = 89): striatal activity comparison between MRI-based PET quantification in individual space (x-axis) and PET-only quantification in template space using the proposed method (y-axis). a Left putamen, b right putamen, c left caudate, and d right caudate. Dashed lines are linear regression lines

Fig. 5.

Fig. 5

External validation (N = 135): striatal activity comparison between MRI-based PET quantification in individual space (x-axis, kBq/ml) and PET-only quantification in template space using the proposed method (y-axis). a Left putamen, b right putamen, c left caudate, and d right caudate. Dashed lines are linear regression lines

Table 1.

Internal validation: Pearson’s correlation and ICC analysis for striatal activity concentration of the internal 18F-FP-CIT dataset (n = 89) relative to the FIRST approach

Slope R2 ICC
Left putamen 1.000 0.990 0.995
Right putamen 0.999 0.990 0.995
Left caudate 1.006 0.961 0.980
Right caudate 1.018 0.957 0.977

Table 2.

External validation: Pearson’s correlation and ICC analysis for striatal activity concentration of the external 18F-FP-CIT dataset (n = 135) relative to the FIRST approach

Slope R2 ICC
Left putamen 0.983 0.989 0.994
Right putamen 0.984 0.986 0.993
Left caudate 0.977 0.966 0.983
Right caudate 0.997 0.971 0.985

The Bland–Altman plots presented in Figs. 6 and 7 reveal strong agreement between MRI-based PET quantification and the proposed PET-only quantification method. Furthermore, the plots indicate a consistent pattern in the differences in activity concentration between MRI-based and PET-only quantification, demonstrating the stability and reliability of the proposed method. Notably, these differences were confined to a narrow margin.

Fig. 6.

Fig. 6

Internal validation (N = 89): Bland–Altman plots between MRI-based PET quantification in individual space and PET-only quantification in template space using the proposed method. a Left putamen, b right putamen, c left caudate, and d right caudate. The black lines represent the mean difference in activity concentration, and the red lines represent the mean ± 1.96 × standard deviation of the activity difference

Fig. 7.

Fig. 7

External validation (N = 135): Bland–Altman plots between MRI-based PET quantification in individual space and PET-only quantification in template space using the proposed method. a Left putamen, b right putamen, c left caudate, and d right caudate. The black lines represent the mean difference in activity concentration, and the red lines represent the mean ± 1.96 × standard deviation of the activity difference

The proposed method was also computationally efficient, with SN and striatal binding quantification together taking only 10–20 s.

Discussion

Artificial intelligence (AI) and DL technologies have been rapidly advancing in various biomedical imaging fields, offering new and effective alternatives to conventional image processing and analysis methods [19–21]. In nuclear medicine, AI and DL methods are also being explored to overcome various technical challenges. Recent innovative approaches in this field include attenuation correction without the need for transmission data, denoising images for reduced scan time, improved PET timing resolution for direct positron imaging, and accurate radiation dose estimation without Monte Carlo simulation [22–27]. Another notable achievement in recent years is the development of AI-powered accurate brain PET and SPECT quantification without relying on structural images, which holds great promise for more precise diagnosis and treatment of neurological conditions [14–16, 28].

The visual assessment and quantification of dopaminergic PET and SPECT images in standard or template space offer several advantages. Because the striatal regions, the primary ROIs in dopaminergic imaging, are small, differences in the tilt angle of the brain can sometimes lead to misinterpretation of the images. However, the SN of brain PET and SPECT images using affine and non-linear transformations allows images to be interpreted visually in the same stereotactic space with the same brain orientation. Additionally, automated quantification of regional image intensities is possible using the SN of images and predefined ROIs in the standard template space [7–9, 29]. This automated SN-based quantification has several advantages over the traditional manual ROI drawing method, including perfect reproducibility and operator independence. However, conventional SN methods such as SPM's unified segmentation and spatial normalization algorithm [30] estimate the SN parameters by maximizing the intensity similarity between the spatially normalized input image and the template. As a result, the size of regions with higher or lower uptake tends to be reduced during SN to maximize the intensity similarity, which leads to overestimation of striatal binding in PD patients when conventional SN-based quantification is applied [12]. Therefore, it is important to carefully consider the limitations of conventional SN methods and explore alternative methods to accurately quantify dopaminergic PET and SPECT images in standard or template space.

In contrast to the conventional SN approaches, the DL model trained in this study, which maximizes the structural similarity between the spatially normalized input image and the template, is less influenced by the uptake level and distribution pattern of radiotracers, as shown in our previous study with amyloid PET [16]. Notably, the scatter plots demonstrated a high level of consistency in the performance of striatal activity estimation using the proposed method, irrespective of striatal uptake level (Figs. 4 and 5). The bias and variation between brains with low and high striatal uptake were almost identical, indicating minimal differences in performance. In addition, the quantification accuracy of the proposed DL-based SN method did not show a noticeable difference between the internal and external validation datasets.

This study has some limitations. Firstly, striatal binding was estimated without partial volume correction (PVC), which may have affected the accuracy of the results given the small size of the striatal regions and the limited spatial resolution of PET. However, the lack of PVC does not invalidate the proposed method, as both the proposed and reference methods were similarly affected by the partial volume effect. In future studies, the striatal segments obtained through backward deformation could be used for PVC. Secondly, dopamine transporter PET is less commonly used than dopamine transporter SPECT, even though PET offers some advantages. Therefore, further development and validation of the proposed method for dopamine transporter SPECT will also be necessary.

Conclusions

The use of a DL-based SN method enabled automatic quantification of 18F-FP-CIT brain PET images, providing an effective means of evaluating presynaptic dopaminergic function in PD and other degenerative parkinsonian disorders. The PET activity concentrations in the putamen and caudate obtained through the proposed SN method were almost identical to those obtained through MRI segmentation and PET-MRI co-registration. This suggests that the proposed method has the potential to be a valuable alternative to traditional MRI-based techniques, as it offers greater efficiency and automation in quantifying PET images.

Author Contributions

S.K.K. participated in the study design, drafting of the manuscript, and data analysis. D.K. participated in the study design and data analysis. S.A.S. participated in the study design, data analysis and approval of the final content of the manuscript. H.C. and Y.K.K. participated in the study design and data acquisition. J.S.L. participated in the study design, drafting of the manuscript, and approval of the final content of the manuscript.

Funding

This work was supported by the Seoul R&BD Program (No. BT200151) through the Seoul Business Agency (SBA) funded by The Seoul Metropolitan Government, the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 1711137868, RS-2020-KD000006), and the Korea Dementia Research Project (No. HU23C014000) through the Korea Dementia Research Center (KDRC) funded by the Ministry of Health & Welfare and Ministry of Science and ICT.

Data Availability

Contact the corresponding authors for data requests. Data availability is limited.

Declarations

Conflict of Interest

Seung Kwan Kang, Seong A Shin, and Jae Sung Lee are employees of Brightonix Imaging Inc. Daewoon Kim, Yu Kyeong Kim, and Hongyoon Choi declare that they have no conflict of interest.

Ethics Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The retrospective use of the scan data was approved by the Institutional Review Board of our institute.

Consent to Participate

Waiver of consent was approved by the Institutional Review Board of Chosun University Hospital.

Consent for Publication

Written informed consent for publication was taken from the patient’s parents.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Seong A. Shin, Email: nicky.shin@brtnx.com

Jae Sung Lee, Email: jaes@snu.ac.kr.

References

1. Benamer HT, Patterson J, Grosset DG, Booij J, De Bruin K, Van Royen E, et al. Accurate differentiation of parkinsonism and essential tremor using visual assessment of [123I]-FP-CIT SPECT imaging: the [123I]-FP-CIT study group. Mov Disord. 2000;15:503–10.
2. O'Brien JT, Colloby S, Fenwick J, Williams ED, Firbank M, Burn D, et al. Dopamine transporter loss visualized with FP-CIT SPECT in the differential diagnosis of dementia with Lewy bodies. Arch Neurol. 2004;61:919–25.
3. Oh M, Kim JS, Kim JY, Shin K-H, Park SH, Kim HO, et al. Subregional patterns of preferential striatal dopamine transporter loss differ in Parkinson disease, progressive supranuclear palsy, and multiple-system atrophy. J Nucl Med. 2012;53:399–406.
4. Lee JY, Seo SH, Kim YK, Yoo HB, Kim YE, Song IC, et al. Extrastriatal dopaminergic changes in Parkinson's disease patients with impulse control disorders. J Neurol Neurosurg Psychiatry. 2014;85:23–30.
5. Lee JY, Seo S, Lee JS, Kim HJ, Kim YK, Jeon BS. Putaminal serotonergic innervation: monitoring dyskinesia risk in Parkinson disease. Neurology. 2015;85:853–60.
6. Lee I, Kim JS, Park JY, Byun BH, Park SY, Choi JH, et al. Head-to-head comparison of 18F-FP-CIT and 123I-FP-CIT for dopamine transporter imaging in patients with Parkinson's disease: a preliminary study. Synapse. 2018;72:e22032.
7. Lee JS, Lee DS. Analysis of functional brain images using population-based probabilistic atlas. Curr Med Imaging Rev. 2005;1:81–7.
8. Lee JS, Lee DS, Kim S-K, Lee S-K, Chung J-K, Lee MC, et al. Localization of epileptogenic zones in F-18 FDG brain PET of patients with temporal lobe epilepsy using artificial neural network. IEEE Trans Med Imaging. 2000;19:347–55.
9. Minoshima S, Koeppe RA, Frey KA, Kuhl DE. Anatomic standardization: linear scaling and nonlinear warping of functional brain images. J Nucl Med. 1994;35:1528–37.
10. Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage. 2011;54:2033–44.
11. Lee SK, Lee DS, Yeo JS, Lee JS, Kim YK, Jang MJ, et al. FDG-PET images quantified by probabilistic atlas of brain and surgical prognosis of temporal lobe epilepsy. Epilepsia. 2002;43:1032–8.
12. Kim JS, Cho H, Choi JY, Lee SH, Ryu YH, Lyoo CH, et al. Feasibility of computed tomography-guided methods for spatial normalization of dopamine transporter positron emission tomography image. PLoS One. 2015;10:e0132585.
13. Bae S, Choi H, Whi W, Paeng JC, Cheon GJ, Kang KW, et al. Spatial normalization using early-phase [18F]FP-CIT PET for quantification of striatal dopamine transporter binding. Nucl Med Mol Imaging. 2020;54:305–14.
14. Kang SK, Seo S, Shin SA, Byun MS, Lee DY, Kim YK, et al. Adaptive template generation for amyloid PET using a deep learning approach. Hum Brain Mapp. 2018;39:3769–78.
15. Choi H, Lee DS. Generation of structural MR images from amyloid PET: application to MR-less quantification. J Nucl Med. 2018;59:1111–7.
16. Kang SK, Kim D, Shin SA, Kim YK, Choi H, Lee JS. Fast and accurate amyloid brain PET quantification without MRI using deep neural networks. J Nucl Med. 2023;64:659–66.
17. Patenaude B, Smith SM, Kennedy DN, Jenkinson M. A Bayesian model of shape and appearance for subcortical brain segmentation. Neuroimage. 2011;56:907–22.
18. Perlaki G, Horvath R, Nagy SA, Bogner P, Doczi T, Janszky J, et al. Comparison of accuracy between FSL's FIRST and FreeSurfer for caudate nucleus and putamen segmentation. Sci Rep. 2017;7:2418.
19. Rajendran P, Sharma A, Pramanik M. Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett. 2022;12:155–73.
20. Rao D, Prakashini K, Singh R, Vijayananda J. Automated segmentation of the larynx on computed tomography images: a review. Biomed Eng Lett. 2022;12:175–83.
21. Garcia EV, Piccinelli M. Preparing for the artificial intelligence revolution in nuclear cardiology. Nucl Med Mol Imaging. 2023;57:51–60.
22. Lee JS. A review of deep-learning-based approaches for attenuation correction in positron emission tomography. IEEE Trans Radiat Plasma Med Sci. 2020;5:160–84.
23. Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, et al. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging. 2022;49:4452–63.
24. Berg E, Cherry SR. Using convolutional neural networks to estimate time-of-flight from PET detector waveforms. Phys Med Biol. 2018;63:02LT01.
25. Lee MS, Hwang D, Kim JH, Lee JS. Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci Rep. 2019;9:1–9.
26. Kang H, Kang DY. Alzheimer's disease prediction using attention mechanism with dual-phase 18F-florbetaben images. Nucl Med Mol Imaging. 2023;57:61–72.
27. Kim KM, Lee MS, Suh MS, Cheon GJ, Lee JS. Voxel-based internal dosimetry for 177Lu-labeled radiopharmaceutical therapy using deep residual learning. Nucl Med Mol Imaging. 2023;57:94–102.
28. Seo SY, Kim SJ, Oh JS, Chung J, Kim SY, Oh SJ, et al. Unified deep learning-based mouse brain MR segmentation: template-based individual brain positron emission tomography volumes-of-interest generation without spatial normalization in mouse Alzheimer model. Front Aging Neurosci. 2022;14:807903.
29. Evans AC, Janke AL, Collins DL, Baillet S. Brain templates and atlases. Neuroimage. 2012;62:911–22.
30. Ashburner J, Friston KJ. Unified segmentation. Neuroimage. 2005;26:839–51.


