Abstract
This study aimed to generate a delayed 64Cu-DOTA-rituximab positron emission tomography (PET) image from its early-scanned image by deep learning to mitigate the inconvenience and cost of estimating absorbed radiopharmaceutical doses. We acquired PET images from six patients with malignancies at 1, 24, and 48 h post-injection (p. i.) of 8 mCi 64Cu-DOTA-rituximab to fit a time–activity curve for dosimetry. We used a paired image-to-image translation (I2I) model based on a generative adversarial network to generate delayed images from early PET images. The image similarity function between the generated image and its ground truth was determined by comparing L1 and perceptual losses. We also applied organ-wise dosimetry to the acquired and generated images using OLINDA/EXM. The quality of the generated images, even of tumors, was good when the L1 loss function was used in addition to the adversarial loss function. The organ-wise cumulative uptake and corresponding equivalent dose were estimated. Although the absorbed dose in some organs was accurately measured, predictions for organs associated with body clearance were relatively inaccurate. These results suggest that paired I2I can alleviate burdensome dosimetry for radioimmunoconjugates.
Keywords: Dosimetry, Deep learning, I2I, GAN, 64Cu-DOTA-rituximab
Subject terms: Image processing, Cancer imaging
Introduction
Patient-specific dosimetry of radiopharmaceuticals must be understood to predict toxicity and design treatment plans1–3. Current clinical organ-wise dosimetry is based on the internal dosimetry schema from the Committee on Medical Internal Radiation Dose (MIRD)4. This schema estimates the dose of radioactivity absorbed by specific organs or tumors and defines the S-value as the mean absorbed dose in a target region per unit activity in a source region. S-values are mainly pre-calculated using voxel phantoms for various clinical conditions as well as age and sex. However, several approaches can calculate patient-specific S-values from anatomical information, such as computed tomography (CT) imaging, using deep learning5,6. In contrast, a spatial map of the radiopharmaceutical is acquired by detecting photons originating from radioactive decay and reconstructing them as three-dimensional functional positron emission tomography (PET) and single-photon emission computed tomography (SPECT) images. However, because the amount of radioactivity determined from a functional image is a snapshot at a specific moment post-injection (p. i.), cumulative radioactivity over time cannot be estimated from a single image. Therefore, patients injected with radiopharmaceuticals must be scanned several times for internal dosimetry. Using at least three images acquired at different time points after injection, time–activity curves for each organ can be fitted with an exponential basis function. However, acquiring several images at different times is extremely arduous because radiopharmaceuticals disseminate slowly throughout the body.
Monoclonal antibodies (mAbs) have emerged as promising target vectors for radiopharmaceuticals due to high affinity and targeting ability7–9. Radioimmunoconjugates (RIC) are mAb radiochelates that diffuse very slowly throughout the vascular system due to a remarkably high molecular weight10,11. Most RICs require days, whereas 18F-fludeoxyglucose requires ~ 1 h to disseminate throughout the body. Hence, several images must be acquired over days to determine RIC biodistribution over time, which is a burden to patients in terms of time and cost.
The best method to mitigate this situation is to predict the organ-wise dose from a single functional image using a model. A deep learning model that learns mapping between early- and delayed-scanned images can be a solution. Image-to-image translation (I2I) is a deep learning scheme for translating an image from a source domain to a target domain, which has recently been studied in detail12–14. Although the I2I model has been applied in medical imaging to synthesize magnetic resonance (MR)-to-CT images or to correct attenuation based on MRI-based PET15,16, few approaches have trained mapping between functional images.
Here, we propose an I2I deep-learning model that could predict delayed-scanned images from early-scanned input images. We acquired clinical PET images of patients with malignancies at several time points after 64Cu-DOTA-rituximab injection. We assumed that early and delayed PET images could be treated as image pairs in terms of voxel alignment, and applied a paired I2I deep learning model. We also conducted organ-based dosimetry with generated delayed-scan PET images and estimated residual 64Cu-DOTA-rituximab and absorbed doses for each organ.
Methods
Data preparation
Six patients with lymphoma were injected with 8 mCi of 64Cu-DOTA-rituximab, and PET images were acquired using a GE Discovery 710 PET/CT between January 2022 and January 2023. The selection criteria were: age ≥ 19 years; confirmed diagnosis of CD20-positive B-cell non-Hodgkin’s lymphoma (NHL); at least one measurable lesion; an Eastern Cooperative Oncology Group (ECOG) score of ≤ 2; and a minimum of 4 weeks since the last treatment with chemotherapy, radiation or cytokine therapy, or immunosuppressive drugs17.
Images were acquired from patients at 1, 24, and 48 h p. i. at the Korea Institute of Radiological and Medical Sciences (KIRAMS). Each patient’s data were stored in DICOM format and processed using the Pydicom library to read pixel values into arrays. Whole-body PET images were reconstructed using the iterative reconstruction algorithm VPFXS. The size of each axial slice was (192, 192), and the number of slices differed among patients. The Institutional Review Board at KIRAMS approved the study (IRB No.: KIRAMS 2021–02-003), and all patients provided written informed consent to participate. All methods complied with the relevant guidelines and regulations.
Data normalization is important in deep learning to accelerate convergence during training. Unlike a common three-channel image or a CT image, which has a fixed pixel intensity range, a functional image has only a lower limit of zero; no upper limit exists because its physical meaning is a count or an amount of uptake. Every pixel in paired early and delayed PET images can be normalized into the range [0, 1] if the maximum value of the early PET image is used as the normalization constant, because radioisotopes decay over time.
We instead considered a time-dependent normalization constant that, regardless of the input, reflects the exponential radioactive decay of the radioisotope. This is because the voxel intensity of functional images tends to be far higher in images acquired early rather than late. The total amount of radioactivity of an injected radioisotope decays exponentially, so accounting for radioactive decay during normalization should mitigate the difference in voxel intensity between time points. The I2I deep learning model was therefore trained to learn only the change in RIC location and its clearance from the body. The normalization formula is:
$$x_{\mathrm{norm}} = \frac{x}{N(t)}, \qquad N(t) = N_0 \cdot 2^{-t/T_{1/2}} \tag{1}$$
where $x$ is the pixel array of raw data in Bq, $N(t)$ is the normalization constant, $t$ is the elapsed time after administration, and $T_{1/2}$ is the half-life of the radioisotope. We used $N_0 = 10^4$ as the initial value to match the typical count range of input images; because the half-life of 64Cu is 12.7 h, the normalization constants for 1, 24, and 48 h were ~ 9,469, 2,699, and 728.2, respectively. The images were then resized to (128, 128) by applying the interpolation function provided in the torch.nn.functional library18.
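The time-dependent normalization can be sketched as follows; note that the initial constant `N0 = 1e4` is inferred from the reported values (~9,469 at 1 h), not stated explicitly in the text:

```python
import numpy as np

T_HALF_CU64 = 12.7  # half-life of 64Cu in hours
N0 = 1.0e4          # initial constant inferred from ~9,469 at 1 h p.i.

def norm_constant(t_hours):
    """Decay-following normalization constant N(t) = N0 * 2^(-t / T_half)."""
    return N0 * 2.0 ** (-t_hours / T_HALF_CU64)

def normalize_pet(pixels_bq, t_hours):
    """Normalize a raw PET pixel array (Bq) acquired t_hours after injection."""
    return np.asarray(pixels_bq, dtype=float) / norm_constant(t_hours)
```

With these constants, `norm_constant` reproduces the quoted values of ~9,469, 2,699, and 728.2 for 1, 24, and 48 h.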
Image generation algorithm
The image generation deep learning model is based on Pix2pix19, a paired I2I framework in which a pair of early and delayed PET images serves as the input. We used generator and discriminator networks based on a generative adversarial network (GAN)20: the generator produces pseudo-delayed PET images corresponding to the early input images, and the discriminator judges whether its input is in fact a delayed image. Similarity between the generated delayed PET images and their ground truth was used for training.
The generator network is based on an encoder–decoder structure with residual blocks between them, and the discriminator architecture is that of PatchGAN21. Figure 1 shows the arrangement of the convolution layers, normalization, and activation functions, and their hyperparameters obtained from DCGAN22.
Fig. 1.
Network architecture of generator and discriminator.
In a GAN scheme, the generator and discriminator are simultaneously trained as adversaries. We adopted the least-squares loss23 as the adversarial objective function for the output of the discriminator, rather than the binary cross-entropy used in the vanilla GAN20, to mitigate instability during training:
$$\mathcal{L}_{\mathrm{adv}}(G, D) = \mathbb{E}_{y}\big[(D(y) - 1)^2\big] + \mathbb{E}_{x}\big[D(G(x))^2\big] \tag{2}$$
The model was optimized using the Adaptive Moment Estimation (Adam) optimizer with tuned hyperparameters: the learning rate was 2 × 10⁻⁵, the batch size was 32, and the model was trained for 200 epochs. In most studies of paired I2I, an image similarity function between a generated image and its ground truth is used to regularize the output for pairing with the input. The image similarity function can be pixel- or voxel-wise in the image domain, or feature-wise, extracting features from a trained neural network. Voxel-wise similarity using L1 loss ensures precise local intensity matching between the predicted and reference images by minimizing the absolute difference between corresponding voxels. This direct comparison is beneficial for estimating radioisotope uptake in delayed-scanned PET, which is critical for dosimetry. In contrast, feature-wise similarity captures high-level features such as textures and structures, and is less sensitive to subtle voxel misalignments, which are common in clinical PET data, thus avoiding potential textural errors. Because clinical pairs of early- and delayed-scanned PET images inevitably face misalignment issues, feature-wise similarity offers an advantage in handling such imperfections, whereas L1 loss might be better suited to accurate voxel-value estimation for delayed-scanned PET dosimetry. We therefore compared the two types of image similarity loss functions.
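The least-squares adversarial objectives described above can be sketched as below. This is a minimal sketch assuming patch-wise discriminator outputs; `G` and `D` stand in for the networks of Fig. 1 and are not defined here:

```python
import torch

def d_loss_lsgan(d_real, d_fake):
    """Discriminator LSGAN loss: push real patches toward 1, fakes toward 0."""
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def g_loss_lsgan(d_fake):
    """Generator LSGAN loss: push the discriminator's output on fakes toward 1."""
    return 0.5 * ((d_fake - 1.0) ** 2).mean()

# Optimizer setup as described in the text (Adam, lr = 2e-5), with
# hypothetical generator/discriminator modules G and D:
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-5)
# opt_d = torch.optim.Adam(D.parameters(), lr=2e-5)
```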
The L1-loss function in Pix2pix served as a voxel-wise similarity loss function19 because it encourages less blurring than other losses, including L2 loss:
$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\big[\lVert y - G(x) \rVert_1\big] \tag{3}$$
We used the VGG perceptual loss function as the feature-wise image similarity loss function24,25, based on the VGG19 network26 without batch normalization layers, pretrained on ImageNet. The perceptual loss is the sum of the content and style losses between the features of the generated delayed-scanned PET image and its ground truth extracted by the VGG19 network. Content loss is the L2 difference between the features of the two images. Style loss is the Frobenius norm of the difference between style representations, i.e., Gram matrices of the features. We resized the images to (224, 224) in the same way as the raw images to feed them into the VGG19 network:
$$\mathcal{L}_{\mathrm{perc}}(G) = \sum_{l} \lVert \phi_l(y) - \phi_l(G(x)) \rVert_2^2 + \sum_{l} \lVert \mathcal{G}\big(\phi_l(y)\big) - \mathcal{G}\big(\phi_l(G(x))\big) \rVert_F^2 \tag{4}$$

where $\phi_l$ denotes the features extracted from the $l$-th VGG19 layer and $\mathcal{G}(\cdot)$ the Gram matrix of those features.
The objective function in the model was expressed by combining the adversarial and similarity loss functions as follows:
$$\mathcal{L} = \mathcal{L}_{\mathrm{adv}} + \lambda\, \mathcal{L}_{\mathrm{sim}} \tag{5}$$
where $\mathcal{L}_{\mathrm{sim}}$ is one of the image similarity loss functions ($\mathcal{L}_{L1}$ or $\mathcal{L}_{\mathrm{perc}}$) and $\lambda$ is its corresponding weight.
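The combined generator objective can be sketched as follows. The weight `lam = 100.0` is the Pix2pix default and is assumed here for illustration, since the value used in this study is not preserved in the text:

```python
import torch

def l1_similarity(generated, target):
    """Voxel-wise L1 similarity loss between generated and ground-truth PET."""
    return (generated - target).abs().mean()

def generator_objective(adv_loss, sim_loss, lam=100.0):
    """Total generator loss: adversarial term plus weighted similarity term.
    lam = 100.0 is the Pix2pix default, assumed for illustration only."""
    return adv_loss + lam * sim_loss
```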
Evaluation
Training
We assigned the PET images acquired from the six patients to training and test sets at a ratio of 8:2 and conducted patient-wise K-fold cross-validation with K = 5. Because the number of axial slices differed among the patients, the size of the training set varied per fold. The total number of axial slices in the entire dataset was 1,794; because the minimum and maximum numbers of axial slices for a patient were 227 and 371, the corresponding training-set sizes ranged from 1,423 to 1,567 slices.
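Patient-wise cross-validation, in which all slices of one patient stay in the same fold, can be sketched as below (function and variable names are illustrative):

```python
import numpy as np

def patient_wise_folds(patient_ids, k=5, seed=0):
    """Split patient IDs into k folds so that no patient's slices can
    appear in both the training and validation sets of any fold."""
    rng = np.random.default_rng(seed)
    ids = np.array(patient_ids)
    rng.shuffle(ids)
    return [list(fold) for fold in np.array_split(ids, k)]
```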
Data augmentation
Data augmentation was applied to the initial dataset to enhance the robustness and generalizability of the model. The dataset was increased from 6 to 60 patients who underwent 64Cu-DOTA-rituximab imaging; of these, 50 were split at an 8:2 ratio into training and validation datasets, respectively. For augmentation, the RandomRotation and RandomResizedCrop functions in PyTorch, which rotate an image within a specified angle range and crop then resize it, were used. Patient-wise K-fold cross-validation was repeated five times (K = 5). The data from the remaining 10 patients were used for external testing.
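A simplified stand-in for the crop-and-resize augmentation is sketched below, using nearest-neighbour resampling in NumPy rather than torchvision's RandomResizedCrop; the crop fraction `min_frac` is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_resized_crop(img, out_size=128, min_frac=0.8):
    """Crop a random square patch (side >= min_frac of the image side)
    and resize it back to (out_size, out_size) with nearest-neighbour
    sampling -- a simplified stand-in for torchvision's RandomResizedCrop."""
    h, w = img.shape
    side = int(rng.uniform(min_frac, 1.0) * min(h, w))
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    patch = img[top:top + side, left:left + side]
    # nearest-neighbour resize back to (out_size, out_size)
    idx = (np.arange(out_size) * side / out_size).astype(int)
    return patch[np.ix_(idx, idx)]
```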
Image-wise evaluation
We estimated the structural similarity index (SSIM)27 to compare the similarity between the generated delayed-scan PET image and its ground truth. SSIM is defined as:
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \tag{6}$$
where $\mu_x$ and $\mu_y$ indicate the means, $\sigma_{xy}$ is the covariance, and $\sigma_x^2$ and $\sigma_y^2$ are the variances of the generated image and its ground truth, respectively. The stabilization parameters $c_1$ and $c_2$ were adjusted to keep the denominator well-behaved.
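As a concrete reference, a single-window (global) version of the SSIM formula can be computed as below; note that SSIM as commonly reported27 averages this quantity over local sliding windows:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over whole images (simplified: no sliding window).
    k1, k2 are the standard stabilization defaults from Wang et al."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```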
We also estimated the Fréchet inception distance28 (FID), which is widely used to assess images generated from a GAN. The FID was computed as the Fréchet distance between two image sets by calculating the statistical distance between features extracted from the Inception V3 network29:
$$\mathrm{FID} = \lVert \mu_d - \mu_g \rVert_2^2 + \mathrm{Tr}\big(C_d + C_g - 2 (C_d C_g)^{1/2}\big) \tag{7}$$
where $\mu$ and $C$ are the feature mean and covariance, respectively, and the indices $d$ and $g$ indicate the ground-truth and generated data, respectively.
Dosimetry
We acquired early and delayed PET images at 1, 24, and 48 h p. i., then compared organ-wise dosimetry computed from the acquired images with that computed from generated images to determine the value of the generation model for dosimetry. This comparison was applied both to images generated by the model trained on the original data from six patients and to images from the model trained on the augmented dataset of 60 patients. Regions of interest (ROI) were manually drawn on each PET image, and the integrated activity over each ROI was estimated as the percentage injected dose (%ID). The time–activity curve of ROI-integrated activity was fitted with a mono-exponential model, and the residence time (normalized cumulative activity) was calculated as follows:
$$\tau_{r_S} = \int_0^{\infty} \frac{A_{r_S}(t)}{A_0}\, dt, \qquad A_{r_S}(t) = A_{r_S}(0)\, e^{-\lambda_{r_S} t} \tag{8}$$

where $A_0$ is the injected activity and $\lambda_{r_S}$ the fitted effective decay constant of the source organ.
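The mono-exponential fit and residence-time integral above can be sketched with a log-linear least-squares fit (function names are illustrative):

```python
import numpy as np

def fit_mono_exponential(t_hours, frac_id):
    """Fit A(t) = A0 * exp(-lam * t) to fractional uptake via
    log-linear least squares: ln A(t) = ln A0 - lam * t."""
    slope, intercept = np.polyfit(t_hours, np.log(frac_id), 1)
    return np.exp(intercept), -slope  # (A0, lam)

def residence_time(a0, lam):
    """Integral of A0 * exp(-lam * t) from 0 to infinity (in hours)."""
    return a0 / lam
```

For exactly mono-exponential data at the three scan times, the fit recovers the true parameters; with noisy clinical data it gives the least-squares estimate in log space.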
From the residence time $\tau_{r_S}$ acquired for each source organ $r_S$, the equivalent dose in target organ $r_T$ was derived based on the MIRD schema as:
$$H_{r_T} = \sum_{r_S} w_T\, \tau_{r_S}\, S(r_T \leftarrow r_S) \tag{9}$$
where $S(r_T \leftarrow r_S)$ is the S-value from $r_S$ to $r_T$, pre-calculated by OLINDA/EXM version 1.130 with a male or female phantom, and $w_T$ is the tissue-weighting factor of the radiation, which weights the stochastic effect of radiation on a specific tissue within the target organ $r_T$.
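The MIRD-style sum above reduces to a weighted dot product over source organs. The sketch below uses hypothetical organ names, S-values, and units for illustration only:

```python
def equivalent_dose(residence_times, s_values, w=1.0):
    """Equivalent dose to one target organ: w * sum_s tau_s * S(target <- s).

    residence_times: {source organ: residence time}
    s_values:        {source organ: S-value for this target organ}
    (organ names and values here are hypothetical)
    """
    return w * sum(tau * s_values[organ]
                   for organ, tau in residence_times.items())
```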
Results
Generated images
Table 1 shows the quantitative metrics evaluated for the generated images. The model sometimes failed to converge during training and output a zero-intensity image when VGG perceptual loss was used as the image similarity loss. The SSIM for L1 loss at 24 h p. i. was 0.7302, whereas that for perceptual loss was 0.4308, reflecting these zero-output failures; this tendency was even more pronounced in the PET images at 48 h p. i., where the model was not well trained in any fold when perceptual loss was used. Supplementary Figure S2 shows the discriminator loss with L1 and VGG losses on images acquired at 24 and 48 h p. i. For the original-data test set, the SSIM and FID were 0.8094 and 62.93 when generating PET images at 24 h p. i., and 0.7714 and 74.35 for images at 48 h p. i. Augmenting the data significantly improved the performance of the model: the SSIM values increased to 0.9903 and 0.9756, and the FID scores decreased to 25.84 and 46.56, at 24 and 48 h p. i., respectively. Figure 2 shows the delayed 64Cu-DOTA-rituximab PET images generated for the test set, by organ, with the model trained using L1 loss. The model accurately predicted uptake by organs and the tumor in the sixth column, although the images tended to become blurred.
Table 1.
Evaluation of image similarity loss functions applied in training using SSIM and FID for patient-wise K-fold validation (n = 5).
| L1 loss | Perceptual loss | |||
|---|---|---|---|---|
| K-Fold | SSIM | FID | SSIM | FID |
| 24 h | ||||
| 1 | 0.661 | 95.75 | 0.6802 | 80.68 |
| 2 | 0.7039 | 90.21 | 0.6998 | 94.12 |
| 3 | 0.7592 | 49.15 | 4.230E-4 | 237.5 |
| 4 | 0.7742 | 46.88 | 0.7703 | 41.13 |
| 5 | 0.7528 | 83.56 | 0.00348 | 242 |
| 48 h | ||||
| 1 | 0.4888 | 69.9 | 1.882E-4 | 229.53 |
| 2 | 0.7198 | 104.1 | 8.148E-4 | 265.4 |
| 3 | 0.6828 | 57.47 | 5.531E-4 | 241.1 |
| 4 | 0.5412 | 42.26 | 5.527E-4 | 231.74 |
| 5 | 0.09077 | 97.95 | 0.2151 | 245.34 |
FID Fréchet inception distance; SSIM structural similarity index.
Fig. 2.
Generation of 64Cu-DOTA-rituximab delayed PET test set images, trained with sum of least-squares generative adversarial network and L1 loss. PET, positron emission tomography.
Dosimetry
Figure 3 shows the organ-wise time–activity curves (%ID) fitted with one early and two generated delayed PET images at various times p. i. The image similarity loss used for dosimetry was L1. The generation model precisely predicted organ-integrated uptake by the lung, but not by the kidney. In addition, deep learning correctly predicted radiopharmaceutical uptake by tumors.
Fig. 3.
Organ-wise time–activity curves fitted with PET images acquired at 1, 24, and 48 h p. i. (navy blue) and image acquired 1 h p. i. and delayed PET images at 24 and 48 h p. i. (orange) generated by deep learning. PET, positron emission tomography; p. i., post-injection.
The residence time of 64Cu-DOTA-rituximab in each organ was evaluated using the fitted time–activity curves (Fig. 4). The tendencies and values of the residence times estimated from the acquired and generated PET images were similar. The residence time was highest in the heart, followed by the liver, lungs, spleen, kidney, bladder, and stomach. The residence time estimated by deep learning was most inaccurate for the kidney, as described for uptake; the mean absolute error of the residence time in the kidney was 3.04E-2. However, data augmentation increased the dosimetry accuracy across most organs: the error for all organs except the stomach was < 10%.
Fig. 4.
Organ-wise residence time estimated by PET images acquired at 1, 24, and 48 h p. i. (navy blue). Image acquired at 1 h p. i., delayed (orange) and augmented delayed images acquired at 24 and 48 h p. i. (gray) generated by deep learning. PET, positron emission tomography; p. i., post-injection.
The equivalent dose was estimated from the S-values pre-calculated by OLINDA/EXM using a male or female phantom (Table 2). The uptake of 64Cu-DOTA-rituximab was highest in the heart, followed by the spleen, kidney, and pancreas. This tendency was also effectively predicted by the deep learning model, except for the kidney, whose uptake was underestimated. Additionally, the relative error of the equivalent dose estimated by deep learning was evaluated. The most inaccurate prediction was for the bladder, with a relative error of 3.65E-01; the equivalent doses of the bladder and kidney were higher than those of the other organs. However, data augmentation improved the prediction accuracy for the bladder and kidney, with relative errors of 0.00E + 00 and 4.35E-02, respectively.
Table 2.
Organ-wise dosimetry of acquired images at three time points post-injection and corresponding images generated from original and augmented data by deep learning.
| Target organ | Equivalent dose (mSv/MBq) | ||||
|---|---|---|---|---|---|
| Acquired | Generated | Relative error | Generated (augmented) | Relative error | |
| Adrenal glands | 2.60E-03 | 2.41E-03 | 7.46E-02 | 2.54E-03 | 2.31E-02 |
| Brain | 4.33E-05 | 4.13E-05 | 4.48E-02 | 4.29E-05 | 9.24E-03 |
| Breasts | 1.26E-03 | 1.22E-03 | 3.42E-02 | 1.22E-03 | 3.17E-02 |
| Gallbladder wall | 2.62E-03 | 2.48E-03 | 5.64E-02 | 2.62E-03 | 0.00E + 00 |
| Lower large intestine wall | 1.69E-04 | 1.70E-04 | 4.27E-03 | 1.64E-04 | 2.96E-02 |
| Small intestine | 5.55E-04 | 5.14E-04 | 7.42E-02 | 5.44E-04 | 1.98E-02 |
| Stomach wall | 3.29E-03 | 2.60E-03 | 2.08E-01 | 2.68E-03 | 1.85E-01 |
| Upper large intestine wall | 7.34E-04 | 6.85E-04 | 6.70E-02 | 7.23E-04 | 1.50E-02 |
| Heart wall | 5.52E-02 | 5.04E-02 | 8.66E-02 | 5.00E-02 | 9.42E-02 |
| Kidneys | 2.99E-02 | 2.20E-02 | 2.63E-01 | 2.86E-02 | 4.35E-02 |
| Liver | 2.47E-02 | 2.44E-02 | 1.19E-02 | 2.50E-02 | 1.21E-02 |
| Lungs | 2.02E-02 | 2.20E-02 | 8.90E-02 | 2.19E-02 | 8.42E-02 |
| Muscle | 6.72E-04 | 6.42E-04 | 4.52E-02 | 6.54E-04 | 2.68E-02 |
| Ovaries | 2.42E-04 | 2.36E-04 | 2.64E-02 | 2.38E-04 | 1.65E-02 |
| Pancreas | 2.74E-03 | 2.48E-03 | 9.64E-02 | 2.63E-03 | 4.01E-02 |
| Red marrow | 8.81E-04 | 8.04E-04 | 8.74E-02 | 8.55E-04 | 2.95E-02 |
| Osteogenic cells | 5.79E-04 | 5.56E-04 | 3.87E-02 | 5.64E-04 | 2.59E-02 |
| Skin | 3.44E-04 | 3.26E-04 | 5.23E-02 | 3.35E-04 | 2.62E-02 |
| Spleen | 5.11E-02 | 4.11E-02 | 1.96E-01 | 4.72E-02 | 7.63E-02 |
| Testes | 4.54E-05 | 5.91E-04 | 1.20E + 01 | 4.47E-05 | 1.54E-02 |
| Thymus | 2.86E-03 | 2.10E-03 | 2.65E-01 | 2.67E-03 | 6.64E-02 |
| Thyroid | 2.92E-04 | 2.94E-04 | 6.85E-03 | 2.88E-04 | 1.37E-02 |
| Urinary bladder wall | 2.33E-03 | 3.18E-03 | 3.65E-01 | 2.33E-03 | 0.00E + 00 |
| Uterus | 2.53E-04 | 2.67E-04 | 5.54E-02 | 2.49E-04 | 1.58E-02 |
| Total body | 2.00E-03 | 1.91E-03 | 4.50E-02 | 1.98E-03 | 1.00E-02 |
Discussion
The most important task of the model was to appropriately estimate the dose absorbed by a tumor because radiopharmaceuticals are mainly used to treat cancer. The results showed that the deep learning model accurately determined tumor uptake, but tended to underestimate the absorbed dose. Moreover, the absorbed dose was more erroneous in the bladder and kidney than in other organs when the model was trained and tested using only the original data from the six patients. However, data augmentation improved the results for most organs. Therefore, this could be a promising approach to enhance the robustness and accuracy of the model in predicting absorbed doses, especially for organs with variable uptake.
Here, we used a normalization constant that was a function of the acquisition time and the half-life of the radioisotope in a radiopharmaceutical. Our first approach was to use the maximum value of the 3D PET images of each patient as the normalization constant, but the maximum value of a generated image is impossible to determine in advance. Therefore, we applied the normalization constant of the early image to the delayed PET image used in training. Even though this approach limited all voxel intensities to the range [0, 1], it nevertheless predicted the exact voxel value after multiplying by the normalization constant again, which is significant for dosimetry (Supplementary Figure S1). Our comparison of the two types of loss functions revealed that VGG perceptual loss was more unstable than L1 loss in terms of training convergence. A comparison of the FID of the validation set versus the epoch revealed that with L1 loss, the FID decreased exponentially over the epochs, whereas with perceptual loss the FID remained almost constant or oscillated near its initial value. We also occasionally observed a sudden decrease in FID when using perceptual loss; in those cases, the model trained normally and the results were similar to those trained with L1 loss (folds 1, 2, 4, and 5 at 24 h; Table 1).
This study has some limitations. We used the 2D Pix2pix scheme on axial slices, although PET images are three-dimensional. Therefore, the 2D convolution layers used in the generator and discriminator did not learn the axial correspondence of the pixels. Independence between adjacent axial slices might cause differences in the average voxel intensities across slices, and when a 3D image is reconstructed by stacking the generated slices, awkward slices can have remarkably high or low intensity. Furthermore, clinical functional images of RICs must be acquired from patients with cancer, and a sufficient number of images is needed for training because two networks are used in the GAN framework. Although we used > 1,000 slice pairs as the training set, the model would probably improve in performance and be better validated if more 64Cu-DOTA-rituximab PET data were included. Thus, given the absence of additional data, we externally validated the model using 64Cu-NOTA-trastuzumab data from five patients, split at a 4:1 ratio into training and test sets. The dosimetry results were promising, with a relative error of 0.122 for the lung, while that of all other organs was < 10% (Supplementary Table S1). Collecting more 64Cu-DOTA-rituximab datasets and applying a 3D-based I2I method will be the focus of our future work. The method should also be validated using other RICs with different radioisotopes.
Conclusions
Here, we proposed generating delayed PET images of 64Cu-DOTA-rituximab from corresponding early PET images using GAN-based paired I2I. This is significant because conventional dosimetry requires several images to estimate cumulative activity within organs, and RIC takes a long time to distribute within the body and reach the target tumor. We evaluated these methods using clinical 64Cu-DOTA-rituximab PET images in terms of image quality and dosimetry. The deep learning model has high potential for dosimetric application to RIC, which avoids the need for repeated image acquisition and wait times for targeting.
Supplementary Information
Author contributions
All authors contributed to the study conception and design. Ilhan Lim obtained IRB approval. Chi Soo Kang and Inki Lee acquired data including PET images. Kangsan Kim implemented the computation code, and Kangsan Kim, Jingyu Yang and Muath Almaslamani collected the results. Muath Almaslamani analyzed the dosimetry and augmented the data. Kangsan Kim drafted the first version of the manuscript, and all authors commented on subsequent versions. All authors read and approved the final version to be submitted for publication. Sang-Keun Woo supervised the project.
Funding
This work was supported by the National Research Foundation of Korea grant funded by the Ministry of Science & ICT (No. 2020M2D9A1094070) and a grant of the Korea Institute of Radiological and Medical Sciences (KIRAMS), funded by Ministry of Science and ICT (MSIT), Korea (No. 50332-2024, 50461-2025).
Data availability
The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Declarations
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-025-88498-z.
References
- 1.Stabin, M. Nuclear medicine dosimetry. Phys. Med. Biol.51, R187 (2006). [DOI] [PubMed] [Google Scholar]
- 2.Sgouros, G. & Hobbs, R. F. Dosimetry for radiopharmaceutical therapy. Semin. Nucl. Med.44, 172–178 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Capala, J. et al. Dosimetry for radiopharmaceutical therapy: current practices and commercial resources. J. Nucl. Med.62, 3S-11S (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Bolch, W. E. et al. MIRD Pamphlet No. 21: A generalized schema for radiopharmaceutical dosimetry—Standardization of nomenclature. J. Nucl. Med.50, 477–484 (2009). [DOI] [PubMed] [Google Scholar]
- 5.Lee, M. S., Hwang, D., Kim, J. H. & Lee, J. S. Deep-dose: A voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci. Rep.9, 1–9 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Akhavanallaf, A., Shiri, I., Arabi, H. & Zaidi, H. Whole-body voxel-based internal dosimetry using deep learning. Eur. J. Nucl. Med. Mol. Imaging48, 670–682 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Sharkey, R. M. & Goldenberg, D. M. Cancer radioimmunotherapy. Immunotherapy.3(3), 349–370 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Pouget, J. P. et al. Clinical radioimmunotherapy—The role of radiobiology. Nat. Rev. Clin. Oncol.8, 720–734 (2011). [DOI] [PubMed] [Google Scholar]
- 9.Larson, S. M., Carrasquillo, J. A., Cheung, N. K. V. & Press, O. W. Radioimmunotherapy of human tumours. Nat. Rev. Cancer15, 347–360 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Wang, W., Wang, E. Q. & Balthasar, J. P. Monoclonal antibody pharmacokinetics and pharmacodynamics. Clin. Pharmacol. Ther.84, 548–558 (2008). [DOI] [PubMed] [Google Scholar]
- 11.Ryman, J. T. & Meibohm, B. Pharmacokinetics of monoclonal antibodies. CPT Pharmacomet. Syst. Pharmacol.6, 576–588 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Alotaibi, A. Deep generative adversarial networks for image-to-image translation: A review. Symmetry12, 1705 (2020). [Google Scholar]
- 13.Hoyez, H., Schockaert, C., Rambach, J., Mirbach, B. & Stricker, D. Unsupervised image-to-image translation: A review. Sensors2022(22), 8540 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Pang, Y., Lin, J., Qin, T. & Chen, Z. Image-to-image translation: Methods and applications. IEEE Trans. Multimed.24, 3859–3881 (2022). [Google Scholar]
- 15.Armanious, K. et al. Independent attenuation correction of whole body [18F]FDG-PET using a deep learning approach with generative adversarial networks. EJNMMI Res.10, 1–9 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Torkaman, M. et al. Direct image-based attenuation correction using conditional generative adversarial network for SPECT myocardial perfusion imaging. Proc. SPIE Med. Imaging 11600, 10.1117/12.2580922 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lee, I. et al. Evaluating 64Cu-DOTA-rituximab as a PET agent in patients with B-cell lymphoma: A head-to-head comparison with 18F-fluorodeoxyglucose PET/computed tomography. Proc. Nucl. Med. Commun.45(10), 865–873 (2024). [DOI] [PubMed] [Google Scholar]
- 18.Paszke, A. et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst.32, 8024 (2019). [Google Scholar]
- 19.Isola, P., Zhu, J.-Y., Zhou, T. & Efros, A. A. Image-to-image translation with conditional adversarial networks. Proc. IEEE CVPR 1125–1134 (2017).
- 20.Goodfellow, I. J. et al. Generative adversarial nets. Adv. Neural Inf. Process. Syst.27 (2014).
- 21.Li, C. & Wand, M. Precomputed real-time texture synthesis with markovian generative adversarial networks. Lecture Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) 9907 LNCS, 702–716 (2016).
- 22.Radford, A., Metz, L. & Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. 4th Int. Conf. Learn. Represent. ICLR 2016—Conf. Track Proc. (2015).
- 23.Mao, X. et al. Least Squares Generative Adversarial Networks. 2794–2802 (2017).
- 24.Gatys, L. A., Ecker, A. S. & Bethge, M. Image style transfer using convolutional neural networks 2414–2423 (2016).
- 25.Ledig, C. et al. Photo-realistic single image super-resolution using a generative adversarial network. Proc.—30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017-January, 105–114 (2016).
- 26.Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. 3rd Int. Conf. Learn. Represent. ICLR 2015—Conf. Track Proc. (2014).
- 27.Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process.13, 600–612 (2004). [DOI] [PubMed] [Google Scholar]
- 28.Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B. & Hochreiter, S. GANs trained by a two time-scale update rule converge to a local nash equilibrium. Adv. Neural Inf. Process. Syst.2017, 6627–6638 (2017).
- 29.Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2016-December, 2818–2826 (2015).
- 30.Stabin, M. G., Sparks, R. B. & Crowe, E. OLINDA/EXM: The second-generation personal computer software for internal dose assessment in nuclear medicine. J. Nucl. Med.46 (2005). [PubMed]
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.
Supplementary Materials
Data Availability Statement
The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.




