Nuclear Medicine and Molecular Imaging
2025 Aug 23;59(5):329–341. doi: 10.1007/s13139-025-00939-9

The Role of Artificial Intelligence in Advancing Theranostics Dosimetry for Cancer Therapy: a Review

Sang-Keun Woo 1,2
PMCID: PMC12446183  PMID: 40979137

Abstract

Cancer treatment has greatly benefited from advancements in radiopharmaceutical therapy, which requires precise dosimetry to enhance therapeutic efficacy and minimize risks to healthy tissues. This review investigated the role of artificial intelligence (AI) in theranostic radiopharmaceutical dosimetry, focusing on image quality enhancement, dose estimation, and organ segmentation. An in-depth review of the literature was conducted using targeted keyword searches in Google Scholar, PubMed, and Scopus. Selected studies were evaluated for their methodologies and outcomes. Traditional dosimetry techniques, such as organ-level and voxel-based methods, are discussed. Deep learning (DL) models based on U-Net, generative adversarial networks, and hybrid transformer networks for image synthesis and generation, image quality improvement, organ segmentation, and radiation dose estimation are reviewed. While DL shows great potential for enhancing dosimetry accuracy and efficiency, challenges remain, including accurate dose estimation from theranostic pairs, the scarcity of imaging data, and the modeling of radionuclide decay chains. In addition, the optimization and standardization of DL and AI models are crucial for ensuring clinical reliability and should be given high priority to support their effective integration into clinical practice.

Keywords: Deep learning, CNN, Generative adversarial network, Personalized dosimetry, Image-based dosimetry, Voxel-based dose prediction, Medical image synthesis

Introduction

Cancer is characterized by the uncontrolled growth and spread of abnormal cells and remains a major global health challenge. The number of new cancer cases worldwide is projected to increase by 75%, rising from 20 million in 2022 to approximately 35 million by 2050 [1]. Many types of cancer are treated with radiation therapy, which employs ionizing radiation to destroy cancerous cells while sparing healthy tissue. Radiation therapy encompasses external beam radiation therapy and radiopharmaceutical therapy (RPT). External beam radiation therapy is one of the most widely used cancer treatments and is commonly delivered using a linear accelerator, which directs high-energy X-rays, protons, or electron beams at the cancer site.

Advances in medicine have also highlighted the importance of radiopharmaceuticals in cancer diagnosis and treatment. Radiopharmaceuticals are specialized compounds labeled with radioactive isotopes that target specific cellular structures and deliver cytotoxic radiation to tumor cells, enabling the precise targeting of cancer cells. Cancer treatment using radiopharmaceuticals is often called RPT or targeted radionuclide therapy (TRT). Another advantage of radiopharmaceuticals is that they can be imaged directly, or via surrogate imaging, using techniques such as positron emission tomography (PET) or single-photon emission computed tomography (SPECT) [2, 3]. These images can be used to evaluate therapeutic outcomes, enable image-based dosimetry, and diagnose disease. The therapeutic use of radiopharmaceuticals began in the 1940s [4], when radioactive iodine was first employed for the treatment of thyroid disease. Many radiopharmaceuticals labeled with different radioisotopes have since been developed to target various types of cancer with high therapeutic efficacy [5–13].

Several therapeutic radiopharmaceuticals have been approved by the Food and Drug Administration, including 131I-MIBG (AZEDRA®) for the treatment of pheochromocytoma or paraganglioma [14], [177Lu]Lu-DOTA-TATE (LUTATHERA®) for somatostatin receptor-positive gastroenteropancreatic neuroendocrine tumors [15], [177Lu]Lu-PSMA-617 (Pluvicto™) for prostate-specific membrane antigen (PSMA)-positive metastatic castration-resistant prostate cancer [16], and [223Ra]RaCl2 (Xofigo®) for castration-resistant prostate cancer with symptomatic bone metastases [17]. Delivering the highest possible absorbed dose to the tumor while minimizing the dose to healthy tissues is essential for achieving the highest therapeutic efficacy. Accordingly, estimating the absorbed dose is essential for establishing dose-response relationships in clinical RPT, thus enabling the evaluation of therapeutic outcomes and associated toxicities [18]. Improved dosimetry techniques are therefore required to accurately assess the risks and benefits of radiopharmaceuticals in clinical applications. This paper provides an in-depth review of conventional dosimetry approaches and the evolving role of AI in enhancing RPT dosimetry across various tasks.

Dosimetry Techniques and Tools

Internal dosimetry is commonly performed using the methodology introduced by the Medical Internal Radiation Dose (MIRD) committee [19]. Initially, this approach was based on organ-level dose estimation using the organ time-integrated activity and the organ S-value, defined as the mean absorbed dose to the target organ per unit activity in the source region. OLINDA/EXM was the first and most widely used dosimetry software based on the MIRD approach. However, organ-level dosimetry calculates the mean absorbed dose assuming a uniform activity distribution of the radiopharmaceutical in the target organ or tissue. Additionally, organ-level dosimetry does not consider patient-specific anatomy. Instead, it relies on standardized anatomical models, such as reference phantoms, to estimate the radiation dose. These models represent average anatomical features and organ geometries, which may differ from the anatomy of individual patients [20]. Several software packages, such as AIDE [21], DCAL [22], IDAC-DOSE [23], MIRDcalc [24], and MIRDOSE [25], have been developed to calculate doses at the organ level.
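The organ-level MIRD schema described above reduces to a weighted sum: the dose to a target region is the time-integrated activity in each source region multiplied by the corresponding S-value. A minimal sketch follows; the activities and S-values are purely illustrative placeholders, not reference-phantom data.

```python
# Organ-level MIRD schema: D(target) = sum over sources of Ã_s * S(target <- s).
# All numbers below are illustrative placeholders, not reference-phantom data.

def organ_dose(time_integrated_activity, s_values, target):
    """Mean absorbed dose to `target` in Gy.

    time_integrated_activity: dict source -> Ã in MBq*s
    s_values: dict (target, source) -> S in Gy/(MBq*s)
    """
    return sum(
        a_tilde * s_values[(target, source)]
        for source, a_tilde in time_integrated_activity.items()
    )

a_tilde = {"liver": 5.0e5, "kidneys": 1.2e5}      # MBq*s (placeholders)
s = {("liver", "liver"): 2.5e-5,                  # Gy/(MBq*s) (placeholders)
     ("liver", "kidneys"): 1.0e-6}
dose_liver = organ_dose(a_tilde, s, "liver")      # 12.5 + 0.12 = 12.62 Gy
```

Software such as OLINDA/EXM performs this same sum using tabulated phantom S-values for every source-target pair.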

To address these challenges, voxel-based dosimetry has been introduced as a more precise method. Voxel-based dosimetry is reportedly superior to the mean-absorbed-dose approach for establishing an absorbed dose-effect relationship in TRT [26]. This approach allows the absorbed dose to be calculated on a voxel-by-voxel basis, providing three-dimensional dose maps. By combining quantitative imaging data from modalities such as SPECT or PET with anatomical information from computed tomography (CT) or magnetic resonance imaging (MRI), voxel-based dosimetry accounts for heterogeneities in radiopharmaceutical uptake and patient-specific anatomical variations. This technique includes several methods, such as the direct Monte Carlo (MC) method, dose kernel convolution (DKC), and the voxel S-value (VSV) method.

The direct MC method incorporates heterogeneities in both the activity distribution and tissue properties. This technique simulates particle transport using MC engines and calculates the energy deposition at the voxel level. The direct MC method is considered the gold standard for accurate dose estimation. Several MC-based software packages, such as VIDA [27], RAYDOSE [28], SIMDOSE [29], 3D-RAD [30], and OEDIPE [31], have been developed. However, the direct MC method is not used in routine clinical practice because of its long computation times and high computational demands.

Other methods, such as the dose point kernel (DPK) [32] and VSV [33] approaches, have been proposed to speed up computation and address the nonuniform distribution of radiopharmaceuticals in organs. The DPK method represents the radial absorbed-dose distribution around an isotropic point source in a homogeneous aqueous medium. Graves et al. provided dose-point kernels for 2,174 radionuclides [34]. Alternatively, VSV extends the organ-based MIRD schema to the voxel level by defining sources and targets as voxels, with voxel-specific S-values precomputed for various isotopes and voxel sizes. Unlike DPK, VSV does not require computationally intensive conversion from spherical to Cartesian coordinates but relies on tabulated S-values for each radionuclide. However, both methods assume uniform tissue density, with the convolution typically accelerated using fast Fourier or Hartley transforms [35, 36]. Table 1 summarizes the advantages and disadvantages of each dosimetry method. QDOSE (ABX-CRO advanced pharmaceutical services Forschungsgesellschaft mbH) is an advanced molecular imaging dosimetry software package designed for internal radiation dose assessment at both the voxel level (voxel S kernels) and the organ level (integrated with IDAC-Dose 2.1). The software supports AI-based semi- and fully automated organ segmentation, single time-point dosimetry, and one-click hybrid dosimetry, offering precision and efficiency in dosimetric analysis. Another software package for voxel-level dosimetry is MIM (MIM Software, Inc., Cleveland, OH, USA), which supports single time-point and voxel-level dosimetry. VoxelDose [37], BigDose [38], and RMDP [39] are additional voxel-based dosimetry packages.
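The VSV method amounts to a 3D convolution of the time-integrated activity map with the voxel S-value kernel, which is exactly where the Fourier-transform acceleration mentioned above applies. A minimal numpy sketch with a made-up two-entry kernel (real kernels are tabulated per radionuclide and voxel size):

```python
import numpy as np

def vsv_dose_map(activity_map, s_kernel):
    """Voxel-level dose map: convolve time-integrated activity with a VSV kernel.

    Zero-padded FFT convolution with a 'same'-sized output, mirroring the
    Fourier-transform acceleration described in the text.
    activity_map: 3D time-integrated activity (MBq*s per voxel)
    s_kernel:     3D voxel S-value kernel (Gy per MBq*s), odd edge lengths
    """
    full = [n + k - 1 for n, k in zip(activity_map.shape, s_kernel.shape)]
    spec = np.fft.rfftn(activity_map, full) * np.fft.rfftn(s_kernel, full)
    conv = np.fft.irfftn(spec, full)
    start = [(k - 1) // 2 for k in s_kernel.shape]
    crop = tuple(slice(s0, s0 + n) for s0, n in zip(start, activity_map.shape))
    return conv[crop]

# A single hot voxel spreads dose to itself and one neighbor via the kernel.
activity = np.zeros((5, 5, 5)); activity[2, 2, 2] = 100.0
kernel = np.zeros((3, 3, 3))
kernel[1, 1, 1] = 1e-4      # self-dose S-value (made-up)
kernel[1, 1, 0] = 2e-5      # cross-dose to an adjacent voxel (made-up)
dose = vsv_dose_map(activity, kernel)
```

Because the kernel is shift-invariant, this formulation implicitly assumes the uniform tissue density noted in the text; the direct MC method drops that assumption at the cost of far longer run times.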

Table 1.

Comparison between organ-based dosimetry, voxel-based dosimetry, and the direct MC method

| | Organ-based dosimetry | Voxel-based dosimetry | Direct MC method |
|---|---|---|---|
| Resolution | Average dose per organ | Spatial dose distribution within the organ | Spatial dose distribution within the organ |
| Computational complexity | Relatively simple; uses mean kinetic parameters | Less than the direct MC method | Computationally intensive; requires voxel-by-voxel modeling |
| Accuracy | Limited by the assumption of uniform activity distribution | Limited by the assumption of homogeneous medium density | High (gold standard) |
| Activity distribution | Homogeneous | Heterogeneous | Heterogeneous |
| Medium density | Homogeneous | Homogeneous (water) | Heterogeneous |
| Time requirements | Rapid | Less time-consuming than the direct MC method | Slow |

MC, Monte Carlo

AI Roles for Enhancing Dosimetry in RPT: Image Synthesis, Generation, and Quality Improvement

In radiopharmaceutical dosimetry, AI, particularly DL, has emerged as a transformative tool for improving the accuracy and efficiency of dosimetric assessments. Beyond direct dose estimation and the generation of high-resolution dose maps, DL models have been effectively employed in critical preprocessing steps, such as medical image enhancement, image generation, and organ segmentation, which play a crucial role in improving the precision of dose calculations. Using DL techniques to synthesize medical images from a different modality (e.g., PET-CT or MRI-CT) [40–47] or from the same modality has the potential to significantly reduce the risk of additional radiation exposure. Furthermore, DL has shown an excellent ability to enhance image quality via noise reduction and super-resolution modeling, generating clinically clear images with improved quality and detail. The following sections describe the use of several DL architectures [48–52] for PET image generation/synthesis and quality enhancement.

PET Image Synthesis/Generation Using AI

In RPT, PET images are commonly used for image-based dosimetry because of the high resolution and sensitivity of PET imaging. However, a primary limitation is the use of short-lived radionuclides, which restricts the imaging window and demands rapid coordination between radiotracer production and patient administration [53]. Therefore, using AI to generate new or delayed PET images (Fig. 1) from early acquired images has the potential to overcome this limitation and reduce radiation exposure. Jyoti et al. proposed a generative adversarial network (GAN)-based model for synthesizing brain PET images representing three stages of Alzheimer’s disease: normal control (NC), mild cognitive impairment, and Alzheimer’s disease (AD) [54]. The model was trained separately for each stage using real PET images and noise samples. The synthetic images were evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), achieving a PSNR of 32.83 and an SSIM of 77.48, indicating strong visual similarity to real PET scans. Qualitative assessments and a classification task using a 2D convolutional neural network (2D-CNN) showed that models trained with the synthetic data achieved improved diagnostic performance. Wang et al. investigated the feasibility of using deep learning to generate synthetic PET images of synaptic density (¹¹C-UCB-J) and amyloid deposition (¹¹C-PiB) from more widely available ¹⁸F-FDG PET scans [55]. Using a 3D U-Net architecture, four models were trained to predict the standardized uptake value ratio (SUVR) and distribution volume ratio (DVR) of ¹¹C-UCB-J from different inputs derived from ¹⁸F-FDG scans (SUVR and Ki ratio). The models were trained and tested on data from 54 participants (21 cognitively normal [CN], 33 with AD). Evaluation using the normalized root mean square error (NRMSE), SSIM, and Pearson’s correlation showed satisfactory results, with mean region-of-interest biases mostly within ± 2% across the AD and CN groups. Although the ¹¹C-PiB SUVR prediction was more challenging, the study demonstrated that incorporating additional diagnostic or clinical information could help reduce bias to < 5% in most regions. Overall, this work supports the potential of deep learning to synthesize high-value PET modalities from routine tracers.

Fig. 1.

Fig. 1

Conceptional graphic of delayed time PET image synthesis

While most studies focus on generating new synthetic PET images for new patients or on transforming images from a different modality, Kim et al. developed a DL-based method to synthesize delayed [¹⁸F]FDG PET images from early scans, aiming to reduce the need for multiple time-point acquisitions [56]. Eighteen healthy participants underwent PET imaging at 5, 14, 31, and 52 min post-injection. A paired image-to-image (I2I) translation framework based on a GAN with a U-Net-based generator and a PatchGAN discriminator was used. The model showed a high capability for preserving image details, achieving a PSNR of 53.29 dB and a Fréchet inception distance (FID) score of 21.36 when predicting 52-min images from 14-min scans. The translation accuracy improved with the time gap between early and delayed scans. Furthermore, the organ-level mean standardized uptake values of the generated images showed good agreement with the ground truth for the muscle, heart, liver, and spleen (errors < 0.2), although the model underperformed for the kidneys and bladder owing to individual variability and dynamic uptake. Regardless, this study demonstrates the feasibility of using GAN-based I2I translation to synthesize delayed-time PET images. Another study by Kim et al. synthesized delayed PET images of [⁶⁴Cu]DOTA-rituximab from early-time scans [57]. PET scans from six patients with lymphoma were acquired at 1, 24, and 48 h post-injection. A paired I2I translation framework based on the Pix2Pix GAN architecture, using a U-Net-like generator with residual blocks and a PatchGAN discriminator, was used. The model achieved SSIM and FID values of 0.8094 and 62.93, respectively, for PET images generated at 24 h post-injection, and 0.7714 and 74.35 for synthesized PET images at 48 h post-injection. Furthermore, organ-level dosimetry using the synthesized images was in good agreement with that from the real acquired images. The study demonstrates that GAN-based image synthesis can potentially reduce the need for multiple imaging time points in radioimmunoconjugate (RIC) dosimetry, streamlining the clinical workflow and improving patient convenience.

PET Image Denoising/Reconstruction Using AI

Nuclear medicine imaging modalities such as PET can suffer from high noise levels and limited spatial resolution, which hinder image clarity and reduce the accuracy of image-based dosimetry. These noisy, low-quality images are often attributable to the use of radiotracers at low doses. However, increasing the injected dose raises concerns about increased radiation exposure for patients. Techniques such as noise reduction and super-resolution modeling can generate clinically relevant images with improved quality and detail without compromising patient safety. Post-reconstruction image-enhancement techniques are commonly employed to improve both image quality and quantitative accuracy [58–61]. Recently, AI has emerged as a powerful tool offering various innovative approaches for denoising (Fig. 2), deblurring, and partial-volume correction in PET imaging.

Fig. 2.

Fig. 2

Conceptional graphic of image denoising

Several efforts have been made to develop DL models for nuclear medicine image enhancement [62–65]. Kaplan et al. proposed a DL model to denoise low-dose PET images and estimate their full-dose equivalents by incorporating image-specific features into the loss function [66]. The model comprised two networks: an estimator and an adversarial discriminator. The estimator network included four convolutional and four deconvolutional layers with skip connections to extract and refine features from the input. The discriminator network, with one convolutional layer and one fully connected layer, classified image patches as real or generated. The estimator was first pretrained using a loss combining the mean squared error (MSE) with texture- and edge-preserving features, after which the adversarial network further enhanced the texture and structure of the generated images. This approach yielded image quality comparable to that of full-dose images. Cui et al. proposed an unsupervised DL model that performs PET image denoising using prior information from the same patient [67]. No training pairs are required because the high-quality prior image is used as the input and the noisy PET image is treated as the training label. The network learns to restore the noisy image based on the intrinsic structure of the prior image. A simulation study using the BrainWeb phantom and clinical PET/CT and PET/MR datasets demonstrated that the proposed method outperformed other methods in terms of contrast-to-noise ratio (CNR) improvement. Pan et al. introduced the PET consistency model (PET-CM), a diffusion-based method designed to generate high-quality full-dose PET images from low-dose inputs [68]. PET-CM uses a two-step process: Gaussian noise is added in the forward diffusion step, followed by denoising via a PET shifted-window vision transformer network in the reverse diffusion step. The network learns a consistency function that effectively denoises low-dose images into clean full-dose images. PET-CM outperformed state-of-the-art methods in terms of image quality and efficiency, requiring 12 times less computation time than previous models.
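The exact feature terms used by Kaplan et al. are specific to their implementation; as a hedged illustration, a loss of the kind described, MSE plus an edge-preserving penalty on image gradients, can be sketched as follows (the weighting is arbitrary):

```python
import numpy as np

def denoising_loss(pred, target, edge_weight=0.1):
    """MSE plus an edge-preservation term on image gradients (illustrative)."""
    mse = np.mean((pred - target) ** 2)
    gx_p, gy_p = np.gradient(pred)        # finite-difference gradients
    gx_t, gy_t = np.gradient(target)      # approximate edge content
    edge = np.mean((gx_p - gx_t) ** 2 + (gy_p - gy_t) ** 2)
    return mse + edge_weight * edge

img = np.zeros((8, 8)); img[:, 4:] = 1.0      # image with one sharp edge
loss_same = denoising_loss(img, img)          # 0.0 for identical images
loss_noisy = denoising_loss(img + 0.1, img)   # positive for a biased estimate
```

The gradient term penalizes blurred edges specifically, which plain MSE tends to tolerate; in the original work this pretraining loss is then followed by adversarial training.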

Traditionally, image reconstruction relies on analytical methods such as filtered backprojection (FBP) [69] or iterative algorithms such as ordered-subset expectation maximization (OSEM) [70]. These methods use projection data acquired at different angles to estimate the 3D distribution of radiotracer activity. Although FBP is computationally efficient, it struggles with noise and artifacts. In contrast, OSEM incorporates corrections for attenuation, scatter, and resolution, producing higher-quality images at the cost of more computation time. DL models have also been developed for reconstruction, including automated transform by manifold approximation (AUTOMAP), which uses fully connected layers and CNNs [71]; DeepPET, based on a fully convolutional network (FCN) architecture [72]; and a CNN-based iterative PET image reconstruction model that exploits existing inter-patient information [73]. Vashistha et al. recently proposed a method that reconstructs high-quality parametric PET images by integrating kinetic modeling and denoising directly into the image reconstruction pipeline [74]. This approach combines deconvolution, U-Net-based denoising, and a 1D deconvolution long short-term memory (1D-LSTM) model trained on simulated tissue time-activity curves to estimate pixel-wise kinetic parameters, with refinement achieved using an unsupervised deep image prior. Tested on a brain phantom and five ¹⁸F-FDG PET scans, the method demonstrated high accuracy (Pearson r = 0.91–0.93), low error (< 0.0004), and improved CNRs compared with conventional methods.
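The OSEM algorithm referenced above iterates a multiplicative expectation-maximization update; with a single subset it reduces to MLEM. A toy numpy sketch on a random system matrix (not a realistic scanner geometry):

```python
import numpy as np

def mlem(system_matrix, projections, n_iter=500):
    """Maximum-likelihood EM reconstruction (OSEM with a single subset).

    Multiplicative update: x <- x * A^T(y / Ax) / A^T 1
    system_matrix: (n_bins, n_voxels) forward projector A
    projections:   (n_bins,) measured counts y
    """
    a = system_matrix
    sens = a.sum(axis=0)                  # sensitivity image A^T 1
    x = np.ones(a.shape[1])               # uniform, strictly positive start
    for _ in range(n_iter):
        expected = a @ x                  # forward projection of the estimate
        ratio = np.divide(projections, expected,
                          out=np.zeros_like(expected), where=expected > 0)
        x *= (a.T @ ratio) / sens         # multiplicative EM update
    return x

rng = np.random.default_rng(0)
a = rng.random((20, 4))                   # toy system matrix
x_true = np.array([1.0, 2.0, 0.5, 3.0])   # "true" voxel activities
y = a @ x_true                            # noiseless projection data
x_hat = mlem(a, y)
```

OSEM accelerates this by cycling the same update over subsets of the projection bins; the multiplicative form also guarantees nonnegative activity estimates.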

Organ Segmentation

The segmentation of organs and tumors is a key step in dose calculation [75]. DL is transforming organ segmentation by enabling automated pixelwise labeling with high accuracy. For example, FCNs capture spatial and contextual features, making them effective at delineating complex anatomical structures and adapting to varying shapes and sizes [52]. In this section, we discuss advancements in DL techniques for organ segmentation.

Vavekanand et al. introduced NeuroDNet, a CNN-based model for brain tumor detection that combines 3D and 2D convolutional layers to process volumetric data from MRI scans [76]. The hybrid architecture captures both local and global tumor characteristics using multiscale convolutional layers. Trained on a large and diverse dataset of over 2,000 MR images, NeuroDNet achieved 94% validation and 92% testing accuracy. Tian et al. introduced another tumor segmentation model, an improved U-Net incorporating the GSConv module and the Efficient Channel Attention (ECA) mechanism [77]. GSConv enhances spatial feature extraction via grouped and displaced convolutions, whereas ECA recalibrates the feature channels to focus on important details. The model was trained on 500 image-mask pairs. These improvements enabled the model to capture multiscale features and produce more accurate tumor segmentations, particularly at tumor boundaries. The experimental results showed that the enhanced U-Net outperformed the traditional U-Net. Liao et al. introduced AbsegNet, a DL model designed to segment 16 organs [78]. Using three datasets totaling 544 CT scans from different centers, AbsegNet was trained, validated, and clinically assessed. The robustness of the model was enhanced via data augmentation and knowledge distillation. It achieved high Dice similarity coefficients (DSCs) (mean: 86.73–88.04%) and outperformed established models such as SwinUNETR [79] and UNet [80]. Clinical evaluations showed that no revisions were required for several organs (e.g., the liver, kidneys, and spleen) and only minimal revisions for others, with just 15% of patients needing major revisions of the colon and small-bowel contours. Peng et al. improved the U-Net architecture by introducing a batch normalization layer, a residual squeeze-and-excitation layer, and a unique organ-specific loss function for DL training [81]. A total of 260 and 50 CT images were used as the training and test sets, respectively. Validation against manual delineations and STAPLE contours showed strong performance, with an average DSC of 83.75%. OrganNet achieved high accuracy in segmenting large-volume organs (84.97–95.00%) and competitive results for small-volume organs (55.46–91.56%), often surpassing manual delineation accuracy. Another study by Amjad et al. developed and evaluated five DL-based autosegmentation models for 42 organs across three major tumor sites: the male pelvis (MP), head and neck (HN), and abdomen (ABD) [82]. The models, based on a modified 3D U-Net architecture, were trained using both general multi-institutional and custom single-institution datasets and used adaptive spatial resolution for small or narrow organs and pseudo-scan extension for short CT scans. The custom models generally outperformed the general models, with DSCs ranging from 0.8 to 0.98 for 74% of the organs. Notably, the models showed improved accuracy for small or complex organs such as the eye lens and optic nerves. Auto-segmentation reduced the time required for manual contouring by up to 88% for MP, 80% for HN, and 65% for ABD, demonstrating its clinical utility in radiation treatment planning. Several other models have been developed for medical image segmentation [83–88].
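The Dice similarity coefficient used to score all of these segmentation models is simple to compute from binary masks, 2|A∩B| / (|A|+|B|):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4-voxel mask
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 shared
score = dice(a, b)                                     # 2*4 / (4+6) = 0.8
```

A score of 1 denotes perfect overlap and 0 denotes none; the same formula extends to 3D organ masks without change.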

AI for Image-Based Dosimetry

DL has become an important tool in image-based dosimetry, increasing both the speed and accuracy of the process. Voxel-based dose maps can be estimated or predicted using DL models, such as 3D CNNs, which analyze medical images and predict the radiation distribution across voxels. In addition to direct dose estimation or prediction, accurate organ segmentation is crucial for dosimetry because it directly affects the precision of radiation dose calculations. DL models such as U-Net are used to automatically identify and outline organs and tumors in medical images with high precision and in less time than manual delineation. In this section, we discuss recent DL models for dose prediction or estimation and image segmentation.

Xue et al. addressed the challenge of predicting voxel-wise absorbed-dose maps for RPT from pre-therapy PET imaging, emphasizing intra-organ heterogeneity [89]. Data from 23 patients with metastatic castration-resistant prostate cancer treated with 177Lu-PSMA I&T were analyzed, revealing only moderate correlations between PET imaging and actual dose maps due to pharmacokinetic variability. A 3D RPT DoseGAN with a generator and discriminator was trained on 3D image patches (32 × 32 × 32) to overcome the limited number of training samples. The DL-based approach significantly outperformed traditional organ dose-guided projection methods, achieving a lower voxel-wise normalized root mean squared error (0.79% vs. 1.11%, p < 0.05) and superior dose prediction accuracy (e.g., R² = 0.92 for the kidneys). Using SPECT/CT images, Mansouri et al. introduced a hybrid transformer-based DL model for voxel-level dosimetry in 177Lu-DOTATATE therapy designed to enhance the accuracy of dose-map predictions [90]. The model used a multiple-VSV (MSV) approach and a modified UNet Transformer architecture trained on co-registered CT images from 22 patients undergoing therapy. The model was tasked with predicting the difference between the MC and MSV dose maps; once the difference was predicted, the MC dose maps were reconstructed by adding the output to the MSV maps. The model was trained using fivefold cross-validation with 2D axial slices of CT images. The results revealed that the DL model outperformed both the MSV and SSV approaches, achieving a voxel-level relative absolute error (RAE) of 5.28 ± 1.32. The model also exhibited high gamma analysis pass rates (99.0 ± 1.2%) and greatly reduced computational time, processing a single-bed SPECT scan in only 3 s (compared with 2 days for MC). Kassar et al. developed a physiologically based pharmacokinetic (PBPK)-adapted deep learning approach for the pre-therapy prediction of voxel-wise dosimetry in RPT using synthetic patient data simulating 68Ga-PSMA-11 and 177Lu-PSMA-I&T images [91]. The model was guided to produce physiologically plausible dose maps by integrating PBPK modeling into a conditional GAN. A 3D U-Net generator and a custom discriminator were trained with PBPK-informed loss functions, significantly improving dose prediction accuracy in critical organs, such as the kidneys, liver, spleen, and salivary glands, compared with purely data-driven methods. This hybrid framework demonstrates the potential of combining mechanistic models with AI to enable personalized voxel-level dose planning from a single pre-therapy scan. Additional DL image-based dose estimation models are presented in Table 2, and further DL models for dose prediction, estimation, and dose accuracy enhancement have been proposed [92–109].
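The residual strategy of Mansouri et al., predicting the MC−MSV difference and adding it back to the fast MSV estimate, can be expressed in a few lines; the `residual_model` below is a hypothetical stand-in for their trained UNet Transformer:

```python
import numpy as np

def reconstruct_mc_dose(msv_dose, ct_slice, residual_model):
    """Recover an MC-quality dose map from a fast MSV estimate.

    residual_model: callable predicting the voxelwise (MC - MSV) residual
    from the CT slice and the MSV dose; here it stands in for the network.
    """
    residual = residual_model(ct_slice, msv_dose)
    return msv_dose + residual

# Hypothetical stand-in: pretend the network learned a uniform 5% correction.
toy_model = lambda ct, msv: 0.05 * msv
msv = np.full((4, 4), 2.0)
mc_like = reconstruct_mc_dose(msv, np.zeros((4, 4)), toy_model)
```

Learning the residual rather than the full dose map is attractive because the network only has to model the (smaller, smoother) discrepancy between the fast and gold-standard methods.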

Table 2.

DL-based models for dose estimation and prediction

| Author | Model/architecture | Data | Achievement |
|---|---|---|---|
| Karimipourfard et al. [110] | Pix2Pix GAN | [¹⁸F]FDG PET/CT images | Strong agreement with MC reference doses |
| Akhavanallaf et al. [111] | DNN (ResNet) | CT images for density maps and MC-generated voxelwise S-values | Predicted S-value kernels with 4.5% MRAE compared with MC (voxel level: 2.6%; organ level: 5.1% MRAE) |
| Mao et al. [112] | CNN (Dw, w CNN, FiLM, and 3D U-Net) | CT images | Dose distributions closely matched MC simulations (0.73% difference for prostate CTV D90; 1.1% for rectum D2cc) |
| Götz et al. [113] | DNN-EMD (hybrid U-Net/EMD) | CT images, dose maps estimated per the MIRD protocol, and measured SPECT distributions of ¹⁷⁷Lu | Superior performance of the hybrid DNN-EMD method compared with MIRD DVK dose calculation |
| Xing et al. [114] | Hierarchically dense U-Net | CT, AAA, and AXB dose maps | Boosted AAA doses matched the AXB doses more closely, with an average gamma passing rate (1 mm/1%) of 97.6% (± 2.4%) vs. 87.8% (± 9.0%) for the original AAA doses |
| Lee et al. [115] | CNN (U-Net) | CT and PET images | Voxel dose-rate errors of 2.54% ± 2.09%, outperforming the VSV kernel method (9.97% ± 1.79%) and OLINDA/EXM dosimetry software (34.22%) |
| Liu et al. [116] | Deep residual network combined with a deconvolution network (U-ResNet-D) | CT images representing anatomical structures and their corresponding dose-related information for each slice | Bias reduced to within −2.0% to 2.3%; prediction error between 1.5% and 4.5% |

AAA, anisotropic analytic algorithm; AXB, Acuros XB; CNN, convolutional neural network; CT, computed tomography; CTV, clinical target volume; DNN-EMD, deep neural network with empirical mode decomposition; DVK, dose voxel kernel; FDG, fluorodeoxyglucose; MC, Monte Carlo; MIRD, Medical Internal Radiation Dose; MRAE, mean relative absolute error; PET, positron emission tomography

Discussion

This review examines the role of AI in different tasks contributing to dosimetry. Regarding image synthesis and generation, all reviewed models showed promising results. However, the statistical analysis of the image quality metric values varied for each model, potentially owing to differences in dataset size, preprocessing techniques, target region, network architectures, or loss functions. The produced image should be as similar as possible to the labeled or reference image. The images produced were evaluated based on the SSIM, PSNR, MSE, and mean absolute error (MAE), among other metrics. SSIM measures the similarity between two images by considering the luminance, contrast, and structural information, with higher values indicating better resemblance. The PSNR quantifies the ratio between the maximum possible power of a signal and the power of noise, expressed in decibels, where higher values correspond to superior image quality. The MSE calculates the average squared difference between the predicted and actual values, with lower values reflecting better image fidelity. Similarly, the MAE evaluates the average magnitude of the absolute differences between the predicted and actual values, thereby providing a clear measure of the overall prediction accuracy.
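The MSE, MAE, and PSNR metrics discussed above are straightforward to compute; SSIM is more involved and is usually taken from a library such as scikit-image. A numpy sketch of the simpler three:

```python
import numpy as np

def mse(a, b):
    """Mean squared difference; lower is better."""
    return float(np.mean((a - b) ** 2))

def mae(a, b):
    """Mean absolute difference; lower is better."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better.

    data_range is the maximum possible pixel value (e.g., 1.0 or 255).
    """
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

ref = np.zeros((8, 8))
noisy = ref + 0.1              # a constant 0.1 error everywhere
err_mse = mse(ref, noisy)      # ≈ 0.01
err_mae = mae(ref, noisy)      # ≈ 0.1
err_psnr = psnr(ref, noisy)    # ≈ 20 dB
```

Because PSNR is a deterministic function of MSE and the data range, differences in reported PSNR across the reviewed studies partly reflect differences in normalization rather than model quality alone.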

The potential of DL to generate clinically viable, high-quality images should also be considered. Schaefferkoetter et al. [117] used GANs to generate synthetic CT images from whole-body MR data in PET/MR systems, focusing on ensuring high-quality, anatomically accurate images for PET attenuation correction. The results indicated that the synthetic CT images outperformed traditional MR-based methods in quantifying tracer uptake, and the synthetic CT approach showed an improved correlation with CT-derived mu maps. Galapon et al. used MC dropout-based uncertainty maps to evaluate sCT quality for adaptive proton therapy [118], achieving high correlations (r = 0.92) between the uncertainty maps and Hounsfield unit (HU) errors. Alvarez Andres et al. [119] investigated 3D CNNs for pseudo-CT (pCT) generation and revealed that larger training datasets enhanced pCT quality, whereas MRI preprocessing and sequence variations had minimal impact on dosimetric performance. Kazemifar et al. aimed to improve the dosimetric accuracy of synthetic CT images generated using a DL approach [120]. A GAN was used with a mutual information loss to address MRI-CT misalignment, achieving an MAE of 47.2 ± 11.0 HU and a DSC of 80% ± 6% for bone structures, demonstrating the feasibility of MRI-only workflows.

The accuracy of DL for dose prediction was investigated by Götz et al. [113], who focused on kidney dosimetry in patients undergoing Lu-177-DOTATOC or PSMA therapy [121]; their neural network predicted dose voxel kernels from density kernels derived using MC simulations. The results showed that the DL approach predicted absorbed radiation doses more accurately than the traditional method, with no additional computational effort, and proved highly effective for estimating radiation doses in clinical practice.
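The kernel-based step that such networks learn to accelerate can be sketched as a convolution of a cumulated-activity map with a dose voxel kernel; a minimal FFT-based sketch, with a purely illustrative isotropic kernel rather than a real MC-derived one:

```python
import numpy as np

def absorbed_dose_map(cumulated_activity, dose_kernel):
    # Circular FFT convolution of a cumulated-activity map (decays per
    # voxel) with a dose voxel kernel (dose per decay). Both arrays share
    # the same shape; the kernel is centered, so ifftshift moves its
    # center to the array origin before the FFT.
    a = np.fft.fftn(cumulated_activity)
    k = np.fft.fftn(np.fft.ifftshift(dose_kernel))
    return np.real(np.fft.ifftn(a * k))

shape = (16, 16, 16)
center = tuple(s // 2 for s in shape)

# Toy kernel decaying with distance from the source voxel (illustrative).
grid = np.indices(shape)
r = np.sqrt(sum((g - c) ** 2 for g, c in zip(grid, center)))
kernel = np.exp(-r)

# Point source: all cumulated activity in a single voxel.
activity = np.zeros(shape)
activity[center] = 1.0
dose = absorbed_dose_map(activity, kernel)
```

For a unit point source at the center, the resulting dose map reproduces the kernel itself, which is a convenient sanity check for any kernel-convolution implementation.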

The accuracy, generalization, and robustness of DL models for organ segmentation are very important for dosimetry and should be prioritized. Koo et al. conducted a comparative evaluation of DL for autosegmentation [122]. This study involved training a prototype algorithm built on a combined U-Net and V-Net architecture. The model demonstrated superior accuracy across multiple evaluation metrics, including the DSC, Hausdorff distance (HD), and a voxel-penalty metric. When trained with gold-standard data, the prototype achieved a DSC of 0.81, surpassing commercial models trained on the same dataset (0.74) and external data (0.66). In addition, 93% of the auto-segmented structures from the prototype model were clinically useful. However, the study also noted that segmentation results can vary depending on the training data and institutional differences. Therefore, the standardization and optimization of DL models for clinical use are critical.
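The two headline metrics can be sketched for binary masks with plain NumPy; the brute-force Hausdorff distance below is O(|A|·|B|) and only suitable for small masks (optimized implementations exist, e.g., in SciPy):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks (1.0 = identical).
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    # Symmetric Hausdorff distance in voxel units: the largest distance
    # from a point in one mask to its nearest point in the other.
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two 3x3 squares offset by one voxel in each direction (illustrative).
a = np.zeros((8, 8)); a[2:5, 2:5] = 1
b = np.zeros((8, 8)); b[3:6, 3:6] = 1
```

For these masks the overlap is four voxels, giving a DSC of 8/18 ≈ 0.44, and the corner voxels set the Hausdorff distance to √2 voxels.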

Although DL models have shown promising results regarding dose estimation, organ segmentation, image synthesis, and image generation, several challenges remain. First, therapeutic radiopharmaceuticals are administered at low injection doses owing to their high cytotoxicity, which may cause side effects in normal tissues. Therefore, using DL to estimate the therapeutic radiation dose from theranostic pair data, that is, data extracted from imaging a surrogate radionuclide (surrogate imaging) labeled to the same targeting vehicle or sharing similar chemical properties with the therapeutic radionuclide, can improve treatment planning by providing dose assessments while minimizing toxicity risks.

Second, owing to the low injection dose and the burden on patients, medical imaging data are often unavailable. Consequently, DL models designed to estimate human doses or generate human medical images using animal data and extrapolation methods [123] could bridge this gap by leveraging cross-species data translation.
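One simple, widely cited family of such extrapolation methods scales organ residence times by relative organ mass. The sketch below illustrates that idea only; the mouse and human organ and body masses are example values, not reference data:

```python
def scale_residence_time(tau_animal_h, organ_m_a, body_m_a, organ_m_h, body_m_h):
    # Relative-organ-mass extrapolation: scale an animal organ residence
    # time (hours) by the ratio of human to animal organ-to-body-mass
    # fractions. One of several published approaches, shown for
    # illustration; masses are in consistent units (here grams).
    return tau_animal_h * (organ_m_h / body_m_h) / (organ_m_a / body_m_a)

# Example values (illustrative only): mouse kidney 0.3 g in a 25 g body,
# human kidney 310 g in a 73,000 g body.
tau_h = scale_residence_time(1.0, 0.3, 25.0, 310.0, 73000.0)
```

Because the human kidney is a smaller fraction of body mass than the mouse kidney, the extrapolated residence time here is roughly a third of the animal value.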

Third, image-based dosimetry typically requires multiple post-injection scans at two to three or more time points. However, acquiring multiple scans remains challenging for many radiopharmaceuticals. Therefore, DL models that generate later or earlier scans from a single-timepoint scan, such as the models developed by Kim et al. [56, 57], should receive greater attention and be extended to a wider range of radiopharmaceuticals. This would enable more efficient imaging protocols, reducing the scanning burden while maintaining dosimetric accuracy.
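The dosimetric quantity that multi-timepoint imaging ultimately feeds is the time-integrated activity. A minimal sketch, assuming a mono-exponential washout fitted by log-linear least squares (real kinetics may need multi-exponential models):

```python
import numpy as np

def fit_monoexp(t_h, activity):
    # Log-linear least-squares fit of A(t) = A0 * exp(-lam * t):
    # fit a line to log(activity) versus time.
    slope, intercept = np.polyfit(t_h, np.log(activity), 1)
    return np.exp(intercept), -slope  # A0 and effective decay constant (1/h)

def time_integrated_activity(a0, lam):
    # Analytic integral of A0 * exp(-lam * t) over [0, inf).
    return a0 / lam

# Synthetic three-timepoint measurements (illustrative values):
# A0 = 1e6 Bq, effective decay constant 0.01 per hour.
t = np.array([2.0, 24.0, 96.0])  # hours post-injection
a = 1e6 * np.exp(-0.01 * t)      # measured organ activity, Bq
a0, lam = fit_monoexp(t, a)
tia = time_integrated_activity(a0, lam)  # Bq*h
```

On exactly mono-exponential data the fit recovers A0 and the effective decay constant, giving a time-integrated activity of A0/λ; single-timepoint DL methods aim to supply the missing curve shape when only one of these measurements exists.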

Fourth, some radionuclides used in RPT, such as 213Bi, 177Lu, 211At, and 212Pb, exhibit short decay chains. However, others such as 225Ac and 223Ra produce multiple daughter radionuclides that contribute to the total radiation dose and may exhibit different biodistribution patterns by breaking the bond with the parent radionuclide, owing to their recoil energy [124–126]. Therefore, a DL model that accounts for daughter radionuclide dose contributions and their in vivo behaviors is required to enhance dosimetric modeling and precise radiation dose calculations.
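For a two-member chain, the daughter contribution follows the Bateman equation. The sketch below assumes no daughter activity at t = 0 and that the daughter stays co-located with the parent, i.e., it ignores the recoil-driven redistribution discussed above; the decay constants are illustrative, not a specific nuclide pair:

```python
import numpy as np

def daughter_activity(t, lam_p, lam_d, a_p0):
    # Bateman solution for daughter activity A_d(t), given parent
    # activity A_p(0) = a_p0, parent/daughter decay constants lam_p
    # and lam_d, and no daughter at t = 0.
    return (a_p0 * lam_d / (lam_d - lam_p)
            * (np.exp(-lam_p * t) - np.exp(-lam_d * t)))

# Illustrative constants: slow parent, fast daughter (per hour, Bq).
lam_p, lam_d, a_p0 = 0.01, 1.0, 1.0e6
```

With a short-lived daughter, the activity ratio A_d/A_p approaches λ_d/(λ_d − λ_p) at late times (transient equilibrium), which is why daughter terms cannot simply be dropped from chain dosimetry.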

Fifth, in the absence of 3D images, AI models that transform 2D into 3D images are helpful alternatives for obtaining 3D dose maps. Almeida et al. proposed a CNN-based DL model to reconstruct 3D medical image volumes from a single 2D X-ray image [127]. The model demonstrated promising performance, achieving mean SSIM scores of 0.77 ± 0.05 for knee and 0.78 ± 0.06 for hip reconstructions compared with ground-truth CT volumes. Other models have also been proposed [128, 129]; however, other 2D modalities, such as planar images, have yet to be addressed using AI models.

Finally, the standardization, optimization, and rigorous evaluation of DL models are crucial for their reliable application in dosimetry. Variability in training datasets, preprocessing techniques, network architectures, and loss functions can significantly affect the performance and generalizability of DL models. Without standardization, inconsistencies across institutions or datasets may lead to suboptimal or inaccurate results that undermine clinical utility. Optimization ensures that DL models are tuned to achieve the best possible outcomes in terms of accuracy, efficiency, and robustness. This is particularly important in clinical applications, such as organ segmentation and dose prediction, where precision directly affects patient safety and treatment efficacy. Establishing standardized protocols and benchmarks for training, testing, and validation fosters consistency, facilitates cross-institutional collaboration, and accelerates the clinical adoption of DL models.

Conclusion

The transformative role of DL in advancing personalized dosimetry for cancer treatment was reviewed. By addressing the limitations of traditional organ-level and voxel-based dosimetry methods, DL has demonstrated remarkable potential for enhancing image quality and organ segmentation, which directly contributes to the accuracy of estimated dose distributions. DL architectures, including U-Net, GANs, and transformer-based models, significantly improve dosimetry precision and efficiency. However, key challenges persist, including the need for accurate dose prediction from theranostic pairs, addressing missing imaging data, and modeling complex radionuclide decay chains. Additionally, the standardization and optimization of DL models are essential to ensure accuracy, efficiency, and clinical reliability. Overcoming these challenges is critical for their reliable and effective integration into clinical workflows.

Acknowledgements

Not applicable.

Funding

This work was supported by the Korea Institute of Radiological and Medical Sciences (KIRAMS) (50461-2025, 50554-2025).

Data Availability

Data sharing is not applicable to this review article as no data were generated or analyzed.

Declarations

Ethics Approval and Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Competing interests

Sang-Keun Woo declares no competing interests.

Declaration of Generative AI in Scientific Writing

The author did not use generative AI in writing this manuscript.

Preprint Sharing

Not applicable.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Bailey DL, Maisey MN, Townsend DW, Valk PE. Positron emission tomography. London: Springer; 2005. [Google Scholar]
  • 2.Bray F, Laversanne M, Sung H, Ferlay J, Siegel RL, Soerjomataram I, Jemal A. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. Cancer J Clin. 2024;74(3):229–63. [DOI] [PubMed] [Google Scholar]
  • 3.Hutton BF. The origins of SPECT and SPECT/CT. Eur J Nucl Med Mol Imaging. 2014;41(Suppl 1):S3–16. 10.1007/s00259-013-2606-5. [DOI] [PubMed] [Google Scholar]
  • 4.Zukotynski K, Jadvar H, Capala J, Fahey F. Targeted radionuclide therapy: practical applications and future prospects [Suppl]: biomarkers and their essential role in the development of personalised therapies (a). Biomark Cancer. 2016;8(Suppl 2):BIC–S31804. 10.4137/BIC.S31804. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Ueno NT, Tahara RK, Fujii T, Reuben JM, Gao H, Saigal B, et al. Phase II study of radium-223 dichloride combined with hormonal therapy for hormone receptor‐positive, bone‐dominant metastatic breast cancer. Cancer Med. 2020;9:1025–32. 10.1002/cam4.2780. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Feuerecker B, Tauber R, Knorr K, Heck M, Beheshti A, Seidl C, et al. Activity and adverse events of actinium-225-PSMA-617 in advanced metastatic castration-resistant prostate cancer after failure of lutetium-177-PSMA. Eur Urol. 2021;79:343–50. 10.1016/j.eururo.2020.11.013. [DOI] [PubMed] [Google Scholar]
  • 7.Ballal S, Yadav MP, Bal C, Sahoo RK, Tripathi M. Broadening horizons with 225Ac-DOTATATE targeted alpha therapy for gastroenteropancreatic neuroendocrine tumour patients stable or refractory to 177Lu-DOTATATE PRRT: first clinical experience on the efficacy and safety. Eur J Nucl Med Mol Imaging. 2020;47:934–46. 10.1007/s00259-019-04567-2. [DOI] [PubMed] [Google Scholar]
  • 8.Hofman MS, Violet J, Hicks RJ, Ferdinandus J, Thang SP, Akhurst T, et al. [177Lu]-PSMA-617 radionuclide treatment in patients with metastatic castration-resistant prostate cancer (LuPSMA trial): a single-centre, single-arm, phase 2 study. Lancet Oncol. 2018;19:825–33. 10.1016/S1470-2045(18)30198-0. [DOI] [PubMed] [Google Scholar]
  • 9.Strosberg J, El-Haddad G, Wolin E, Hendifar A, Yao J, Chasen B, et al. Phase 3 trial of 177Lu-dotatate for midgut neuroendocrine tumors. N Engl J Med. 2017;376:125–35. 10.1056/NEJMoa1607427. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Allen BJ, Raja C, Rizvi S, Li Y, Tsui W, Graham P, et al. Intralesional targeted alpha therapy for metastatic melanoma. Cancer Biol Ther. 2005;4:1318–24. 10.4161/cbt.4.12.2251. [DOI] [PubMed] [Google Scholar]
  • 11.Witzig TE, Gordon LI, Cabanillas F, Czuczman MS, Emmanouilides C, Joyce R, et al. Randomized controlled trial of yttrium-90–labeled ibritumomab tiuxetan radioimmunotherapy versus rituximab immunotherapy for patients with relapsed or refractory low-grade, follicular, or transformed B-cell non-Hodgkin’s lymphoma. J Clin Oncol. 2002;20:2453–63. 10.1200/JCO.2002.11.076. [DOI] [PubMed] [Google Scholar]
  • 12.Jurcic JG, Larson SM, Sgouros G, McDevitt MR, Finn RD, Divgi CR, et al. Targeted α particle immunotherapy for myeloid leukemia. Blood. 2002;100:1233–9. [PubMed] [Google Scholar]
  • 13.Alexander EK, Larsen PR. High dose 131I therapy for the treatment of hyperthyroidism caused by graves’ disease. J Clin Endocrinol Metab. 2002;87:1073–7. 10.1210/jcem.87.3.8333. [DOI] [PubMed] [Google Scholar]
  • 14.Ilanchezhian M, Jha A, Pacak K, Del Rivero J. Emerging treatments for advanced/metastatic pheochromocytoma and paraganglioma. Curr Treat Options Oncol. 2020;21:1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Hennrich U, Kopka K. Lutathera®: the first FDA-and EMA-approved radiopharmaceutical for peptide receptor radionuclide therapy. Pharmaceuticals (Basel). 2019;12(3):114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Hennrich U, Eder M. [177Lu] Lu-PSMA-617 (pluvictoTM): the first FDA-approved radiotherapeutical for treatment of prostate cancer. Pharmaceuticals. 2022;15(10):1292. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Höllriegl V, Petoussi-Henss N, Hürkamp K, Ocampo Ramos JC, Li WB. Radiopharmacokinetic modelling and radiation dose assessment of 223Ra used for treatment of metastatic castration-resistant prostate cancer. EJNMMI Phys. 2021;8(1):44. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Strigari L, Konijnenberg M, Chiesa C, Bardies M, Du Y, Gleisner KS, et al. The evidence base for the use of internal dosimetry in the clinical practice of molecular radiotherapy. Eur J Nucl Med Mol Imaging. 2014;41:1976–88. 10.1007/s00259-014-2824-5. [DOI] [PubMed] [Google Scholar]
  • 19.Loevinger R, Budinger TF, Watson EE. MIRD primer for absorbed dose calculations. MIRD committee; 1988.
  • 20.Stabin MG, Sparks RB, Crowe E. OLINDA/EXM: the second-generation personal computer software for internal dose assessment in nuclear medicine. J Nucl Med. 2005;46:1023–7. [PubMed] [Google Scholar]
  • 21.Bertelli L, Melo DR, Lipsztein J, Cruz-Suarez R. AIDE: internal dosimetry software. Radiat Prot Dosimetry. 2008;130:358–67. 10.1093/rpd/ncn059. [DOI] [PubMed] [Google Scholar]
  • 22.Eckerman KF, Leggett RW, Cristy M, Nelson CB, Ryman JC, Sjoreen AL et al. UT-battelle LL. User’s guide to the DCAL system. Oak Ridge National Laboratory/TM-2001/190; 2006.
  • 23.Andersson M, Johansson L, Eckerman K, Mattsson S. IDAC-Dose 2.1, an internal dosimetry program for diagnostic nuclear medicine based on the ICRP adult reference voxel phantoms. EJNMMI Res. 2017;7:1–0. 10.1186/s13550-017-0339-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Kesner AL, Carter LM, Ramos JC, Lafontaine D, Olguin EA, Brown JL, et al. MIRD pamphlet 28. MIRDcalc—a software tool for medical internal radiation dosimetry. J Nucl Med. 2023;64(7):1117–24. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Stabin MG. MIRDOSE: personal computer software for internal dose assessment in nuclear medicine. J Nucl Med. 1996;37:538–46. [PubMed] [Google Scholar]
  • 26.Chiesa C, Bardiès M, Zaidi H. Voxel-based dosimetry is superior to mean absorbed dose approach for establishing dose-effect relationship in targeted radionuclide therapy. Med Phys. 2019;46:5403–6. 10.1002/mp.13851. [DOI] [PubMed] [Google Scholar]
  • 27.Kost SD, Dewaraja YK, Abramson RG, Stabin MG. VIDA: a voxel-based dosimetry method for targeted radionuclide therapy using Geant4. Cancer Biother Radiopharm. 2015;30:16–26. 10.1089/cbr.2014.1713. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Marcatili S, Pettinato C, Daniels S, Lewis G, Edwards P, Fanti S, et al. Development and validation of RAYDOSE: a Geant4-based application for molecular radiotherapy. Phys Med Biol. 2013;58(8):2491–508. 10.1088/0031-9155/58/8/2491. [DOI] [PubMed] [Google Scholar]
  • 29.Ljungberg M, Sjögreen K, Liu X, Frey E, Dewaraja Y, Strand SE. A 3-dimensional absorbed dose calculation method based on quantitative SPECT for radionuclide therapy: evaluation for 131I using Monte Carlo simulation. J Nucl Med. 2002;43:1101–9. [PMC free article] [PubMed] [Google Scholar]
  • 30.Prideaux AR, Song H, Hobbs RF, He B, Frey EC, Ladenson PW, et al. Three-dimensional radiobiologic dosimetry: application of radiobiologic modeling to patient-specific 3-dimensional imaging–based internal dosimetry. J Nucl Med. 2007;48:1008–16. 10.2967/jnumed.106.038000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Chiavassa S, Bardiès M, Guiraud-Vitaux F, Bruel D, Jourdain JR, Franck D, et al. OEDIPE: a personalized dosimetric tool associating voxel-based models with MCNPX. Cancer Biother Radiopharm. 2005;20:325–32. 10.1089/cbr.2005.20.325. [DOI] [PubMed] [Google Scholar]
  • 32.Berger MJ. Distribution of absorbed dose around point sources of electrons and beta particles in water and other media. Washington, DC: National Bureau of Standards; 1971. [PubMed] [Google Scholar]
  • 33.Bolch WE, Bouchet LG, Robertson JS, Wessels BW, Siegel JA, Howell RW, et al. MIRD pamphlet 17: the dosimetry of nonuniform activity distributions–radionuclide S values at the voxel level. Medical internal radiation dose committee. J Nucl Med. 1999;40:S11–36. [PubMed] [Google Scholar]
  • 34.Graves SA, Flynn RT, Hyer DE. Dose point kernels for 2,174 radionuclides. Med Phys. 2019;46:5284–93. 10.1002/mp.13789. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Nussbaumer HJ. The fast Fourier transform. In: Fast Fourier transform and convolution algorithms. Berlin, Heidelberg: Springer; 1982. pp. 80–111. 10.1007/978-3-642-81897-4_4. [Google Scholar]
  • 36.Bracewell RN. The fast Hartley transform. Proc IEEE. 1984;72:1010–8. 10.1109/PROC.1984.12968. [Google Scholar]
  • 37.Gardin I, Bouchet LG, Assié K, Caron J, Lisbona A, Ferrer L, et al. Voxeldose: a computer program for 3-D dose calculation in therapeutic nuclear medicine. Cancer Biother Radiopharm. 2003;18:109–15. 10.1089/108497803321269386. [DOI] [PubMed] [Google Scholar]
  • 38.Li T, Zhu L, Lu Z, Song N, Lin KH, Mok GS. BIGDOSE: software for 3D personalized targeted radionuclide therapy dosimetry. Quant Imaging Med Surg. 2020;10:160–70. 10.21037/qims.2019.10.09. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Guy MJ, Flux GD, Papavasileiou P, Flower MA, Ott RJ. RMDP: a dedicated package for 131I SPECT quantification, registration and patient-specific dosimetry. Cancer Biother Radiopharm. 2003;18:61–9. 10.1089/108497803321269331. [DOI] [PubMed] [Google Scholar]
  • 40.Dong X, Wang T, Lei Y, Higgins K, Liu T, Curran WJ, et al. Synthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging. Phys Med Biol. 2019;64: 215016. 10.1088/1361-6560/ab4eb7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Shiri I, Ghafarian P, Geramifar P, Leung KH, Ghelichoghli M, Oveisi M, et al. Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC). Eur Radiol. 2019;29:6867–79. 10.1007/s00330-019-06229-1. [DOI] [PubMed] [Google Scholar]
  • 42.Li Q, Zhu X, Zou S, Zhang N, Liu X, Yang Y, et al. Eliminating CT radiation for clinical PET examination using deep learning. Eur J Radiol. 2022;154: 110422. 10.1016/j.ejrad.2022.110422. [DOI] [PubMed] [Google Scholar]
  • 43.Yang H, Sun J, Carass A, Zhao C, Lee J, Xu Z et al. Unpaired brain MR-to-CT synthesis using a structure-constrained CycleGAN. In: Deep learning in medical image analysis and multimodal learning for clinical decision support fourth international workshop. Proceedings 4, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI. Springer International Publishing; 2018, Granada, Spain, September 20, 2018, pp. 174–82.
  • 44.Gong K, Yang J, Kim K, El Fakhri G, Seo Y, Li Q. Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images. Phys Med Biol. 2018;63: 125011. 10.1088/1361-6560/aac763. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Ahangari S, Beck Olin A, Kinggård Federspiel M, Jakoby B, Andersen TL, Hansen AE, et al. A deep learning-based whole-body solution for PET/MRI attenuation correction. EJNMMI Phys. 2022;9:55. 10.1186/s40658-022-00486-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, et al. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys. 2019;46:3565–81. 10.1002/mp.13617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Eshraghi Boroojeni P, Chen Y, Commean PK, Eldeniz C, Skolnick GB, Merrill C, et al. Deep-learning synthesized pseudo‐ct for Mr high‐resolution pediatric cranial bone imaging (mr‐hipcb). Magn Reson Med. 2022;88:2285–97. 10.1002/mrm.29356. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S et al. Generative adversarial Nets. Adv Neural Inf Process Syst. 2014;27.
  • 49.Chu C, Zhmoginov A, Sandler M. Cyclegan, a master of steganography. arXiv preprint arXiv:1712.02950. 2017.
  • 50.Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017, pp 5967–76. 10.1109/CVPR.2017.632
  • 51.Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Med Image Comput Comput-Assist Interv (MICCAI). 2016; pp 424–32.
  • 52.Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015; pp 3431–40. [DOI] [PubMed]
  • 53.Alqahtani FF. SPECT/CT and PET/CT, related radiopharmaceuticals, and areas of application and comparison. Saudi Pharm J. 2023;31:312–28. 10.1016/j.jsps.2022.12.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Islam J, Zhang Y. GAN-based synthetic brain PET image generation. Brain Inf. 2020;7:3. 10.1186/s40708-020-00104-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Wang R, Liu H, Toyonaga T, Shi L, Wu J, Onofrey JA, et al. Generation of synthetic PET images of synaptic density and amyloid from 18F-FDG images using deep learning. Med Phys. 2021;48:5115–29. 10.1002/mp.15073. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Kim K, Byun BH, Lim I, Lim SM, Woo SK. Deep learning-based delayed PET image synthesis from corresponding early scanned PET for dosimetry uptake estimation. Diagnostics (Basel). 2023. 10.3390/diagnostics13193045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Kim K, Yang J, Almaslamani M, Kang CS, Lee I, Lim I, et al. Deep learning-based organ-wise dosimetry of 64Cu-DOTA-rituximab through only one scanning. Sci Rep. 2025;15:5627. 10.1038/s41598-025-88498-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Buades A, Coll B, Morel JM. A non-local algorithm for image denoising. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol 2. IEEE; 2005, pp 60–5. 10.1109/CVPR.2005.38
  • 59.Kumari S, Singh ES. A review of image denoising techniques. Int J Adv Res Comput Sci. 2014;5.
  • 60.Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. 1986;679–98. 10.1109/TPAMI.1986.4767851. [PubMed] [Google Scholar]
  • 61.Zhang L, Wu X. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Trans Image Process. 2006;15:2226–38. 10.1109/tip.2006.877407. [DOI] [PubMed] [Google Scholar]
  • 62.Xie H, Gan W, Zhou B, Chen MK, Kulon M, Boustani A et al. Dose-aware diffusion model for 3D low-dose PET: multi-institutional validation with reader study and real low-dose data. arXiv preprint arXiv:2405.12996. 2024.
  • 63.Chen KT, Gong E, de Carvalho Macruz FB, Xu J, Boumis A, Khalighi M, et al. Ultra–low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiology. 2019;290:649–56. 10.1148/radiol.2018180940. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Wang Y, Yu B, Wang L, Zu C, Lalush DS, Lin W, et al. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. Neuroimage. 2018;174:550–62. 10.1016/j.neuroimage.2018.03.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Khader F, Müller-Franzes G, Tayebi Arasteh S, Han T, Haarburger C, Schulze-Hagen M, et al. Denoising diffusion probabilistic models for 3D medical image generation. Sci Rep. 2023;13:7303. 10.1038/s41598-023-34341-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Kaplan S, Zhu YM. Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study. J Digit Imaging. 2019;32:773–8. 10.1007/s10278-018-0150-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Cui J, Gong K, Guo N, Wu C, Meng X, Kim K, et al. Pet image denoising using unsupervised deep learning. Eur J Nucl Med Mol Imaging. 2019;46:2780–9. 10.1007/s00259-019-04468-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Pan S, Abouei E, Peng J, Qian J, Wynne JF, Wang T, et al. Full-dose whole‐body PET synthesis from low‐dose PET using high‐efficiency denoising diffusion probabilistic model: PET consistency model. Med Phys. 2024;51:5468–78. 10.1002/mp.17068. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Zeng GL. Image reconstruction—a tutorial. Comput Med Imaging Graph. 2001;25:97–103. 10.1016/s0895-6111(00)00059-8. [DOI] [PubMed] [Google Scholar]
  • 70.Hudson HM, Larkin RS. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imaging. 1994;13:601–9. 10.1109/42.363108. [DOI] [PubMed] [Google Scholar]
  • 71.Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature. 2018;555:487–92. 10.1038/nature25988. [DOI] [PubMed] [Google Scholar]
  • 72.Zhou B, Miao T, Mirian N, Chen X, Xie H, Feng Z, et al. Federated transfer learning for low-dose PET denoising: a pilot study with simulated heterogeneous data. IEEE Trans Radiat Plasma Med Sci. 2023;7:284–95. 10.1109/trpms.2022.3194408. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Gong K, Guan J, Kim K, Zhang X, Yang J, Seo Y, et al. Iterative PET image reconstruction using convolutional neural network representation. IEEE Trans Med Imaging. 2018;38:675–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Vashistha R, Moradi H, Hammond A, O’Brien K, Rominger A, Sari H, et al. ParaPET: non-invasive deep learning method for direct parametric brain PET reconstruction using histoimages. EJNMMI Res. 2024;14:10. 10.1186/s13550-024-01072-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Bazalova M, Graves EE. The importance of tissue segmentation for dose calculations for kilovoltage radiation therapy. Med Phys. 2011;38:3039–49. 10.1118/1.3589138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Vavekanand R. A deep learning approach for medical image segmentation integrating magnetic resonance imaging to enhance brain tumor recognition. Available at SSRN 4827019; 2024.
  • 77.Tian Q, Wang Z, Cui X. Improved Unet brain tumor image segmentation based on GSConv module and ECA attention mechanism. Appl Comput Eng. 2024. 10.54254/2755-2721/88/20241740. [Google Scholar]
  • 78.Liao W, Luo X, He Y, Dong Y, Li C, Li K, et al. Comprehensive evaluation of a deep learning model for automatic organs-at-risk segmentation on heterogeneous computed tomography images for abdominal radiation therapy. Int J Radiat Oncol Biol Phys. 2023;117:994–1006. 10.1016/j.ijrobp.2023.05.034. [DOI] [PubMed] [Google Scholar]
  • 79.Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Med Image Comput Comput Assist Interv MICCAI. 2015; 234–41.
  • 80.Tang Y, Yang D, Li W, Roth HR, Landman B, Xu D et al. Self-supervised pre-training of swin transformers for 3D medical image analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022, pp 20698–708. 10.1109/CVPR52688.2022.02007.
  • 81.Peng Y, Liu Y, Shen G, Chen Z, Chen M, Miao J, et al. Improved accuracy of auto-segmentation of organs at risk in radiotherapy planning for nasopharyngeal carcinoma based on fully convolutional neural network deep learning. Oral Oncol. 2023;136: 106261. 10.1016/j.oraloncology.2022.106261. [DOI] [PubMed] [Google Scholar]
  • 82.Amjad A, Xu J, Thill D, Lawton C, Hall W, Awan MJ, et al. General and custom deep learning autosegmentation models for organs in head and neck, abdomen, and male pelvis. Med Phys. 2022;49:1686–700. 10.1002/mp.15507. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Johnston N, De Rycke J, Lievens Y, Van Eijkeren M, Aelterman J, Vandersmissen E, et al. Dose-volume-based evaluation of convolutional neural network-based auto-segmentation of thoracic organs at risk. Phys Imaging Radiat Oncol. 2022;23:109–17. 10.1016/j.phro.2022.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Kawula M, Purice D, Li M, Vivar G, Ahmadi SA, Parodi K, et al. Dosimetric impact of deep learning-based CT auto-segmentation on radiation therapy treatment planning for prostate cancer. Radiat Oncol. 2022;17:21. 10.1186/s13014-022-01985-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.Zhong Y, Yang Y, Fang Y, Wang J, Hu W. A preliminary experience of implementing deep-learning based auto-segmentation in head and neck cancer: a study on real-world clinical cases. Front Oncol. 2021;11: 638197. 10.3389/fonc.2021.638197. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Kim N, Chun J, Chang JS, Lee CG, Keum KC, Kim JS. Feasibility of continual deep learning-based segmentation for personalized adaptive radiation therapy in head and neck area. Cancers (Basel). 2021;13:702. 10.3390/cancers13040702. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 87.Peng Z, Fang X, Yan P, Shan H, Liu T, Pei X, et al. A method of rapid quantification of patient-specific organ doses for CT using deep‐learning‐based multi‐organ segmentation and GPU‐accelerated monte carlo dose computing. Med Phys. 2020;47:2526–36. 10.1002/mp.14131. [DOI] [PubMed] [Google Scholar]
  • 88.Wong J, Fong A, McVicar N, Smith S, Giambattista J, Wells D, et al. Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning. Radiother Oncol. 2020;144:152–8. 10.1016/j.radonc.2019.10.019. [DOI] [PubMed] [Google Scholar]
  • 89.Xue S, Gafita A, Zhao Y, Mercolli L, Cheng F, Rauscher I, et al. Pre-therapy PET-based voxel-wise dosimetry prediction by characterizing intra-organ heterogeneity in PSMA-directed radiopharmaceutical theranostics. Eur J Nucl Med Mol Imaging. 2024;51:3450–60. 10.1007/s00259-024-06737-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Mansouri Z, Salimi Y, Akhavanallaf A, Shiri I, Teixeira EP, Hou X, et al. Deep transformer-based personalized dosimetry from SPECT/CT images: a hybrid approach for [177Lu] Lu-DOTATATE radiopharmaceutical therapy. Eur J Nucl Med Mol Imaging. 2024;51(6):1516–29. 10.1007/s00259-024-06618-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Kassar M, Drobnjakovic M, Birindelli G, Xue S, Gafita A, Wendler T, et al. PBPK-adapted deep learning for pre-therapy prediction of voxel-wise dosimetry: in-silico proof-of-concept. IEEE Trans Radiat Plasma Med Sci. 2024;8:646–54. 10.1109/TRPMS.2024.3381849. [Google Scholar]
92. Barateau A, De Crevoisier R, Largent A, Mylona E, Perichon N, Castelli J, et al. Comparison of CBCT-based dose calculation methods in head and neck cancer radiotherapy: from Hounsfield unit to density calibration curve to deep learning. Med Phys. 2020;47:4683–93. 10.1002/mp.14387.
93. Wu C, Nguyen D, Xing Y, Montero AB, Schuemann J, Shang H, et al. Improving proton dose calculation accuracy by using deep learning. Mach Learn Sci Technol. 2021;2:015017.
94. Wang Y, Piao Z, Gu H, Chen M, Zhang D, Zhu J. Deep learning-based prediction of radiation therapy dose distributions in nasopharyngeal carcinomas: a preliminary study incorporating multiple features including images, structures, and dosimetry. Technol Cancer Res Treat. 2024;23:15330338241256594. 10.1177/15330338241256594.
95. Jia Y, Li Z, Akhavanallaf A, Fessler JA, Dewaraja YK. 90Y SPECT scatter estimation and voxel dosimetry in radioembolization using a unified deep learning framework. EJNMMI Phys. 2023;10:82. 10.1186/s40658-023-00598-9.
96. Salimi Y, Akhavanallaf A, Mansouri Z, Shiri I, Zaidi H. Real-time, acquisition parameter-free voxel-wise patient-specific Monte Carlo dose reconstruction in whole-body CT scanning using deep neural networks. Eur Radiol. 2023;33:9411–24. 10.1007/s00330-023-09839-y.
97. Pastor-Serrano O, Perkó Z. Millisecond speed deep learning based proton dose calculation with Monte Carlo accuracy. Phys Med Biol. 2022;67:105006. 10.1088/1361-6560/ac692e.
98. Kim KM, Suh M, Selvam HS, Tan TH, Cheon GJ, Kang KW, et al. Enhancing voxel-based dosimetry accuracy with an unsupervised deep learning approach for hybrid medical image registration. Med Phys. 2024;51:6432–44. 10.1002/mp.17129.
99. Zhang L, Holmes JM, Liu Z, Vora SA, Sio TT, Vargas CE, et al. Beam mask and sliding window-facilitated deep learning-based accurate and efficient dose prediction for pencil beam scanning proton therapy. Med Phys. 2024;51:1484–98. 10.1002/mp.16758.
100. Maier J, Klein L, Eulig E, Sawall S, Kachelrieß M. Real-time estimation of patient-specific dose distributions for medical CT using the deep dose estimation. Med Phys. 2022;49:2259–69. 10.1002/mp.15488.
101. Zhang X, Hu Z, Zhang G, Zhuang Y, Wang Y, Peng H. Dose calculation in proton therapy using a discovery cross-domain generative adversarial network (DiscoGAN). Med Phys. 2021;48:2646–60. 10.1002/mp.14781.
102. Li Z, Fessler JA, Mikell JK, Wilderman SJ, Dewaraja YK. DblurDoseNet: a deep residual learning network for voxel radionuclide dosimetry compensating for single-photon emission computerized tomography imaging resolution. Med Phys. 2022;49:1216–30. 10.1002/mp.15397.
103. Jia M, Wu Y, Yang Y, Wang L, Chuang C, Han B, et al. Deep learning-enabled EPID-based 3D dosimetry for dose verification of step-and-shoot radiotherapy. Med Phys. 2021;48:6810–9. 10.1002/mp.15218.
104. Yang J, Zhao Y, Zhang F, Liao M, Yang X. Deep learning architecture with transformer and semantic field alignment for voxel-level dose prediction on brain tumors. Med Phys. 2023;50:1149–61. 10.1002/mp.16122.
105. Bai T, Wang B, Nguyen D, Jiang S. Deep dose plug in: towards real-time Monte Carlo dose calculation through a deep learning-based denoising algorithm. Mach Learn Sci Technol. 2021;2:025033.
106. Kontaxis C, Bol GH, Lagendijk JJ, Raaymakers BW. DeepDose: towards a fast dose calculation engine for radiation therapy using deep learning. Phys Med Biol. 2020;65:075013. 10.1088/1361-6560/ab7630.
107. Chen X, Men K, Li Y, Yi J, Dai J. A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning. Med Phys. 2019;46:56–64. 10.1002/mp.13262.
108. Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys. 2019;46:370–81. 10.1002/mp.13271.
109. Chen X, Men K, Zhu J, Yang B, Li M, Liu Z, et al. DVHnet: a deep learning-based prediction of patient-specific dose volume histograms for radiotherapy planning. Med Phys. 2021;48:2705–13. 10.1002/mp.14758.
110. Karimipourfard M, Sina S, Mahani H, Karimkhani S, Sadeghi M, Alavi M, et al. A Taguchi-optimized Pix2pix generative adversarial network for internal dosimetry in 18F-FDG PET/CT. Radiat Phys Chem. 2024;218:111532. 10.1016/j.radphyschem.2024.111532.
111. Akhavanallaf A, Shiri I, Arabi H, Zaidi H. Whole-body voxel-based internal dosimetry using deep learning. Eur J Nucl Med Mol Imaging. 2021;48:670–82. 10.1007/s00259-020-05013-4.
112. Mao X, Pineau J, Keyes R, Enger SA. RapidBrachyDL: rapid radiation dose calculations in brachytherapy via deep learning. Int J Radiat Oncol Biol Phys. 2020;108:802–12. 10.1016/j.ijrobp.2020.04.045.
113. Götz TI, Schmidkonz C, Chen S, Al-Baddai S, Kuwert T, Lang EW. A deep learning approach to radiation dose estimation. Phys Med Biol. 2020;65:035007. 10.1088/1361-6560/ab65dc.
114. Xing Y, Zhang Y, Nguyen D, Lin MH, Lu W, Jiang S. Boosting radiotherapy dose calculation accuracy with deep learning. J Appl Clin Med Phys. 2020;21:149–59. 10.1002/acm2.12937.
115. Lee MS, Hwang D, Kim JH, Lee JS. Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci Rep. 2019;9:10308. 10.1038/s41598-019-46620-y.
116. Liu Z, Fan J, Li M, Yan H, Hu Z, Huang P, et al. A deep learning method for prediction of three-dimensional dose distribution of helical tomotherapy. Med Phys. 2019;46:1972–83. 10.1002/mp.13490.
117. Schaefferkoetter J, Yan J, Moon S, Chan R, Ortega C, Metser U, et al. Deep learning for whole-body medical image generation. Eur J Nucl Med Mol Imaging. 2021;48:3817–26. 10.1007/s00259-021-05413-0.
118. Galapon AV Jr., Thummerer A, Langendijk JA, Wagenaar D, Both S. Feasibility of Monte Carlo dropout-based uncertainty maps to evaluate deep learning-based synthetic CTs for adaptive proton therapy. Med Phys. 2024;51:2499–509. 10.1002/mp.16838.
119. Alvarez Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo computed tomography generated from deep learning for MRI-only radiation therapy treatment planning. Int J Radiat Oncol Biol Phys. 2020;108:813–23. 10.1016/j.ijrobp.2020.05.006.
120. Kazemifar S, McGuire S, Timmerman R, Wardak Z, Nguyen D, Park Y, et al. MRI-only brain radiotherapy: assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach. Radiother Oncol. 2019;136:56–63. 10.1016/j.radonc.2019.03.026.
121. Götz TI, Lang EW, Schmidkonz C, Kuwert T, Ludwig B. Dose voxel kernel prediction with neural networks for radiation dose estimation. Z Med Phys. 2021;31:23–36. 10.1016/j.zemedi.2020.09.005.
122. Koo J, Caudell JJ, Latifi K, Jordan P, Shen S, Adamson PM, et al. Comparative evaluation of a prototype deep learning algorithm for autosegmentation of normal tissues in head and neck radiotherapy. Radiother Oncol. 2022;174:52–8. 10.1016/j.radonc.2022.06.024.
123. Cicone F, Viertl D, Denoël T, Stabin MG, Prior JO, Gnesin S. Comparison of absorbed dose extrapolation methods for mouse-to-human translation of radiolabelled macromolecules. EJNMMI Res. 2022;12:21. 10.1186/s13550-022-00893-z.
124. Sakmár M, Kozempel J, Kučka J, Janská T, Štíbr M, Vlk M, et al. Biodistribution study of 211Pb progeny released from intravenously applied 223Ra labelled TiO2 nanoparticles in a mouse model. Nucl Med Biol. 2024;130–131:108890. 10.1016/j.nucmedbio.2024.108890.
125. Kruijff RM, Raavé R, Kip A, Molkenboer-Kuenen J, Morgenstern A, Bruchertseifer F, et al. The in vivo fate of 225Ac daughter nuclides using polymersomes as a model carrier. Sci Rep. 2019;9:11671. 10.1038/s41598-019-48298-8.
126. De Kruijff RM, Drost K, Thijssen L, Morgenstern A, Bruchertseifer F, Lathouwers D, et al. Improved 225Ac daughter retention in InPO4 containing polymersomes. Appl Radiat Isot. 2017;128:183–9. 10.1016/j.apradiso.2017.07.030.
127. Almeida DF, Astudillo P, Vandermeulen D. Three-dimensional image volumes from two-dimensional digitally reconstructed radiographs: a deep learning approach in lower limb CT scans. Med Phys. 2021;48:2448–57. 10.1002/mp.14835.
128. Shen L, Zhao W, Capaldi D, Pauly J, Xing L. A geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction. Comput Biol Med. 2022;148:105710. 10.1016/j.compbiomed.2022.105710.
129. Yim D, Lee S, Nam K, Lee D, Kim DK, Kim JS. Deep learning-based image reconstruction for few-view computed tomography. Nucl Instrum Methods Phys Res Sect A. 2021;1011:165594. 10.1016/j.nima.2021.165594.

Associated Data


Data Availability Statement

Data sharing is not applicable to this review article, as no new data were created or analyzed.


Articles from Nuclear Medicine and Molecular Imaging are provided here courtesy of Springer
