Abstract
Aims:
The study aims to develop a modified Pix2Pix convolutional neural network framework to enhance the quality of cone-beam computed tomography (CBCT) images. It also seeks to reduce the Hounsfield unit (HU) variations, making CBCT images closely resemble the internal anatomy as depicted in computed tomography (CT) images.
Materials and Methods:
We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground truth CT images from 40 patients were used for training and 10 for testing on 7484 slices of 512 × 512 pixels with the Pix2Pix model. The sCT images were evaluated against ground truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and dice similarity coefficient.
Results:
The results demonstrate significant improvements in image quality when comparing sCT images to CBCT, with SSIM increasing from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropping from 77.37 ± 20.05 to 18.81 ± 7.22 (p < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and 228.52 ± 53.76 to 82.30 ± 23.81, respectively (p < 0.0001).
Conclusion:
The sCT images show reduced noise and artifacts, closely matching CT in HU values, and demonstrate a high degree of similarity to CT images, highlighting the potential of deep learning to significantly improve CBCT image quality for radiosurgery applications.
Keywords: Computed tomography, cone-beam computed tomography, deep learning, gamma knife, Pix2Pix model, synthetic computed tomography
INTRODUCTION
Gamma Knife treatments are a highly specialized and precise form of stereotactic radiosurgery designed for treating brain tumors and other neurological disorders.[1,2] The Leksell Gamma Knife Icon system comprises 192 Co-60 sources arranged in eight sectors, each generating narrow collimated beams that converge on a single point in space, known as the isocenter. Unlike traditional surgery, Gamma Knife treatments do not involve any incisions; instead, they use focused beams of radiation to target and treat small to medium-sized brain tumors with submillimeter accuracy.[3] As such, imaging modalities with excellent soft-tissue discrimination, such as magnetic resonance imaging (MRI), play a vital role in Gamma Knife treatments to precisely define the target and identify organs at risk to protect surrounding healthy tissue.[4,5]
More broadly, computed tomography (CT) remains a staple imaging modality in radiation oncology, enabling visualization of internal body structures and providing electron density data for dose calculations that account for tissue heterogeneities.[6] Body tissues exhibit varying electron densities, which produce the contrast between internal structures within a reconstructed image.[7] Cone-beam CT (CBCT) has been introduced in radiation oncology as a supporting tool for verifying the patient’s position before treatment. It uses a diverging, conical beam of kilovoltage X-rays directed at a large flat-panel detector, allowing a sequence of 2D projections to be captured along a rotational path around the subject.[8] Owing to its low dose, speed, and compact form factor, CBCT, unlike conventional CT scanners, can be mounted on radiotherapy units for pretreatment imaging, setup verification, and verification of delivered doses.
The incorporation of CBCT represents a significant technological advancement for the Gamma Knife (the Icon system), which was originally designed for single-session, invasive, frame-based treatments only.[9] This evolution enables the accurate administration of radiation over multiple sessions, known as fractionated treatment.[10] In these fractionated therapies, precise immobilization and positioning of the patient’s head are achieved through a custom-fitted thermoplastic mask to ensure accurate radiation delivery to the tumor volume while minimizing exposure to adjacent brain tissue.[11] This method is especially beneficial when treating larger lesions or when fractionation improves the therapeutic window.[12]
The ability of CBCT images to map attenuation and assist in dose calculation for heterogeneous areas is a significant advantage.[13] Accurate representation of tissue densities in the form of Hounsfield units (HUs) is paramount for precise dose calculation in radiotherapy. However, CBCT systems, including the Gamma Knife Icon, exhibit variations in HU values due to issues such as noise and artifacts.[14,15] These discrepancies can pose challenges, particularly in scenarios where spatial resolution and soft-tissue contrast are critical, such as in dose calculation. The presence of noise, influenced by the number of photons in the X-ray beam and tube current, degrades image interpretability. This is further exacerbated in CBCT due to its operation at lower tube currents compared to conventional CT.[16,17] In addition, CBCT scans are more susceptible to artifacts, such as streaking and cupping due to beam hardening.[18,19]
One of the main issues with CBCT images acquired by the Gamma Knife Icon system is the presence of large variations in HU values, due to the nonnormal incidence of the X-ray central axis on the imaging panel. The resulting suboptimal image quality, HU variability, and imaging artifacts limit the utility of Gamma Knife-based CBCT images for accurate dose calculation, particularly when inhomogeneity corrections are required.[15] Addressing these limitations is imperative for optimizing the clinical applicability of CBCT in Gamma Knife radiosurgery procedures if images are used for dose computation. In general, several methods have been proposed in the literature to reduce CBCT artifacts; these include adjusting the voltage and current settings, integrating filters during the imaging process to even out the intensity of the X-ray beam, and utilizing varying energy levels to manage beam hardening and scatter effects, especially when imaging tissues of significantly different densities. These methods are beneficial where conventional single-energy CBCT might struggle.
A contemporary approach that is gaining in popularity involves the application of artificial intelligence (AI). AI-based methods show promise in effectively mitigating the impact of noise and artifacts that affect CBCT imaging.[20,21,22,23] Traditional machine learning methods are limited in processing raw data without extensive feature extraction. In contrast, deep learning models, particularly neural networks, can process high-dimensional data and extract features automatically.[24,25] Convolutional neural networks and generative adversarial networks (GANs), like the U-Net and Pix2Pix, respectively, have shown significant potential in complex tasks such as image segmentation and translation.[26,27,28,29,30] By leveraging these advanced neural network architectures, CBCT images can be translated into higher-quality outputs, addressing issues of noise and artifacts. In this study, we propose the development of a deep learning model using Pix2Pix architecture specifically tailored to enhance the Gamma Knife-based CBCT image quality and improve the accuracy of raw HU values.
METHODS
In this retrospective study, we analyzed datasets from 50 patients treated with a Gamma Knife Icon system. Ethics approval for this study was obtained from the institutional review board. All patients included in this study underwent both MRI and CT as part of their treatment procedures. In addition, each patient underwent CBCT imaging to ensure precise alignment to the treatment isocenter. The Gamma Knife Icon CBCT operates at 90 kVp with a scan angle of 200°. The Icon CBCT images were acquired with a 0.5 mm slice thickness and a matrix size of 448 × 448, while CT images were captured at 120 kVp with a 2 mm slice thickness and a matrix size of 512 × 512. For consistency, the matrix size in both cases was resampled to 512 × 512, and the slice thickness was adjusted to 2 mm for training and testing purposes. For the development of the deep learning model, CT images were used as ground truth targets, while CBCT images were used as inputs. The training dataset consisted of data from 40 patients, while the testing dataset comprised data from 10 patients, amounting to 5980 and 1504 slices of 512 × 512 pixels, respectively.
For generating synthetic CT (sCT) images, the Pix2Pix model proposed by Isola et al.,[30] which significantly expanded the capabilities of conditional GANs (cGANs) for image-to-image translation, was employed. Figure 1 shows the workflow adopted in this study to generate sCT. The Pix2Pix model used in this workflow operates by learning a mapping from input images (CBCT) to output images (sCT) while simultaneously training a discriminator to evaluate how well the generated images (sCT) agree with the ground truth data (CT). An essential prerequisite for the Pix2Pix model is pairing the ground truth and input images so that the translated output can be properly evaluated against the ground truth during training. This involves harmonizing the matrix dimensions of the corresponding CBCT and CT scans and aligning their spatial information, which was achieved using the in-built registration functions of MATLAB’s image processing toolbox. Each scan series was imported as a 3D volumetric matrix, with HU values shifted to start at 0 to assist subsequent registration. The input CBCT scans were then resampled using the “imresize3” function with cubic interpolation to match the matrix dimensions of the corresponding CT scan. A rigid registration using a monomodal gradient descent optimizer was then performed using the “imregister” function. After reversing the previous HU shift, the new volumes were saved in NIfTI format and assessed for cases of failed registration. Due to the limited coverage of cone beams, slices in the caudal direction that exceeded the CBCT field of view (FOV) were removed from both volumes.
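The HU-shift and resampling steps above can be sketched as follows. This is a minimal Python illustration only: the study itself used MATLAB’s “imresize3” and “imregister”, so the `preprocess_cbct` helper and the use of `scipy.ndimage.zoom` are illustrative stand-ins, and the rigid registration step is omitted.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_cbct(cbct_vol, ct_shape):
    """Shift HU values to start at 0, resample the CBCT volume to the CT
    matrix dimensions with cubic interpolation, then reverse the shift.
    (The rigid registration step of the actual workflow is omitted.)"""
    offset = cbct_vol.min()
    shifted = cbct_vol - offset                    # HU values now start at 0
    factors = [t / s for t, s in zip(ct_shape, shifted.shape)]
    resampled = zoom(shifted, factors, order=3)    # cubic interpolation
    return resampled + offset                      # reverse the HU shift

# Toy volume standing in for a CBCT series being matched to a CT grid
cbct = np.random.default_rng(0).uniform(-1000, 2000, size=(8, 64, 64))
out = preprocess_cbct(cbct, (8, 96, 96))
print(out.shape)  # (8, 96, 96)
```

In the actual pipeline the resampled volumes would then be rigidly registered to the CT and written out in NIfTI format.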
Figure 1.

Workflow for synthetic computed tomography generation using Pix2Pix model
In deep learning, the deconvolution (transposed convolution) technique is commonly used for up-sampling in image generation. However, it can introduce checkerboard artifacts, which degrade the quality and accuracy of the synthetic images. To address this, the original Pix2Pix model was modified by replacing up-sampling deconvolutions with resize-convolutions.[31] A resize-convolution first resizes the image and then passes it through a convolutional layer, which helps produce smoother, artifact-free sCT images. To ensure a fair comparison, image datasets were allocated to the same training and validation sets as in the previous methods.
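The resize-convolution idea can be illustrated with a plain NumPy sketch (not the study’s actual generator, which uses learned convolution kernels inside a deep network): up-sample by nearest-neighbor resize first, then apply an ordinary convolution, so every output pixel receives identical kernel coverage and no checkerboard pattern arises.

```python
import numpy as np

def resize_convolution(x, kernel):
    """Up-sample a 2D feature map by 2x nearest-neighbor resize, then
    convolve. Unlike strided deconvolution, the kernel overlaps every
    output location evenly, avoiding checkerboard artifacts."""
    up = x.repeat(2, axis=0).repeat(2, axis=1)   # 2x nearest-neighbor resize
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(up, pad, mode="edge")
    out = np.empty_like(up, dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

feat = np.ones((4, 4))
smooth = resize_convolution(feat, np.full((3, 3), 1 / 9))
print(smooth.shape)               # (8, 8)
print(np.allclose(smooth, 1.0))   # True: a uniform input stays uniform
```

A strided deconvolution with an uneven kernel-stride ratio would instead tile the output with a periodic intensity pattern, which is the checkerboard artifact described by Odena et al.[31]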
The generated sCT images were evaluated against the ground truth CT scans (CT ↔ sCT) using multiple metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and Dice similarity coefficient (DSC). The same metrics were also used to compare the agreement between the ground truth images (CT) and the Icon CBCT images (CT ↔ CBCT). If X and Y represent the input (CBCT) and the predicted (sCT) images, respectively, then the above metrics can be defined as follows.
Structural similarity index
SSIM is a method designed for measuring the similarity between two images. It takes into account changes in luminance, contrast, and structure between the two images:[32]

SSIM(x, y) = ((2μxμy + C1)(2σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2))

where μx and μy are the average intensity values of x and y, σx² and σy² are their variances, σxy is the covariance of images x and y, and C1 and C2 are constants to stabilize the division.
Mean absolute error
MAE measures the average magnitude of the errors between images x and y, without considering their direction. It is the average, over all N pixels, of the absolute differences between prediction and actual observation, where all individual differences have equal weight.[33] When used in a loss function, this metric is called the L1 loss:

MAE = (1/N) Σ |xi − yi|

where xi and yi are the intensity values of pixel i in images X and Y.
Root mean square error
RMSE is a frequently used measure of the differences between values predicted by a model and the true values. It represents the square root of the mean of the squared differences between predicted and observed values:[34]

RMSE = √((1/N) Σ (xi − yi)²)

where xi and yi are the intensity values of pixel i in images X and Y.
Peak signal-to-noise ratio
PSNR is most commonly used to measure the quality of reconstruction in lossy compression codecs. The signal in this case is the sCT, and the ground truth is the true CT; noise is introduced during the CBCT-to-sCT enhancement by the Pix2Pix model:[35]

PSNR = 10 log10 (MAXI² / MSE) = 20 log10 (MAXI) − 10 log10 (MSE)

where MAXI is the maximum possible pixel value of the image and MSE is the mean squared error between X and Y.
Normalized cross-correlation
NCC is used in image processing to measure the similarity of two images. It is a measure of the correlation between the intensity values of two images, normalized to the range (−1, 1):[36]

NCC = Σi,j (xi,j − μx)(yi,j − μy) / √(Σi,j (xi,j − μx)² · Σi,j (yi,j − μy)²)

where xi,j and yi,j denote the intensity values of the corresponding pixels at position (i, j) in images X and Y, and μx and μy are the mean intensities of X and Y.
Dice similarity coefficient
DSC is a statistical tool used for comparing the similarity of two sets of data, with a value of 1 indicating perfect overlap and 0 indicating no overlap. It is widely used in image segmentation to compare the pixel-wise agreement between a predicted segmentation and its corresponding ground truth:[37]

DSC = 2|X ∩ Y| / (|X| + |Y|)

where X and Y represent the sets of pixels belonging to each image.
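The metrics above can be sketched in NumPy as follows. Note that `ssim_global` here computes SSIM over the whole image with the conventional C1 = (0.01L)² and C2 = (0.03L)² constants; standard implementations instead average SSIM over a sliding local window, so this is a simplified illustration.

```python
import numpy as np

def mae(x, y):
    """Mean absolute error (the L1 loss)."""
    return np.mean(np.abs(x - y))

def rmse(x, y):
    """Root mean square error."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, max_i):
    """Peak signal-to-noise ratio in dB."""
    return 10 * np.log10(max_i ** 2 / np.mean((x - y) ** 2))

def ncc(x, y):
    """Normalized cross-correlation in (-1, 1)."""
    xm, ym = x - x.mean(), y - y.mean()
    return np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2))

def ssim_global(x, y, max_i):
    """Single-window (global) SSIM with the usual stability constants."""
    c1, c2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def dsc(mask_x, mask_y):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_x, mask_y).sum()
    return 2.0 * inter / (mask_x.sum() + mask_y.sum())

ct = np.random.default_rng(1).uniform(0, 255, size=(64, 64))
print(mae(ct, ct), rmse(ct, ct))            # 0.0 0.0 for identical images
print(round(ncc(ct, ct + 10), 3))           # 1.0: NCC ignores constant offsets
print(round(ssim_global(ct, ct, 255), 3))   # 1.0
print(dsc(ct > 100, ct > 100))              # 1.0
```

The offset example highlights the difference discussed later: NCC is blind to a uniform intensity shift, whereas MAE and SSIM are not.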
The Shapiro–Wilk normality test was performed on the SSIM, MAE, PSNR, RMSE, NCC, and DSC metrics, followed by the two-tailed Mann–Whitney U test to evaluate the statistical significance (p < 0.05) of the differences in these metrics between CT ↔ CBCT and CT ↔ sCT.
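This statistical workflow can be sketched with SciPy as follows; the SSIM values below are illustrative placeholders (two groups of n = 10, one per comparison), not the study’s data.

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu

# Illustrative per-patient SSIM values for CT<->CBCT and CT<->sCT (n = 10 each)
ssim_cbct = np.array([0.78, 0.80, 0.81, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89])
ssim_sct  = np.array([0.91, 0.92, 0.93, 0.94, 0.95, 0.95, 0.96, 0.97, 0.98, 0.99])

# Shapiro-Wilk normality test on each group
print(shapiro(ssim_cbct).pvalue, shapiro(ssim_sct).pvalue)

# Two-sided Mann-Whitney U test between the two comparisons
stat, p_mw = mannwhitneyu(ssim_cbct, ssim_sct, alternative="two-sided")
print(p_mw < 0.05)  # True: the two groups here do not overlap at all
```

Because the Mann–Whitney U test is rank-based, it remains valid when the Shapiro–Wilk test rejects normality, which is why it was chosen over a t-test.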
RESULTS
Figure 2 shows an example comparison of CBCT axial images from the Gamma Knife Icon system with sCT and CT images for a test patient, signifying a high similarity between the sCT and the ground truth planning CT images. One of the advantages of using deep learning is the ability to generate partially missing information in the input images. A part of the nose, which is clipped in Figure 2a, has been partially reconstructed in the sCT image [Figure 2b] and closely resembles Figure 2c. Figure 2f illustrates the ground truth image at the frontal sinus, while Figure 2e shows the generated synthetic CT derived from Figure 2d. FOV limitations in CBCT images, as depicted in Figure 2a, tend to marginally lower the SSIM when compared to ground truth CT images. Figures 3 and 4 show the axial, coronal, and sagittal views of the Icon CBCT [Figures 3a and 4a], CT [Figures 3b and 4b], and synthetic CT [Figures 3c and 4c] images for two test cases. In addition, the average HU values calculated for each slice are plotted against the slice numbers [Figures 3d and 4d], demonstrating that the CT and sCT HU values agree with each other while the CBCT HU values deviate from them for each axial slice.
Figure 2.

Comparison of cone-beam computed tomography (CBCT) (a), generated CBCT (b), and the ground truth (c) axial slices at the nasal septum. Comparison of CBCT (d), generated CBCT (e), and the ground truth (f) axial slices at the frontal sinus. CT: Computed tomography, CBCT: Cone-beam CT, sCT: Synthetic CT
Figure 3.

Axial, coronal and sagittal views of the Icon cone-beam computed tomography, CT and synthetic CT images with the average voxel value against the slice number for a test case. (a) Icon CBCT, (b) CT, (c) Synthetic CT, (d) Mean voxel values for CBCT, CT, and synthetic CT. CT: Computed tomography, CBCT: Cone-beam CT, sCT: Synthetic CT
Figure 4.

Axial, coronal and sagittal views of the icon cone-beam computed tomography, CT and synthetic CT images with the average voxel value against the slice number for a second test case. (a) Icon CBCT, (b) CT, (c) Synthetic CT, (d) Mean voxel values for CBCT, CT, and synthetic CT. CT: Computed tomography, CBCT: Cone-beam CT, sCT: Synthetic CT
Table 1 shows the comparison of the calculated SSIM, MAE, PSNR, RMSE, NCC, and DSC metrics between CT ↔ CBCT images and between CT ↔ sCT images. In the quantitative analysis of image quality metrics comparing 10 patients’ CT and sCT image sets, the SSIM showed an average value of 0.95 ± 0.03, indicating a high level of similarity, as opposed to 0.85 ± 0.05 for CT versus CBCT (p < 0.0001). The sCT images also show higher average values in other key metrics such as PSNR (30.76 vs. 26.50) and DSC (0.94 vs. 0.88), alongside lower averages in MAE (18.81 vs. 77.37) and RMSE (82.30 vs. 228.52), when compared to the CBCT images. The ranges of these metrics further corroborate the improved consistency and reduced variability in sCT images. Upon employing the Mann–Whitney U test to assess the statistical significance of these differences, we found that the disparities in SSIM, MAE, PSNR, RMSE, NCC, and DSC between CT ↔ CBCT and CT ↔ sCT were statistically significant. This suggests that sCT images offer a notable improvement in image quality metrics over the original Icon CBCT images, emphasizing the potential of synthetic imaging techniques for enhancing image quality and the usability of these images for treatment planning in clinical settings.
Table 1.
Comparison of structural similarity index, mean absolute error, peak signal-to-noise ratio, root mean square error, normalized cross-correlation, and DSC for computed tomography and cone-beam computed tomography images (computed tomography ↔ cone-beam computed tomography) and computed tomography and synthetic computed tomography images (computed tomography ↔ synthetic computed tomography)
| Metrics | CT ↔ CBCT | CT ↔ sCT | Statistical significance (P) |
|---|---|---|---|
| SSIM | 0.85±0.05 (0.71–0.97) | 0.95±0.03 (0.85–0.99) | <0.0001 |
| MAE | 77.37±20.05 (28.75–147.53) | 18.81±7.22 (3.32–46.02) | <0.0001 |
| PSNR | 26.50±1.72 (19.95–33.98) | 30.76±2.23 (20.99–37.96) | <0.0001 |
| RMSE | 228.52±53.76 (125.38–376.48) | 82.30±23.81 (26.21–214.56) | <0.0001 |
| NCC | 0.97±0.03 (0.69–0.99) | 0.99±0.01 (0.86–0.99) | <0.0001 |
| DSC | 0.88±0.10 (0.20–0.97) | 0.94±0.06 (0.50–0.998) | <0.0001 |
SSIM: Structural similarity index, MAE: Mean absolute error, PSNR: Peak signal-to-noise ratio, RMSE: Root mean square error, NCC: Normalized cross-correlation, CT: Computed tomography, CBCT: Cone-beam CT, sCT: Synthetic CT, DSC: Dice similarity coefficient
Figure 5 shows the comparison of HU profiles along the midline in CBCT, CT, and sCT images. The x-axis represents the voxel position along the plotted vertical line, from the anterior (Ant) to the posterior (Post) direction, against the HU values. The CBCT data (blue line) display noticeable fluctuations and broader peaks in HU values in the bony region compared to the CT (red) and sCT (green) profiles. The CT and sCT profiles agree closely and exhibit smoother transitions, except at the posterior end outside the body due to the headrest in the CT dataset. The profiles reveal that CBCT tends to show a high degree of variation in HU values compared to CT, while the sCT and CT profiles show better agreement, indicating that sCT is a better alternative to the direct use of CBCT in terms of image quality and density representation.
Figure 5.

Comparison of Hounsfield unit profiles along the mid-vertical line in cone-beam computed tomography (CBCT), CT, and synthetic CT images. CT: Computed tomography, CBCT: Cone-beam CT, sCT: Synthetic CT
DISCUSSION
This study explored the development of a modified Pix2Pix framework to convert Gamma Knife CBCT images into sCT images, aiming to enhance the accuracy of radiotherapy treatment planning and monitoring with CBCT data on the Gamma Knife system. To our knowledge, this is the first study that focuses on generating sCT images for Gamma Knife Icon CBCT images. Our dataset comprised a significant number of slices in both training and validation sets, collected from the datasets of patients who underwent Gamma Knife radiotherapy. In Gamma Knife treatments, the Leksell frame is used for stereotactic radiosurgery procedures, while thermoplastic masks are employed for fractionated treatments. CT images acquired with the Leksell frame often introduce streaking artifacts primarily due to the screws, which cause dark streaks as a result of beam hardening. The artifacts can obscure important anatomical details, which limits the usefulness of these slices for training a deep learning model. Therefore, it is highly recommended to use artifact-free images as ground truth data when generating deep learning models for producing sCT images. The CBCT images in Figures 2–4 illustrate the use of thermoplastic masks for Gamma Knife treatments, which are free from metal artifacts. One challenge encountered with the data was the limited coverage of CBCT volumes compared to CT, leading to the exclusion of inferior slices and a reduction in the number of usable slices. Our study adopted Isola et al.’s method, which employs a hybrid of L1 and adversarial loss functions, along with a PatchGAN discriminator. A PatchGAN discriminator works by classifying patches of images as real or fake rather than entire images. In addition to being less computationally intensive, this approach helps to better capture high-frequency details. 
GANs tend to focus on the global coherence of images, which can encourage generators to produce globally coherent images that lack the fine details and artifacts that could make them more easily detected by a discriminator. Because PatchGANs focus on smaller portions of images, they are inclined to pay greater attention to high-frequency features, or a lack thereof, which encourages generators to preserve fine details. The training process, driven by the Adam optimization algorithm, showed a steady improvement in the generator’s performance, eventually plateauing after approximately 50 epochs.
Our results indicate a generally high degree of structural similarity and correlation in the images analyzed, as evidenced by the SSIM, DSC, and NCC metrics computed between sCT and CT images. However, it is important to note the limitations posed by the CBCT’s FOV, which can impact the overall SSIM and DSC scores between CT ↔ CBCT and CT ↔ sCT datasets. SSIM and NCC are both used to measure the similarity between two image sets, but they vary in the aspects of the image they focus on and in how they compute similarity. NCC focuses on the correlation between two images by comparing their pixel intensity values directly. SSIM reflects how well the structural information of one image matches that of another; it attempts to simulate human visual perception by evaluating three important factors: contrast, luminance, and structure. Figures 2–4 clearly illustrate the similarity between the generated sCT and the ground truth CT images. The improvements in both SSIM and NCC underscore the fact that sCT images are superior not only in direct intensity correlation but also in perceptual image quality, making them a stronger alternative to CBCT: they outperform CBCT images both in how closely they resemble the CT images to the human eye and in computed image similarity. The comparison of HUs further validates the accuracy of the sCT images, whose Hounsfield values are closer to those of the CT images. This indicates that, in sCT images, deep learning effectively eliminates the high-end HU artifacts observed in the axial CBCT images [Figures 3d and 4d], aligning them with the HU values of the ground truth CT. This opens the opportunity to perform dose calculation directly on CBCT-derived images, which is a major benefit for Gamma Knife treatment.
The generation of an sCT mimicking the structure of the ground truth CT is inherently limited by the spatial information provided in the original CBCT images. The most significant HU discrepancies occur at boundary regions of air, tissue, and bone, and the sCT images may lack fine textures, such as cerebral grooves or lateral ventricles, predominantly due to the presence of CBCT noise. This leads to the model approximating the region with a generalized soft-tissue HU rather than attempting to generate structures that are not present in the original CBCT images. Interestingly, the model exhibits a tendency to predict the patient’s nose when absent from the CBCT scan. This contrasts with the model’s handling of intracranial regions, where it rarely attempts to estimate tissue textures. The sCT images, having Hounsfield values that more closely match those of CT images, represent a significant advancement, particularly for applications such as Gamma Knife treatment where precise dose calculation is critical.[38,39] If a facility decides to use CBCT for dose calculation, the ability to perform dose calculations directly on enhanced CBCT (sCT) images circumvents the need for additional CT scans, offering a substantial benefit in terms of treatment planning efficiency and patient comfort.
The primary reason we did not use an electron density phantom to validate the HU conversion of CBCT to sCT is that our trained model requires patient-like input data to generate accurate sCT images. Developing a separate model for electron-density phantoms would require a different approach, as such a model would need to be trained on that specific data type. The weight and bias values would likely differ between the models, affecting the generated images. Our proposed deep learning model transforms CBCT images to more closely resemble high-quality CT images, ensuring that the operational benefits of CBCT are fully utilized without compromising the accuracy and reliability essential for clinical applications like dose planning in radiosurgery. These findings underscore the system’s strengths in producing high-quality images; however, limitations in the CBCT FOV scan length may result in parts of the patient’s anatomy being clipped. Future work will focus on validating the clinical utility of these sCT images and exploring alternative architectures for further improvement.
CONCLUSION
This study demonstrates the efficacy of using Pix2Pix for generating high-quality sCT images from Gamma Knife CBCT scans with high agreement of HU values between sCT and CT images. The high SSIM score of 0.95, low MAE, RMSE, and high DSC highlight our model’s capability to produce sCT images that are structurally similar to the original CT scans. These results are promising for enhancing the utility of sCT images in clinical settings, particularly for Gamma Knife treatment planning and monitoring.
Conflicts of interest
There are no conflicts of interest.
Funding Statement
Nil.
REFERENCES
- 1. Gerosa M, Nicolato A, Foroni R. The role of gamma knife radiosurgery in the treatment of primary and metastatic brain tumors. Curr Opin Oncol. 2003;15:188–96. doi: 10.1097/00001622-200305000-00002.
- 2. Monaco EA, Grandhi R, Niranjan A, Lunsford LD. The past, present and future of gamma knife radiosurgery for brain tumors: The Pittsburgh experience. Expert Rev Neurother. 2012;12:437–45. doi: 10.1586/ern.12.16.
- 3. Velnar T, Bosnjak R. Radiosurgical techniques for the treatment of brain neoplasms: A short review. World J Methodol. 2018;8:51–8. doi: 10.5662/wjm.v8.i4.51.
- 4. Lindquist C. Gamma knife radiosurgery. Semin Radiat Oncol. 1995;5:197–202. doi: 10.1054/SRAO00500197.
- 5. Niranjan A, Bowden G, Flickinger JC, Lunsford LD. Gamma knife radiosurgery. In: Chin LS, Regine WF, editors. Principles and Practice of Stereotactic Radiosurgery. 2nd ed. New York: Springer Science+Business Media; 2015. pp. 111–9.
- 6. Prabhakar R, Ganesh T, Rath GK, Julka PK, Sridhar PS, Joshi RC, et al. Impact of different CT slice thickness on clinical target volume for 3D conformal radiation therapy. Med Dosim. 2009;34:36–41. doi: 10.1016/j.meddos.2007.09.002.
- 7. Diwakar M, Kumar M. A review on CT image noise and its denoising. Biomed Signal Process Control. 2018;42:73–88.
- 8. Abramovitch K, Rice DD. Basic principles of cone beam computed tomography. Dent Clin North Am. 2014;58:463–84. doi: 10.1016/j.cden.2014.03.002.
- 9. Xu AY, Wang YF, Wang TJ, Cheng SK, Elliston CD, Savacool MK, et al. Performance of the cone beam computed tomography-based patient positioning system on the gamma knife Icon™. Med Phys. 2019;46:4333–9. doi: 10.1002/mp.13740.
- 10. Stieler F, Wenz F, Abo-Madyan Y, Schweizer B, Polednik M, Herskind C, et al. Adaptive fractionated stereotactic gamma knife radiotherapy of meningioma using integrated stereotactic cone-beam-CT and adaptive re-planning (a-gkFSRT). Strahlenther Onkol. 2016;192:815–9. doi: 10.1007/s00066-016-1008-6.
- 11. Bush A, Vallow L, Ruiz-Garcia H, Herchko S, Reimer R, Ko S, et al. Mask-based immobilization in gamma knife stereotactic radiosurgery. J Clin Neurosci. 2021;83:37–42. doi: 10.1016/j.jocn.2020.11.033.
- 12. Dong P, Pérez-Andújar A, Pinnaduwage D, Braunstein S, Theodosopoulos P, McDermott M, et al. Dosimetric characterization of hypofractionated gamma knife radiosurgery of large or complex brain tumors versus linear accelerator-based treatments. J Neurosurg. 2016;125:97–103. doi: 10.3171/2016.7.GKS16881.
- 13. Hatton J, McCurdy B, Greer PB. Cone beam computerized tomography: The effect of calibration of the Hounsfield unit number to electron density on dose calculation accuracy for adaptive radiation therapy. Phys Med Biol. 2009;54:N329–46. doi: 10.1088/0031-9155/54/15/N01.
- 14. DenOtter TD, Schubert J. Hounsfield unit. In: StatPearls. Treasure Island, FL: StatPearls Publishing; 2019.
- 15. Ramachandran P, Perrett B, Dancewicz O, Seshadri V, Jones C, Mehta A, et al. Use of GammaPlan convolution algorithm for dose calculation on CT and cone-beam CT images. Radiat Oncol J. 2021;39:129–38. doi: 10.3857/roj.2020.00640.
- 16. Nagarajappa AK, Dwivedi N, Tiwari R. Artifacts: The downturn of CBCT image. J Int Soc Prev Community Dent. 2015;5:440–5. doi: 10.4103/2231-0762.170523.
- 17. Goldman LW. Principles of CT and CT technology. J Nucl Med Technol. 2007;35:115–28. doi: 10.2967/jnmt.107.042978.
- 18. Brooks RA, Di Chiro G. Beam hardening in x-ray reconstructive tomography. Phys Med Biol. 1976;21:390–8. doi: 10.1088/0031-9155/21/3/004.
- 19. Jin SO, Kim JG, Lee SY, Kwon OK. Bone-induced streak artifact suppression in sparse-view CT image reconstruction. Biomed Eng Online. 2012;11:44. doi: 10.1186/1475-925X-11-44.
- 20. Zhang Y, Yu H. Convolutional neural network based metal artifact reduction in x-ray computed tomography. IEEE Trans Med Imaging. 2018;37:1370–81. doi: 10.1109/TMI.2018.2823083.
- 21. Xu S, Prinsen P, Wiegert J, Manjeshwar R. Deep residual learning in CT physics: Scatter correction for spectral CT. In: 2017 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC); 2017.
- 22. Hansen DC, Landry G, Kamp F, Li M, Belka C, Parodi K, et al. ScatterNet: A convolutional neural network for cone-beam CT intensity correction. Med Phys. 2018;45:4916–26. doi: 10.1002/mp.13175.
- 23. Yang Q, Yan P, Kalra MK, Wang G. CT image denoising with perceptive deep neural networks. arXiv preprint arXiv:1702.07019; 2017.
- 24. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44. doi: 10.1038/nature14539.
- 25. Liu W, Wang Z, Liu X, Zeng N, Liu Y, Alsaadi FE. A survey of deep neural network architectures and their applications. Neurocomputing. 2017;234:11–26.
- 26. Li Z, Liu F, Yang W, Peng S, Zhou J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans Neural Netw Learn Syst. 2022;33:6999–7019. doi: 10.1109/TNNLS.2021.3084827.
- 27. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Part III. Springer; 2015.
- 28. Kandel ME, He YR, Lee YJ, Chen TH, Sullivan KM, Aydin O, et al. Phase imaging with computational specificity (PICS) for measuring dry mass changes in sub-cellular compartments. Nat Commun. 2020;11:6256. doi: 10.1038/s41467-020-20062-x.
- 29. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. Commun ACM. 2020;63:139–44.
- 30. Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 1125–34.
- 31. Odena A, Dumoulin V, Olah C. Deconvolution and checkerboard artifacts. Distill. 2016;1:e3.
- 32. Wang Z, Simoncelli EP, Bovik AC. Multiscale structural similarity for image quality assessment. The 27th Asilomar Conference on Signals, Systems and Computers. 2003;2:1398–402.
- 33. Walther BA, Moore JL. The concepts of bias, precision and accuracy, and their use in testing the performance of species richness estimators, with a literature review of estimator performance. Ecography. 2005;28:815–29.
- 34. Willmott CJ. On the validation of models. Phys Geogr. 1981;2:184–94.
- 35. Richardson IE. Video Codec Design: Developing Image and Video Compression Systems. Hoboken, NJ: John Wiley and Sons; 2002.
- 36. Nakhmani A, Tannenbaum A. A new distance measure based on generalized image normalized cross-correlation for robust video tracking and image recognition. Pattern Recognit Lett. 2013;34:315–21. doi: 10.1016/j.patrec.2012.10.025.
- 37. Sorensen TA. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. Biol Skr. 1948;5:1–34.
- 38. Zhang Y, Ding SG, Gong XC, Yuan XX, Lin JF, Chen Q, et al. Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients. Technol Cancer Res Treat. 2022;21:15330338221085358. doi: 10.1177/15330338221085358.
- 39. Radzi M, Binti J. Treatment plan optimization based on biologically effective dose in gamma knife radiotherapy. Doctoral dissertation, Universität Heidelberg; 2020.
