Abstract
PET scans provide additional clinical value but are costly and not universally accessible. Salehjahromi et al.1 developed an AI-based pipeline to synthesize PET images from diagnostic CT scans and demonstrated its potential utility across a range of clinical tasks in lung cancer.
Main text
Positron emission tomography (PET) is widely used in oncology practice because it offers functional imaging that complements the anatomical information provided by computed tomography (CT). Nevertheless, PET is far less widely deployed than CT, primarily because of its higher cost and complexity, particularly in resource-limited settings such as low-income countries. PET scans also add to patients' radiation exposure, underscoring the need for alternative ways of acquiring PET imaging data.
The advent of generative artificial intelligence (AI) in computer vision has opened new possibilities for generating PET images through cross-modality image synthesis, in which an AI model is trained to generate one imaging modality from another. This technique, which draws on image-to-image translation capabilities initially developed for applications such as colorizing black-and-white photos or transforming sketches into realistic images, has now been extended to medical domains, for example, converting magnetic resonance imaging (MRI) to CT images.2 However, most studies have focused on quantitative image-quality metrics without incorporating evaluations by expert radiologists. There is also a notable gap in the biological and clinical validation of these synthesized images for relevant clinical tasks, raising questions about their practical utility beyond aesthetic or numerical accuracy. The true clinical value of AI-generated images, especially for performing relevant clinical tasks, remains to be fully ascertained.
In a previous issue of Cell Reports Medicine, Salehjahromi et al.1 introduced a conditional generative adversarial network (cGAN) to synthesize PET images from diagnostic CT scans. The fidelity of the synthetic PET images was validated at the imaging, biological, and clinical levels with multi-center, multi-modal data. For validation at the imaging level, beyond standard quantitative assessments, experienced thoracic radiologists confirmed that synthesized and real PET images were of equivalent quality by scoring a set of PET images containing an undisclosed number of synthesized scans and by attempting to distinguish synthesized from real images. The study also assessed the consistency of key PET metrics, such as metabolic tumor activity and total lesion glycolysis, through radiomics analysis. Biologically, the synthetic PET images were shown to represent cancer hallmark pathways as consistently as real PET images, as demonstrated in a radiogenomics study. The AI pipeline was further validated on a series of clinical tasks: the synthetic PET complements CT in diagnosing indeterminate pulmonary nodules and in identifying patients at high risk of developing lung cancer, shows accuracy similar to that of real PET in cancer staging, and achieves statistically meaningful survival prediction in the majority of cases.
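To make the underlying technique concrete, the sketch below shows a minimal pix2pix-style conditional GAN training step for CT-to-PET translation in PyTorch. It is an illustrative assumption rather than the authors' published pipeline: the toy Generator and Discriminator, the 2D slice shapes, and the lambda_l1 weight are placeholders.

```python
# Minimal pix2pix-style cGAN training step for CT -> PET synthesis (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder; a real pipeline would typically use a U-Net-like generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, ct):
        return self.net(ct)

class Discriminator(nn.Module):
    """Toy patch discriminator conditioned on the input CT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, ct, pet):
        return self.net(torch.cat([ct, pet], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

def train_step(ct, pet):
    # Discriminator: real (CT, PET) pairs vs. generated (CT, synthetic PET) pairs.
    fake_pet = G(ct)
    d_real, d_fake = D(ct, pet), D(ct, fake_pet.detach())
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the real PET (L1 term).
    d_fake = D(ct, fake_pet)
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake_pet, pet)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example: one step on a random batch of paired 2D CT/PET slices (batch, 1, 128, 128).
ct = torch.randn(4, 1, 128, 128)
pet = torch.rand(4, 1, 128, 128) * 2 - 1  # scaled to [-1, 1] to match the Tanh output
print(train_step(ct, pet))
```

In practice, a pipeline of this kind would operate on registered CT/PET volumes or stacks of slices rather than random tensors, but the adversarial-plus-L1 objective shown here is the core idea behind conditional GAN image translation.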
Previous investigations into using cGANs for generating synthetic medical images have predominantly centered on conversion between CT and MR images, particularly for radiation therapy treatment planning.3,4 In these settings, the diagnostic utility of the synthetic images is often not a primary concern and, as such, has rarely been evaluated. Furthermore, evaluating diagnostic value requires the involvement of clinical experts, a requirement that poses a challenge for research teams with a strong computer science foundation but no direct access to experienced radiologists. The distinct contribution of this study is therefore twofold: it not only establishes the diagnostic relevance of AI-generated PET images in specific clinical tasks, thereby affirming their practical utility, but also introduces an exemplary pipeline for assessing the clinical value of AI-synthesized imaging techniques. Notably, the clinical validation in this study also relies on machine learning models as evaluators. For instance, to evaluate the utility of synthetic PET images in identifying high-risk patients and predicting survival, machine learning models trained specifically for these tasks were applied to both synthetic and real PET images to assess their comparability. Although these machine learning models are not yet clinically adopted, their use offers an alternative way to assess synthetic images on tasks where human experts' input is either unavailable or subject to significant variability, as illustrated in the sketch below.
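As a hedged illustration of this model-as-evaluator idea, the sketch below applies a single pretrained risk classifier to paired inputs derived from real and synthetic PET for the same patients and compares the downstream outputs. The feature matrices, the logistic-regression risk model, and the noise model for the synthetic data are hypothetical stand-ins, not the study's actual models or data.

```python
# Sketch of a "model-as-evaluator" comparison: one pretrained task model is applied to
# paired real and synthetic PET inputs, and the downstream outputs are compared.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in radiomic feature matrices for the same patients (real vs. synthetic PET)
# and binary outcome labels (e.g., high risk vs. low risk).
n_patients, n_features = 200, 16
X_real = rng.normal(size=(n_patients, n_features))
X_synth = X_real + rng.normal(scale=0.1, size=X_real.shape)  # synthetic ~ real + small noise
y = (X_real[:, 0] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

# A previously trained task model; here fitted on the real-PET features for illustration.
risk_model = LogisticRegression().fit(X_real, y)

p_real = risk_model.predict_proba(X_real)[:, 1]
p_synth = risk_model.predict_proba(X_synth)[:, 1]

print("AUC (real PET):     ", round(roc_auc_score(y, p_real), 3))
print("AUC (synthetic PET):", round(roc_auc_score(y, p_synth), 3))
print("Risk-score correlation (real vs. synthetic):", round(pearsonr(p_real, p_synth)[0], 3))
```

Comparable discrimination and strongly correlated risk scores on the two inputs would suggest that the synthetic images preserve the task-relevant information, which is the logic behind using trained models as surrogate readers.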
Although the application of AI to synthesizing PET images from CT scans is a promising advance in medical imaging, questions about its readiness for clinical application persist. Notably, the cGAN used here has already been extensively explored in computer vision. However, the recent rise of diffusion models as the leading edge of deep generative modeling, known for their impressive image generation capabilities, enhanced training stability, and resilience against common issues such as mode collapse and hyperparameter sensitivity, suggests potential for even greater advances.5 It would be compelling to assess whether the gains attributed to diffusion models in computer vision could similarly translate to real-world clinical applications. Furthermore, because this study is a preliminary proof of concept, larger prospective investigations and focused research on specific cancer subtypes are needed to thoroughly validate the proposed AI pipeline for clinical use.
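For readers curious what the diffusion-model alternative would look like, the following is a minimal sketch of a DDPM-style, noise-prediction training step conditioned on CT. The toy DenoiseNet noise predictor, the linear beta schedule, and the tensor shapes are assumptions made for illustration; this is not part of the published work.

```python
# Minimal DDPM-style training objective for CT-conditioned PET synthesis (illustrative only).
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class DenoiseNet(nn.Module):
    """Toy noise predictor; inputs are the noisy PET, the conditioning CT, and the timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, noisy_pet, ct, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand_as(noisy_pet)
        return self.net(torch.cat([noisy_pet, ct, t_map], dim=1))

model = DenoiseNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(ct, pet):
    # Sample a timestep, corrupt the real PET with Gaussian noise, and train the network
    # to predict that noise given the CT condition (epsilon-prediction loss).
    t = torch.randint(0, T, (pet.shape[0],))
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(pet)
    noisy_pet = a_bar.sqrt() * pet + (1 - a_bar).sqrt() * noise
    loss = nn.functional.mse_loss(model(noisy_pet, ct, t), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example step on random paired 2D CT/PET slices.
print(train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)))
```

Unlike the adversarial objective of a cGAN, this simple regression loss has no discriminator to balance, which is one reason diffusion models tend to train more stably, at the cost of slower, iterative sampling at inference time.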
In summary, this proof-of-concept study reveals the clinical potential of AI-synthesized PET for the diagnosis, staging, risk prediction, and prognosis of lung cancer. Although further detailed investigation is essential before clinical adoption, this AI model represents a promising step toward improving lung cancer management.
Declaration of interests
The authors declare no competing interests.
Declaration of generative AI and AI-assisted technologies in the writing process
During the preparation of this work the authors used ChatGPT to improve the readability and language of the work. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Contributor Information
Tonghe Wang, Email: wangt8@mskcc.org.
Xiaofeng Yang, Email: xiaofeng.yang@emory.edu.
References
- 1. Salehjahromi M., Karpinets T.V., Sujit S.J., Qayati M., Chen P., Aminu M., Saad M.B., Bandyopadhyay R., Hong L., Sheshadri A., et al. Synthetic PET from CT improves diagnosis and prognosis for lung cancer: Proof of concept. Cell Rep. Med. 2024;5. doi: 10.1016/j.xcrm.2024.101463.
- 2. Wang T., Lei Y., Fu Y., Wynne J.F., Curran W.J., Liu T., Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J. Appl. Clin. Med. Phys. 2021;22:11–36. doi: 10.1002/acm2.13121.
- 3. Spadea M.F., Maspero M., Zaffino P., Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med. Phys. 2021;48:6537–6566. doi: 10.1002/mp.15150.
- 4. Boulanger M., Nunes J.-C., Chourak H., Largent A., Tahri S., Acosta O., De Crevoisier R., Lafond C., Barateau A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys. Med. 2021;89:265–281. doi: 10.1016/j.ejmp.2021.07.027.
- 5. Müller-Franzes G., Niehues J.M., Khader F., Arasteh S.T., Haarburger C., Kuhl C., Wang T., Han T., Nolte T., Nebelung S., et al. A multimodal comparison of latent denoising diffusion probabilistic models and generative adversarial networks for medical image synthesis. Sci. Rep. 2023;13. doi: 10.1038/s41598-023-39278-0.
