Editorial
Cancer Biol Med. 2025 Feb 4;22(1):6–13. doi: 10.20892/j.issn.2095-3941.2024.0422

Integrating artificial intelligence into radiological cancer imaging: from diagnosis and treatment response to prognosis

Sunyi Zheng1,*, Xiaonan Cui1,*, Zhaoxiang Ye1
PMCID: PMC11795265  PMID: 39907115

Cancer poses a serious threat to human health worldwide and is a leading cause of death1. The analysis of radiological imaging is crucial in the early detection, accurate diagnosis, effective treatment planning, and ongoing monitoring of patients with cancer. However, several challenges impede the effectiveness of cancer imaging analysis in clinical practice. One difficulty is that healthcare professionals’ immense clinical workloads can result in time constraints and increased pressure, thereby hindering their ability to maintain high accuracy and thoroughness in image analysis. Additionally, subjective variability among radiologists can lead to inconsistent interpretations and diagnoses; because this variability is often influenced by personal biases, standardized assessments are difficult to achieve. Moreover, the inherent complexity of cancer imaging necessitates extensive clinical experience, which can also be a limiting factor, particularly where expertise or resources are limited. The application of artificial intelligence (AI) can alleviate these problems by enhancing the accuracy, objectivity, and efficiency of cancer imaging analysis while assisting physicians. Therefore, the advancement of AI research is crucial for achieving progress in radiology.

With the development of computing resources, AI technologies have been widely applied in the field of radiological cancer imaging. Numerous AI models, through continuous refinement, have achieved performance comparable to or even surpassing that of radiologists in identifying various types of lesions. In recent years, AI has been effectively used in the detection of pulmonary nodules, breast cancer, and colon cancer2–5. These successful applications have prompted the evaluation of AI approaches in more complex decision-making tasks, including cancer diagnosis, treatment response prediction, and disease prognosis assessment. Herein, we compare mainstream AI methods used in the field of radiology and illustrate their applications in tumor imaging analysis. We also discuss the current limitations of these AI methods and explore potential directions for future AI advancements, to better integrate AI into clinical practice.

AI technologies applied in radiology

Currently, three primary types of AI approaches are widely used for analyzing cancer imaging in radiology: machine learning with radiomics, deep learning, and large models. These AI technologies can leverage imaging data to identify biomarkers for diagnosis, response prediction and prognosis, thereby offering a non-invasive, tissue-preserving method that is compatible with existing clinical workflows (Figure 1).

Figure 1. Overview of AI-driven methods in radiological cancer imaging. This schematic illustrates the foundational concepts of machine learning with radiomics, deep learning, and large models, and their roles in cancer diagnosis, treatment response, and prognosis prediction.

Among these approaches, machine learning with radiomics involves extraction of predefined features from radiological cancer images through data characterization algorithms. These features capture various aspects of tumoral patterns, such as intensity-based metrics; texture; shape; peritumoral characteristics; first-order statistics; and tumor heterogeneity, volume, and vascular features. An initial step in the radiomics workflow is feature selection, wherein a broad array of features is refined to a smaller, task-specific subset. This process is aimed at enhancing predictive accuracy, minimizing feature redundancy, or improving robustness and stability. The selected features are then fed into machine learning models, such as logistic regression or random forest, for outcome prediction. Developing radiomics models generally does not require extensive training data or high computational resources. Moreover, the model features are derived from fixed mathematical formulas and have interpretable definitions. Accurately delineating tumor boundaries or regions of interest is essential for feature extraction, but this step is normally labor intensive for radiologists.
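
As a concrete illustration, the sketch below mirrors this workflow with scikit-learn: simulated values stand in for radiomics features (in practice extracted with a tool such as pyradiomics from delineated regions of interest), a univariate filter narrows the feature set, and a logistic regression model predicts the outcome. All data and parameter choices here are illustrative, not those of any cited study.

```python
# Minimal radiomics-style workflow: feature selection followed by a simple,
# interpretable classifier. Feature values are simulated; in practice they
# would come from an extractor applied to a delineated region of interest.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 200))      # 108 patients x 200 radiomics features
y = rng.integers(0, 2, size=108)     # binary outcome (e.g., low vs. high grade)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Refine the broad feature set to a small, task-specific subset
selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)

# Fit a logistic regression model on the selected features
model = LogisticRegression(max_iter=1000).fit(selector.transform(X_train), y_train)

probs = model.predict_proba(selector.transform(X_val))[:, 1]
print(f"Validation AUC: {roc_auc_score(y_val, probs):.2f}")
```

Because every step is a fixed, documented transformation, each selected feature retains its mathematical definition, which is the interpretability advantage noted above.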

Deep learning, a type of machine learning, uses neural networks to process data through multiple layers of nonlinear transformations. Each layer creates more abstract features, thus aiding in recognition of patterns in the data. These features are then used to produce outputs such as predicted treatment outcomes or tumor subtype classification. Building a deep learning model involves several steps comprising data collection and preprocessing; selection of a network architecture; and splitting of the data into training, validation, and test sets. During training, the model is continually adjusted according to validation data results. Once optimized, the model can be deployed to evaluate its performance in real-world settings. Unlike radiomics, which requires manual feature extraction, deep learning automatically learns patterns from the data through convolutional operations. However, it is often considered a “black box” because of the lack of explanation for its decision-making process. Furthermore, deep learning requires more computational resources and larger datasets than radiomics for training; however, it can be trained with less detailed manual annotation and can flexibly work with 2D and 3D data. Deep learning is also adaptable in addressing data problems such as data imbalance and hard sample learning, by using approaches including data augmentation, custom loss functions, and optimized models. These approaches enable deep learning to effectively analyze cancer imaging findings.
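
The sketch below outlines these steps in PyTorch for a binary classification task: a small convolutional network, a standard loss (a custom loss could be substituted to address class imbalance), and a training loop. Random tensors stand in for preprocessed image patches and labels; in practice, separate validation data would guide when to stop adjusting the model.

```python
# Minimal CNN training loop sketch: architecture choice, loss, and
# iterative optimization, with random tensors standing in for CT patches.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # e.g., benign vs. malignant

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()       # a weighted loss could handle imbalance

images = torch.randn(32, 1, 64, 64)   # stand-in batch of image patches
labels = torch.randint(0, 2, (32,))

for epoch in range(5):                # in practice, validate after each epoch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```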

Large models, often referred to as foundation models, are deep learning architectures characterized by vast numbers of parameters and complex structures. These models have emerged in areas such as natural language processing and computer vision6,7. Their design, aimed at enhancing both expressive power and predictive accuracy, enables these models to handle highly complex tasks. Compared with the smaller models used in radiomics pipelines or standard deep learning, large models perform better in identifying intricate patterns and generalizing to previously unseen data. However, because building such models normally requires substantial computational resources and large datasets, they are typically developed in industrial settings. In contrast, adapting general-purpose models to specific tasks often requires less data: researchers typically fine-tune or pretrain models on task-specific data to improve performance and effectiveness in specialized scenarios. To further clarify the differences among these 3 AI approaches, a comparison of their characteristics is provided in Table 1.

Table 1. Comparison of the 3 main AI methods applied in cancer imaging

Characteristics | Machine learning with radiomics | Deep learning | Large models
Data requirement | Moderate | Adequate | Enormous
Hardware requirement | Moderate | High | Very high
Annotation | Manual delineation | Flexible | Flexible
Image features | Predefined | Learned automatically | Learned automatically
Performance | Moderate | High | Very high
Explainability | Good | Poor | Poor
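
As an illustration of the adaptation step described above, the sketch below freezes a pretrained torchvision ResNet backbone (a small stand-in for a large foundation model) and trains only a new task-specific head; the model choice and hyperparameters are assumptions for illustration.

```python
# Sketch of adapting a general-purpose pretrained model to a specific task
# by fine-tuning only a new final layer on limited task-specific data.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new task-specific head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                   # stand-in batch of images
y = torch.randint(0, 2, (8,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```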

AI for cancer diagnosis

Accurate interpretation of imaging is crucial in cancer diagnosis. In cancer imaging analysis, tumor heterogeneity and the diverse imaging features of tumors are key factors influencing diagnostic accuracy. By leveraging AI and imaging technologies, radiologists can effectively extract multidimensional tumor information that can aid in patient stratification, molecular diagnostics, metastasis prediction, radiology report generation, medical question answering, and malignancy assessment.

To predict and stratify pathological low- and high-grade bladder cancer according to CT images, Zhang et al. have developed a radiomics-based logistic regression model by using data from 108 patients; this model achieved an area under the receiver operating characteristic curve (AUC) of 0.86 on a validation set of 37 patients8. Additionally, Kniep et al. have used a random forest model with radiomics features to predict tumor type from MR images of brain metastases9. Their model was trained on 526 brain metastases and, in 132 metastases, achieved AUCs ranging from 0.64 for non-small cell lung cancer to 0.82 for melanoma. Furthermore, Fan et al. have trained and tested a logistic regression model using radiomics features extracted from CT scans of 119 patients with stage II colorectal cancer; this model predicted microsatellite instability status with an AUC of 0.7510. In the field of deep learning, researchers have developed and validated convolutional neural networks for estimating malignancy risk, by using 16,077 lung nodules from CT scans in the National Lung Screening Trial. This algorithm showed a high AUC of 0.93 on 883 nodules in the Danish Lung Cancer Screening Trial, achieving a performance comparable to that of thoracic radiologists11. Beyond image-based diagnosis, large AI models also facilitate the generation of diagnostic reports. Researchers have explored the effectiveness of the large language model GPT-4 in generating radiology reports for various anatomical locations. This model was found to accurately generate structured reports based on free-text PET/CT reports for 131 patients with breast cancer, particularly regarding primary lesion size (accuracy: 89.6%) and metastatic lesion details (accuracy: 96.3%)12. Other researchers have developed large models based on both vision and language information, which have achieved promising results in radiology visual question answering, report generation, and summarization tasks for ultrasound and chest X-ray images7,13.
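
For context on how validation figures such as these are computed, the sketch below estimates an AUC with a bootstrap confidence interval from simulated predictions; the sample size and score distribution are arbitrary stand-ins, not data from the cited studies.

```python
# Sketch of computing a validation AUC with a bootstrap confidence interval.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=132)     # e.g., 132 validation cases
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, 132), 0, 1)

auc = roc_auc_score(y_true, y_score)
boot = []
for _ in range(1000):                      # resample cases with replacement
    idx = rng.integers(0, len(y_true), len(y_true))
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```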

Although deep learning and large models have shown satisfactory results in cancer imaging diagnosis, their decision-making processes remain opaque. Unlike machine learning with radiomics, which relies on interpretable features, deep learning and large models require improved interpretability of their extracted features to enable more effective integration into clinical practice. Techniques such as gradient-weighted class activation mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP) can provide insights into the decision-making process by highlighting the most relevant features or regions contributing to the predictions of deep learning models14,15. For instance, Song et al. have applied Grad-CAM to identify the image regions that convolutional neural networks considered significant in distinguishing between benign and malignant thyroid nodules in ultrasound images16. Similarly, Islam et al. have used SHAP to highlight important heatmap regions contributing to the diagnosis of lung abnormalities by a transformer-based model17. Integrating these visualization techniques can enhance the transparency and interpretability of AI models, thereby fostering trust among healthcare professionals and ensuring reliable decision-making in cancer management.
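
A minimal Grad-CAM sketch follows, assuming a standard convolutional backbone: gradients of the predicted class score with respect to the last convolutional feature maps are pooled into channel weights, and the weighted maps are combined into a heatmap. The choice of ResNet-18 and its layer4 block is illustrative.

```python
# Minimal Grad-CAM sketch: gradient-weighted feature maps highlight the
# image regions that drove the model's prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
acts, grads = {}, {}
layer = model.layer4                      # last convolutional block

layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)           # stand-in image
score = model(x)[0].max()                 # score of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
cam = F.relu((weights * acts["v"]).sum(dim=1))       # weighted sum of feature maps
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0]
```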

AI for treatment response evaluation

Treatment responses describe the direct reactions of the body to therapeutic interventions. These responses include specific indicators such as tumor reduction, symptom improvement, and changes in biomarker levels. Evaluating these responses can aid in determining the effectiveness of a patient’s current treatment plan and potentially influence decisions regarding subsequent therapies.

To distinguish controlled from progressive disease, Colen et al. have created an XGBoost model with radiomics features to predict pembrolizumab response in 57 patients with advanced rare cancers enrolled in a phase II clinical trial. After application of the least absolute shrinkage and selection operator (LASSO) for feature selection on pretreatment contrast-enhanced CT scans, the model achieved high accuracy, sensitivity, and specificity (94.7%, 97.3%, and 90%, respectively), as assessed according to RECIST criteria18. Furthermore, Antunovic et al. have developed and validated logistic regression models with radiomics features from PET/CT to predict pathological complete response to neoadjuvant chemotherapy in 79 patients with locally advanced breast cancer19. The models yielded AUC values of 0.70–0.73 and indicated that patients with HER2-positive or triple-negative disease were more likely to have a pathological complete response than those with luminal subtypes. The aforementioned studies used imaging data to predict treatment response. Incorporating additional clinical information, such as demographic details, racial and ethnic background, molecular subtypes, and laboratory results, may further improve the accuracy of pathological complete response prediction20. Compared with machine learning approaches with radiomics, deep learning excels in handling complex, multimodal datasets and consequently offers greater potential for accurate response prediction. For instance, by integrating CT imaging, histopathologic, and genomic features, Vanguri et al. have successfully built a multimodal deep learning model for predicting response to PD-(L)1 blockade, by using data from 247 patients with advanced non-small cell lung cancer21. Their model achieved an AUC of 0.80 and outperformed the investigated unimodal models.
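
The sketch below follows the same general recipe on simulated data: LASSO narrows the radiomics feature set, and a gradient-boosted classifier (scikit-learn's implementation, used here as a stand-in for XGBoost) predicts response; the regularization strength and data are illustrative assumptions.

```python
# Sketch of response prediction: LASSO-based feature selection followed by
# a gradient-boosted classifier (labels would come from RECIST assessment).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(57, 150))         # 57 patients x 150 radiomics features
y = rng.integers(0, 2, size=57)        # controlled vs. progressive disease

lasso = Lasso(alpha=0.01).fit(X, y)
selected = np.flatnonzero(lasso.coef_) # features with nonzero LASSO coefficients
if selected.size == 0:
    selected = np.arange(X.shape[1])   # fall back to all features

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```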

For treatment response evaluation, AI models may perform reasonably well on internal validation but frequently show performance declines on external validation with data from diverse sources. This problem is attributable to variations in population characteristics, imaging parameters, and image quality. Using large-scale data, transfer learning, and self-supervised learning techniques for model training might help improve model generalizability in real-world clinical settings.

AI for cancer prognosis

Cancer prognosis refers to the expected future health outcomes and disease progression after cancer treatment. Physicians rely on prognostic assessments to guide the development of personalized treatment plans, evaluate treatment efficacy, and discuss strategies for managing disease. However, prognosis is influenced by factors including cancer type, cancer stage, treatment modality, patient age, overall health, and specific tumor biomarkers. In clinical practice, physicians face challenges in providing accurate prognostic assessments based on imaging.

AI technologies offer new avenues for determining cancer treatment prognosis from medical imaging findings. For example, Zheng et al. have trained a survival prediction model on CT data and clinical information for 189 patients with stage I-IIIA non-small cell lung cancer who received stereotactic body radiation therapy22. This multimodal model effectively stratified low- and high-risk patients, achieving an AUC of 0.76 on the internal validation set comprising 81 patients and an AUC of 0.64 on the Maastro test set comprising 228 patients. Moreover, Leger et al. have used a radiomics-based random forest model trained on 48 patients to improve the prediction of overall survival in patients with head and neck cancer undergoing CT imaging during treatment23. In a test set of 30 patients, models based on CT scans acquired in the second week of treatment achieved a higher C-index (0.79) than models based on pretreatment scans (0.65), a difference supported by Kaplan-Meier analyses. Similarly, Zhou et al. have developed a logistic regression model based on radiomics features from arterial- and portal venous-phase CT scans to predict early recurrence of hepatocellular carcinoma. The model, trained and tested on data from 215 patients, achieved an AUC of 0.8224. Wei et al. have conducted a study in a cohort of 94 patients to assess whether a logistic regression model with radiomics features could predict 3-year recurrence of advanced ovarian cancer before surgery25. Their radiomics nomogram, using pretherapeutic contrast material-enhanced CT of the abdomen and pelvis for feature extraction, achieved an AUC of 0.85 on the validation set comprising 39 patients. Focusing on stage II and III colorectal cancer, Badic et al. have extracted radiomics features from 136 contrast-enhanced CT scans for model training and used a random forest model to predict recurrence after surgery; the model achieved an AUC of 0.79 on a test cohort of 57 patients26.
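
For readers unfamiliar with these prognostic metrics, the sketch below fits a Cox proportional hazards model on simulated tabular features with the lifelines package and reports the concordance index (C-index); the covariates, follow-up times, and cohort size are invented for illustration.

```python
# Sketch of prognostic modeling: a Cox model on tabular features, evaluated
# with the concordance index (simulated survival data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "feat1": rng.normal(size=189),           # radiomics/clinical covariates
    "feat2": rng.normal(size=189),
    "time": rng.exponential(24, size=189),   # months to event or censoring
    "event": rng.integers(0, 2, size=189),   # 1 = event observed
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)        # higher value = higher risk
print(f"C-index: {concordance_index(df['time'], -risk, df['event']):.2f}")
```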

Numerous studies have focused on using radiomics instead of deep learning or large models for prognostic research, primarily because obtaining prognostic data is challenging, and radiomics can create models by using datasets of moderate size. However, radiomics models encounter clinical challenges because of inconsistencies in tumor segmentation, which result in variable feature extraction and ultimately affect model performance. When model development or validation involves tumor region segmentation, the large model Segment Anything can be applied or fine-tuned for automated segmentation27. Consequently, reproducibility can be increased, and the integration of radiomics and deep learning algorithms can be accelerated. Furthermore, AI in prognosis research has yet to establish quantifiable prognostic markers and remains largely qualitative. Phrases such as “poor prognosis” or “unfavorable outcome” lack the specificity needed to offer clear prognostic insights. Thus, a need persists to develop AI models that can further quantify the effects of clinical factors or imaging features on prognosis, and provide more specific imaging biomarkers before treatment.
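
A minimal sketch of prompt-based segmentation with the published Segment Anything API follows; the checkpoint filename, click coordinates, and blank image are placeholders, and in practice CT slices would first be intensity-windowed and converted to 8-bit RGB.

```python
# Sketch of prompting Segment Anything for tumor delineation.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a windowed CT slice
predictor.set_image(image)

# A single foreground click inside the lesion serves as the prompt
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),                  # 1 = foreground point
)
tumor_mask = masks[np.argmax(scores)]            # best-scoring binary mask
```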

Potential of AI beyond clinical practice

Beyond cancer diagnosis, treatment response evaluation, and prognosis, AI has the potential to revolutionize the radiological cancer imaging field by addressing operational challenges outside direct clinical applications. AI can optimize the allocation of medical resources by predicting patient demand and streamlining imaging protocols28, thereby preventing unnecessary examinations and increasing cost efficiency29. Workflow automation is another critical area in which AI excels, particularly in automating repetitive tasks, extracting unstructured data, and summarizing the literature in radiology analysis30,31. These capabilities not only decrease radiologists’ workload but also minimize turnaround time, thus ensuring timely patient care32. Furthermore, AI can contribute to quality improvement by introducing standardized and consistent methods. For example, by automating complex image processing tasks, AI can minimize the influence of subjective judgment and consequently decrease interobserver variability33. Additionally, AI systems can identify subtle errors that can arise during manual workflows, such as minor discrepancies in lesion measurements or inaccurate use of radiology terms in report writing34,35. These capabilities are particularly valuable in high-volume imaging centers, where the risk of human error increases with the workload. Moreover, AI can serve as a quality control tool by cross-referencing imaging findings with radiology reports and flagging inconsistencies for further review36. This proactive approach has the potential not only to efficiently identify missed findings but also to markedly expedite radiology quality assurance programs. These advancements would collectively enhance the efficiency, accuracy, and reliability of cancer imaging processes, and might pave the way to more sustainable and high-performing healthcare systems.

Conclusions and future perspectives

Herein, we provided an overview of the major AI technologies and their applications in radiological cancer imaging. Although AI has shown promising results in various clinical tasks, substantial room remains for performance improvement.

The success of high-performing AI models depends on access to high-quality datasets. To provide such datasets, standardized acquisition protocols and rigorous imaging quality control processes should be carefully designed and implemented, to ensure generation of consistent, low-noise images that accurately represent clinical conditions. Additionally, multiple experienced clinicians should participate in the data annotation process, including tasks such as lesion delineation and imaging diagnosis. These efforts would enable AI models to effectively learn reproducible and clinically relevant imaging features, while minimizing the influence of noisy labels and reader variability during training.

Beyond using high-quality datasets for modeling, integrating and analyzing heterogeneous, multidimensional clinical data (including pathology, proteomic, and genomic data) is also essential to effectively enable AI-powered cancer diagnosis, treatment response evaluation, and disease prognosis37. The combination of these diverse data modalities might provide a more comprehensive understanding of the underlying biological mechanisms, thus enhancing the predictive accuracy of AI models. Integrating genomic data can reveal tumor-specific mutations, whereas proteomic profiles can uncover dynamic changes in protein expression levels, both of which are highly relevant to individualized treatment planning and outcome prediction. To support this integration, efforts should focus on establishing unified frameworks for feature reduction, alignment, and merging, to efficiently use different types of data.
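
One simple instance of such a framework is late fusion: the sketch below reduces each modality separately (PCA as a stand-in for any feature-reduction method), aligns samples by patient order, and concatenates the reduced features before modeling; the dimensions and data are illustrative.

```python
# Sketch of late fusion for heterogeneous modalities: reduce, align, merge.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
imaging = rng.normal(size=(100, 300))    # radiomics or deep imaging features
genomic = rng.normal(size=(100, 1000))   # e.g., mutation/expression features
y = rng.integers(0, 2, size=100)         # outcome label

# Reduce each modality to a comparable dimensionality, then merge row-wise
fused = np.hstack([
    PCA(n_components=10).fit_transform(imaging),
    PCA(n_components=10).fit_transform(genomic),
])
model = LogisticRegression(max_iter=1000).fit(fused, y)
```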

Additionally, addressing the technical bottlenecks that AI models might encounter in real-world clinical applications is crucial. For instance, ensuring the transferability and generalizability of AI models across diverse data sources remains a major challenge, because of variations in imaging protocols, equipment, and patient demographics. To overcome this challenge, robust domain adaptation techniques must be developed, and standardized benchmarks for cross-institutional validation must be established. Moreover, the development of AI should prioritize privacy protection, by using advanced encryption techniques and federated learning frameworks, to minimize risks associated with data breaches. In addition, addressing data imbalance is essential, to avoid biased model performance and diminished model reliability in underrepresented patient groups or conditions. Furthermore, enhancing real-time processing and inference speed with efficient architectures is important to ensure that AI models can meet the demands of time-sensitive clinical scenarios, and enable faster, more effective decision-making in critical care and emergency settings.
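
To make the federated learning idea concrete, the sketch below implements basic federated averaging (FedAvg) with PyTorch: each simulated site trains a local copy of the model on its own data, and only parameter values are averaged centrally; the linear model and three-site setup are toy assumptions.

```python
# Minimal FedAvg sketch: local training at each institution, followed by
# parameter-wise averaging on a central server (images never leave a site).
import torch
import torch.nn as nn

def local_update(model, data, target, steps=5):
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(data), target).backward()
        opt.step()
    return model.state_dict()

global_model = nn.Linear(20, 2)          # stand-in for an imaging model
sites = [(torch.randn(64, 20), torch.randint(0, 2, (64,))) for _ in range(3)]

for round_ in range(10):
    local_weights = []
    for data, target in sites:           # training stays at each site
        local = nn.Linear(20, 2)
        local.load_state_dict(global_model.state_dict())
        local_weights.append(local_update(local, data, target))
    # Server aggregates by averaging each parameter across sites
    avg = {k: torch.stack([w[k] for w in local_weights]).mean(0)
           for k in local_weights[0]}
    global_model.load_state_dict(avg)
```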

The integration of AI with advanced imaging modalities such as 5T MRI and spectral CT has immense potential for improving diagnostic accuracy and clinical decision-making38,39. For 5T MRI, AI can enhance image reconstruction, reduce noise, and extract novel quantitative biomarkers from high-resolution data, thus providing deeper insights into subtle pathological changes. Similarly, in spectral CT, AI can leverage multi-energy data to achieve precise tissue characterization, material decomposition, and improved lesion differentiation. By harnessing the capabilities of AI, these cutting-edge imaging techniques can be further optimized, to facilitate more accurate diagnoses, personalized treatment planning, and advancements in precision medicine.

To effectively integrate AI into clinical practice, radiologists must gain a foundational understanding of AI, including proficiency in data interpretation and algorithmic processes, as well as understanding the limitations of AI tools, to critically assess their reliability and applicability. Training in machine learning basics, programming, and data science would enhance collaboration with AI developers, whereas knowledge of ethical considerations, such as bias and data privacy, would ensure responsible implementation. Additionally, radiologists should adapt to new workflows that incorporate AI, focusing on tasks requiring human judgment, such as correlating AI findings with clinical context and improving patient communication. These skills would position radiologists as essential intermediaries between AI tools and patient care, in maximizing the potential benefits of the technology for clinical use.

In the future, the advancement of AI in radiology will depend on close collaboration between radiologists and technical experts. Promoting partnerships between these professionals is necessary to ensure that developed AI systems are both technically robust and clinically meaningful.

Funding Statement

This study was funded by grants from the National Natural Science Foundation of China (Grant Nos. 82171932 and 82302180), the Ministry of Science and Technology of China (Grant No. 2024ZD0520002), the Chinese National Key Research and Development Project (Grant Nos. 2021YFC2500402 and 2021YFC2500400), the National Health Commission Capacity Building and Continuing Education Center (Grant No. YXFSC2022JJSJ011), the Tianjin Key Medical Discipline (Specialty) Construction Project (Grant No. TJYXZDXK-010A), and the Scientific Developing Foundation of Tianjin Education Commission (Grant No. 2024KJ182).

Conflict of interest statement

No potential conflicts of interest are disclosed.

Author contributions

Conceived and designed the analysis: Sunyi Zheng, Xiaonan Cui, and Zhaoxiang Ye.

Wrote the paper: Sunyi Zheng and Xiaonan Cui.

References

1. Siegel RL, Giaquinto AN, Jemal A. Cancer statistics, 2024. CA Cancer J Clin. 2024;74:12–49. doi: 10.3322/caac.21820.
2. Cui X, Zheng S, Heuvelmans MA, Du Y, Sidorenkov G, Fan S, et al. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol. 2022;146:110068. doi: 10.1016/j.ejrad.2021.110068.
3. Lotter W, Diab AR, Haslam B, Kim JG, Grisot G, Wu E, et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat Med. 2021;27:244–9. doi: 10.1038/s41591-020-01174-9.
4. Talukder MA, Islam MM, Uddin MA, Akhter A, Hasan KF, Moni MA. Machine learning-based lung and colon cancer detection using deep feature extraction and ensemble learning. Expert Syst Appl. 2022;205:117695.
5. Li C, Wang H, Jiang Y, Fu W, Liu X, Zhong R, et al. Advances in lung cancer screening and early detection. Cancer Biol Med. 2022;19:591–608. doi: 10.20892/j.issn.2095-3941.2021.0690.
6. Reichenpfader D, Müller H, Denecke K. A scoping review of large language model based approaches for information extraction from radiology reports. NPJ Digit Med. 2024;7:222. doi: 10.1038/s41746-024-01219-0.
7. Zhang K, Zhou R, Adhikarla E, Yan Z, Liu Y, Yu J, et al. A generalist vision-language foundation model for diverse biomedical tasks. Nat Med. 2024;30:3129–41. doi: 10.1038/s41591-024-03185-2.
8. Zhang G, Xu L, Zhao L, Mao L, Li X, Jin Z, et al. CT-based radiomics to predict the pathological grade of bladder cancer. Eur Radiol. 2020;30:6749–56. doi: 10.1007/s00330-020-06893-8.
9. Kniep HC, Madesta F, Schneider T, Hanning U, Schönfeld MH, Schön G, et al. Radiomics of brain MRI: utility in prediction of metastatic tumor type. Radiology. 2019;290:479–87. doi: 10.1148/radiol.2018180946.
10. Fan S, Li X, Cui X, Zheng L, Ren X, Ma W, et al. Computed tomography-based radiomic features could potentially predict microsatellite instability status in stage II colorectal cancer: a preliminary study. Acad Radiol. 2019;26:1633–40. doi: 10.1016/j.acra.2019.02.009.
11. Venkadesh KV, Setio AAA, Schreuder A, Scholten ET, Chung K, Wille MM, et al. Deep learning for malignancy risk estimation of pulmonary nodules detected at low-dose screening CT. Radiology. 2021;300:438–47. doi: 10.1148/radiol.2021204433.
12. Chen K, Xu W, Li X. The potential of Gemini and GPTs for structured report generation based on free-text 18F-FDG PET/CT breast cancer reports. Acad Radiol. 2024. doi: 10.1016/j.acra.2024.08.052.
13. Li C, Wong C, Zhang S, Usuyama N, Liu H, Yang J, et al. LLaVA-Med: training a large language-and-vision assistant for biomedicine in one day. Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS 2023); December 10–16, 2023; New Orleans, LA, USA. Curran Associates Inc.; 2023. pp. 28541–64.
14. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision (ICCV); October 22–29, 2017; Venice, Italy. IEEE; 2017. pp. 618–26.
15. Lundberg SM, Lee SI. A unified approach to interpreting model predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17); December 4–9, 2017; Long Beach, CA, USA. Curran Associates Inc.; 2017. pp. 4768–77.
16. Song D, Yao J, Jiang Y, Shi S, Cui C, Wang L, et al. A new XAI framework with feature explainability for tumors decision-making in ultrasound data: comparing with Grad-CAM. Comput Methods Programs Biomed. 2023;235:107527. doi: 10.1016/j.cmpb.2023.107527.
17. Islam MK, Rahman MM, Ali MS, Mahim S, Miah MS. Enhancing lung abnormalities diagnosis using hybrid DCNN-ViT-GRU model with explainable AI: a deep learning approach. Image Vis Comput. 2024;142:104918.
18. Colen RR, Rolfo C, Ak M, Ayoub M, Ahmed S, Elshafeey N, et al. Radiomics analysis for predicting pembrolizumab response in patients with advanced rare cancers. J Immunother Cancer. 2021;9:e001752. doi: 10.1136/jitc-2020-001752.
19. Antunovic L, De Sanctis R, Cozzi L, Kirienko M, Sagona A, Torrisi R, et al. PET/CT radiomics in breast cancer: promising tool for prediction of pathological response to neoadjuvant chemotherapy. Eur J Nucl Med Mol Imaging. 2019;46:1468–77. doi: 10.1007/s00259-019-04313-8.
20. Yoo C, Ahn JH, Jung KH, Kim SB, Kim HH, Shin HJ, et al. Impact of immunohistochemistry-based molecular subtype on chemosensitivity and survival in patients with breast cancer following neoadjuvant chemotherapy. J Breast Cancer. 2012;15:203–10. doi: 10.4048/jbc.2012.15.2.203.
21. Vanguri RS, Luo J, Aukerman AT, Egger JV, Fong CJ, Horvat N, et al. Multimodal integration of radiology, pathology and genomics for prediction of response to PD-(L)1 blockade in patients with non-small cell lung cancer. Nat Cancer. 2022;3:1151–64. doi: 10.1038/s43018-022-00416-8.
22. Zheng S, Guo J, Langendijk JA, Both S, Veldhuis RNJ, Oudkerk M, et al. Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning. Radiother Oncol. 2023;180:109483. doi: 10.1016/j.radonc.2023.109483.
23. Leger S, Zwanenburg A, Pilz K, Zschaeck S, Zöphel K, Kotzerke J, et al. CT imaging during treatment improves radiomic models for patients with locally advanced head and neck cancer. Radiother Oncol. 2019;130:10–17. doi: 10.1016/j.radonc.2018.07.020.
24. Zhou Y, He L, Huang Y, Chen S, Wu P, Ye W, et al. CT-based radiomics signature: a potential biomarker for preoperative prediction of early recurrence in hepatocellular carcinoma. Abdom Radiol. 2017;42:1695–704. doi: 10.1007/s00261-017-1072-0.
25. Wei W, Rong Y, Liu Z, Zhou B, Tang Z, Wang S, et al. Radiomics: a novel CT-based method of predicting postoperative recurrence in ovarian cancer. 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); July 18–21, 2018; Honolulu, HI, USA. IEEE; 2018. pp. 4130–3.
26. Badic B, Da-Ano R, Poirot K, Jaouen V, Magnin B, Gagnière J, et al. Prediction of recurrence after surgery in colorectal cancer patients using radiomics from diagnostic contrast-enhanced computed tomography: a two-center study. Eur Radiol. 2022;32:405–14. doi: 10.1007/s00330-021-08104-4.
27. Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, et al. Segment anything. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV); October 1–6, 2023; Paris, France. IEEE; 2023. pp. 4015–26.
28. Kapoor N, Lacson R, Khorasani R. Workflow applications of artificial intelligence in radiology and an overview of available tools. J Am Coll Radiol. 2020;17:1363–70. doi: 10.1016/j.jacr.2020.08.016.
29. Prabhod KJ. The role of artificial intelligence in reducing healthcare costs and improving operational efficiency. Q J Emerg Technol Innovat. 2024;9:47–59.
30. Kottlors J, Bratke G, Rauen P, Kabbasch C, Persigehl T, Schlamann M, et al. Feasibility of differential diagnosis based on imaging patterns using a large language model. Radiology. 2023;308:e231167. doi: 10.1148/radiol.231167.
31. Kinney R, Anastasiades C, Authur R, Beltagy I, Bragg J, Buraczynski A, et al. The Semantic Scholar open data platform. arXiv: 2301.10140; 2023.
32. Alexander R, Waite S, Bruno MA, Krupinski EA, Berlin L, Macknik S, et al. Mandating limits on workload, duty, and speed in radiology. Radiology. 2022;304:274–82. doi: 10.1148/radiol.212631.
33. Jing X, Wielema M, Monroy-Gonzalez AG, Stams TRG, Mahesh SVK, Oudkerk M, et al. Automated breast density assessment in MRI using deep learning and radiomics: strategies for reducing inter-observer variability. J Magn Reson Imaging. 2024;60:80–91. doi: 10.1002/jmri.29058.
34. Hirsch L, Huang Y, Luo S, Rossi Saccarelli C, Lo Gullo R, Daimiel Naranjo I, et al. Radiologist-level performance by using deep learning for segmentation of breast cancers on MRI scans. Radiol Artif Intell. 2021;4:e200231. doi: 10.1148/ryai.200231.
35. Schmidt RA, Seah JCY, Cao K, Lim L, Lim W, Yeung J. Generative large language models for detection of speech recognition errors in radiology reports. Radiol Artif Intell. 2024;6:e230205. doi: 10.1148/ryai.230205.
36. Wismüller A, Stockmaster L, Vosoughi MA. Re-defining radiology quality assurance (QA): artificial intelligence (AI)-based QA by restricted investigation of unequal scores (AQUARIUS). Pattern Recognit Track. 2022;12101:117–22.
37. Zhang Q, Yang M, Zhang P, Wu B, Wei X, Li S. Deciphering gastric inflammation-induced tumorigenesis through multi-omics data and AI methods. Cancer Biol Med. 2023;21:312–30. doi: 10.20892/j.issn.2095-3941.2023.0129.
38. Guo Y, Lin L, Zhao S, Sun G, Chen Y, Xue K, et al. Myocardial fibrosis assessment at 3-T versus 5-T myocardial late gadolinium enhancement MRI: early results. Radiology. 2024;313:e233424. doi: 10.1148/radiol.233424.
39. Greffier J, Villani N, Defez D, Dabli D, Si-Mohamed S. Spectral CT imaging: technical principles of dual-energy CT and multi-energy photon-counting CT. Diagn Interv Imaging. 2023;104:167–77. doi: 10.1016/j.diii.2022.11.003.
