2025 Jul 29;19:1953. doi: 10.3332/ecancer.2025.1953

Table 2. Current literature available on AI applications in diagnosis, grading and treatment of prostate cancer.

| Study | Study design | Primary outcome | Secondary outcome | Results | References |
|---|---|---|---|---|---|
| **Pathomics** | | | | | |
| Morozov et al [93] | Systematic review & meta-analysis | AI accuracy in differentiating prostate cancer from benign hyperplasia | AI accuracy in determining Gleason grade, and agreement between AI and pathologists | Sensitivity for diagnosing prostate cancer was over 90% (range 87%–100%), while specificity varied between 68% and 99% | [93] |
| PANDA challenge | Prospective | Development of reproducible AI algorithms for Gleason grading using digitized prostate biopsies | Validation of AI algorithms' performance on independent cross-continental cohorts | Agreement with expert uropathologists on the United States and European validation sets: κ = 0.862 (95% CI 0.840–0.884) and κ = 0.868 (95% CI 0.835–0.900) | [96] |
| DeepDx | Prospective cohort | Performance validation of DeepDx for prostate cancer diagnosis and grading on an independent external dataset | Evaluation of DeepDx's value to the general pathologist | DeepDx achieved high accuracy for prostate cancer detection, similar to the original pathology reports, and showed higher concordance | [97] |
| Paige Prostate | Prospective cohort | Diagnostic performance of pathologists reading prostatic core needle biopsies unaided versus with AI assistance | Not applicable | Fewer atypical small acinar proliferation reports, immunohistochemistry studies and second opinions, and less time required for reading and reporting slides | [98] |
| **Radiomics** | | | | | |
| Wang et al [99] | Prospective observational | Prediction of BM in prostate cancer using texture features from mp-MRI | Comparison of predictive performance with PSA level and Gleason score | Texture features from T2-weighted and DCE T1-weighted MRI were strongly associated with BM (p < 0.01) | [99] |
| Li et al [100] | Prospective observational | Development of an AI model using TRUS images to predict prostate cancer | Comparison of the AI model's diagnostic performance with radiologists and clinical models | Better diagnostic efficacy than senior radiologists (AUC 0.667); detected 82.9% of prostate cancer cases versus 55.8% by radiologists | [100] |
| Faiella et al [101] | Systematic review | Evaluation of AI models for LNI detection and prediction in prostate cancer | Comparison of AI model performance with standard nomograms and imaging modalities | MRI-based AI models showed LNI prediction accuracy comparable to standard modalities | [101] |
| **Genomics** | | | | | |
| Decipher, Prolaris, Oncotype Dx study | Systematic review | Evaluation of Decipher, Prolaris and Oncotype Dx for prognostication in localized prostate cancer | Assessment of their impact on treatment decisions and patient outcomes | All three tests met rigorous quality criteria and showed potential clinical utility for prognostication in localized prostate cancer, providing prognostic information beyond clinicopathologic variables | [102] |
| Dadhania et al [103] | Prospective cohort | Development of a DL algorithm to identify ERG rearrangement status in prostate cancer from digitized slides | Not applicable | All models showed similar ROC curves, with AUCs between 0.82 and 0.85; the 20× model had a sensitivity of 75.0% and a specificity of 83.1% | [103] |
| Mena et al [104] | Prospective cohort | Development of a classifier to predict prostate cancer from gene expression data while providing understandable explanations to assist pathologists | Identification of relevant genes for prostate cancer screening | An RF algorithm with majority-class downsampling achieved an average sensitivity of 0.90, specificity of 0.80 and AUC of 0.84; relevant genes included DLX1, MYL9, FGFR, CAV2 and MYLK | [104] |
| **Recurrence and biomarkers** | | | | | |
| Huang et al [105] | Prospective cohort | Development of an AI-powered method for predicting 3-year biochemical recurrence of prostate cancer | Identification of TMEM173, related to the STING pathway, as a potential new prostate cancer biomarker | The AI model achieved an AUC of 0.78 for predicting 3-year biochemical recurrence, outperforming Gleason grade group (AUC 0.62) | [105] |
| Lui et al [106] | Systematic review | Accuracy of AI-based models in predicting biochemical recurrence (BCR) of prostate cancer post-prostatectomy | Comparison of AI models with traditional BCR prediction methods; impact of radiological features on AI performance | AI demonstrated high accuracy, especially when incorporating radiological features, occasionally outperforming traditional prediction methods; however, given the few high-quality studies and limited external validation, further research is needed to confirm reliability and clinical applicability | [106] |
| Eminaga et al [107] | Observational study | AI-based system for predicting recurrence and mortality in prostate cancer using histology images | Comparison of the AI model with existing grading systems; agreement among pathology experts | The AI-based system outperformed existing grading systems and better stratified PCa into four distinct risk groups, with high consensus among pathology experts; AI may aid informed clinical decision-making for PCa patients | [107] |
| **Research and drug discovery** | | | | | |
| CancerOmicsNet | Prospective observational | Development of CancerOmicsNet, a graph neural network predicting the therapeutic effects of kinase inhibitors across various tumors | Not applicable | CancerOmicsNet achieved an AUC of 0.83 in predicting therapeutic effects, outperforming other approaches | [108] |
| AndroPred | Prospective observational | Development of AI algorithms to predict AR inhibitors using a dataset of 2,242 compounds | Validation of predictive models through experimental assays | The DL-based prediction model outperformed the others, with accuracies of 92.18% and 93.05% on the training and test datasets, respectively | [109] |
| **Registry** | | | | | |
| CAPRI-3 | Retrospective observational | Reliability and efficiency of AI-driven patient identification and data collection for a metastatic prostate cancer registry | Not applicable | Completeness and accuracy of automated data extraction were 92.3% or higher, except for date fields and inaccessible data | [110] |
AUC = area under the curve; ROC = receiver operating characteristic
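For readers less familiar with the performance metrics reported across Table 2, the sketch below (illustrative only, pure Python, with made-up labels and scores rather than data from any of the cited studies) shows how sensitivity, specificity, Cohen's κ (the agreement statistic used in the PANDA challenge) and AUC are typically computed for a binary classifier:

```python
# Illustrative computation of the metrics cited in Table 2, on synthetic data.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (e.g. AI vs pathologist)."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    p_expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                     for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

def auc(y_true, scores):
    """AUC = probability a random positive is scored above a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.78, as reported by Huang et al [105], therefore means that in 78% of positive/negative case pairs the model scores the positive case higher.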