npj Digit Med. 2025 Dec 22;9:62. doi: 10.1038/s41746-025-02226-5

Fig. 5. Examples of algorithmic image analyses.

a AI-driven texture analysis distinguishes advanced-stage lung adenocarcinomas with and without PD-L1 expression on CT, despite no significant visual difference being apparent to radiologists. Images show a PD-L1-positive lesion from multiple views (top) and undergoing region-of-interest segmentation for feature extraction. Key discriminative features (GLCM angular second moment and various GLRLM metrics) indicated more homogeneous, small-scale, high-attenuation patterns in PD-L1-positive lesions, although none was associated with any qualitative imaging feature. Reproduced with permission from doi:10.1111/1759-7714.13352.

b Deep learning AI trained on T1-weighted MRI scans achieves 95% accuracy (AUC = 0.98) in distinguishing Parkinson's disease from healthy controls. Example control (top) and Parkinson's disease (bottom) images appear grossly similar. Class activation maps (right) highlight the substantia nigra pars compacta, consistent with known dopaminergic pathology. Reproduced with permission from doi:10.3390/diagnostics10060402.

c AI-driven texture analysis detects myocardial infarction (MI) on noncontrast low-dose CT scans, even when no abnormality is visually apparent (top left). The contrast-enhanced scan (top middle) shows hyperdense tissue (white arrow) consistent with MI. A region of interest was segmented over the left ventricle (top right), and the coronary angiogram indicates coronary artery disease (bottom left). On the noncontrast scans, key radiomic features (GLCM and autoregressive coefficients) captured fine textural changes (i.e., abundant, small-scale variations in intensity) that were undetectable by the radiologists in the study but indicative of MI. The bottom right panel depicts a parametric map of a GLCM texture feature that differed significantly between control and MI. Reproduced with permission from doi:10.1097/RLI.0000000000000448.
d Deep learning AI trained on UK Biobank non-dilated fundus images predicted patient sex with 87% accuracy on internal validation and 79% on external validation. Saliency maps revealed the model's reliance on subtle, pixel-level cues not apparent to clinicians. Reproduced with permission from doi:10.1038/s41598-021-89743-x.

e AI trained on MR scans (left) paired with histopathology results (right) used radiomics to classify medulloblastomas into the four main molecular subgroups with 88–98% accuracy. Feature selection identified shape, first-order intensity, and texture-based descriptors (e.g., GLCM correlation, run-length nonuniformity) as key predictors. This approach outperformed the accuracy of conventional MRI-based assessments reported elsewhere [74]. Reproduced with permission from doi:10.1148/radiol.212137.

Abbreviations: GLCM, gray-level co-occurrence matrix; GLRLM, gray-level run-length matrix; PD-L1, programmed death-ligand 1.