Taiwan J Ophthalmol. 2023 May 23;13(2):168–183. doi: 10.4103/tjo.TJO-D-23-00022

Table 3. Summary of studies using visual fields for predicting glaucoma progression

| Year | First author | Aim | Outcome | Dataset | Model | Input | Output | Results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2012 | Goldbaum et al.[63] | ML model (POP) to define VF progression | Percentage of eyes progressing | 2085 subjects | POP model based on VIM versus GPA, MD and VFI | VF | VF progression | POP has similar performance to GPA/MD/VFI in glaucoma suspects but performs better in subjects with glaucoma and those with documented glaucoma |
| 2014 | Yousefi et al.[64] | Define hierarchical approach to VF analysis | Percentage of eyes progressing | 939 eyes (677 subjects) abnormal, 1146 eyes (721 subjects) normal | ML models (GEM, VIM) versus GPA, MD and VFI | VF | VF progression | GEM: 28.9%, VIM: 26.6%, GPA: 19.7%, MD: 16.9%, VFI: 14.1% |
| 2018 | Yousefi et al.[65] | Predict progression using different methods | Time to progression | 3 datasets: 2085 eyes to identify patterns; no-change/test-retest data: 133 eyes (10 tests over 10 weeks); 270 eyes to validate | GEM versus conventional models | VF | Time to progression | Time to detect progression in 25% of eyes: MD: 5.2 (95% CI: 4.1–6.5) years; region-wise: 4.5 (4.0–5.5) years; point-wise: 3.9 (3.5–4.6) years; GEM: 3.5 (3.1–4.0) years. With more visits added: 6.6 (5.6–7.4), 5.7 (4.8–6.7), 5.6 (4.7–6.5) and 5.1 (4.5–6.0) years for global, region-wise, point-wise and GEM, respectively |
| 2019 | Wen et al.[17] | Predict future HVF | PMAE | 32,443 VF | CNN (Cascade-5) | VF raw sensitivity values | 52-point raw sensitivity at 0.5–5.5 years | Overall pointwise MAE: 2.47 dB (95% CI: 2.45–2.48) |
| 2019 | Berchuck et al.[73] | Predict rate of progression | MAE | 29,161 VF | VAE | VF | VF | VAE predicted higher proportions of progression than MD at 2/4 years (25%–35% vs. 9%–15%); VAE pointwise error at visit 8 was also lower (5.14 dB vs. 8.07 dB) |
| 2019 | Wang et al.[70] | VF progression | Kappa, accuracy | 12,217 eyes, 7360 patients | Archetypal analysis[67] | 5 reliable VF, 5 years of follow-up, 6-month intervals | VF progression | In the clinical validation cohort (397 eyes, 27.5% with confirmed progression), the agreement (kappa) and accuracy (mean of hit rate and correct rejection rate) of the archetype method (0.51 and 0.77) significantly (P<0.001 for all) outperformed AGIS (0.06 and 0.52), CIGTS (0.24 and 0.59), MD slope (0.21 and 0.59) and PoPLR (0.26 and 0.60) |
| 2019 | Park et al.[74] | Predict future HVF | RMSE | Training: 1408 eyes; test: 281 eyes | RNN | Five consecutive VF | 52-point TDV values | RNN outperformed OLR, with an overall prediction error (RMSE) of 4.31±2.54 dB versus 4.96±2.76 dB for the OLR model (P<0.001) |
| 2020 | Yousefi et al.[71] | AI dashboard for VF progression | SN, SP | 31,591 VF, 8077 subjects | Combination of PCA and t-distributed stochastic neighbor embedding (tSNE) | VF | VF progression | SP for detecting “likely nonprogression” was 94%; SN for detecting “likely progression” was 77% |
| 2021 | Saeedi et al.[66] | ML classifiers for VF progression | Accuracy, SN, PPV, class bias | 90,713 VF, 13,156 eyes | ML classifiers versus conventional progression algorithms | VF | VF progression | Six ML classifiers: logistic regression, random forest, extreme gradient boosting, support vector classifier, CNN and fully connected neural network. Accuracy: 87%–91%; SN: 0.83–0.88; SP: 0.92–0.96 |
| 2021 | Shuldiner et al.[67] | ML to predict VF progression | AUC | 175,786 VF, 22,925 initial VF, 14,217 subjects, >5 reliable VF | Various ML classifiers, including SVM, ANN, random forest and naive Bayes | VF | VF progression | SVM (AUC: 0.72 [95% CI: 0.70–0.75]) versus ANN (AUC: 0.72), random forest (AUC: 0.70), logistic regression (AUC: 0.69) and naive Bayes (AUC: 0.68). Older age and higher PSD were associated with progression. No difference between models using 2 VF versus 1 VF |
| 2022 | Eslami et al.[13] | CNN/RNN for estimating VF changes | PMAE | 24-2 VF; CNN: 54,373 VF, 7472 subjects; RNN: 24,430 VF, 1809 subjects | CNN and RNN | VF | 52-point VF values | CNN: 2.21–2.24 dB; RNN: 2.56–2.61 dB. Large errors in identifying eyes with worsening; failed to outperform a no-change model |
| 2022 | Chen et al.[75] | VF progression | Progression yes/no | 7428 eyes, 3871 patients | Elastic-net Cox regression model | First VF, age, gender, laterality and MD at baseline | Sample size required for a trial of a given effect size | 13% progressed over 5 years. For a trial length of 3 years and an effect size of 30%, the number of patients required was 1656 (95% CI: 1638–1674), 903 (95% CI: 884–922) and 636 (95% CI: 625–646) for the entire cohort, the subgroup and the model-selected patients, respectively |
| 2022 | Yousefi et al.[7] | VF progression | Pattern of loss | 2231 VF, 205 eyes, 176 OHTS subjects over 16 years | Deep archetypal analysis[68] | VF | Pattern of loss | 18 machine-identified patterns of VF loss were similar to 13 expert-identified patterns. The most prevalent expert-identified patterns included partial arcuate, paracentral and nasal step defects; the most prevalent machine-identified patterns included temporal wedge, partial arcuate, nasal step and paracentral defects |
| 2022 | Shon et al.[72] | VF progression by AI versus linear models | AUROC | 9212 eyes, 6047 subjects, >4 years | VF block-based CNN | Three VF as a 3D tensor | VF progression over 3 years | CNN: AUROC 0.864, SN 0.42, SP 0.95; PLR: AUROC 0.611, SN 0.28, SP 0.84 |

AGIS=Advanced Glaucoma Intervention Study scoring, AI=Artificial intelligence, ANN=Artificial neural network, AUC/AUROC=Area under the receiver operating characteristic curve, CI=Confidence interval, CIGTS=Collaborative Initial Glaucoma Treatment Study scoring, CNN=Convolutional neural network, GEM=Gaussian mixture model-expectation maximization, GPA=Glaucoma progression analysis, HVF=Humphrey VF, MAE=Mean absolute error, MD=Mean deviation, OHTS=Ocular Hypertension Treatment Study, OLR=Ordinary linear regression, PCA=Principal component analysis, PLR=Pointwise linear regression, PMAE=Pointwise MAE, POP=Progression of patterns, PoPLR=Permutation of pointwise linear regression, PPV=Positive predictive value, PSD=Pattern standard deviation, RMSE=Root mean square error, RNN=Recurrent neural network, SN=Sensitivity, SP=Specificity, SVM=Support vector machine, TDV=Total deviation values, tSNE=t-distributed stochastic neighbor embedding, VAE=Variational auto-encoder, VF=Visual field, VFI=Visual field index, VIM=Variational Bayesian independent component analysis mixture model
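
Several studies in the table above are compared on pointwise error metrics (PMAE, RMSE) and against trend-based baselines such as pointwise linear regression (PLR). As a purely illustrative aid, the minimal Python sketch below shows how such quantities can be computed for 52-point 24-2 visual fields; the synthetic data, array shapes and function names are assumptions for illustration and do not come from any of the cited studies.

```python
# Minimal sketch (not from the reviewed studies): pointwise error metrics
# (PMAE, RMSE) and a simple pointwise linear regression (PLR) slope for
# 52-point visual fields. All data and names are synthetic/illustrative.
import numpy as np


def pointwise_mae(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Pointwise mean absolute error (dB) over the 52 test locations."""
    return float(np.mean(np.abs(predicted - observed)))


def pointwise_rmse(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Root mean square error (dB) over the 52 test locations."""
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))


def plr_slopes(series: np.ndarray, years: np.ndarray) -> np.ndarray:
    """Pointwise linear regression: one sensitivity slope (dB/year) per location.

    series: (n_visits, 52) array of threshold sensitivities in dB
    years:  (n_visits,) array of follow-up times in years
    """
    x = np.column_stack([years, np.ones_like(years)])   # design matrix [time, intercept]
    coef, *_ = np.linalg.lstsq(x, series, rcond=None)   # least-squares fit per location
    return coef[0]                                       # slopes, shape (52,)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    observed = rng.uniform(20, 35, size=52)              # synthetic follow-up VF (dB)
    predicted = observed + rng.normal(0, 2.5, size=52)   # synthetic model output
    print(f"PMAE: {pointwise_mae(predicted, observed):.2f} dB")
    print(f"RMSE: {pointwise_rmse(predicted, observed):.2f} dB")

    years = np.arange(6) * 0.5                           # six visits over 2.5 years
    series = 30 - 0.8 * years[:, None] + rng.normal(0, 1, (6, 52))
    print("Median PLR slope:",
          round(float(np.median(plr_slopes(series, years))), 2), "dB/year")
```

In practice, such errors would be averaged over held-out eyes and prediction horizons, which is how the PMAE and RMSE figures in the table are reported.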
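Several other rows (the GEM, archetypal analysis and PCA + tSNE dashboard entries) rely on unsupervised dimensionality reduction of VF vectors to expose patterns of loss. The sketch below, again using synthetic data and illustrative parameters rather than the pipelines actually reported in those studies, shows a PCA-then-tSNE embedding of 52-point fields of the kind such a dashboard would visualize.

```python
# Minimal sketch (an assumption, not the published pipeline): PCA followed by
# t-SNE on a set of 52-point VF vectors, producing 2-D coordinates for a map.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
vfs = rng.uniform(0, 35, size=(500, 52))     # synthetic 52-point VFs (dB)

pcs = PCA(n_components=20, random_state=0).fit_transform(vfs)        # compress/denoise
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(pcs)                   # 2-D embedding
print(embedding.shape)                        # (500, 2) coordinates for plotting
```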
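Finally, the sequence-based predictors (the RNN of Park et al. and the VF-block CNN of Shon et al.) take several consecutive fields as input and regress future sensitivities. The toy model below is only a schematic stand-in, with assumed layer sizes and names, for how a recurrent network can map five consecutive 52-point fields to one predicted follow-up field; it is not the published architecture of either study.

```python
# Minimal sketch (illustrative only): a small recurrent network mapping a
# sequence of five 52-point VFs to a predicted follow-up 52-point VF.
import torch
from torch import nn


class VFSequenceModel(nn.Module):
    def __init__(self, n_points: int = 52, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_points, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_points)   # regress the next field's sensitivities

    def forward(self, vf_sequence: torch.Tensor) -> torch.Tensor:
        # vf_sequence: (batch, 5 visits, 52 points)
        _, (h_n, _) = self.rnn(vf_sequence)
        return self.head(h_n[-1])                  # (batch, 52) predicted dB values


model = VFSequenceModel()
dummy = torch.rand(8, 5, 52) * 35                  # synthetic batch of VF sequences (dB)
print(model(dummy).shape)                          # torch.Size([8, 52])
```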