Scientific Reports. 2020 May 22;10:8565. doi: 10.1038/s41598-020-65417-y

Using Artificial Intelligence and Novel Polynomials to Predict Subjective Refraction

Radhika Rampat 1,#, Guillaume Debellemanière 1,#, Jacques Malet 1, Damien Gatinel 1
PMCID: PMC7244728  PMID: 32444650

Abstract

This work aimed to use artificial intelligence to predict subjective refraction from wavefront aberrometry data processed with a novel polynomial decomposition basis. Subjective refraction was converted to power vectors (M, J0, J45). Three gradient boosted trees (XGBoost) algorithms were trained to predict each power vector using data from 3729 eyes. The model was validated by predicting subjective refraction power vectors of 350 other eyes, unknown to the model. The machine learning models were significantly better than the paraxial matching method for producing a spectacle correction, resulting in a mean absolute error of 0.301 ± 0.252 Diopters (D) for the M vector, 0.120 ± 0.094 D for the J0 vector and 0.094 ± 0.084 D for the J45 vector. Our results suggest that subjective refraction can be accurately and precisely predicted from novel polynomial wavefront data using machine learning algorithms. We anticipate that the combination of machine learning and aberrometry based on this novel wavefront decomposition basis will aid the development of refined algorithms which could become a new gold standard to predict refraction objectively.

Subject terms: Imaging and sensing, Machine learning

Introduction

Globally, it is estimated that 153 million people aged 5 years or above are visually impaired due to uncorrected refractive errors1. The ability to automatically refract a patient and provide a spectacle prescription equivalent to the time-consuming current gold standard of subjective refraction is an elusive goal that has intrigued many ophthalmic clinicians and researchers2–4.

One such automated and objective method is optical wavefront sensing using aberrometry, which allows mathematical reconstruction and analysis of the lower and higher order monochromatic aberrations of the eye. This has led many to believe that this objective method has the potential to become the new standard for optimizing the correction of refractive errors by converting aberrometry data to accurate sphero-cylindrical refractions5–7.

Though several small-sample studies have shown promising results in terms of accuracy and precision of objective refraction using methods related to wavefront analysis5–13, to date no study has produced a validated method that can be used to prescribe a spectacle correction. It was found that results from the aberrometer, autorefractor and subjective refraction, though comparable with each other, were not accurate enough to prescribe spectacles directly from either instrument14. A recent publication found that a visual image quality metric could predict subjective refraction in myopic eyes but not in habitually undercorrected hyperopic eyes, though the data set was again small15. Variability in the gold standard subjective refraction measurements themselves was also thought to be a source of poor precision7.

The ocular wavefront error is most commonly described by the Zernike polynomials16. To satisfy orthogonality constraints with the low order modes, some higher order Zernike polynomials contain low order terms in their analytical expression, leading to a lack of accuracy when predicting the sphero-cylindrical refraction17. It is known that conventional therapies such as spectacles or contact lenses correct only the lower-order aberrations, but the presence of higher-order aberrations influences the prescription itself12. An important finding by Cheng et al.6 showed that the subjective judgment of best focus does not minimize RMS wavefront error (Zernike defocus = 0), nor create paraxial focus (Seidel defocus = 0), but makes the retina conjugate to a plane between these two. The levels of spherical aberration (Z₄⁰) and secondary astigmatism (Z₄⁻², Z₄²) influenced the levels of defocus and primary astigmatism that produced the best visual performance. These objective metrics were tested on the assumption that the Zernike polynomial decomposition produces a clear distinction between the low and high order components of the wavefront error. This assumption could explain the poor correlation between subjective and objective refraction, especially in the presence of large amounts of higher order aberration5–7,9,11,12,18,19.

A new series of polynomials, labeled LD/HD (Low Degree/High Degree), has been proposed to provide a more mathematically relevant separation of the higher and lower modes20. In this decomposition, the normalized higher order modes are devoid of low order terms and are mutually orthogonal among themselves, although not orthogonal to the lower order modes. With this approach, the low order wavefront component is equal to the paraxial curvature matching of the wavefront map.

Machine learning is already in use in ophthalmology for image analysis in medical retina21–23 and glaucoma (visual fields and disc photographs)24,25, as well as in recent developments for diagnoses including retinopathy of prematurity26. It is also used in regression tasks, notably in IOL calculations27. Deep learning has also been applied to predict refractive error from fundus images and other image analysis techniques28,29. Attempts to predict subjective refraction from Zernike polynomials have also been made using a multilayer perceptron with two hidden layers30.

Our aim was to build and evaluate a set of predictive machine learning models to objectively refract a patient accurately and precisely, using wavefront aberrometry with the LD/HD polynomial decomposition, and to evaluate the relative importance of each polynomial in the prediction process for each vector.

Results

Group comparability

Patient demographics are presented in Table 1. The training set and test set were comparable in terms of patient age, sex ratio, laterality, mean refractive spherical equivalent and mean refractive cylinder.

Table 1.

Patient demographics in the training set and test set. A t-test was performed to test for group comparability. Abbreviations used: Spherical Equivalent (SE), Cylinder (Cyl) and number (n). SE and Cyl are in Diopters (D).

                   Training set                         Test set                             p value
n eyes             3729                                 350
n patients         1809                                 193
Female %           57.1%                                58.9%                                0.71
Right Eye %        49.7%                                49.4%                                 1.00

                   Mean     SD      Min.     Max.       Mean     SD      Min.     Max.
Age                36.03    11.24   18.10    72.40      36.35    11.48   20.00    66.00      0.71
Refractive SE      −1.89    2.54    −6.75    6.13       −1.96    2.58    −6.63    4.88       1.00
Refractive Cyl     −0.81    0.90    −6.00    0.00       −0.87    0.94    −5.25    0.00       1.00

Prediction performance of the different methods

The performance of the prediction methods is presented in Table 2. Figure 1 illustrates the prediction performances of the different approaches for the M vector and Fig. 2 illustrates those for the J0 and J45 vectors. Statistical testing for differences between the various prediction methods is presented in Table 3. The XGBoost models using all the polynomials resulted in a Mean Absolute Error of 0.30 ± 0.25 Diopters (D) for the M vector, 0.12 ± 0.09 D for the J0 vector and 0.09 ± 0.08 D for the J45 vector, whilst the paraxial matching method resulted in a Mean Absolute Error of 0.40 ± 0.35 D for the M vector, 0.17 ± 0.14 D for the J0 vector and 0.14 ± 0.10 D for the J45 vector. The XGBoost models using only the low-degree polynomials resulted in a Mean Absolute Error of 0.35 ± 0.29 D for the M vector, 0.16 ± 0.14 D for the J0 vector and 0.12 ± 0.10 D for the J45 vector. Bland-Altman plots showed good agreement between subjective refraction and the predictions obtained with the machine learning models, with no systematic error depending on the degree of refractive error (Fig. 4). Paired t-tests were not significant.

Table 2.

Absolute Prediction Error, Prediction Error (Accuracy) and Precision were evaluated for the three methods (Paraxial Matching, XGBoost using low order polynomials only, XGBoost using all available polynomials) for each vector (M, J0 and J45). All values are in Diopters (D). P values for the comparisons between methods are presented in Table 3. Abbreviations used: Low Order (LO), Low and High Order (LO/HO) and Paraxial Matching (PM).

Prediction Method        Absolute Prediction Error         Prediction Error (Accuracy)        Precision
                         Mean   SD     Min.   Max.         Mean    SD     Min.    Max.
PM - M                   0.40   0.35   0.00   1.66          0.06   0.53   −1.66   1.66        1.06
XGB (LO only) - M        0.35   0.29   0.00   1.50         −0.02   0.46   −1.43   1.50        0.91
XGB (LO/HO) - M          0.30   0.25   0.00   1.41          0.01   0.39   −1.14   1.41        0.78
PM - J0                  0.17   0.14   0.00   0.69         −0.05   0.22   −0.69   0.66        0.44
XGB (LO only) - J0       0.16   0.14   0.00   0.74          0.00   0.22   −0.74   0.71        0.43
XGB (LO/HO) - J0         0.12   0.09   0.00   0.44         −0.01   0.15   −0.44   0.43        0.30
PM - J45                 0.14   0.10   0.00   0.55          0.02   0.17   −0.55   0.45        0.34
XGB (LO only) - J45      0.12   0.10   0.00   0.61          0.01   0.15   −0.61   0.59        0.31
XGB (LO/HO) - J45        0.09   0.08   0.00   0.49          0.00   0.13   −0.49   0.43        0.25

Figure 1.


Probability density function (Gaussian kernel density estimate) of the spherical (M) prediction error for the 3 methods studied. We compare paraxial fitting with the low degree LD/HD polynomials (red), the XGBoost model using low degree polynomials only (green) and the XGBoost model using all aberrations (blue). The density of accurate predictions is highest with the latter.

Figure 2.


Scatter plot showing the J0 vector prediction error on the X-axis and the J45 vector prediction error on the Y-axis, with the corresponding 95% confidence ellipses for the 3 methods studied. We compare paraxial fitting with the low degree LD/HD polynomials (red), the XGBoost model using low degree polynomials only (green) and the XGBoost model using all aberrations (blue). The black cross marks the (0,0) coordinate. Precision is best with the latter method.

Table 3.

Pairwise statistical comparison of the different prediction methods. Mean Absolute Errors and Mean Errors were compared using a Wilcoxon signed-rank test and Precision differences were compared using the Levene test for equal variances. Abbreviations used: Low (L), High (H), Paraxial Matching (PM).

Prediction Method      Mean Absolute Error   Mean Error (Accuracy)   Precision
M vector prediction
XGB L & H / PM         p < 0.0001            p < 0.001               p < 0.0001
XGB L / PM             p < 0.0001            p < 0.0001              p = 0.06 (NS)
XGB L / XGB L & H      p < 0.0001            p < 0.001               p = 0.02 (NS)
J0 vector prediction
XGB L & H / PM         p < 0.0001            p < 0.0001              p < 0.0001
XGB L / PM             p = 0.05 (NS)         p < 0.0001              p = 0.82 (NS)
XGB L / XGB L & H      p < 0.0001            p = 0.03 (NS)           p < 0.0001
J45 vector prediction
XGB L & H / PM         p < 0.0001            p = 0.03 (NS)           p < 0.0001
XGB L / PM             p < 0.0001            p = 0.03 (NS)           p = 0.003
XGB L / XGB L & H      p < 0.0001            p = 0.48 (NS)           p = 0.003

Figure 4.


Bland-Altman diagrams showing the agreement between subjective refraction and the predictions made using the XGBoost models trained with all available aberrations, for M (a), J0 (b) and J45 (c). No statistically significant difference was found using a one-sample t-test for each vector prediction.

Pairwise prediction methods comparison

The XGBoost models using all the polynomials performed statistically better than Paraxial matching for every vector and every metric, except for accuracy for the J45 vector prediction. They also performed better than the XGBoost models trained with low-degree polynomials only, although the difference was not significant for precision in predicting the M vector and accuracy in predicting the J0 and J45 vectors.

Feature importance

SHAP value analysis for the three XGBoost models trained with the full set of polynomials is presented in Fig. 3. It showed that G₂⁰ (defocus) was by far the most influential feature for predicting the M vector, with G₄⁰ (primary spherical aberration) being the second most important feature. The bottom two graphs demonstrate that G₂² (vertical astigmatism) and G₄⁻² (oblique secondary astigmatism) were the most important features for predicting the J0 vector, while G₂⁻² (oblique astigmatism) and G₄² (vertical secondary astigmatism) were the most important features for predicting the J45 vector.

Discussion

The machine learning approach using the LD/HD polynomials was more effective at predicting the results of conventional sphero-cylindrical refraction from wavefront aberrations than the paraxial matching method previously used by Thibos et al.7. Interestingly, the XGBoost models trained using low-order aberrations only also proved more accurate than paraxial matching. This could suggest that those low-order polynomials interact, in some circumstances, in a more complex way than previously thought. The best precision and accuracy were obtained when all the novel polynomial coefficients were used as predictors, demonstrating the significant influence of the higher order aberrations on the spectacle correction.

Gradient boosting creates new models that predict the residual errors of the prior models during the training process; the models are then used together to predict the target value. XGBoost is an implementation of gradient boosted trees focused on performance and computational efficiency. It can perform both regression and classification tasks. It was chosen because of its recognized performance and its resistance to overfitting31.
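As a schematic illustration of this residual-fitting principle, the following toy sketch (using scikit-learn decision trees rather than the XGBoost implementation itself, with purely illustrative hyperparameters) fits each new tree to the residuals of the current ensemble under a squared-error loss:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def toy_gradient_boosting(X, y, n_trees=100, learning_rate=0.1, max_depth=2):
    """Fit a toy gradient boosted ensemble for squared-error regression:
    each new tree is trained on the residuals of the current prediction."""
    base = float(np.mean(y))                  # initial prediction: mean target value
    prediction = np.full(len(y), base)
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction            # negative gradient of the squared error
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return base, trees

def toy_predict(base, trees, X, learning_rate=0.1):
    """Sum the (shrunken) contributions of every tree on top of the base value."""
    return base + learning_rate * sum(tree.predict(X) for tree in trees)
```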

Regarding feature importance, G₂⁰ (defocus) was unsurprisingly the most influential feature for predicting the M vector, with G₄⁰ (primary spherical aberration) being the second most important feature. One interesting finding was that G₄⁻² (oblique secondary astigmatism) was the second most important feature for predicting J0, and G₄² (vertical secondary astigmatism) the second most important feature for predicting J45, whereas the inverse would be more intuitive. This demonstrates the value of the machine learning approach, which allows us to discover new patterns and relationships between predictors without relying on previous assumptions.

Our results confirm that, among the higher order coefficients, the 4th order aberrations predominantly influence the sphero-cylindrical refraction, as has been shown previously6. Because the LD/HD higher order modes are devoid of defocus terms (radial degree 2), they unambiguously confirm the influence of the radial degree 4 components of the wavefront error on the sphero-cylindrical refraction.

The predictive influence of the variables used in the model does not explain their exact role, and that is a weakness of such machine learning algorithms: interpretability and model comprehension are limited by the large number of decision trees and by their complexity and depth.

Of note, we did not test our method for repeatability. However, it relies solely on the OPD-Scan III output, and this device has already shown very good repeatability32–35.

Our study had some unavoidable limitations, among which is accommodation. We designed the study using undilated refraction, mirroring the real-life clinical environment in which spectacle correction is provided to adults, as well as preserving data volume. We did not test children or elderly patients, so we cannot generalize to these groups. By virtue of the technique, it is not possible to objectively refract patients with strabismus, corneal scarring, cataract or vitreous opacity that would preclude clear wavefront analysis.

Precision may also be masked by the imprecision of the gold standard of subjective refraction; of note, the examiner was aware of the autorefraction result. We hope our study results will enable the future development of machine learning algorithms based on the LD/HD polynomials and objective refraction techniques, to prescribe glasses efficiently not only to adults but also to children and vulnerable adults, without the need for their input or prolonged cooperation.

Methods

Patients and dataset constitution

This study was approved by the Institutional Review Board of the Rothschild Foundation and followed the tenets of the Declaration of Helsinki. Informed consent was obtained from all participants. A total of 2890 electronic medical records of patients (6397 eyes) evaluated for refractive surgery at the Laser Vision Institute Noémie de Rothschild (Foundation Adolphe de Rothschild Hospital, Paris) were retrieved and the data of consenting patients were analyzed. We excluded patients with strabismus and any other ocular abnormality except ametropia. After data cleaning, eyes with a subjective refraction and a valid wavefront aberrometry examination were randomly split into a 350-eye test set and a training set, with no crossover of data from the same patient between the two sets. The medical records of the eyes in the test set were manually reviewed to ensure data quality, leaving 3729 eyes for the training set.
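For illustration, a patient-level split of this kind can be enforced with a group-aware splitter. The sketch below is only an assumption about how it could be done: the 'patient_id' column name and the 10% test fraction are illustrative, whereas the study's actual test set contained 350 eyes from 193 patients.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(df: pd.DataFrame, test_fraction: float = 0.1, seed: int = 0):
    """Split a per-eye DataFrame so that no patient contributes data to both sets.

    df must contain a 'patient_id' column; grouping by patient keeps both eyes
    of a patient on the same side of the split."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_fraction, random_state=seed)
    train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
    return df.iloc[train_idx], df.iloc[test_idx]
```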

Aberrometry

Wavefront analysis was obtained using the OPD-Scan III (Nidek, Gamagori, Japan). The aberrometer was specially configured to run beta software incorporating the new series of LD/HD polynomials, denoted Gₙᵐ using the same double-index scheme as the Zernike polynomials. The wavefronts were decomposed up to the 6th order. This cut-off is beyond the number of polynomials determined in 2000 by the Vision Science and its Applications (VSIA) standards task force to be necessary to describe the higher order aberrations of the human eye with sufficient accuracy36, and it applied to the paraxial matching analysis as well as to the machine learning approach. The first three polynomials (piston, tilt, tip) were removed from the features because of their low relevance to this work. Defocus, vertical astigmatism and oblique astigmatism constituted the low order polynomial group, and all the others constituted the high order polynomial group. A 4 mm pupil disk diameter was chosen to obtain the coefficients, and any pupil smaller than 4 mm during the acquisition of the wavefront with the OPD-Scan III was an exclusion criterion. The 4 mm analysis cut-off was used because it is close to the mean physiological photopic pupil diameter reported in different studies37–39. Our results may not reflect the results that could be found using very large or very small pupils.
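For illustration only, the two feature groups described above could be assembled as follows; the column names (e.g. 'G2_0' for defocus) and the 'coeffs' DataFrame are assumptions, not the device's actual export format:

```python
import pandas as pd

def build_feature_sets(coeffs: pd.DataFrame):
    """Split LD/HD coefficient columns (up to the 6th order, 4 mm pupil) into the
    low order feature set and the full feature set, dropping piston, tilt and tip."""
    excluded = ["G0_0", "G1_-1", "G1_1"]        # piston, tilt, tip: not used as features
    low_order = ["G2_0", "G2_-2", "G2_2"]       # defocus and the two astigmatism terms
    features = coeffs.drop(columns=excluded, errors="ignore")
    return features[low_order], features        # (LO only, LO/HO)
```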

Subjective refraction

Corresponding non-cycloplegic subjective refractions conducted on the same day by an experienced optometrist were analyzed. The maximum plus rule was used to the nearest 0.25 D to minimize accommodation and maximize the depth of focus7.

Power vector analysis

Each refraction in sphere S, cylinder C and axis α format was transformed into the 3D dioptric vector space (M, J0, J45), in which the three components are orthogonal. Refraction data sets were vectorized using standard power vector analysis40, with the components M, J0 and J45 given by Eqs. (1), (2) and (3).

$M = S + \frac{C}{2}$  (1)
$J_0 = -\frac{C}{2}\cos(2\alpha)$  (2)
$J_{45} = -\frac{C}{2}\sin(2\alpha)$  (3)
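A minimal Python sketch of this conversion is given below; the function name is an assumption, and the sign convention follows the power vector formulae of Thibos et al.40 as reconstructed above:

```python
import numpy as np

def to_power_vectors(sphere, cylinder, axis_deg):
    """Convert a sphero-cylindrical refraction (S, C, axis in degrees)
    into power vector components (M, J0, J45), Eqs. (1)-(3)."""
    alpha = np.deg2rad(axis_deg)
    m = sphere + cylinder / 2.0                    # spherical equivalent
    j0 = -(cylinder / 2.0) * np.cos(2.0 * alpha)   # cross-cylinder component at 0/90 deg
    j45 = -(cylinder / 2.0) * np.sin(2.0 * alpha)  # cross-cylinder component at 45/135 deg
    return m, j0, j45

# Example: -2.00 / -0.75 x 180 gives M = -2.375 D, J0 = +0.375 D, J45 = 0 D
print(to_power_vectors(-2.00, -0.75, 180))
```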

Machine learning methodology

Three machine learning models were separately trained to predict each vector component from the new series of polynomials. We used a gradient boosted trees algorithm (XGBoost)41. Parameter tuning was performed using 5-fold randomized-search cross-validation. Mean squared error regression loss was chosen as the evaluation metric. We used Python 3.6.8 with the following libraries: Jupyter 4.4.0, Pandas 0.23.4, Scikit-learn 0.20.2, Matplotlib 3.0.2, Seaborn 0.9.0 and XGBoost 0.81.
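A minimal sketch of this training setup follows; the hyperparameter search space, the number of search iterations and the variable names are illustrative assumptions, not the exact configuration used in the study:

```python
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBRegressor

# X_train: one row per eye, columns = LD/HD coefficients (piston, tilt, tip removed)
# y_train: dict of target power vectors, e.g. {"M": ..., "J0": ..., "J45": ...}

param_distributions = {                    # illustrative search space
    "n_estimators": [100, 300, 500, 1000],
    "max_depth": [2, 3, 4, 5, 6],
    "learning_rate": [0.01, 0.05, 0.1],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
}

def fit_vector_model(X, y, seed=0):
    """Tune one XGBoost regressor with 5-fold randomized-search cross-validation,
    scored on (negative) mean squared error, and return the best estimator."""
    search = RandomizedSearchCV(
        XGBRegressor(objective="reg:squarederror", random_state=seed),
        param_distributions,
        n_iter=50,
        cv=5,
        scoring="neg_mean_squared_error",
        random_state=seed,
    )
    search.fit(X, y)
    return search.best_estimator_

# models = {v: fit_vector_model(X_train, y_train[v]) for v in ("M", "J0", "J45")}
```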

Feature importance analysis

SHAP (SHapley Additive exPlanations) values were calculated for each model in order to determine the most influential polynomials (Fig. 3).

Figure 3.


SHAP feature importance for each of the XGBoost models trained with all aberrations. The top graph (a) displays the most important features for the M prediction: G₂⁰ (defocus) and G₄⁰ (primary spherical aberration) were the most influential. The bottom two graphs (b,c) demonstrate that G₂² (vertical astigmatism) and G₄⁻² (oblique secondary astigmatism) were the most important features for predicting the J0 vector, while G₂⁻² (oblique astigmatism) and G₄² (vertical secondary astigmatism) were the most important features for predicting the J45 vector.

SHAP values are a recently described tool that aims to increase the interpretability of machine learning models42. They allow us to understand how a specific feature contributes, positively or negatively, to the prediction of the target variable by computing the contribution of each feature to the prediction, giving a better estimate of the importance of each feature in the prediction process. A random variable was introduced as a predictive feature during training in order to help differentiate useful features from the others.
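As an illustration of how such an analysis can be produced with the shap library (the variable names, including the random noise column assumed to have been added before training, are assumptions):

```python
import numpy as np
import pandas as pd
import shap

# model_m: XGBoost regressor for the M vector, trained on X_train whose columns are
# the LD/HD coefficients plus one random column (added before training, as described
# above, to separate informative features from uninformative ones).
explainer = shap.TreeExplainer(model_m)       # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X_train)  # shape: (n_eyes, n_features)

# Global importance: mean absolute SHAP value per feature, largest first
importance = pd.Series(np.abs(shap_values).mean(axis=0),
                       index=X_train.columns).sort_values(ascending=False)
print(importance.head(10))

shap.summary_plot(shap_values, X_train)       # beeswarm plot, as in Fig. 3
```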

Model evaluation and statistical analysis

The performance of the machine learning models was evaluated on the test set, which was never seen by the models nor used for hyperparameter selection, to avoid overfitting. For each machine learning approach (using the low order polynomials only, and using every polynomial), the three vectors of the refraction were predicted one by one using the three machine learning models. Paraxial matching predictions were calculated using Eqs. (4)–(6)

$M = -\frac{4\sqrt{3}\,G_2^0}{r^2}$  (4)
$J_0 = -\frac{2\sqrt{6}\,G_2^2}{r^2}$  (5)
$J_{45} = -\frac{2\sqrt{6}\,G_2^{-2}}{r^2}$  (6)

where Gₙᵐ is the nth-order LD/HD coefficient of meridional frequency m, and r is the pupillary radius. It is important to note that, because the high order LD/HD coefficients are devoid of low-order aberrations, this calculation is equivalent to the paraxial curvature matching obtained by computing the curvature at the origin of the Zernike expansion, using the Seidel formulae for defocus and astigmatism expressed with Zernike polynomials, as described by Thibos et al.7.
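The sketch below implements Eqs. (4)–(6) as reconstructed above; the function name is an assumption, and the coefficients are assumed to be expressed in microns with the pupil radius in millimetres, which yields dioptres:

```python
import math

def paraxial_matching(g20, g22, g2m2, pupil_radius_mm):
    """Paraxial matching refraction (Eqs. 4-6) from the second-order LD/HD
    coefficients (microns) and the pupil radius (mm); returns (M, J0, J45) in D."""
    r2 = pupil_radius_mm ** 2
    m = -4.0 * math.sqrt(3.0) * g20 / r2     # Eq. (4)
    j0 = -2.0 * math.sqrt(6.0) * g22 / r2    # Eq. (5)
    j45 = -2.0 * math.sqrt(6.0) * g2m2 / r2  # Eq. (6)
    return m, j0, j45

# Example for a 4 mm pupil (radius 2 mm), with purely illustrative coefficients
print(paraxial_matching(g20=1.10, g22=-0.15, g2m2=0.05, pupil_radius_mm=2.0))
```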

Mean absolute errors were calculated for each prediction method. The accuracy of the predictions for each vector was defined as the mean value of the prediction error, and the precision as two times the standard deviation (SD) of the prediction error7. Each prediction method was compared against the others. Mean absolute prediction errors and mean prediction errors were compared using the Wilcoxon signed-rank test with Bonferroni correction, and differences in precision were evaluated using the Levene test with Bonferroni correction. We used confidence ellipses similar to those of Thibos et al. to report our results graphically7,43. Bland-Altman plots and paired t-tests were used to study the agreement between subjective refraction and the machine learning model predictions. A p-value of less than 0.05 was considered significant.
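A sketch of these evaluation metrics and tests using scipy is shown below; the function and variable names are assumptions, and the Bonferroni correction is applied here simply by multiplying each p-value by the number of comparisons:

```python
import numpy as np
from scipy import stats

def evaluate_predictions(y_true, y_pred):
    """Mean absolute error, accuracy (mean error) and precision (2 SD of the error)."""
    error = np.asarray(y_pred) - np.asarray(y_true)
    return {
        "mean_absolute_error": float(np.mean(np.abs(error))),
        "accuracy_mean_error": float(np.mean(error)),
        "precision_2sd": float(2.0 * np.std(error, ddof=1)),
    }

def compare_methods(y_true, pred_a, pred_b, n_comparisons=3):
    """Pairwise comparison of two prediction methods, Bonferroni-corrected p-values."""
    err_a = np.asarray(pred_a) - np.asarray(y_true)
    err_b = np.asarray(pred_b) - np.asarray(y_true)
    _, p_mae = stats.wilcoxon(np.abs(err_a), np.abs(err_b))  # mean absolute error
    _, p_me = stats.wilcoxon(err_a, err_b)                   # mean (signed) error
    _, p_prec = stats.levene(err_a, err_b)                   # equality of variances
    return {name: min(1.0, p * n_comparisons)
            for name, p in (("MAE", p_mae), ("ME", p_me), ("precision", p_prec))}
```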

Author contributions

R.R and G.D. are joint first authors. R.R. and G.D. completed data analysis, wrote the main manuscript text, prepared the tables and figures. D.G. and J.M. designed the study and developed the polynomial basis. All authors reviewed the manuscript.

Data availability

The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Competing interests

D.G. is a consultant for Nidek but received no funding for this study. G.D., R.R. and J.M. declare no potential conflict of interest.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

These authors contributed equally: Radhika Rampat and Guillaume Debellemanière.

References

1. Resnikoff S, Pascolini D, Mariotti SP, Pokharel GP. Global magnitude of visual impairment caused by uncorrected refractive errors in 2004. Bull. World Health Organ. 2008;86:63–70. doi: 10.2471/BLT.07.041210.
2. Bühren J, Martin T, Kühne A, Kohnen T. Correlation of aberrometry, contrast sensitivity, and subjective symptoms with quality of vision after LASIK. J. Refract. Surg. 2009;25:559–568. doi: 10.3928/1081597X-20090610-01.
3. Pesudovs K, Parker KE, Cheng H, Applegate RA. The precision of wavefront refraction compared to subjective refraction and autorefraction. Optom. Vis. Sci. 2007;84:387–392. doi: 10.1097/OPX.0b013e31804f81a9.
4. Bullimore MA, Fusaro RE, Adams CW. The repeatability of automated and clinician refraction. Optom. Vis. Sci. 1998;75:617–622. doi: 10.1097/00006324-199808000-00028.
5. Watson AB, Ahumada AJ Jr. Predicting visual acuity from wavefront aberrations. J. Vis. 2008;8(17):1–19. doi: 10.1167/8.4.17.
6. Cheng X, Bradley A, Thibos LN. Predicting subjective judgment of best focus with objective image quality metrics. J. Vis. 2004;4:310–321. doi: 10.1167/4.8.310.
7. Thibos LN, Hong X, Bradley A, Applegate RA. Accuracy and precision of objective refraction from wavefront aberrations. J. Vis. 2004;4:329–351. doi: 10.1167/4.4.9.
8. Maeda N. Clinical applications of wavefront aberrometry - a review. Clin. Experiment. Ophthalmol. 2009;37:118–129. doi: 10.1111/j.1442-9071.2009.02005.x.
9. Kilintari M, Pallikaris A, Tsiklis N, Ginis HS. Evaluation of image quality metrics for the prediction of subjective best focus. Optom. Vis. Sci. 2010;87:183–189. doi: 10.1097/OPX.0b013e3181cdde32.
10. Applegate RA, Marsack JD, Ramos R, Sarver EJ. Interaction between aberrations to improve or reduce visual performance. J. Cataract Refract. Surg. 2003;29:1487–1495. doi: 10.1016/S0886-3350(03)00334-1.
11. Jaskulski M, Martínez-Finkelshtein A, López-Gil N. New Objective Refraction Metric Based on Sphere Fitting to the Wavefront. J. Ophthalmol. 2017;2017:1909348. doi: 10.1155/2017/1909348.
12. Thibos LN. Unresolved issues in the prediction of subjective refraction from wavefront aberration maps. J. Refract. Surg. 2004;20:S533–6. doi: 10.3928/1081-597X-20040901-24.
13. Marsack JD, Thibos LN, Applegate RA. Metrics of optical quality derived from wave aberrations predict visual performance. J. Vis. 2004;4:8. doi: 10.1167/4.4.8.
14. Bennett JR, Stalboerger GM, Hodge DO, Schornack MM. Comparison of refractive assessment by wavefront aberrometry, autorefraction, and subjective refraction. J. Optom. 2015;8:109–115. doi: 10.1016/j.optom.2014.11.001.
15. Hastings GD, Marsack JD, Nguyen LC, Cheng H, Applegate RA. Is an objective refraction optimized using the visual Strehl ratio better than a subjective refraction? Ophthalmic Physiol. Opt. 2017;37:317–325. doi: 10.1111/opo.12363.
16. Lakshminarayanan V, Fleck A. Zernike polynomials: a guide. J. Mod. Opt. 2011;58:1678–1678. doi: 10.1080/09500340.2011.633763.
17. Klyce SD, Karon MD, Smolek MK. Advantages and disadvantages of the Zernike expansion for representing wave aberration of the normal and aberrated eye. J. Refract. Surg. 2004;20:S537–41. doi: 10.3928/1081-597X-20040901-25.
18. Guirao A, Williams DR. A method to predict refractive errors from wave aberration data. Optom. Vis. Sci. 2003;80:36–42. doi: 10.1097/00006324-200301000-00006.
19. Applegate RA, Ballentine C, Gross H, Sarver EJ, Sarver CA. Visual acuity as a function of Zernike mode and level of root mean square error. Optom. Vis. Sci. 2003;80:97–105. doi: 10.1097/00006324-200302000-00005.
20. Gatinel D, Malet J, Dumas L. Polynomial decomposition method for ocular wavefront analysis. J. Opt. Soc. Am. 2018;35:2035. doi: 10.1364/JOSAA.35.002035.
21. Choi JY, et al. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database. PLoS One. 2017;12:e0187336. doi: 10.1371/journal.pone.0187336.
22. Tufail A, et al. An observational study to assess if automated diabetic retinopathy image assessment software can replace one or more steps of manual imaging grading and to determine their cost-effectiveness. Health Technol. Assess. 2016;20:1–72. doi: 10.3310/hta20920.
23. Ting DSW, et al. Artificial intelligence and deep learning in ophthalmology. Br. J. Ophthalmol. 2019;103:167–175. doi: 10.1136/bjophthalmol-2018-313173.
24. Asaoka R, Murata H, Iwase A, Araie M. Detecting Preperimetric Glaucoma with Standard Automated Perimetry Using a Deep Learning Classifier. Ophthalmology. 2016;123:1974–1980. doi: 10.1016/j.ophtha.2016.05.029.
25. Zhu H, Poostchi A, Vernon SA, Crabb DP. Detecting abnormality in optic nerve head images using a feature extraction analysis. Biomed. Opt. Express. 2014;5:2215. doi: 10.1364/BOE.5.002215.
26. Gupta K, et al. A Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning to Monitor Disease Regression After Treatment. JAMA Ophthalmol. 2019;137:1029. doi: 10.1001/jamaophthalmol.2019.2442.
27. Lee A, Taylor P, Kalpathy-Cramer J, Tufail A. Machine Learning Has Arrived! Ophthalmology. 2017;124:1726–1728. doi: 10.1016/j.ophtha.2017.08.046.
28. Varadarajan AV, et al. Deep Learning for Predicting Refractive Error From Retinal Fundus Images. Invest. Ophthalmol. Vis. Sci. 2018;59:2861. doi: 10.1167/iovs.18-23887.
29. Libralao G, Almeida O, Carvalho A. Classification of ophthalmologic images using an ensemble of classifiers. Innov. Appl. Artif. Intell. 2005:6–13.
30. Ohlendorf A, Leube A, Leibig C, Wahl S. A machine learning approach to determine refractive errors of the eye. Invest. Ophthalmol. Vis. Sci. 2017;58:1136.
31. Reinstein I. XGBoost, a Top Machine Learning Method on Kaggle, Explained. https://www.kdnuggets.com/2017/10/xgboost-top-machine-learning-method-kaggle-explained.html. Last accessed 2/1/2020 (2017).
32. Asgari S, et al. OPD-Scan III: a repeatability and inter-device agreement study of a multifunctional device in emmetropia, ametropia, and keratoconus. Int. Ophthalmol. 2016;36:697–705. doi: 10.1007/s10792-016-0189-4.
33. Hamer CA, et al. Comparison of reliability and repeatability of corneal curvature assessment with six keratometers. Clin. Exp. Optom. 2016;99:583–589. doi: 10.1111/cxo.12329.
34. Guilbert E, et al. Repeatability of Keratometry Measurements Obtained With Three Topographers in Keratoconic and Normal Corneas. J. Refract. Surg. 2016;32:187–192. doi: 10.3928/1081597X-20160113-01.
35. McGinnigle S, Naroo SA, Eperjesi F. Evaluation of the auto-refraction function of the Nidek OPD-Scan III. Clin. Exp. Optom. 2014;97:160–163. doi: 10.1111/cxo.12109.
36. Thibos LN, Applegate RA, Schwiegerling JT, Webb R; VSIA Standards Taskforce Members. Standards for Reporting the Optical Aberrations of Eyes. Vision Science and its Applications (2000).
37. Sanchis-Gimeno JA, Sanchez-Zuriaga D, Martinez-Soriano F. White-to-white corneal diameter, pupil diameter, central corneal thickness and thinnest corneal thickness values of emmetropic subjects. Surg. Radiol. Anat. 2012;34:167–170. doi: 10.1007/s00276-011-0889-4.
38. Hashemi H, et al. Distribution of Photopic Pupil Diameter in the Tehran Eye Study. Curr. Eye Res. 2009;34:378–385. doi: 10.1080/02713680902853327.
39. Oshika T, et al. Influence of Pupil Diameter on the Relation between Ocular Higher-Order Aberration and Contrast Sensitivity after Laser In Situ Keratomileusis. Invest. Ophthalmol. Vis. Sci. 2006;47:1334. doi: 10.1167/iovs.05-1154.
40. Thibos LN, Wheeler W, Horner D. Power vectors: an application of Fourier analysis to the description and statistical analysis of refractive error. Optom. Vis. Sci. 1997;74:367–375. doi: 10.1097/00006324-199706000-00019.
41. Chen T, Guestrin C. XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, pp. 785–794 (2016).
42. Lundberg SM, Lee S-I. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30 (eds. Guyon I, et al.), pp. 4765–4774 (Curran Associates, Inc., 2017).
43. Bland JM, Altman DG. Regression Analysis. Lancet. 1986;327:908–909. doi: 10.1016/S0140-6736(86)91008-1.
