Author manuscript; available in PMC: 2025 Feb 1.
Published in final edited form as: Optom Vis Sci. 2024 Jan 16;101(2):124–128. doi: 10.1097/OPX.0000000000002107

Clinical report: Virtual reality enables comparable contrast sensitivity measurements to in-office testing (pilot study)

Christopher P Cheng 1, Randal A Serafini 2, Margarita Labkovich 3, Andrew J Warburton 4, Vicente Navarro 5, Neha Shaik 6, Harsha Reddy 7, James G Chelnis 8
PMCID: PMC10901448  NIHMSID: NIHMS1958045  PMID: 38408310

Abstract

Significance:

Vision health disparities largely stem from inaccessibility to vision specialists. To improve patient access to vision tests and to expedite clinical workflows, it is important to assess the viability of virtual reality (VR) as a modality for evaluating contrast sensitivity.

Purpose:

This study aimed to assess the validity of a VR version of the Pelli-Robson contrast sensitivity test by comparing its results with those of the corresponding in-office test.

Methods:

Twenty-eight participants (mean ± SD age, 37.3 ± 20.5 years) with corrected vision were recruited on a voluntary basis, with the in-office test and the VR analogue administered in randomized order. Nineteen participants took each test twice to assess test-retest consistency in each modality. VR tests were conducted on a commercial Pico Neo Eye 2 VR headset, which has a 4K screen resolution. For each participant, both tests were conducted in the same location under the same lighting conditions.

Results:

Similar sensitivity scores were obtained between testing modalities in both the right (n = 28 participants, Wilcoxon matched-pairs signed rank p = 0.7) and left eyes (n = 28 participants, Wilcoxon matched-pairs signed rank p = 0.7). Additionally, similar test-retest scores were found for the virtual reality (n = 19 participants, Wilcoxon matched-pairs signed rank p = 1.0) and in-office (n = 19 participants, Wilcoxon matched-pairs signed rank p = 1.0) tests. VR Pelli-Robson results correlated well with in-office test results in variably diseased participants (n = 14 eyes from 7 participants, R² = 0.93, p < 0.0001).

Conclusions:

In this pilot trial, we demonstrated that VR Pelli-Robson measurements of corrected vision align with those of in-office testing, suggesting that VR may be a reliable, more interactive, and more accessible way to administer this test.


Vision health disparities largely stem from inaccessibility to vision specialists, including optometrists and ophthalmologists.1 While generalists, such as primary care physicians, have tools to assess vision, these tools are rudimentary and often miss signs of early-stage disease.2 The introduction of portable, digital technologies, such as virtual reality headsets, creates the opportunity to build accelerated analogues of in-office vision tests.3 Furthermore, these technologies can drastically reduce inter-operator variability, such as inconsistencies in lighting or in the distance between a contrast sensitivity chart and the subject.4 Results are therefore more standardized across participants, increasing the likelihood of reliable abnormality detection. Virtual reality can serve as an important new tool in both generalist and vision specialist workflows to improve accessibility and patient experience.

An essential test in the fields of optometry, ophthalmology, and neurology is the Pelli-Robson contrast sensitivity test; this is regularly considered a gold standard in clinical studies that assess contrast detection deficits.5,6 Poor contrast sensitivity is a responsive indicator of a variety of eye pathologies including but not limited to cataracts,7,8 keratoconus,9,10 central epithelial corneal defects,11 cystoid macular edema12 and even central serous chorioretinopathy.13,14 Despite growing virtual reality adoption in optometry, no head-to-head clinical trials have been performed to date comparing the direct outputs of in-office Pelli-Robson testing to a virtual reality analogue, although efforts to make custom contrast tests for virtual reality have recently started.15 Clinical validations in the virtual reality environment are essential because digital factors such as scaling, illumination, and pixel density can drastically affect the accuracy of results.

In the present pilot study, our group designed a virtual reality analogue of a Pelli-Robson test and administered it to healthy and diseased participants alongside its corresponding in-office test in a clinical study at the Mount Sinai Hospital in New York City. Results from both testing modalities were compared against each other to determine agreement. This study’s preliminary validation of virtual reality contrast sensitivity testing demonstrates promise for its integration into optometry, ophthalmology, neurology, and emergency medicine workflows, yet a larger study is necessary to validate these findings, particularly in patient populations with a variety of ocular pathologies.

METHODS

This study took place at the 102nd Street ophthalmology clinic of Mount Sinai Hospital. All procedures were approved by the Icahn School of Medicine at Mount Sinai Institutional Review Board (IRB #19–00679), and the study is registered on clinicaltrials.gov (NCT04714424). Because no preliminary data existed to support a Bland-Altman sample size calculation, we used a Student's t-test power analysis with a significance level of 5%, an effect size (Cohen's d) of 0.8, and a statistical power of 80%. The estimated minimum sample needed to power this experiment was 25 participants.15–18 Accordingly, 28 participants were recruited for testing on a voluntary basis with informed consent, with the order of testing modality randomized. The inclusion criteria consisted of adults aged 18 and older who were able to provide informed consent. Exclusion criteria were any individuals unable to undergo testing for reasons such as prior adverse reactions to a virtual reality headset or claustrophobia. Table 1 details this study's participant demographics and visual acuity; refractive error was not collected given the logistical constraints of recruiting within an active ophthalmology clinic. Figure 1 shows the virtual reality and in-office Pelli-Robson charts presented to study participants. Of this study's participants, 19 were male and 9 were female; 19 were younger (mean ± SD age, 24.7 ± 2.5 years) and 9 were older (mean ± SD, 63.7 ± 15.7 years). Eleven (39%) participants were in a healthy cohort with no outstanding ocular pathology. Ten (36%) participants were in a cohort with ocular pathology not shown in the literature to affect contrast sensitivity, of whom 8 had corrected refractive error, 1 was recovering from an acute orbital fracture, and 1 had dermatochalasis.
The remaining 7 (25%) participants were in a diseased cohort consisting of 3 with cataracts, 1 with keratoconus, 1 with a central epithelial corneal defect, 1 with central serous chorioretinopathy, and 1 with cystoid macular edema. Among 19 of the 28 participants, the Pelli-Robson test was conducted twice in each modality to evaluate test-retest consistency.
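The sample-size estimate above can be reproduced with the standard normal-approximation formula for a two-sample t-test, n = 2(z₁₋α/₂ + z₁₋β)²/d² per group. The study does not state which software performed the calculation, so this is a sketch of the arithmetic only:

```python
import math
from statistics import NormalDist

def two_sample_t_n(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sample t-test via the
    normal approximation: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(two_sample_t_n(0.8))  # Cohen's d = 0.8 -> 25 participants
```

With d = 0.8, α = 0.05, and 80% power, the formula yields 25, matching the minimum sample the authors report.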

Table 1.

Clinical trial participant demographics.

|                                           | Healthy       | Conditions not shown to affect CS | Cataracts    | Corneal disease | Retinal disease |
|-------------------------------------------|---------------|-----------------------------------|--------------|-----------------|-----------------|
| Age: 20–39                                | 10            | 8                                 | 0            | 1               | 0               |
| Age: 40–59                                | 1             | 1                                 | 0            | 1               | 1               |
| Age: 60+                                  | 0             | 1                                 | 3            | 0               | 1               |
| Total                                     | 11            | 10                                | 3            | 2               | 2               |
| Gender: Male                              | 8             | 7                                 | 1            | 1               | 2               |
| Gender: Female                            | 3             | 3                                 | 2            | 1               | 0               |
| Ethnicity: Caucasian                      | 6             | 4                                 | 2            | 1               | 0               |
| Ethnicity: African-American               | 1             | 1                                 | 0            | 0               | 0               |
| Ethnicity: Asian                          | 3             | 5                                 | 0            | 0               | 0               |
| Ethnicity: Hispanic                       | 1             | 0                                 | 0            | 0               | 0               |
| Ethnicity: Other                          | 0             | 0                                 | 1            | 1               | 2               |
| Pelli-Robson, right eye (mean ± SD)       | 1.67 ± 0.121  | 1.703 ± 0.112                     | 1.25 ± 0.433 | 1.125 ± 0.955   | 1.575 ± 0.318   |
| Pelli-Robson, left eye (mean ± SD)        | 1.691 ± 0.091 | 1.74 ± 0.131                      | 0.85 ± 0.770 | 0.825 ± 1.17    | 1.275 ± 0.530   |
| ETDRS visual acuity, right eye (mean ± SD)| 0.023 ± 0.047 | 0.02 ± 0.048                      | 0.1 ± 0.1    | 0 ± 0           | 0.05 ± 0.071    |
| ETDRS visual acuity, left eye (mean ± SD) | 0.041 ± 0.054 | 0.04 ± 0.057                      | 0.417 ± 0.506| 0.7 ± 0.424     | 0.3 ± 0.424     |

Figure 1.

Pelli-Robson virtual reality and in-office configurations.

All tests evaluated participants’ habitual corrected vision in the right eye, followed by the left eye. For each participant, each test was conducted in the same room under identical lighting conditions. The virtual reality Pelli-Robson test was conducted on a Pico Neo Eye 2 commercial virtual reality headset, which has a 4K display (3840 × 2160 pixels at 75 Hz refresh rate; 818 pixels per inch on a 5.5-inch screen). Test software was written in Unity version 2020.3.2f1 using publicly available software development kits (SDKs). Participants could terminate any exam at any time for any reason.

All statistics were performed in GraphPad Prism version 7. A Wilcoxon matched-pairs signed rank test was used to compare test output values for each participant, both between the virtual reality and in-office modalities and between Trials 1 and 2 within each modality. When two tests per modality were administered, the most inferior result from each of the VR and in-office Pelli-Robson tests (i.e., the lowest Pelli-Robson log10 unit score) was chosen as the comparison point to conservatively account for factors such as re-test fatigue. Both eyes were analyzed separately in the inter-test comparison because this study contained a mix of healthy and diseased participants with differing levels of contrast sensitivity in each eye; this reduced any bias that could be introduced by pre-selecting an eye while maximizing usable data for this smaller pilot study. For the test-retest comparison of the virtual reality versus in-office environments, one eye (left or right) of each participant was randomly selected to avoid combining eyes within the same pairwise statistical comparison.

VIRTUAL REALITY TEST DESIGN AND RESULTS

The virtual reality analogue of the Pelli-Robson test was designed with an unchanging stimulus intensity of 25 apostilbs (8 cd/m²) and two letter triplicates shown per line; triplicates were ordered in decreasing luminance. The eye not being examined was occluded digitally: the headset transmitted no testing information to that eye while maintaining the same background color.

The in-office Pelli-Robson test was conducted at a controlled distance of three meters, as originally recommended by Pelli, Robson, and Wilkins.19 For participants in the diseased cohort, the test was conducted at one meter for the participants’ convenience, given the interchangeability of Pelli-Robson results at either distance20 and to avoid confounds from the poor acuity associated with certain diseases. Participants were asked to occlude the eye not being examined with the palm of their hand. The Pelli-Robson chart measured 59.4 × 84.0 cm and each letter 4.9 × 4.9 cm (Precision Vision #5014–3).

Participants were asked to read charts from left to right and top to bottom, triplicate by triplicate, until reaching a triplicate in which more than one letter was read incorrectly. After both eyes had been tested in a given modality, participants were given one minute to rest their eyes. Both tests scored contrast sensitivity as the greatest log10 unit at which two or more of the three letters were read correctly. Participants with diseased eyes were tested only once because they were tested after their visit with an ophthalmologist and had existing test fatigue. Healthy participants received repeat testing to assess test-retest reliability.
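The scoring rule above can be expressed compactly. This sketch assumes the conventional chart layout in which successive triplicates step down by 0.15 log units; the starting value and step are parameters, since the paper does not enumerate the chart's per-triplicate values:

```python
def pelli_robson_score(correct_per_triplet, start=0.00, step=0.15):
    """Return the log contrast sensitivity of the last triplicate in which
    at least two of three letters were read correctly. Reading stops at the
    first triplicate with more than one error, per the protocol described
    in the text. Returns None if no triplicate was passed."""
    score = None
    for i, n_correct in enumerate(correct_per_triplet):
        if n_correct >= 2:
            score = round(start + i * step, 2)
        else:
            break  # more than one letter missed: stop reading
    return score

# e.g. ten clean triplicates, then a triplicate with two letters missed:
print(pelli_robson_score([3] * 10 + [1]))  # -> 1.35
```

Each list element is the number of letters (0–3) read correctly in one triplicate, in reading order.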

For the inter-test comparison, right eye values were as follows: virtual reality mean=1.56 (95% C.I. 1.44–1.67) and in-office mean=1.58 (95% C.I. 1.45–1.70), as shown in Figure 2A. Bland-Altman 95% limits of agreement (LoA) for the comparison of score differences were −0.36 to 0.33 with a bias of −0.016, as shown in Figure 2D. Left eye values were as follows: virtual reality mean=1.50 (95% C.I. 1.32–1.68) and in-office mean=1.51 (95% C.I. 1.32–1.69), as shown in Figure 2B. Bland-Altman 95% LoA for the comparison of score differences were −0.26 to 0.25 with a bias of −0.0054, as shown in Figure 2E. No statistically significant difference between testing modalities was found for either the right (p=0.68) or left eyes (p=0.75) across modalities (Wilcoxon matched-pairs signed rank test). No statistically significant intra-test difference was found for either virtual reality (p=1.0; Bland-Altman 95% LoA −0.24 to 0.22, bias of −0.008, Figure 2G,H) or in-office (p=1.0; Bland-Altman 95% LoA −0.26 to 0.26, bias of 0.00, Figure 2I,J) tests with both eye values combined for this comparison.
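The Bland-Altman quantities reported above (bias and 95% limits of agreement) follow directly from the paired score differences. A minimal sketch with illustrative, made-up paired scores (not the study's data):

```python
from statistics import mean, stdev

def bland_altman(scores_a, scores_b):
    """Bias (mean of paired differences) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences) for two paired measurement sets."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative paired VR vs. in-office log CS scores (hypothetical values):
vr     = [1.65, 1.50, 1.35, 1.80, 1.50]
office = [1.65, 1.65, 1.35, 1.65, 1.50]
bias, (lo, hi) = bland_altman(vr, office)
```

A difference outside (lo, hi) would flag a participant whose two modalities disagree more than expected from measurement noise.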

Figure 2.

Figure 2.

Virtual reality-delivered Pelli-Robson results do not differ statistically significantly from those of in-office testing and are highly correlated; test-retest differences are not statistically significant for either modality. Pelli-Robson results were compared by Wilcoxon matched-pairs signed rank test between testing modalities for the right (A, n = 28 test pairs, p = 0.7) and left (B, n = 28 test pairs, p = 0.7) eyes. Correlation between virtual reality and in-office Pelli-Robson testing results (C, n = 7 test pairs, R² = 0.93, p < 0.0001). Pelli-Robson results assessing test-retest reliability for both eyes via Wilcoxon matched-pairs signed rank test for the virtual reality (G, H; n = 19 test pairs, p = 1.0) and in-office (I, J; n = 19 test pairs, p = 1.0) tests. Dots represent individual eyes, accompanied by the mean with SEM. Bland-Altman plots (D–F) show agreement between testing modalities for the right (D) and left (E) eyes and for the diseased cohort (F), with the large majority of differences falling within the 95% limits of agreement.

To assess the clinical validity of this virtual reality contrast sensitivity analogue against the in-office chart, we performed a correlation analysis between the two modalities’ outputs in the diseased participant population. As shown in Figure 2C, there was a high degree of correlation (R² = 0.93, p < 0.0001; slope 95% C.I. 0.82–1.16, y-intercept 95% C.I. −0.18 to 0.25) between the two testing strategies across the limited assortment of diseases described above. Bland-Altman 95% LoA for the score differences were −0.34 to 0.30 with a bias of −0.021, as shown in Figure 2F.

DISCUSSION

The COVID-19 pandemic has resulted in a drastic reduction in patient traffic through optometry clinics,21 further worsening the issue of inconsistent patient adherence to annual visits.22 While remote visual function measuring strategies exist, such as laptop-delivered tests, their accuracy is unclear as few head-to-head clinical trials have been performed.

By providing a full range of vision tests, including perimetry, acuity, contrast sensitivity, and color vision, virtual reality systems introduce the opportunity to accelerate visit times in optometry workflows,23,24 while also making the checkup experience more engaging. Here, we preliminarily show that virtual reality contrast sensitivity test results align with those of in-office testing for both healthy and diseased eyes. Given the utility of contrast sensitivity in diagnosing ocular pathologies such as cataracts, keratoconus, and even central epithelial corneal defects, the ability to test contrast sensitivity at wider scale with virtual reality has great potential to facilitate earlier diagnosis and treatment of visual morbidities that undermine quality of life.

This study’s sample skews toward younger, male participants without glaucoma or retinal disease. Future work will be directed toward detecting various diseases at pre-defined stages in a demographically balanced cohort, in order to assess the ability of virtual reality to detect a range of abnormalities and enable timely intervention. Another direction for further investigation is comparison of virtual reality vision testing across different headsets and devices, as well as incorporation of objective contrast sensitivity measures such as optokinetic nystagmus. Furthermore, although this study’s participants relied on habitual corrected vision, to reflect the community settings in which they would take a contrast sensitivity test, further investigation into the use of best-corrected vision during virtual reality testing would also be of interest.

ACKNOWLEDGMENTS

This study has been supported by the Icahn School of Medicine CTSA grant, number UL1TR001433. The grant was provided by the National Center for Advancing Translational Sciences, National Institutes of Health. This study was also supported in part by an Alpha Omega Alpha Carolyn L. Kuckein Student Research Fellowship.

Contributor Information

Christopher P. Cheng, Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, New York.

Randal A. Serafini, Nash Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York.

Margarita Labkovich, Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, New York.

Andrew J. Warburton, Department of Anesthesiology, Perioperative, and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, New York.

Vicente Navarro, Department of Bioinformatics and Computational Biology, Weill Cornell Medical College, New York, New York.

Neha Shaik, Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, New York.

Harsha Reddy, Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, New York.

James G. Chelnis, Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, New York.

REFERENCES

1. Gibson DM. The geographic distribution of eye care providers in the United States: implications for a national strategy to improve vision health. Prev Med 2015;73:30–6.
2. Goldzweig CL, Rowe S, Wenger NS, et al. Preventing and managing visual disability in primary care: clinical applications. JAMA 2004;291:1497–502.
3. Labkovich M, Paul M, Kim E, et al. Portable hardware & software technologies for addressing ophthalmic health disparities: a systematic review. Digit Health 2022;8:20552076221090042.
4. Cooke MD, Winter PA, McKenney KC, et al. An innovative visual acuity chart for urgent and primary care settings: validation of the Runge near vision card. Eye (Lond) 2019;33:1104–10.
5. Tiraset N, Poonyathalang A, Padungkiatsagul T, et al. Comparison of visual acuity measurement using three methods: standard ETDRS chart, near chart and a smartphone-based eye chart application. Clin Ophthalmol 2021;15:859–69.
6. Gupta L, Cvintal V, Delvadia R, et al. SPARCS and Pelli-Robson contrast sensitivity testing in normal controls and patients with cataract. Eye (Lond) 2017;31:753–61.
7. Shandiz JH, Derakhshan A, Daneshyar A, et al. Effect of cataract type and severity on visual acuity and contrast sensitivity. J Ophthalmic Vis Res 2011;6:26–31.
8. Adamsons I, Rubin GS, Vitale S, et al. The effect of early cataracts on glare and contrast sensitivity. A pilot study. Arch Ophthalmol 1992;110:1081–6.
9. Shneor E, Pinero DP, Doron R. Contrast sensitivity and higher-order aberrations in keratoconus subjects. Sci Rep 2021;11:12971.
10. Lidum S, Luguzis A, Krumina G. Keratoconus stage impact on visual acuity and contrast sensitivity. Pro Biomed Opt Imag 2020;11312.
11. Pflugfelder SC. Tear dysfunction and the cornea: LXVIII Edward Jackson Memorial Lecture. Am J Ophthalmol 2011;152:900–9.
12. Ginsburg AP, Cheetham JK, Degryse RE, Abelson M. Effects of flurbiprofen and indomethacin on acute cystoid macular edema after cataract surgery: functional vision and contrast sensitivity. J Cataract Refract Surg 1995;21:82–92.
13. Maaranen T, Mantyjarvi M. Contrast sensitivity in patients recovered from central serous chorioretinopathy. Int Ophthalmol 1999;23:31–5.
14. Vingopoulos F, Garg I, Kim EL, et al. Quantitative contrast sensitivity test to assess visual function in central serous chorioretinopathy. Br J Ophthalmol 2023;107:1139–43.
15. Labkovich M, Warburton AJ, Ying S, et al. Virtual reality hemifield measurements for corrective surgery eligibility in ptosis patients: a pilot clinical trial. Transl Vis Sci Technol 2022;11:35.
16. McLaughlin DE, Savatovsky EJ, O’Brien RC, et al. Reliability of visual field testing in a telehealth setting using a head-mounted device: a pilot study. J Glaucoma 2023;33:15–23.
17. Sproule D, Jacinto RF, Rundell S, et al. Characterization of visual acuity and contrast sensitivity using head-mounted displays in a virtual environment: a pilot study. Proc Hum Factors Ergon Soc Annu Meet 2019;63:547–51.
18. Wong KA, Ang BCH, Gunasekeran DV, et al. Remote perimetry in a virtual reality metaverse environment for out-of-hospital functional eye screening compared against the gold standard Humphrey visual fields perimeter: proof-of-concept pilot study. J Med Internet Res 2023;25:e45044.
19. Pelli DG, Robson JG, Wilkins AJ. The design of a new letter chart for measuring contrast sensitivity. Clin Vision Sci 1988;2:187–99.
20. Vivas-Mateos G, Boswell S, Livingstone IA, et al. Screen and virtual reality-based testing of contrast sensitivity. IEEE Eng Med Biol Soc 2020;2020:6054–7.
21. Njeru SM, Osman M, Brown AM. The effect of test distance on visual contrast sensitivity measured using the Pelli-Robson chart. Transl Vis Sci Technol 2021;10(2):32.
22. Nagra M, Allen PM, Norgett Y, et al. The effect of the COVID-19 pandemic on working practices of UK primary care optometrists. Ophthalmic Physiol Opt 2021;41:378–92.
23. Irving EL, Harris JD, Machan CM, et al. Value of routine eye examinations in asymptomatic patients. Optom Vis Sci 2016;93:660–6.
24. Labkovich M, Warburton AJ, Okome O, et al. Virtual reality enables rapid, multi-faceted retinal function screenings. Invest Ophthalmol Vis Sci 2022;63(7):713 – F0441.
