JAMA Ophthalmol. 2019 Apr 11;137(6):690–692. doi: 10.1001/jamaophthalmol.2019.0571

Accuracy of a Popular Online Symptom Checker for Ophthalmic Diagnoses

Carl Shen 1, Michael Nguyen 1, Alexander Gregor 2, Gloria Isaza 1, Anne Beattie 1
PMCID: PMC6567837  PMID: 30973602

Key Points

Question

What is the accuracy of a popular online symptom checker for ophthalmic diagnoses?

Findings

In this vignette-based, cross-sectional study, the top 3 diagnoses generated by the online symptom checker included the correct diagnosis in 16 of 42 (38%) cases. A substantial proportion of diagnoses may not be captured.

Meaning

This finding suggests that patients should exercise caution if depending on online symptom checkers to self-diagnose ophthalmic conditions.

Abstract

Importance

Because more patients are presenting after self-guided research of their symptoms, it is important to assess the capabilities and limitations of the available health information tools.

Objective

To determine the accuracy of the most popular online symptom checker for ophthalmic diagnoses.

Design, Setting, and Participants

In a cross-sectional study, 42 validated clinical vignettes of ophthalmic symptoms were generated and distilled to their core presenting symptoms. Cases were entered into the WebMD symptom checker by both medically trained and nonmedically trained personnel blinded to the diagnosis. Output from the symptom checker, including the number of symptoms, the ranked list of diagnoses, and the triage urgency, was recorded. The study was conducted on October 13, 2017. Analysis was performed between October 15, 2017, and April 30, 2018.

Main Outcomes and Measures

Accuracy of the top 3 diagnoses generated by the online symptom checker.

Results

The mean (SD) number of symptoms entered was 3.6 (1.6) (range, 1-8). The mean (SD) number of diagnoses generated by the symptom checker was 26.8 (21.8) (range, 1-99). The primary diagnosis by the symptom checker was correct in 11 of 42 (26%; 95% CI, 12%-40%) cases. The correct diagnosis was included in the online symptom checker's top 3 diagnoses in 16 of 42 (38%; 95% CI, 25%-56%) cases. The correct diagnosis was not included in the symptom checker's list in 18 of 42 (43%; 95% CI, 32%-63%) cases. Triage urgency based on the top diagnosis was appropriate in 7 of 18 (39%; 95% CI, 14%-64%) emergent cases and 21 of 24 (88%; 95% CI, 73%-100%) nonemergent cases. Interuser agreement for the correct diagnosis being in the top 3 listed was at least moderate (Cohen κ = 0.74; 95% CI, 0.54-0.95).

Conclusions and Relevance

The most popular online symptom checker may arrive at the correct clinical diagnosis for ophthalmic conditions, but a substantial proportion of diagnoses may not be captured. These findings suggest that further research to reflect the real-life application of internet diagnostic resources is required.


This cross-sectional study examines the use of internet-based tools in the identification and classification of ophthalmologic symptoms entered by patients.

Introduction

The accessibility of internet-based resources is increasing globally, and searching for health information represents one of the most common uses of the internet.1 Online symptom checkers (OSCs) are widely used tools that apply computer algorithms to generate differential diagnoses from symptoms entered by the patient. The OSCs represent an amalgamation of modern accessibility and technology with the most fundamental aspect of clinical medicine: the patient history. As more patients are presenting after self-guided research of their symptoms, it is important for ophthalmologists to be aware of the capabilities and limitations of available health information tools. To our knowledge, no study has investigated the utility of OSCs in ophthalmology; these tools represent an unvalidated means of health information delivery.

Methods

In a cross-sectional descriptive study, we generated 42 clinical vignettes from peer-reviewed sources representing the most common ophthalmic diagnoses encountered in clinical practice2,3,4,5 and validated them with practicing ophthalmologists (eTable in the Supplement). Each clinical vignette was categorized as emergent or nonemergent based on diagnosis per Channa et al.5 This study was exempted from research ethics board approval by the Hamilton Integrated Research Ethics Board.

The key symptoms and basic demographic data from these vignettes were distilled and entered into WebMD6 by 2 individuals, 1 medically trained and 1 nonmedically trained, on October 13, 2017. Analysis was performed between October 15, 2017, and April 30, 2018. Participants were masked to the correct diagnosis and provided only with the clinical vignette and core symptoms. Output consisting of the entered symptoms, the generated list of diagnoses, and the triage urgency of the top diagnosis was recorded.
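As a rough illustration, each vignette and the recorded OSC output can be thought of as a single structured record. The sketch below (in Python) uses hypothetical field names that are not taken from the authors' actual data-collection instrument; it is offered only to make the recorded outputs concrete.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class VignetteResult:
        # Hypothetical record structure; field names are illustrative assumptions.
        vignette_id: int               # 1-42
        emergent: bool                 # emergent vs nonemergent, per Channa et al
        symptoms_entered: List[str]    # distilled core symptoms entered into the OSC
        osc_diagnoses: List[str]       # ranked differential returned by the OSC
        osc_top_triage_urgent: bool    # triage urgency attached to the OSC's top diagnosis
        correct_diagnosis: str         # known diagnosis from the validated vignette

        def rank_of_correct(self) -> Optional[int]:
            """1-based position of the correct diagnosis in the OSC list, or None if absent."""
            try:
                return self.osc_diagnoses.index(self.correct_diagnosis) + 1
            except ValueError:
                return None

        def correct_in_top3(self) -> bool:
            rank = self.rank_of_correct()
            return rank is not None and rank <= 3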

Results

Of the 42 vignettes, 18 cases were emergent conditions and 24 were nonemergent. The mean (SD) number of symptoms entered per vignette was 3.6 (1.6) (range, 1-8), of which a mean (SD) of 0.5 (0.8) (range, 0-3) were extraocular. The mean (SD) number of diagnoses generated by the OSC was 26.8 (21.8) (range, 1-99). The primary diagnosis by the OSC was correct in 11 of 42 (26%; 95% CI, 12%-40%) cases. The correct diagnosis was included in the OSC's top 3 diagnoses in 16 of 42 (38%; 95% CI, 25%-56%) cases. The correct diagnosis was not included in the OSC's list in 18 of 42 (43%; 95% CI, 32%-63%) cases. When the correct diagnosis was listed, its mean (SD) position on the differential list generated by the OSC was 4.7 (8.2) (range, 1-39). The most common primary diagnosis made by the symptom checker was nearsightedness, which was made in 9 cases.
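For context, the reported 95% CIs are consistent with a standard binomial (Wald) approximation; the exact method used by the authors is not stated, so the short Python sketch below is only an assumed reconstruction of the arithmetic, shown here for the primary-diagnosis accuracy of 11 of 42.

    import math

    def wald_ci(successes: int, n: int, z: float = 1.96):
        """Normal-approximation (Wald) 95% CI for a binomial proportion (assumed method)."""
        p = successes / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    # Primary diagnosis correct in 11 of 42 cases
    p, lo, hi = wald_ci(11, 42)
    print(f"{p:.0%} (95% CI, {lo:.0%}-{hi:.0%})")  # prints 26% (95% CI, 13%-39%), close to the reported 12%-40%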

Triage urgency based on the top diagnosis was appropriate in 7 of 18 (39%; 95% CI, 14%-64%) emergent cases and 21 of 24 (88%; 95% CI, 73%-100%) nonemergent cases. In 11 of the 14 cases in which the triage urgency of the primary diagnosis was incorrect, an urgent case would have been triaged as nonurgent. Interuser agreement between the medical and nonmedical personnel for the correct diagnosis being in the top 3 listed was at least moderate based on the lower bound of the 95% CI (Cohen κ = 0.74; 95% CI, 0.54-0.95).
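Interuser agreement was summarized with Cohen κ. The per-case agreement data are not reported, so the sketch below only illustrates the standard κ calculation for two raters' binary judgments (correct diagnosis in the top 3: yes or no) on made-up values, not the study's actual entries.

    def cohen_kappa(rater_a, rater_b):
        """Cohen kappa for two raters' binary (True/False) judgments on the same cases."""
        n = len(rater_a)
        assert n == len(rater_b) and n > 0
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
        p_a = sum(rater_a) / n                                          # proportion "yes" by rater A
        p_b = sum(rater_b) / n                                          # proportion "yes" by rater B
        p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)                    # agreement expected by chance
        return (p_observed - p_chance) / (1 - p_chance)

    # Illustrative (fabricated) judgments for 10 cases, not the study data
    medical    = [True, True, False, False, True, False, True, False, False, True]
    nonmedical = [True, True, False, True,  True, False, True, False, False, False]
    print(round(cohen_kappa(medical, nonmedical), 2))  # 0.6 for this toy example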

Discussion

Previous studies have highlighted the challenges in delivering and interpreting appropriate health information online.7,8 An additional layer of complexity is added with the possibility of self-diagnosis through various online tools. In this study, we characterize the current terrain of the most popular online symptom checker regarding its application to ophthalmic symptoms. A previous study by Semigran et al9 evaluated 23 symptom checkers using 45 clinical vignettes representing several general medical conditions. They found that, overall, the correct diagnosis was listed first in 34% of evaluations, in the first 3 diagnoses in 51%, and in the top 20 diagnoses in 58%. Other studies have examined OSCs in otolaryngology,10 orthopedics,11 and plastic surgery.12 Overall, the reported OSCs have been more accurate in their assessment than we found in our study of ophthalmic conditions. This difference may be because of the nature of ophthalmology, in which a spectrum of diseases share similar clinical manifestations, often necessitating reliance on physical examination and ancillary testing.

Semigran et al9 also found the OSCs to be more accurate for nonemergent and common conditions; however, appropriate triage advice was provided in 80% of emergent cases and 55% of nonemergent cases. This difference is likely a result of the risk-averse nature of triage information provided by OSCs that favors directing patients to medical attention rather than observation. However, our study found that most cases of inaccurate triage were a result of emergent conditions being triaged as nonemergent based on the top diagnosis. This finding may reflect the overall lower prevalence of life- and limb-threatening conditions in ophthalmology combined with the less-accurate top diagnosis in OSCs when applied to ophthalmic symptoms. Until the accuracy and information provided by OSCs are improved, there is a risk of unnecessary use of health care services and, conversely, missed opportunities for appropriate intervention when seeking care is delayed.

Strengths and Limitations

Strengths of our study include the standardized data input, which controlled for several patient- and scenario-dependent variables, a large spectrum of clinical diagnoses, and the fact that the true diagnosis was known. Conversely, this standardization may be viewed as a weakness because it does not capture the real-life utility of these tools. The effects of educational level, age, technological savviness, anxiety, and other patient factors on the ability to interpret the presented diagnoses could not be studied. Generated vignettes were largely devoid of comorbidities and distractor symptoms, which may have overestimated the accuracy of the OSC. However, these vignettes reflect the type of clinical scenarios used to train physicians in pattern recognition. Although the study examined only a single OSC, WebMD is the most commonly used9 and was the most robust in terms of symptom entry and differential generation compared with the other OSCs we examined for ophthalmic conditions. Given the proprietary technology surrounding WebMD and most other OSCs, it is difficult to dissect how the OSC arrives at its list of diagnoses and to understand the cause of the inaccuracies. In addition, although only 2 participants were involved in entering symptoms, at least moderate agreement between the 2 individuals was observed. Since this study was conducted, WebMD has redesigned the interface and output of its OSC.

Conclusions

Our evaluation of the utility of the most popular online symptom checker for ophthalmic conditions suggests that, although it is possible to arrive at the correct clinical diagnosis, a substantial proportion of diagnoses may not be captured. This finding reflects an incompleteness of the underlying data set from which ophthalmic diagnoses are drawn and imperfections in the decision-making algorithms that may be used. There is room for improvement in the domain of online symptom checkers for ophthalmic symptoms. Future studies may investigate the accuracy of OSCs compared with ophthalmologists for patients in real-life settings. As internet-based diagnostics expand further into medicine, there may be changes in the way that ophthalmologists practice.

Supplement.

eTable. Ophthalmic Clinical Vignettes

eReferences

References

  • 1.Fox S, Duggan M. Health Online 2013. Internet and American Life Project. Washington, DC: Pew Research Center and California Health Care Foundation; 2013:4. [Google Scholar]
  • 2.Bhopal RS, Parkin DW, Gillie RF, Han KH. Pattern of ophthalmological accidents and emergencies presenting to hospitals. J Epidemiol Community Health. 1993;47(5):382-387. doi: 10.1136/jech.47.5.382 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Pierscionek TJ, Moore JE, Pierscionek BK. Referrals to ophthalmology: optometric and general practice comparison. Ophthalmic Physiol Opt. 2009;29(1):32-40. doi: 10.1111/j.1475-1313.2008.00614.x [DOI] [PubMed] [Google Scholar]
  • 4.Hau S, Ioannidis A, Masaoutis P, Verma S. Patterns of ophthalmological complaints presenting to a dedicated ophthalmic accident & emergency department: inappropriate use and patients’ perspective. Emerg Med J. 2008;25(11):740-744. doi: 10.1136/emj.2007.057604 [DOI] [PubMed] [Google Scholar]
  • 5.Channa R, Zafar SN, Canner JK, Haring RS, Schneider EB, Friedman DS. Epidemiology of eye-related emergency department visits. JAMA Ophthalmol. 2016;134(3):312-319. doi: 10.1001/jamaophthalmol.2015.5778 [DOI] [PubMed] [Google Scholar]
  • 6.WebMD. WebMD Symptom Checker. https://symptoms.webmd.com/default.htm#/info. Accessed October 13, 2017.
  • 7.Huang G, Fang CH, Agarwal N, Bhagat N, Eloy JA, Langer PD. Assessment of online patient education materials from major ophthalmologic associations. JAMA Ophthalmol. 2015;133(4):449-454. doi: 10.1001/jamaophthalmol.2014.6104 [DOI] [PubMed] [Google Scholar]
  • 8.Narendran N, Amissah-Arthur K, Groppe M, Scotcher S. Internet use by ophthalmology patients. Br J Ophthalmol. 2010;94(3):378-379. doi: 10.1136/bjo.2009.170324 [DOI] [PubMed] [Google Scholar]
  • 9.Semigran HL, Linder JA, Gidengil C, Mehrotra A. Evaluation of symptom checkers for self diagnosis and triage: audit study. BMJ. 2015;351:h3480. doi: 10.1136/bmj.h3480 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Farmer SEJ, Bernardotto M, Singh V. How good is internet self-diagnosis of ENT symptoms using Boots WebMD symptom checker? Clin Otolaryngol. 2011;36(5):517-518. doi: 10.1111/j.1749-4486.2011.02375.x [DOI] [PubMed] [Google Scholar]
  • 11.Bisson LJ, Komm JT, Bernas GA, et al. Accuracy of a computer-based diagnostic program for ambulatory patients with knee pain. Am J Sports Med. 2014;42(10):2371-2376. doi: 10.1177/0363546514541654 [DOI] [PubMed] [Google Scholar]
  • 12.Hageman MGJS, Anderson J, Blok R, Bossen JKJ, Ring D. Internet self-diagnosis in hand surgery. Hand (N Y). 2015;10(3):565-569. doi: 10.1007/s11552-014-9707-x [DOI] [PMC free article] [PubMed] [Google Scholar]
