Abstract
Purpose:
To determine the predictive value of International Classification of Diseases, 9th Revision (ICD-9) codes for identifying infantile eye diagnoses.
Methods:
Population-based retrospective cohort study of all residents of Olmsted County, Minnesota diagnosed at ≤1 year of age with an ocular disorder. The medical records of all infants diagnosed with any ocular disorder from January 1, 2005, through December 31, 2014, were identified. To assess ICD-9 code accuracy, the medical records of all diagnoses with ≥20 cases were individually reviewed and compared to their corresponding ICD-9 codes. Main outcome measures included positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of ICD-9 codes.
Results:
In a cohort of 5,109 infants with ≥1 eye-related ICD-9 code, 10 ocular diagnoses met study criteria. The most frequent diagnoses were conjunctivitis (N=1,695) and congenital nasolacrimal duct obstruction (N=1,275), while the least common was physiologic anisocoria (N=23). PPVs ranged from 8.3% to 88.0%, NPVs from 96.3% to 100%, sensitivity from 3.0% to 98.7%, and specificity from 72.6% to 99.9%. ICD-9 codes were most accurate at identifying physiologic anisocoria (PPV: 88.0%) and least accurate at identifying preseptal cellulitis (PPV: 8.3%). PPVs of ICD-9 codes assigned by eye specialists versus non-eye specialists differed significantly for conjunctivitis (26.8% vs. 63.9%, p<0.001), pseudostrabismus (85.9% vs. 25.0%, p<0.001), and physiologic anisocoria (95.5% vs. 33.3%, p=0.002).
Conclusion:
The predictive value of ICD-9 codes for capturing infantile ocular diagnoses varied widely in this cohort. These findings emphasize the limitations of database research methodologies that solely utilize claims data to identify pediatric eye diseases.
Keywords: Pediatric ophthalmology; International Classification of Diseases, Ninth Revision; ICD-9; Positive predictive value; Negative predictive value; Sensitivity; Specificity
Introduction
The use of administrative claims data in ophthalmology database research has increased over the past decade, particularly in studies investigating ophthalmic epidemiology.1 Although claims data are appealing because they can conveniently generate large sample sizes, busy clinicians may submit erroneous data or omit relevant diagnostic codes, contributing to inaccurate or non-specific billing data.2, 3 Prior investigations of the accuracy of International Classification of Diseases, Ninth Revision (ICD-9) billing codes for ophthalmic conditions have provided contrasting findings. Claims data for common diagnoses such as cataract, glaucoma suspect, diabetic retinopathy, and non-exudative age-related macular degeneration tend to have higher predictive value,4–6 while ICD-9 codes for less frequent conditions such as neuro-ophthalmic and ocular inflammatory disease are generally less accurate.7–10
Understanding the predictive value of claims data is imperative to interpreting the findings and validity of database research studies. Claims data have been utilized in pediatric ophthalmology database research;11, 12 however, it remains unknown whether billing codes accurately capture pediatric ocular conditions. To our knowledge, no studies to date have assessed the accuracy of claims data for capturing pediatric eye diagnoses. The purpose of this study was to investigate the accuracy of ICD-9 codes for identifying infantile ocular diagnoses using a population-based cohort of infants diagnosed over a 10-year period.
Materials and Methods
The medical records of all Olmsted County, Minnesota residents diagnosed with any ocular condition at ≤1 year of age from January 1, 2005, through December 31, 2014, were retrospectively reviewed. Patients were identified using the Rochester Epidemiology Project, a medical record linkage system that tracks medical care delivered to residents of Olmsted County, Minnesota using diagnostic and surgical procedure codes.13 The patient population in Olmsted County is relatively isolated from other urban areas, and the Rochester Epidemiology Project captures virtually all medical care provided by Mayo Clinic, Olmsted Medical Group, and affiliated hospitals.14 The Institutional Review Boards of Mayo Clinic and Olmsted Medical Center approved this retrospective cohort study.
Using the Rochester Epidemiology Project, an extensive diagnostic code search utilizing 1,007 eye-related ICD-9 codes was performed to identify all potential patients diagnosed with any ocular disease in the first year of life during the 10-year study period (Supplemental eTable). Patients were excluded if they lived outside Olmsted County at the time of diagnosis, if their birth date was outside the study period, or if they were older than 12 months when diagnosed with an ocular condition. All medical records identified via the ICD-9 diagnostic code search were individually reviewed to assess diagnoses and demographic data. For provider specialty, eye specialists were defined as ophthalmologists and optometrists. Non-eye specialists were defined as all other specialties including pediatrics, family medicine, and emergency medicine.
ICD-9 code accuracy was evaluated for diagnoses in which ≥20 confirmed cases were identified during the 10-year study period (Table 1). Diagnoses identified via ICD-9 codes alone were compared to diagnoses confirmed via individual review of the medical record. ICD-9 codes were considered congruent with the medical record-confirmed diagnosis when the correct diagnosis was identified by both the ICD-9 code search and review of the medical record, as well as when the diagnosis was absent in both the ICD-9 code search and medical records. Conversely, ICD-9 codes were considered incongruent when diagnoses were reflected in the ICD-9 coding but not the medical record, and vice versa.
Table 1: ICD-9 codes considered accurate for each diagnosis with ≥20 confirmed cases
Diagnosis | Accurate ICD-9 codes |
---|---|
Conjunctivitis | 370.3–370.49, 372.0–372.39 |
Congenital nasolacrimal duct obstruction | 375.52–375.56, 743.65 |
Esotropia (all subtypes) | 378.00, 378.05, 378.35, 378.01 |
Exotropia (all subtypes) | 378.10, 378.11, 378.13, 378.15 |
Physiologic anisocoria | 379.41 |
Preseptal cellulitis | 682.0 |
Pseudostrabismus | 378.87, 378.9 |
Ptosis | 374.3, 743.61, 374.30 |
Retinopathy of prematurity | 362.20–362.21, 362.23–362.25 |
Subconjunctival hemorrhage | 372.72 |
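The congruence assessment described above amounts to building a two-by-two classification for each diagnosis. Analyses in this study were performed in SAS; purely as an illustrative sketch, assuming each infant's ICD-9-derived and chart-confirmed diagnoses are available as Python sets, the classification logic could be expressed as follows (the data structure and function name are hypothetical):

```python
from collections import Counter

def classify(cohort, diagnosis):
    """Tally the 2x2 agreement between ICD-9 coding and chart review for one diagnosis.

    `cohort` is an iterable of (icd9_diagnoses, chart_diagnoses) pairs, where each
    element is a set of diagnosis labels derived from the ICD-9 code search and from
    individual review of the medical record, respectively.
    """
    counts = Counter()
    for icd9_dx, chart_dx in cohort:
        coded = diagnosis in icd9_dx       # flagged by the ICD-9 code search
        confirmed = diagnosis in chart_dx  # confirmed on chart review
        if coded and confirmed:
            counts["TP"] += 1  # congruent: present in both
        elif not coded and not confirmed:
            counts["TN"] += 1  # congruent: absent in both
        elif confirmed:
            counts["FN"] += 1  # incongruent: in the medical record only
        else:
            counts["FP"] += 1  # incongruent: in the ICD-9 coding only
    return counts
```

Applied to the full cohort, a call such as `classify(cohort, "conjunctivitis")` would reproduce the counts reported per diagnosis in Table 2.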
The main outcome measures were positive predictive value (PPV) and negative predictive value (NPV) of ICD-9 codes for each diagnosis. PPV was defined as the ratio of the number of confirmed cases by review of the medical record with an accurate ICD-9 code (true positives) to the number of medical records identified via ICD-9 code search alone (true positives plus false positives). NPV was defined as the ratio of the number of cases that lacked both a medical record-confirmed diagnosis and the associated ICD-9 code (true negatives) to the number of all cases that did not hold an ICD-9 code for a particular diagnosis (true negatives plus false negatives). Sensitivity and specificity were also calculated as secondary outcome measures for each diagnosis. Data analysis was performed using SAS Version 9 (SAS Institute; Cary, North Carolina).
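Written out, with TP, TN, FP, and FN denoting the true positives, true negatives, false positives, and false negatives defined above (and tabulated per diagnosis in Table 2):

$$\mathrm{PPV} = \frac{TP}{TP+FP}, \qquad \mathrm{NPV} = \frac{TN}{TN+FN}, \qquad \mathrm{Sensitivity} = \frac{TP}{TP+FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN+FP}$$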
Results
There were 19,833 births in Olmsted County, Minnesota during the 10-year study period. The initial search of 1,007 ICD-9 codes identified 5,109 infants in Olmsted County who potentially had an ocular diagnosis. After review of each individual medical record, 4,223 (82.7%) infants were confirmed to have an ocular diagnosis. Among these 4,223 infants, 1,951 (46.2%) were female and 3,185 (75.4%) were White, compared with the overall birth population, of which 9,687 (48.8%) were female and 13,348 (67.3%) were White. Of the 1,007 ICD-9 codes initially searched, 180 (17.9%) unique codes were used to bill for an ocular diagnosis among the 4,223 subjects with a confirmed diagnosis.
Ten diagnoses had ≥20 cases confirmed via review of the medical records: conjunctivitis (N=1,695), congenital nasolacrimal duct obstruction (CNLDO) (N=1,275), pseudostrabismus (N=173), retinopathy of prematurity (N=76), esotropia (all subtypes) (N=40), ptosis (N=39), preseptal cellulitis (N=33), exotropia (all subtypes) (N=31), subconjunctival hemorrhage (N=26), and physiologic anisocoria (N=23). The number of cases identified via ICD-9 code alone versus the number of medical record-confirmed cases is reported in Table 2.
Table 2: Medical record data versus ICD-9 code search data
Diagnosis | Number of potential cases identified via initial ICD-9 code search | Present in both medical record and ICD-9 code search (True positives; congruent) | Absent in both medical record and ICD-9 code search (True negatives; congruent) | Present in medical record, absent in ICD-9 code search (False negatives; incongruent) | Absent in medical record, present in ICD-9 code search (False positives; incongruent) |
---|---|---|---|---|---|
Conjunctivitis (N = 1,695) | 2,535 | 1,599 | 2,478 | 96 | 936 |
Congenital nasolacrimal duct obstruction (N = 1,275) | 1,445 | 1,184 | 3,573 | 91 | 261 |
Pseudostrabismus (N = 173) | 209 | 160 | 4,887 | 13 | 49 |
Retinopathy of prematurity (N = 76) | 87 | 75 | 5,021 | 1 | 12 |
Esotropia (all subtypes) (N = 40) | 89 | 31 | 5,011 | 9 | 58 |
Ptosis (N = 39) | 31 | 24 | 5,063 | 15 | 7 |
Preseptal cellulitis (N = 33) | 12 | 1 | 5,065 | 32 | 11 |
Exotropia (all subtypes) (N = 31) | 24 | 15 | 5,069 | 16 | 9 |
Subconjunctival hemorrhage (N = 26) | 27 | 21 | 5,077 | 5 | 6 |
Physiologic anisocoria (N = 23) | 25 | 22 | 5,083 | 1 | 3 |
Abbreviations: N = number of cases confirmed via individual review of the medical record.
For the 10 queried diagnoses in the 5,109 medical records with ≥1 eye-related ICD-9 code that were individually reviewed, positive predictive values (PPV) ranged from 8.3% (preseptal cellulitis) to 88.0% (physiologic anisocoria) (Table 3). The negative predictive values (NPV) ranged from 96.3% (conjunctivitis) to 100% (retinopathy of prematurity, physiologic anisocoria). Sensitivity ranged from 3.0% (preseptal cellulitis) to 98.7% (retinopathy of prematurity), and specificity ranged from 72.6% (conjunctivitis) to 99.9% (ptosis, subconjunctival hemorrhage, physiologic anisocoria).
Table 3: Positive predictive value, negative predictive value, sensitivity, and specificity of ICD-9 codes for each diagnosis
Diagnosis | Positive Predictive Value | Negative Predictive Value | Sensitivity | Specificity |
---|---|---|---|---|
Conjunctivitis (N = 1,695) | 63.1% | 96.3% | 94.3% | 72.6% |
Congenital nasolacrimal duct obstruction (N = 1,275) | 81.9% | 97.5% | 92.9% | 93.2% |
Pseudostrabismus (N = 173) | 76.6% | 99.7% | 92.5% | 99.0% |
Retinopathy of prematurity (N = 76) | 86.2% | 100.0% | 98.7% | 99.8% |
Esotropia (all subtypes) (N = 40) | 34.8% | 99.8% | 77.5% | 98.9% |
Ptosis (N = 39) | 77.4% | 99.7% | 61.5% | 99.9% |
Preseptal cellulitis (N = 33) | 8.3% | 99.2% | 3.0% | 99.7% |
Exotropia (all subtypes) (N = 31) | 62.5% | 99.7% | 48.4% | 99.8% |
Subconjunctival hemorrhage (N = 26) | 77.8% | 99.9% | 80.8% | 99.9% |
Physiologic anisocoria (N = 23) | 88.0% | 100.0% | 95.7% | 99.9% |
Abbreviations: N = number of cases confirmed via individual review of the medical record; ICD-9 = International Classification of Diseases, 9th Revision.
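As a worked check of these measures, the conjunctivitis row of Table 3 follows directly from the corresponding counts in Table 2 (TP = 1,599, TN = 2,478, FN = 96, FP = 936):

$$\mathrm{PPV} = \frac{1599}{1599+936} \approx 63.1\%, \qquad \mathrm{NPV} = \frac{2478}{2478+96} \approx 96.3\%, \qquad \mathrm{Sensitivity} = \frac{1599}{1599+96} \approx 94.3\%, \qquad \mathrm{Specificity} = \frac{2478}{2478+936} \approx 72.6\%$$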
When stratified by specialty of the diagnosing provider, PPVs for ICD-9 codes assigned by eye specialists (i.e., ophthalmologists and optometrists) ranged from 26.8% (conjunctivitis) to 95.5% (physiologic anisocoria) (Table 4). Among non-eye specialists, PPVs ranged from 12.5% (preseptal cellulitis) to 100% (exotropia, all subtypes). PPVs for ICD-9 codes assigned by eye specialists versus non-eye specialists, respectively, differed significantly for conjunctivitis (26.8% vs. 63.9%, p<0.001), pseudostrabismus (85.9% vs. 25.0%, p<0.001), and physiologic anisocoria (95.5% vs. 33.3%, p=0.002). There was no significant difference in PPV between eye specialists and non-eye specialists for CNLDO (87.2% vs. 81.6%, p=0.17), esotropia (all subtypes) (36.8% vs. 23.1%, p=0.34), ptosis (83.3% vs. 57.1%, p=0.14), exotropia (all subtypes) (60.9% vs. 100.0%, p=0.43), or subconjunctival hemorrhage (87.5% vs. 73.7%, p=0.43). PPVs could not be compared between eye specialists and non-eye specialists for retinopathy of prematurity or preseptal cellulitis because no non-eye specialist diagnosed retinopathy of prematurity and no eye specialist diagnosed preseptal cellulitis.
Table 4: Positive predictive value of ICD-9 codes by specialty of the diagnosing provider
Diagnosis | Overall PPV | PPV when diagnosed by eye specialist^a | PPV when diagnosed by non-eye specialist | P-value^b |
---|---|---|---|---|
Conjunctivitis (N = 1,695) | 63.1% | 26.8% (N=22) | 63.9% (N=1,673) | <0.001 |
Congenital nasolacrimal duct obstruction (N = 1,275) | 81.9% | 87.2% (N=96) | 81.6% (N=1,179) | 0.17 |
Pseudostrabismus (N = 173) | 76.6% | 85.9% (N=165) | 25.0% (N=8) | <0.001 |
Retinopathy of prematurity (N = 76) | 86.2% | 92.6% (N=76) | N/A^c | - |
Esotropia (all subtypes) (N = 40) | 34.8% | 36.8% (N=36) | 23.1% (N=4) | 0.34 |
Ptosis (N = 39) | 77.4% | 83.3% (N=30) | 57.1% (N=9) | 0.14 |
Preseptal cellulitis (N = 33) | 8.3% | N/A^d | 12.5% (N=33) | - |
Exotropia (all subtypes) (N = 31) | 62.5% | 60.9% (N=23) | 100.0% (N=8) | 0.43 |
Subconjunctival hemorrhage (N = 26) | 77.8% | 87.5% (N=9) | 73.7% (N=17) | 0.43 |
Physiologic anisocoria (N = 23) | 88.0% | 95.5% (N=21) | 33.3% (N=2) | 0.002 |
Abbreviations: N = number of cases confirmed by review of the medical record; PPV = positive predictive value.
^a Eye specialist was defined as an ophthalmologist or optometrist.
^b P-values calculated using the Chi-square test comparing PPV in eye specialists versus non-eye specialists.
^c No cases of retinopathy of prematurity were diagnosed by non-eye specialists.
^d No cases of preseptal cellulitis were diagnosed by eye specialists.
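The between-specialty comparisons in Table 4 were computed in SAS (footnote b). As a rough, non-authoritative sketch of the same chi-square test in Python, one could tabulate correctly versus incorrectly coded cases by provider type; the counts below are hypothetical placeholders, since the per-specialty denominators are not reported here.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table for a single diagnosis:
# rows = provider type, columns = ICD-9 code [correct, incorrect] on chart review
table = [
    [150, 25],  # eye specialists (placeholder counts)
    [  2,  6],  # non-eye specialists (placeholder counts)
]

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```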
Discussion
In this population-based cohort of 5,109 infants identified via an extensive diagnostic code search for all ocular conditions, overall positive predictive values (PPVs) of ICD-9 codes for infantile ocular diseases varied widely, from 8.3% to 88.0%. ICD-9 codes for more common conditions such as conjunctivitis and CNLDO were not necessarily more predictive than codes for less frequent diagnoses such as esotropia and exotropia. When comparing claims data associated with eye specialists versus non-eye specialists, eye specialists more accurately billed for pseudostrabismus and physiologic anisocoria, while non-eye specialists more accurately billed for conjunctivitis. These findings suggest that the predictive value of ICD-9 billing codes for capturing infantile eye diseases varies widely, highlighting the potential limitations of pediatric ophthalmology database research studies that rely solely on claims data to identify subjects.
The ICD-9 contains over 12,000 diagnostic and 3,500 procedure codes. Factors that contribute to imprecise claims coding and variable ICD-9 accuracy include unintentional and intentional coding errors (e.g., upcoding, misspecification, incorrect unbundling of codes), a lack of institutional quality control efforts, and variation in coder training and experience.3, 15 Many ICD-9 codes have broad descriptions and are used for a wide range of diagnoses, rendering them non-specific.16 Inaccurate coding may also reflect the omission of billing codes by busy providers: if a patient was diagnosed with multiple conditions, the provider may have billed for only one of them, and claims data would then fail to capture the diagnoses that were not assigned ICD-9 codes.
These factors likely contributed to the variable accuracy of ICD-9 codes observed in this cohort. For example, there was no specific ICD-9 code for preseptal cellulitis, which had the lowest PPV of all diagnoses in this study. As a result, providers had to use a non-specific code, 682.0 (cellulitis and abscess of face), to bill for this condition. Providers may also assign inaccurate diagnosis codes: 20 (60.6%) of the 33 confirmed cases of preseptal cellulitis carried the code 376.01 (orbital cellulitis). In contrast, diagnoses with discrete ICD-9 codes tended to have higher predictive values, such as retinopathy of prematurity [PPV: 86.2%; ICD-9 code 362.20 (retinopathy of prematurity, unspecified)], physiologic anisocoria [PPV: 88.0%; ICD-9 code 379.41 (anisocoria)], and CNLDO [PPV: 81.9%; ICD-9 code 375.55 (obstruction of nasolacrimal duct, neonatal)]. Of note, strabismus-related ICD-9 codes were less accurate despite being discrete [e.g., 378.00 (esotropia, unspecified) and 378.10 (exotropia, unspecified)]; the overall PPV was 34.8% for esotropia (all subtypes) and 62.5% for exotropia (all subtypes). A plausible explanation is that primary care providers who suspected strabismus may have applied these billing codes indiscriminately and referred the patient to a pediatric eye specialist. If the pediatric ophthalmologist then ruled out strabismus but the strabismus billing codes were not removed, the patient would carry strabismus-related ICD-9 codes without a true diagnosis of strabismus. Provider variability in coding habits, non-specific codes, and incomplete data represent major limitations of database research that relies on claims data.
Adding complementary search criteria such as specialty of the diagnosing provider, diagnostic testing data, and pharmacy data may increase the validity of ICD codes.17 In an investigation of ICD code accuracy for idiopathic intracranial hypertension, Khushzad et al. reported that ICD codes had greater predictive value when assigned by relevant subspecialists (e.g., neurologists, ophthalmologists, neurosurgeons) and when associated with billing codes for appropriate diagnostic workup (e.g., lumbar puncture, neuroimaging) and treatment (e.g., acetazolamide).18 In this study, ICD-9 codes assigned by eye specialists had higher predictive value for pseudostrabismus and physiologic anisocoria. A potential explanation is that pseudostrabismus and physiologic anisocoria are diagnoses of exclusion that require more specialized evaluation, so eye specialists are more likely to accurately diagnose and bill for these conditions. In contrast, conjunctivitis ICD-9 codes assigned by non-eye specialists had higher predictive value, which may reflect the fact that primary care providers diagnose most conjunctivitis cases (98.7% of cases in this cohort) and are therefore more adept at billing for this condition. Further studies should investigate whether requiring billing codes for diagnostic workup or treatment, such as orthoptic measurements in children with strabismus, increases the accuracy of pediatric eye disease diagnosis codes.
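To illustrate the complementary-criteria idea described above, the sketch below filters a hypothetical claims extract so that a strabismus code counts only when it is assigned by an eye specialist and accompanied by a supporting procedure code. The column names, code sets, and the specific procedure code are assumptions for illustration only, not part of this study's methods.

```python
import pandas as pd

# Hypothetical claims extract: one row per billed diagnosis code per encounter
claims = pd.DataFrame({
    "patient_id":         [1, 1, 2, 3],
    "icd9_code":          ["378.00", "378.00", "378.00", "378.10"],
    "provider_specialty": ["pediatrics", "ophthalmology", "pediatrics", "ophthalmology"],
    "cpt_code":           [None, "92060", None, None],  # illustrative sensorimotor-exam code
})

strabismus_codes = {"378.00", "378.10"}          # esotropia / exotropia, unspecified
eye_specialties = {"ophthalmology", "optometry"}

# Code-only cohort (the approach evaluated in this study)
by_code = set(claims.loc[claims["icd9_code"].isin(strabismus_codes), "patient_id"])

# Refined cohort: code assigned by an eye specialist AND a supporting
# procedure code present on at least one of the patient's claims
coded_by_specialist = claims["icd9_code"].isin(strabismus_codes) & \
    claims["provider_specialty"].isin(eye_specialties)
has_workup = set(claims.loc[claims["cpt_code"].notna(), "patient_id"])
refined = set(claims.loc[coded_by_specialist, "patient_id"]) & has_workup

print(sorted(by_code), sorted(refined))  # e.g., [1, 2, 3] vs. [1]
```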
Alternative methodologies that may overcome the limitations of claims data in database research include using published ICD-9 accuracy rates as reference indices when review of individual medical records is not feasible.19 Because accuracy parameters have been estimated in studies such as this one, PPVs could be incorporated into statistical analyses as correction factors. However, coding accuracy likely varies by institution, specialty, department, and even among providers within a department, so the generalizability of these correction factors may be limited. Another promising direction is big data-based machine learning algorithms, which analyze both claims data and free text from the electronic medical record to identify diagnoses. Stein et al. developed an algorithm that combined ICD data with free text from the electronic medical record to identify exfoliation syndrome with a PPV of 95.0% in a cohort of over 122,000 patients.20 Future database research studies should consider incorporating these methods to enhance investigators' ability to use billing data to accurately and efficiently study patients with ocular diagnoses.
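As a minimal illustration of the correction-factor idea, assuming the PPV and sensitivity reported in Table 3 apply to a new claims-only cohort, multiplying the claims-identified count by the PPV estimates the number of true positives, and further dividing by sensitivity approximates the total number of true cases (true positives plus false negatives):

```python
# Conjunctivitis values from Tables 2 and 3 of this study
claims_identified = 2535            # infants flagged by conjunctivitis ICD-9 codes
ppv, sensitivity = 0.631, 0.943     # published accuracy parameters

estimated_true_positives = claims_identified * ppv              # ~1,600 (Table 2: 1,599)
estimated_total_cases = estimated_true_positives / sensitivity  # ~1,696 (chart-confirmed: 1,695)
print(round(estimated_true_positives), round(estimated_total_cases))
```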
The limitations of this study include its retrospective design, which contributed to non-standardized diagnostic criteria, ocular evaluation, data collection, and follow-up; as a result, diagnoses may have been missed, misdiagnosed, or omitted from documentation in the medical record. Furthermore, ocular conditions in this cohort were assessed among infants in a predominantly White Midwestern United States county population and might not be generalizable to other patient populations with respect to age (e.g., grade school or adolescent children) and race. This study's main outcome measures were also highly dependent on the providers assigning the billing codes (e.g., pediatricians, family practitioners, pediatric ophthalmologists) and may not be applicable to other practices, institutions, and health care systems. Moreover, the data were obtained from patients diagnosed from 2005 to 2014 and may have limited generalizability to datasets outside this timespan. Medicare documentation requirements became more rigorous in the early 2000s, and ICD codes evolved rapidly to keep pace with billing requirements, likely making ICD-9 coding more accurate than in the 1990s.3 ICD-10, implemented in 2015, introduced more complex and more specific billing codes, which may have further increased the predictive value of claims data.21 Thus, the findings of this study may be less generalizable to database research performed before 2005 or after 2015.
Claims data demonstrated widely variable predictive value for the most common infantile eye conditions in this cohort of 5,109 infants with at least one eye-related ICD-9 code. ICD-9 codes for physiologic anisocoria and pseudostrabismus were more accurate when submitted by eye care providers, while claims data for conjunctivitis were more accurate when submitted by non-eye care providers. These findings suggest that the use of claims data alone to identify pediatric eye diseases has limited predictive value and may overestimate the true number of diagnoses within a cohort. Alternative methodologies, such as incorporating complementary search criteria and free text from the medical record into coding algorithms, should be considered to more accurately identify pediatric eye diagnoses in ophthalmology database research.
Supplementary Material
Acknowledgments
This study used the resources of the Rochester Epidemiology Project (REP) medical records-linkage system, which is supported by the National Institute on Aging (NIA; AG 058738), by the Mayo Clinic Research Committee, and by fees paid annually by REP users. The content of this article is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health (NIH) or the Mayo Clinic.
Abbreviations/Acronyms
- PPV
Positive predictive value
- NPV
Negative predictive value
- CNLDO
Congenital nasolacrimal duct obstruction
Footnotes
Data Availability Statement: The data supporting the results are available upon reasonable request to the senior author.
Conflict of Interest Statement: None of the authors have any proprietary interests or conflicts of interest related to this submission.
References
- 1. Tan JCK, Ferdi AC, Gillies MC, Watson SL. Clinical Registries in Ophthalmology. Ophthalmology 2019;126(5):655–62.
- 2. Stein JD, Lum F, Lee PP, et al. Use of health care claims data to study patients with ophthalmologic conditions. Ophthalmology 2014;121(5):1134–41.
- 3. O'Malley KJ, Cook KF, Price MD, et al. Measuring diagnoses: ICD code accuracy. Health Serv Res 2005;40(5 Pt 2):1620–39.
- 4. Muir KW, Gupta C, Gill P, Stein JD. Accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification billing codes for common ophthalmic conditions. JAMA Ophthalmol 2013;131(1):119–20.
- 5. Lau M, Prenner JL, Brucker AJ, VanderBeek BL. Accuracy of Billing Codes Used in the Therapeutic Care of Diabetic Retinopathy. JAMA Ophthalmol 2017;135(7):791–4.
- 6. Bearelly S, Mruthyunjaya P, Tzeng JP, et al. Identification of Patients With Diabetic Macular Edema From Claims Data: A Validation Study. Arch Ophthalmol 2008;126(7):986–9.
- 7. Pimentel MA, Browne EN, Janardhana PM, et al. Assessment of the Accuracy of Using ICD-9 Codes to Identify Uveitis, Herpes Zoster Ophthalmicus, Scleritis, and Episcleritis. JAMA Ophthalmol 2016;134(9):1001–6.
- 8. Palestine AG, Merrill PT, Saleem SM, et al. Assessing the Precision of ICD-10 Codes for Uveitis in 2 Electronic Health Record Systems. JAMA Ophthalmol 2018;136(10):1186–90.
- 9. Hamedani AG, De Lott LB, Deveney T, Moss HE. Validity of International Classification of Diseases Codes for Identifying Neuro-Ophthalmic Disease in Large Data Sets: A Systematic Review. J Neuroophthalmol 2020;40(4):514–9.
- 10. Uchiyama E, Faez S, Nasir H, et al. Accuracy of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) as a research tool for identification of patients with uveitis and scleritis. Ophthalmic Epidemiol 2015;22(2):139–41.
- 11. Ryu WY, Lambert SR. Incidence of strabismus and amblyopia among children initially diagnosed with pseudostrabismus using the Optum® dataset. Am J Ophthalmol 2019;211:98–104.
- 12. Repka MX, Lum F, Burugapalli B. Strabismus, Strabismus Surgery, and Reoperation Rate in the United States: Analysis from the IRIS Registry. Ophthalmology 2018;125(10):1646–53.
- 13. Melton LJ 3rd. History of the Rochester Epidemiology Project. Mayo Clin Proc 1996;71(3):266–74.
- 14. Rocca WA, Yawn BP, St Sauver JL, et al. History of the Rochester Epidemiology Project: half a century of medical records linkage in a US population. Mayo Clin Proc 2012;87(12):1202–13.
- 15. Hwang JC, Yu AC, Casper DS, et al. Representation of ophthalmology concepts by electronic systems: intercoder agreement among physicians using controlled terminologies. Ophthalmology 2006;113(4):511–9.
- 16. Hammond WE. Call for a standard clinical vocabulary. J Am Med Inform Assoc 1997;4(3):254–5.
- 17. Biggerstaff KS, Frankfort BJ, Orengo-Nania S, et al. Validity of code based algorithms to identify primary open angle glaucoma (POAG) in Veterans Affairs (VA) administrative databases. Ophthalmic Epidemiol 2018;25(2):162–8.
- 18. Khushzad F, Kumar R, Muminovic I, Moss HE. Predictive Value of International Classification of Diseases Codes for Idiopathic Intracranial Hypertension in a University Health System. J Neuroophthalmol 2020.
- 19. Borkar DS, Sobrin L, Hubbard RA, et al. Techniques for improving ophthalmic studies performed on administrative databases. Ophthalmic Epidemiol 2019;26(3):147–9.
- 20. Stein JD, Rahman M, Andrews C, et al. Evaluation of an Algorithm for Identifying Ocular Conditions in Electronic Health Record Data. JAMA Ophthalmol 2019;137(5):491–7.
- 21. Cai CX, Michalak SM, Stinnett SS, et al. Effect of ICD-9 to ICD-10 Transition on Accuracy of Codes for Stage of Diabetic Retinopathy and Related Complications: Results from the CODER Study. Ophthalmol Retina 2021;5(4):374–80.