Published in final edited form as: Optom Vis Sci. 2023 Jul 27;100(8):499–506. doi: 10.1097/OPX.0000000000002049

Towards a Real-World Optical Coherence Tomography Reference Database: Optometric Practices as a Source of Healthy Eyes

Donald C Hood 1,2, Mary Durbin 3, Chris Lee 4, Gabriel Gomide 5, Sol La Bruna 6, Michael Chaglasian 7, Emmanouil Tsamis 8,9

Abstract

Significance:

The reports from optical coherence tomography (OCT) instruments depend upon a reference database (RDB) of healthy eyes. Although these RDBs tend to be relatively small, they are time-consuming and expensive to obtain. A larger RDB should improve our ability to screen for diseases such as glaucoma.

Purpose:

To explore the feasibility of developing a large RDB from OCT scans obtained by optometrists as part of their pre-test gathering of information, we tested the hypothesis that these scans are of sufficient quality for an RDB and contain a relatively low base rate of glaucoma and other pathologies.

Methods:

OCT widefield (12 × 9 mm) scans from 400 eyes of 400 patients were randomly selected from a dataset of over 49,000 scans obtained from 4 optometry sites. Based upon a commercial OCT report and a previously validated reading-center method, two OCT graders categorized eyes as: Unacceptable (UN) for use in an RDB, Healthy (H), Optic Neuropathy consistent with Glaucoma (ON-G), Glaucoma Suspect (GS), or Other Pathologies (OP).

Results:

Overall, 29 (7.25%) of the eyes were graded UN. Of the remaining 371 eyes, 352 (94.9%) were graded H. Although for one site 7.4% of the eligible eyes were graded ON-G, the average for the other 3 sites was 1.4%. Adjustments of the reading center criteria resulted in exclusion of over half of these ON-G and OP eyes.

Conclusions:

The OCT scans obtained from optometry practices as part of their pre-test regimen are of sufficient quality for an RDB and contain a relatively low base rate of glaucoma and other pathologies. With the suggested exclusion criteria, the scans from optometry practices that are primarily involved in refraction and medical screening services should yield a large, real-world (RW) RDB with improved specificity and a base rate of glaucoma and/or other pathologies comparable to that of existing RDBs.


Optical coherence tomography (OCT) is becoming increasingly important for diagnosing and following glaucoma. The clinician depends heavily on summary metrics and pie charts when deciding whether a patient’s OCT scan is abnormal. The significance of these summary metrics and pie charts is color-coded based upon a “healthy” reference database (RDB).1

The current reference databases have three weaknesses. First, they tend to be relatively small (about 400 eyes). Thus, these reference databases are likely not large enough to account for the possible confounding factors that may affect OCT scans (e.g., age, gender, ethnic background, disc size, disc-to-fovea distance).2 Second, there are reasons to believe that these reference databases do not adequately reflect the range of healthy eyes tested in a clinic. For an eye to be included in a reference database, the fundus exam must be normal, and the eye typically needs to have a normal and reliable 24–2 visual field test. Further, eyes with high myopia are typically excluded. Third, scan quality is likely better in the case of the reference database than in scans from the clinic. Repeat OCT scans are often required as part of the reference database protocol, and the scan quality must meet the criteria of the testing site, as well as the criteria of a reading center.3 This is not true in the real world of the clinic. A very large real-world reference database of healthy eyes should mitigate some of these problems with the existing reference databases. Unfortunately, prospectively collecting a large reference database is prohibitively expensive. However, OCT scans obtained by optometrists may be a source of numerous scans from healthy eyes. Increasingly, optometry practices are obtaining OCT scans as part of pre-test gathering of information before the patient sees the optometrist. The overwhelming majority of these patients are interested in refraction services and are likely to have healthy eyes.

In this study, we explored the possibility of using the OCT scans obtained by optometrists to develop a real-world reference database (RW-RDB) of thousands of healthy eyes, and we tested the hypothesis that these scans would be of sufficient quality for a reference database and would be overwhelmingly from healthy eyes, with a low base rate of glaucoma and other pathologies. To this end, we studied a sample of 400 scans from 400 eyes, selected from over 49,000 scans. Based upon a reading center approach, our results suggest that optometry practices can supply the scans needed for a large real-world reference database.

METHODS

The data used in this study were from a retrospective study approved by the Advarra Institutional Review Board.

Participants

OCT scans from 400 eyes of 400 patients were randomly selected from a real-world dataset of over 49,000 scans obtained from 4 optometry sites. One hundred eyes were randomly chosen from each of the first 4 participating sites. Sites 3 and 4 were part of the same practice but were separate sites. The distributions of ages are shown in Table 1.

Table 1.

Age distribution by site

Site    <20   20–29   30–39   40–49   50–59   60–69   70–79   80+   Total
1         0      25      47      19       8       1       0     0     100
2         0       2       1      39      22      20      15     1     100
3         1      13      29      23      14      11       9     0     100
4         0      23      19      16      19      16       7     0     100
Total     1      63      96      97      63      48      31     1     400

OCT Scanning and Report

OCT widefield (12 × 9 mm) scans were obtained using 3D OCT-1 Maestro instruments (Topcon, Inc.), which allow for automated alignment and focus. The commercial Hood report shown in Fig. 1 was generated for each of the 400 eyes; see the figure caption for details.4,5

Figure 1.

The commercial Hood Report containing retinal nerve fiber layer (RNFL) (upper right, solid red rectangle) and ganglion cell plus inner plexiform layer (GCL+) (lower right, red rectangle) probability (p-) maps. These p-maps are derived from the RNFL and GCL+ thickness maps by comparing the local values of an individual’s thickness maps to those of normative age-corrected values. Every scan was rotated to a common fovea-to-disc angle, which partially accounts for head-eye torsion and some anatomical differences.
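For orientation, the following is a minimal sketch of how a probability map of this kind can be computed. It is not the commercial implementation; the normative arrays, the pixel-wise normality assumption, and the 1%/5% cutoffs are illustrative assumptions.

```python
import numpy as np

def probability_map(thickness, norm_mean, norm_sd):
    """Classify each pixel of a thickness map against age-corrected norms.

    thickness, norm_mean, norm_sd: 2-D arrays of equal shape (microns).
    Returns an integer map: 0 = within normal limits, 1 = below the ~5th
    percentile, 2 = below the ~1st percentile, assuming local thickness
    is approximately normally distributed at each pixel.
    """
    z = (thickness - norm_mean) / norm_sd      # pixel-wise z-scores
    pmap = np.zeros(thickness.shape, dtype=int)
    pmap[z < -1.645] = 1                       # ~5th percentile cutoff
    pmap[z < -2.326] = 2                       # ~1st percentile cutoff
    return pmap

# Toy example: a 4 x 4 map with one clearly thinned pixel
rng = np.random.default_rng(0)
norm_mean = np.full((4, 4), 100.0)
norm_sd = np.full((4, 4), 10.0)
thickness = norm_mean + rng.normal(0.0, 10.0, (4, 4))
thickness[0, 0] = 60.0                         # well below the 1st percentile
print(probability_map(thickness, norm_mean, norm_sd))
```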

Categorization of Scans

Excluding Unacceptable Scans

Based upon a Hood report (Fig. 1) and a foveal b-scan, two OCT graders categorized eyes as “Acceptable” or “Unacceptable” for use in quantitative (metric) analysis. This involved first examining the circumpapillary (cp) b-scan in the upper left panel of the report. It was judged “Unacceptable” if there was evidence of clipping, as in Fig. 2A (middle), and/or segmentation errors (Fig. 2A, left and right panels) that would substantially affect the total circumpapillary retinal nerve fiber layer (cpRNFL) thickness or the cpRNFL thickness of the quadrants of the disc as seen on the pie chart (arrow in Fig. 1). If the b-scan was acceptable, the macular ganglion cell plus inner plexiform layer (GCL+) thickness map was examined and deemed “Unacceptable” if there was evidence of artifacts affecting the thick donut-shaped region, as seen in Fig. 2C. The two OCT graders disagreed in 18 (4.5%) cases on whether the scan was “Acceptable” or “Unacceptable”. These disagreements were resolved via adjudication. See Appendix A1 (available at [LWW insert link]) for detailed procedures of our reading center method.
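This two-step triage can be summarized as the sketch below; the boolean inputs are hypothetical placeholders for the graders’ judgments, not outputs of an automated algorithm.

```python
def grade_scan_quality(cp_bscan_clipped: bool,
                       cp_segmentation_error: bool,
                       gcl_donut_artifact: bool) -> str:
    """Two-step quality triage mirroring the procedure described above.

    Each boolean stands in for a grader's judgment:
    Step 1: reject if the circumpapillary b-scan shows clipping and/or
            segmentation errors that would affect cpRNFL thickness metrics.
    Step 2: otherwise reject if artifacts disrupt the thick donut-shaped
            region of the GCL+ thickness map.
    """
    if cp_bscan_clipped or cp_segmentation_error:
        return "Unacceptable"
    if gcl_donut_artifact:
        return "Unacceptable"
    return "Acceptable"

print(grade_scan_quality(False, False, False))  # Acceptable
print(grade_scan_quality(False, True, False))   # Unacceptable
```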

Figure 2.

(A) The derived circumpapillary b-scan images for 3 of the scans judged “Unacceptable” for use in a real-world (RW) reference database (RDB). (B) Same for 3 scans judged “Acceptable” for use in an RDB. (C) The GCL+ thickness maps for 3 of the scans judged “Unacceptable” for use in a RW-RDB. (D) Same for 3 scans judged “Acceptable” for use in an RDB.

Labelling the Acceptable Scans

Using a previously validated method,6,7 the OCT graders labelled the Acceptable scans as Healthy (H), Optic Neuropathy consistent with Glaucoma (ON-G), Glaucoma Suspect (GS), or Other Pathologies (OP). There was disagreement between the two OCT graders in 23 (6.2%) of the Acceptable scans, although in no case did it involve a disagreement between ON-G and healthy judgments. All disagreements were resolved via adjudication: 21 eyes were labelled as H, 1 as GS, and the last as OP. In 18 cases, the Hood report for the other (non-study) eye was used to aid the adjudication. See Appendix A1, available at [LWW insert link], for details.

Labelling the Scans for the Commercial Maestro Reference Database

To compare the number of ON-G and OP eyes in the existing Maestro reference database with that in our real-world sample, the same method was used to grade the Hood reports from the 398 eyes in the commercial reference database. The development and characteristics of the Maestro commercial reference database are described in Chaglasian et al.3

RESULTS

Unacceptable Scans

Table 2 shows the results for all 400 of the real-world scans, while Table 3 shows the details for each site. Overall, 29 (7.25%) of the 400 scans were graded unacceptable for use in a reference database (Table 2). Site 4 had the most (12%), while the other 3 sites had 5% (site 3) or 6% (sites 1 and 2) (Table 3). Figure 2A,C shows examples of unacceptable b-scans and GCL+ thickness maps, and Appendix A2, available at [LWW insert link], contains the Hood reports (Fig. 1) for all 29.

Table 2.

Labels for all 4 sites with and without poor scans.

                     All Scans          Without Poor Scans
Grade                 #       %            #       %
HC                  352   88.0%          352   94.9%
OCT-S                 2    0.5%            2    0.5%
ON-G                 11    2.8%           11    3.0%
Other Pathology       6    1.5%            6    1.6%
Poor Scan            29   7.25%
Total               400    100%          371    100%

Table 3.

Labels by site.

                          All Scans          Without Poor Scans
        Grade              #       %            #       %

Site 1  HC                93     93%           93   98.9%
        OCT-S              0      0%            0      0%
        ON-G               1      1%            1    1.1%
        Other Pathology    0      0%            0      0%
        Poor Scan          6      6%
        Total            100    100%           94    100%

Site 2  HC                82     82%           82   87.2%
        OCT-S              1      1%            1    1.1%
        ON-G               7      7%            7    7.4%
        Other Pathology    4      4%            4    4.3%
        Poor Scan          6      6%
        Total            100    100%           94    100%

Site 3  HC                91     91%           91   95.8%
        OCT-S              1      1%            1    1.1%
        ON-G               2      2%            2    2.1%
        Other Pathology    1      1%            1    1.1%
        Poor Scan          5      5%
        Total            100    100%           95    100%

Site 4  HC                86     86%           86   97.7%
        OCT-S              0      0%            0      0%
        ON-G               1      1%            1    1.1%
        Other Pathology    1      1%            1    1.1%
        Poor Scan         12     12%
        Total            100    100%           88    100%

The image quality score, a commercial signal-to-noise metric, is not a good indicator of whether a scan is unacceptable. Figure 3 shows the image quality values for all 400 eyes, with the values for the 29 Unacceptable scans in red. Notice that it is not possible to distinguish acceptable from unacceptable scans based upon image quality alone. See also the image quality values for the examples in Fig. 2A,C.
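One way to make this claim quantitative is the area under the ROC curve for image quality as a predictor of acceptability; a value near 0.5 indicates no discrimination. Below is a minimal sketch using simulated image quality values (the distributions are assumptions for illustration; the study did not report an AUC).

```python
import numpy as np

def roc_auc(scores_unacceptable, scores_acceptable):
    """Mann-Whitney estimate of ROC area: the probability that a randomly
    chosen Unacceptable scan has a lower image quality score than a
    randomly chosen Acceptable one (ties count half)."""
    u = np.asarray(scores_unacceptable, dtype=float)[:, None]
    a = np.asarray(scores_acceptable, dtype=float)[None, :]
    return (u < a).mean() + 0.5 * (u == a).mean()

# Hypothetical image quality values with the study's group sizes (29 vs 371).
# Heavily overlapping distributions give an AUC near 0.5, i.e., image
# quality carries little information about acceptability.
rng = np.random.default_rng(1)
unacceptable = rng.normal(55, 8, 29)
acceptable = rng.normal(57, 8, 371)
print(f"AUC = {roc_auc(unacceptable, acceptable):.2f}")
```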

Figure 3.

The Image Quality from the commercial software for all 400 scans. The red symbols are the scans judged to be “Unacceptable” and the open symbols are the scans judged to be “Acceptable”. Symbols are displaced along the y-axis for clarity.

Healthy Eyes

As expected, most of the scans were graded as healthy (Table 2, left). Overall, 352 of 400 (88.0%) scans were graded as healthy, which represents 94.9% of the acceptable scans (Table 2, right). (Unacceptable scans were not evaluated for glaucoma.) Interestingly, the percentage of acceptable scans graded as healthy varied by site: for sites 1, 3, and 4 it was between 95.8% and 98.9%, while for site 2 it was 87.2% (Table 3, right). The reason for this difference is discussed below.

Optic Neuropathy Consistent with Glaucoma

As the primary objective here is to develop a reference database to screen for glaucoma, it is important to understand the prevalence of optic neuropathy consistent with glaucoma (ON-G). Across all 4 sites, 11 (2.8%) of the scans were graded as ON-G. Interestingly, 7 of these scans were from one site, site 2, while the other sites had only 1 or 2 ON-G eyes. Figure 4 contains the Hood Report for 4 of these eyes, 2 from site 2 (top row) and 2 from the other sites. The Hood Report for all 11 can be found in Appendix A3, available at [LWW insert link]. In the Discussion, we consider the possible reasons for the differences among sites, as well as ways to minimize the inclusion of ON-G eyes in a reference database.

Figure 4.

The Hood Reports for 4 of the 11 eyes judged to be optic neuropathy consistent with glaucoma (ON-G). A, B: from site 2. C, D: from the other sites.

Glaucoma Suspects

There were only 2 eyes categorized as Glaucoma Suspects (GS). For these eyes, we could not rule out ON-G. However, if ON-G was present, it was very subtle and would not affect the metrics of a reference database. The Hood Reports for these eyes can be found in Appendix A3, available at [LWW insert link].

Other Pathologies

There were 6 eyes in the Other Pathologies (OP) category. As with the ON-G eyes, a disproportionate number (4) were from site 2. Of the 6, 2 had undergone extensive epiretinal membrane peels, and the others had a paravascular inner retinal defect, a macular hole, a macular schisis, or an indication of possible age-related macular degeneration. The Hood Reports for these eyes can be found in Appendix A3, available at [LWW insert link].

Comparison to the Existing Reference Database

To test the hypothesis that the specificity of the existing commercial Maestro reference database would be worse than expected when tested with a real-world collection of healthy eyes, we determined the number of the 352 real-world healthy eyes meeting a commonly employed OCT criterion for abnormality (i.e., glaucoma).8–14 In particular, a healthy eye was said to be abnormal if either the superior or inferior quadrant of the cpRNFL pie chart (arrow in Fig. 1) was red. The expected value for healthy eyes should be between 1 and 2%, as an eye meets the criterion if either quadrant meets the criterion. As expected, when the individual scans from the existing commercial reference database were compared against that database itself, 2% met the red criterion. However, when the real-world healthy eyes were assessed based upon the commercial reference database, the false positive rate, 4.8% (95% confidence interval: 2.6% to 7.0%), was over twice the expected 1 to 2%.
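The expected rate and the reported confidence interval can be reproduced with simple arithmetic, as in the sketch below; the normal-approximation interval is our assumption about how the reported limits were obtained.

```python
import math

# Expected false positive rate for the red-quadrant criterion: each
# quadrant is red in ~1% of healthy eyes, and an eye is flagged if
# EITHER the superior or inferior quadrant is red.
p_quadrant = 0.01
expected = 1 - (1 - p_quadrant) ** 2
print(f"expected FP rate ~ {expected:.1%}")                  # ~2.0%

# Observed rate with a normal-approximation 95% CI, computed from the
# reported 4.8% of the 352 real-world healthy eyes.
p, n = 0.048, 352
se = math.sqrt(p * (1 - p) / n)
print(f"95% CI ({p - 1.96 * se:.1%}, {p + 1.96 * se:.1%})")  # (2.6%, 7.0%)
```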

Finally, when the Hood reports from the 398 eyes in the commercial reference database were graded as described in Methods, 5 were graded as ON-G and 1 as OP (paravascular inner retinal defect), for a total of 6 (1.5%).

DISCUSSION

Our objective was to explore the possibility of using OCT scans obtained from optometry sites to develop a reference database of thousands of healthy eyes. To this end, we examined the reports for a sample of 400 eyes from 4 optometry sites that obtain OCT scans as part of pre-test information gathering. Overall, the quality of the scans was good: of the 400 scans, 371 (92.8%) were graded of sufficient quality to be used for a reference database.

Of importance is the proportion of eyes with ON-G, as well as other pathologies, that may mislead the clinician. To develop a large real-world reference database, the presence of these eyes must be minimized. Here we tested the hypothesis that the scans from optometry sites would show a relatively low base rate of glaucoma and other pathologies. We found that, after removing the unacceptable scans, our sample of 371 eyes contained 3% ON-G and 1.6% other pathologies. Below we describe a simple modification of our reading center exclusion criteria that markedly reduces the percentage of these eyes. But first, let’s consider the implications of our findings for the creation of a real-world reference database.

Implications for a Real-world Reference Database

In general, to create a reference database of “healthy” eyes, two decisions must be made. The first involves the definition of “acceptable scan quality” and the second concerns a definition of a “healthy eye”. Let’s consider scan quality first.

Typically, the reference database protocol will include removing unacceptable scans at the level of the technician, the testing site, and finally an independent reading center.3 At each stage, the exclusion is based largely upon “subjective” decisions. For example, in the case of the commercial reference database used here, the reading center excluded scans based upon image quality score, presence of eye blinks, eye motion, clipping, local weak signal, and feature centration.3 The only objective measure is the image quality index. Our approach was similar to that taken by reading centers for OCT scans and fundus photos. We eliminated scans with clear errors. Importantly, however, by errors we mean artifacts that would affect cpRNFL thickness and/or GCL+ thickness measures, the measures used for the most common summary metrics. As seen in Figs. 2 and 3, the image quality score is a poor predictor of the unacceptable scans in our study.

Whatever criteria are used for scan exclusion, care is required not to eliminate scans of eyes with RNFL and GCL+ thicknesses at the lower end of healthy. Scans of eyes at the lower end of normal thickness can be confused with eyes with diffuse damage.15 If these eyes are eliminated, specificity will suffer when screening real-world healthy eyes. In any case, the specificity of the commercial reference database, when assessed using the real-world healthy eyes, was poorer than predicted. For example, using a common definition of abnormal (i.e., a red superior or inferior quadrant on the RNFL pie chart), we found a specificity of 95.2% as opposed to an expected value of ≥98%. This represents over twice as many false positives as expected, which would translate into a large number of false positives among the healthy eyes evaluated at optometry sites every year.

Forming existing reference databases also requires certifying that an eye is healthy. For the commercial reference database in this study, this involved a medical history, a fundus photo and/or exam, and often a visual field.3 We do not have this information available for our real-world data. To create a real-world reference database from the scans of optometry practices, there are at least three strategies that can be used to minimize the number of eyes with ON-G and/or other pathologies. First, a reading center could perform a complete review of all scans, as we did for the 400 scans in the present study. This would take considerable time and is not necessary, as the next two strategies yield a sufficiently low number of eyes with ON-G and/or other pathologies. Second, we can choose optometry sites that are primarily involved in refraction and medical screening services. This should ensure a low base rate of glaucomatous eyes or eyes with other pathologies. A post hoc analysis of the 4 sites indicated that site 2 is involved in disease management, including glaucoma, age-related macular degeneration, and myopia. Note that the other 3 sites combined had ON-G and other pathology rates of 1.4% and 0.7%, respectively, which are comparable to the rates in existing OCT reference databases. As indicated above, 1.5% of the existing commercial reference database showed evidence of ON-G or OP. In addition, we found that 2% of a different commercial reference database were ON-G.16 Third, we could expand our “Unacceptable” category. Presently, eyes with GCL+ thickness maps with clear artifacts are excluded (see Fig. 2 and Appendix A2, available at [LWW insert link]). This criterion could be expanded so that all eyes with incomplete circles of GCL+ thickness, i.e., eyes with glaucomatous or other retinal loss, are also excluded, not just those with artifacts. Figure 4B,D shows 2 examples where the GCL+ thickness map has an “incomplete circle” and thus the eyes would be excluded. This criterion would exclude 6 of the 11 ON-G eyes and 4 of the 6 other pathologies, resulting in a prevalence of 1.3% and 0.5% overall, and 0.7% and 0.4% in the sample without site 2. In any case, the resulting inclusion of ON-G eyes and other pathologies is no greater than in the existing reference database, or in the general population. We recommend a combination of strategies 2 and 3.
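The overall prevalence figures can be checked directly; the sketch below assumes the 371 acceptable eyes remain the denominator, which reproduces the reported 1.3% and 0.5%.

```python
# Effect of the expanded "incomplete circle" exclusion criterion:
# 6 of 11 ON-G eyes and 4 of 6 Other Pathology eyes would be excluded.
acceptable = 371                     # eyes remaining after quality triage
ong_remaining = 11 - 6               # 5 ON-G eyes left
op_remaining = 6 - 4                 # 2 OP eyes left

print(f"ON-G prevalence: {ong_remaining / acceptable:.1%}")  # ~1.3%
print(f"OP prevalence:   {op_remaining / acceptable:.1%}")   # ~0.5%
```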

Limitations

The eyes in the current commercial OCT reference databases are defined as healthy based upon a clinician’s fundus exam, a best-corrected visual acuity of 20/40 or better, and often a “normal” and reliable visual field as well. Thus, it could be argued that our lack of these requirements is a serious limitation. We have previously shown that the labels provided by our OCT-only approach are in excellent agreement with those supplied by a clinician who had the full range of information.17 However, we submit that our reference standard, which is based only upon the OCT, is more appropriate than the one typically used. First, there is considerable disagreement among clinicians concerning the diagnosis based upon a fundus photo and clinical exam;18–22 the 24–2 visual fields are known to miss glaucomatous damage;23,24 and 24–2 reliability indices can be misleading.25–30 Second, clinicians do not use the OCT by itself to diagnose glaucoma. They combine the OCT information with other information (e.g., family history, IOP, visual fields, etc.). Thus, an OCT expert should be the reference standard for the evaluation of the OCT information, and the reference standard should not be a clinician’s judgment using some subset of the information such as fundus photos, 24–2 visual fields, and best-corrected visual acuity. For an analogy, suppose we had an automated method designed to replace the radiologist’s reading of computerized axial tomography scans from patients screened for lung cancer, and assume that its output is a label of either “healthy” or “radiopaque mass consistent with lung cancer”. We suggest the reference standard for such a study should be expert radiologists making the same judgment, not clinicians using other information to validate the presence of lung cancer.

A second limitation of this study is the age distribution of the eyes included. Only one eye was from a patient older than 80 years, and 31 (7.8%) were between 70 and 79, as compared to 24% in each of the 30–39 and 40–49 decades (see Table 1). Because the incidence of glaucoma and other retinal pathologies increases with age, our estimates of these abnormalities would be higher if we had, for example, the same number of eyes per decade. However, the effects on our results are likely relatively small, as indicated by site 4, which had the best-balanced sample and a relatively small number of abnormal scans. In any case, given the extremely large number of scans available from optometric sites, a real-world reference database can be formed for a wide range of age distributions, including one with an equal number of eyes by decade from the 20s to the 80s, or even by year.
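Given the scale of available scans, constructing such an age-balanced database is straightforward; below is a sketch of decade-stratified sampling, where the record structure and the 'age' field are hypothetical.

```python
import random
from collections import defaultdict

def sample_by_decade(records, per_decade, seed=0):
    """Randomly draw an equal number of eyes from each decade of age.

    records: iterable of dicts with a hypothetical 'age' key.
    per_decade: eyes to draw per decade (20s through 80s); decades with
    fewer eligible eyes are returned in full.
    """
    rng = random.Random(seed)
    by_decade = defaultdict(list)
    for r in records:
        if 20 <= r["age"] < 90:
            by_decade[r["age"] // 10 * 10].append(r)
    sample = []
    for _, eyes in sorted(by_decade.items()):
        rng.shuffle(eyes)
        sample.extend(eyes[:per_decade])
    return sample

# Synthetic example: 3 eyes at every age from 20 to 89
records = [{"age": a} for a in range(20, 90) for _ in range(3)]
balanced = sample_by_decade(records, per_decade=5)
print(len(balanced))  # 5 eyes from each of 7 decades -> 35
```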

A third possible limitation is the time required to evaluate thousands of scans. In fact, if a reading center had to label every acceptable scan as H, ON-G, GS, or OP, as was done in the present study, this would take considerable time and expertise. However, the two strategies mentioned above should allow a reading center to screen thousands of OCT reports in a reasonable amount of time. Given the relatively low percentage of Unacceptable scans, if we only use our expanded “Unacceptable” category for screening, we estimate that a grader could identify 500 healthy scans in an hour. As such, 10,000 healthy scans could be labelled in about 20 hours. In addition, we believe that the two strategies described above are relatively easy to teach to others. Of course, a reading center could perform a complete review of all scans; although this would result in an even smaller number of ON-G and OP eyes, it would take considerably longer.

A fourth limitation concerns the proportion of abnormal scans in a real-world reference database based upon scans from optometry sites. In particular, individuals going to an optometrist are more likely to have pathology than the volunteers recruited for existing reference databases. As indicated in Table 2, after removing the poor scans, over 5% of the eyes in this study were labelled glaucoma suspects, ON-G, or other pathologies. However, in the section above we argued that, using two easy-to-implement strategies, it should be possible to reduce the percentage of abnormal scans to a value equal to or less than that of the existing commercial reference database.

A fifth, and arguably the most important, limitation is the possibility that the real world will decrease sensitivity. For example, if we increased specificity only by lowering the criterion thickness associated with the 1% limits of the RNFL and GCL+ thickness, more eyes with ON-G may be missed and sensitivity would be decreased. Even in this case, the reference database should still be of value as a screening method that minimizes false positives at the expense of missing borderline glaucomatous damage. Presumably, if the real-world healthy eyes in the current study were evaluated with a new real-world reference database, ≤2% of these eyes would be false positives based upon the red quadrant criteria, as opposed to the 4.8% observed using the commercial reference database. On the other hand, a very large real-world reference database would offer the possibility of separate reference databases based upon gender, ethnic group, refractive error, and/or anatomical parameters such as blood vessel location, or fovea-to-disc distance or angle. Finally, a large real-world reference database should be of value in training AI models, as well as in understanding the patterns of RNFL and GCL+ thickness and probability/deviation maps seen in healthy and glaucomatous eyes.

In any case, as a next step, a validation real-world reference database study is needed with a prospective real-world clinical dataset that includes both healthy and glaucomatous eyes.

CONCLUSIONS

To summarize, we propose that a very large real-world reference database can be created by using OCT scans from optometry sites that are primarily involved in refraction and medical screening services. Scans are excluded if they contain clear artifacts affecting the cpRNFL b-scan, and/or artifacts or pathology disrupting the donut-shaped region of thick GCL+ around fixation. These are the artifacts that will affect the common metrics and pie charts used to categorize healthy eyes. We hypothesize that this method should produce a large real-world reference database with optimal specificity for a screening population. A study with a large real-world sample is needed to test this hypothesis. Finally, this real-world reference database should be a valuable source of information for methods that use AI and/or pattern recognition.


APPENDICES

Appendix A1, available at [LWW insert link]. The procedures used as part of our reading center method are described for labelling the quality of the scan and determining the presence, or absence, of glaucomatous optic neuropathy (ON-G), as well as signs of other pathologies. Appendix A1-1, available at [LWW insert link]: the R Shiny web application developed for grading the OCT reports and images, which appeared in the right panel (green rectangle). A systematic approach with a series of questions (left panel, blue rectangle) was developed to assess the quality of the scan and to categorize each scan into the following 5 categories: 1) Healthy, 2) OCT-Suspect, 3) Optic Neuropathy consistent with Glaucoma (ON-G), 4) Other Pathologies, and 5) Unacceptable Scan. Appendix A1-2, available at [LWW insert link]: examples of Hood Widefield Reports from the 4 different quality classifications: A) Q1, ‘Good Scan’; B) Q2, ‘Scan allows for metrics and clinical judgement’; C) Q3, ‘Scan allows for clinical judgement only’; and D) Q4, ‘Poor Scan’. Appendix A1-3, available at [LWW insert link]: the quality of this scan was initially judged as Q2, ‘Scan allows for metrics and clinical judgement’, by one expert, and Q3, ‘Scan allows only for clinical judgement’, by the other. After adjudication, it was decided that the nasal region of the circumpapillary RNFL was sufficiently affected by scan artifacts to affect the pie charts, and the final adjudicated quality classification was therefore Q3.

Appendix A2, available at [LWW insert link]. The Hood reports are shown for all 29 unacceptable scans.

Appendix A3, available at [LWW insert link]. The Hood reports are shown for the 11 ON-G eyes, the 2 glaucoma suspects (GS), and the 6 Other Pathologies.

Contributor Information

Donald C. Hood, Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, NY; Department of Psychology, Columbia University, New York, NY.

Mary Durbin, Topcon Healthcare, Oakland, New Jersey.

Chris Lee, Topcon Healthcare, Oakland, New Jersey.

Gabriel Gomide, Columbia Vagelos College of Physicians and Surgeons, New York, NY.

Sol La Bruna, UPMC, Pittsburgh, PA.

Michael Chaglasian, Illinois College of Optometry, Chicago, Illinois.

Emmanouil Tsamis, Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, NY; Department of Psychology, Columbia University, New York, NY.

REFERENCES

1. Realini T, Zangwill L, Flanagan J, et al. Normative Databases for Imaging Instrumentation. J Glaucoma 2015;24:480–3.
2. Chong GT, Lee RK. Glaucoma versus Red Disease: Imaging and Glaucoma Diagnosis. Curr Opin Ophthalmol 2012;23:79–88.
3. Chaglasian M, Fingeret M, Davey PG, et al. The Development of a Reference Database with the Topcon 3D OCT-1 Maestro. Clin Ophthalmol 2018;12:849–57.
4. Hood DC, Raza AS. On Improving the Use of OCT Imaging for Detecting Glaucomatous Damage. Br J Ophthalmol 2014;98(Suppl. 2):ii1–9.
5. Hood DC. Improving our Understanding, and Detection, of Glaucomatous Damage: An Approach Based upon Optical Coherence Tomography (OCT). Prog Retin Eye Res 2017;57:46–75.
6. Liebmann JM, Hood DC, de Moraes CG, et al. Rationale and Development of an OCT-Based Method for Detection of Glaucomatous Optic Neuropathy. J Glaucoma 2022;31:375–81.
7. Hood DC, La Bruna S, Tsamis E, et al. Detecting Glaucoma with Only OCT: Implications for the Clinic, Research, Screening, and AI Development. Prog Retin Eye Res 2022;90:101052.
8. Bowd C, Zangwill LM, Berry CC, et al. Detecting Early Glaucoma by Assessment of Retinal Nerve Fiber Layer Thickness and Visual Function. Invest Ophthalmol Vis Sci 2001;42:1993–2003.
9. Nouri-Mahdavi K, Hoffman D, Tannenbaum DP, et al. Identifying Early Glaucoma with Optical Coherence Tomography. Am J Ophthalmol 2004;137:228–35.
10. Hougaard JL, Heijl A, Bengtsson B. Glaucoma Detection by Stratus OCT. J Glaucoma 2007;16:302–6.
11. Shin CJ, Sung KR, Um TW, et al. Comparison of Retinal Nerve Fibre Layer Thickness Measurements Calculated by the Optic Nerve Head Map (NHM4) and RNFL3.45 Modes of Spectral-domain Optical Coherence Tomography (RTVue-100). Br J Ophthalmol 2010;94:763–7.
12. Wollstein G, Kagemann L, Bilonick RA, et al. Retinal Nerve Fibre Layer and Visual Function Loss in Glaucoma: The Tipping Point. Br J Ophthalmol 2012;96:47–52.
13. Lee WJ, Oh S, Kim YK, et al. Comparison of Glaucoma-diagnostic Ability between Widefield Swept-source OCT Retinal Nerve Fiber Layer Maps and Spectral-domain OCT. Eye 2018;32:1483–92.
14. Iyer JV, Boland MV, Jefferys J, et al. Defining Glaucomatous Optic Neuropathy Using Objective Criteria from Structural and Functional Testing. Br J Ophthalmol 2021;105:789–93.
15. Zemborain ZZ, Tsamis E, La Bruna S, et al. Distinguishing Healthy from Glaucomatous Eyes with Optical Coherence Tomography Global Circumpapillary Retinal Nerve Fiber Thickness in the Bottom 5th Percentile. J Glaucoma 2022;31:529–39.
16. La Bruna S, Rai A, Mao G, et al. The OCT RNFL Probability Map and Artifacts Resembling Glaucomatous Damage. Transl Vis Sci Technol 2022;11:18.
17. Hood DC, Raza AS, De Moraes CG, et al. Evaluation of a One-Page Report to Aid in Detecting Glaucomatous Damage. Transl Vis Sci Technol 2014;3:8.
18. Lichter PR. Variability of Expert Observers in Evaluating the Optic Disc. Trans Am Ophthalmol Soc 1976;74:532–72.
19. Cooper R, Alder V, Constable I. Measurement vs. Judgement of Cup-disc Ratios: Statistical Evaluation of Intraobserver and Interobserver Error. Glaucoma 1982;4:169–76.
20. Tielsch JM, Katz J, Quigley HA, et al. Intraobserver and Interobserver Agreement in Measurement of Optic Disc Characteristics. Ophthalmology 1988;95:350–6.
21. Abrams LS, Scott IU, Spaeth GL, et al. Agreement among Optometrists, Ophthalmologists, and Residents in Evaluating the Optic Disc for Glaucoma. Ophthalmology 1994;101:1662–7.
22. Zangwill LM, Shakiba S, Caprioli J, et al. Agreement between Clinicians and a Confocal Scanning Laser Ophthalmoscope in Estimating Cup/Disk Ratios. Am J Ophthalmol 1995;119:415–21.
23. Grillo LM, Wang DL, Ramachandran R, et al. The 24–2 Visual Field Test Misses Central Macular Damage Confirmed by the 10–2 Visual Field Test and Optical Coherence Tomography. Transl Vis Sci Technol 2016;5:15.
24. De Moraes CG, Hood DC, Thenappan A, et al. 24–2 Visual Fields Miss Central Defects Shown on 10–2 Tests in Glaucoma Suspects, Ocular Hypertensives, and Early Glaucoma. Ophthalmology 2017;124:1449–56.
25. Bickler-Bluth M, Trick GL, Kolker AE, et al. Assessing the Utility of Reliability Indices for Automated Visual Fields. Testing Ocular Hypertensives. Ophthalmology 1989;96:616–9.
26. Katz J, Sommer A, Witt K. Reliability of Visual Field Results over Repeated Testing. Ophthalmology 1991;98:70–5.
27. Birt CM, Shin DH, Samudrala V, et al. Analysis of Reliability Indices from Humphrey Visual Field Tests in an Urban Glaucoma Population. Ophthalmology 1997;104:1126–30.
28. Gardiner SK, Swanson WH, Goren D, et al. Assessment of the Reliability of Standard Automated Perimetry in Regions of Glaucomatous Damage. Ophthalmology 2014;121:1359–69.
29. Rao HL, Yadav RK, Begum VU, et al. Role of Visual Field Reliability Indices in Ruling Out Glaucoma. JAMA Ophthalmol 2015;133:40–4.
30. Heijl A, Patella VM, Flanagan JG, et al. False Positive Responses in Standard Automated Perimetry. Am J Ophthalmol 2022;233:180–8.
