Neurology® Neuroimmunology & Neuroinflammation
2025 Aug 29;12(6):e200458. doi: 10.1212/NXI.0000000000200458

Influence of OSCAR-IB Criteria on Test-Retest Reliability of Cirrus HD-OCT Retinal Thickness Measurements in People With Multiple Sclerosis

Anna Bacchetti 1, Ting-Yi Lin 1, Brenna McCormack 1, Omar Ezzedin 1, Rozita Doosti 1, Gelareh Ahmadi 1, Nicole Pellegrini 1, Evan Johnson 1, Simidele Davis 1, Elle Lawrence 1, Gabriel Otero-Duran 1, Ernest Lievers 1, Madeline Inserra 1, Sooyeon Park 1, Devon Bonair 1, Anna Kim 1, Ananya Gulati 1, Kathryn C Fitzgerald 1, Elias S Sotirchos 1, Peter A Calabresi 1, Shiv Saidha 1,
PMCID: PMC12401565  PMID: 40882162

Abstract

Background and Objectives

Optical coherence tomography (OCT) allows evaluation of inter-eye differences (IEDs) in macular ganglion cell-inner plexiform layer (GCIPL) and peripapillary retinal nerve fiber layer (pRNFL) thicknesses to identify unilateral optic nerve involvement (UONI). UONI supports dissemination in space (DIS) as part of the 2024 revised McDonald diagnostic criteria for multiple sclerosis (MS). The OSCAR-IB quality control (QC) criteria identify suboptimal-quality OCT scans, which could potentially result in false-positive or false-negative UONI identification. We aimed to determine the influence of scans fulfilling the OSCAR-IB criteria (SFO) vs scans not fulfilling them (SNFO) on test-retest reliability of pRNFL and GCIPL thicknesses/IEDs, with a commonly used OCT platform (Cirrus HD-OCT).

Methods

A total of 509 participants, including 397 people with MS, underwent Cirrus HD-OCT, with acquisition of 2 macular and optic disc scans per eye. Each scan was classified as either SFO or SNFO. There were no clinical or demographic exclusions, in order to reflect a real-world clinical setting. Reproducibility was evaluated with intravisit intraclass correlation coefficients (ICCs) and coefficients of variation (COVs). IED consistency was assessed with difference-in-differences (DiDs) and probabilities of agreement (POA) for specific IED thresholds (GCIPL <4/≥4 μm; pRNFL <6/≥6 μm).

Results

A total of 1,143 macular scan pairs (1,100 SFO and 42 SNFO) for GCIPL and 1,108 optic disc scan pairs (1,003 SFO and 105 SNFO) for pRNFL were analyzed. SFO demonstrated superior reliability compared with SNFO for GCIPL (SFO: ICC = 0.998, COV = 0.40%; SNFO: ICC = 0.353, COV = 10.14%) and pRNFL (SFO: ICC = 0.989, COV = 1.18%; SNFO: ICC = 0.852, COV = 3.94%) thicknesses. DiDs were lower for SFO (GCIPL: 0.64 ± 0.67 μm, pRNFL: 2.00 ± 1.72 μm) compared with SNFO (GCIPL: 10.17 ± 13.87 μm, pRNFL: 4.78 ± 5.51 μm). POA of IED thresholds (GCIPL: <4/≥4 μm; pRNFL: <6/≥6 μm) was higher for SFO than for SNFO (GCIPL: 95.58% vs 47.83%; pRNFL: 86.89% vs 71.67%).

Discussion

GCIPL and pRNFL thicknesses/IEDs demonstrated markedly inferior reliability in SNFO, relative to SFO. Failure to fulfill OSCAR-IB criteria influenced pRNFL measurements and, in particular, GCIPL measurements, highlighting the importance of thorough QC in the interpretation of OCT to correctly identify UONI and accurately support DIS for the diagnosis of MS.

Introduction

Multiple sclerosis (MS) is an inflammatory, demyelinating, and neurodegenerative disorder of the central nervous system.1 Optic nerve involvement (whether symptomatic or asymptomatic) is a nearly universal feature of the disorder.2-4 The 2024 revised McDonald diagnostic criteria for MS will include optic nerve involvement as a fifth topographical region for dissemination in space (DIS).5 This inclusion is based on evidence that demyelinating optic nerve injury, detectable by various techniques, including retinal optical coherence tomography (OCT), is often present early in the MS disease course, and improves the diagnostic performance of the 2017 revised McDonald criteria for MS.5-7

An important method for detecting unilateral optic nerve involvement (UONI) and establishing DIS is OCT-derived inter-eye differences (IEDs) in ganglion cell-inner plexiform layer (GCIPL) and/or peripapillary retinal nerve fiber layer (pRNFL) thicknesses.8-11 Whether Cirrus HD-OCT or Spectralis, 2 of the most commonly used OCT platforms,12 is used, the optimal IEDs identified for distinguishing MS eyes with, versus without, evidence of UONI are ≥4 µm for GCIPL and/or ≥6 µm for pRNFL thicknesses. These IEDs either approximate or exceed the 95th percentile of IEDs in healthy controls (HCs) and are, therefore, unlikely to be observed by chance alone.10,13

Previous studies demonstrate excellent test-retest reliability and reproducibility of spectral-domain OCT measurements in HCs and those with ophthalmologic disorders such as glaucoma.14-18 However, OCT image segmentation or acquisition errors can affect the reliability of pRNFL measurements, regardless of signal strength.19 Factors affecting the accuracy of pRNFL measurements include segmentation errors,20 suboptimal centration,21 and epiretinal membranes (ERMs),22 among others. Along these lines, the OSCAR-IB criteria are fundamental for identifying poor-quality optic disc OCT scans.23,24 They were originally developed for application in MS research, and intended to be applied by trained personnel in OCT reading centers. The OSCAR-IB criteria involve assessing the OCT images acquired in the optic nerve scan for the following: (O) obvious problems, (S) poor signal strength, (C) centration of scan, (A) algorithm failure, (R) retinal pathology other than MS related, (I) illumination, and (B) beam placement.23 The OSCAR-IB criteria, validated in a multicenter study, demonstrate substantial inter-rater reliability,23 ensuring consistent quality control (QC) assessment.

For macular OCT scans, from which GCIPL thicknesses are derived, the OSCAR-IB criteria are also commonly applied for QC assessment in MS research, although their use in macular scans is less well studied. This process requires reviewing all the OCT images acquired in the macular scan for violations of the OSCAR-IB criteria. This is a fundamental component of the QC process because numerous factors impairing the quality of macular scans may not be evident on the report generated by the OCT device. Reviewing only the OCT device–generated report may fail to identify relevant QC issues, potentially leading to errors in clinical interpretation, as has been suggested in glaucoma.25,26 Similar risks plausibly exist in MS: without adequate QC of both the macular and optic disc OCT scans, false-positive or false-negative IEDs in GCIPL and/or pRNFL thicknesses may be identified, thereby either incorrectly supporting or failing to support the diagnosis of MS, respectively.

Numerous studies have examined the prevalence and impact of individual QC parameters, such as the presence of ERMs and retinal layer segmentation failures, on OCT measurements.19,20,22,27,28 However, there has been limited assessment of the test-retest variability of GCIPL or pRNFL measurements in scans that do or do not meet the OSCAR-IB criteria. The importance of this gap is underpinned by the recommendation in the 2024 revised McDonald criteria that OCT IEDs only be utilized for demonstrating DIS after the application of the OSCAR-IB QC criteria. In this study, we aimed to determine the influence of the OSCAR-IB criteria on OCT reliability in eyes of a large cohort of people with MS (PwMS), in a real-world clinical setting. This is critical for determining the importance of OCT QC and validating the role of the OSCAR-IB QC criteria in clinical practice for the interpretation of GCIPL and/or pRNFL IEDs to identify UONI and accurately support DIS in MS diagnosis.

Methods

Participants

PwMS and people with other neurological disorders (PwONDs) were prospectively recruited from the Johns Hopkins MS Center between 2017 and 2024. HCs were also recruited from among Johns Hopkins staff and patients' partners. The diagnosis of MS was confirmed by the treating neurologist according to the 2017 revised McDonald criteria.29 According to the Lublin classification,30 participants with MS were categorized as having relapsing-remitting MS (RRMS), secondary progressive MS (SPMS), or primary progressive MS (PPMS). Some participants attended multiple visits during the study period, with each visit considered as an independent test-retest assessment. Demographic and clinical data were collected, including disease duration, history of optic neuritis (ON) with date and side of occurrence, as well as 100% and 2.5% contrast visual acuity (VA).

No exclusion criteria were applied to the demographic or clinical characteristics of the participants, to assess a study population that is reflective of a real-world clinical setting. Data collection was standardized to ensure consistency.

Optical Coherence Tomography 

All participants underwent spectral-domain Cirrus HD-OCT (model 5000, software version 8.1; Carl Zeiss Meditec, Dublin, CA). Macular scans were acquired using the Macular Cube 512 × 128 protocol to assess GCIPL thicknesses generated by the Cirrus HD-OCT–incorporated automated segmentation. Optic disc scans were acquired using the Optic Disc Cube 200 × 200 protocol to evaluate pRNFL thicknesses, similarly using the automated segmentation algorithm incorporated within the Cirrus HD-OCT device.31 We used the Cirrus HD-OCT's built-in segmentation for real-world applicability, prioritizing efficiency and generalizability over the potentially greater reliability of custom-built in-house methods.32

Experienced technicians obtained 2 scans per eye under low-light conditions. After the initial acquisition of macular and optic nerve scans from the right and then the left eye, participants were instructed to sit back for a brief moment before all scans were repeated.

A test-retest pair was classified as scans fulfilling the OSCAR-IB criteria (SFO) if both scans met the QC requirements and had a signal strength of 7 or higher. Signal strength, an image quality parameter in Cirrus HD-OCT, ranges from 0 (lowest) to 10 (highest). For the retinal pathology (R) criterion, we did not exclude scans from people with concurrent conditions such as glaucoma, which also causes pRNFL thinning independently of MS.33 We aimed to also include incidental/concurrent pathologies that may interfere with retinal imaging quality and potentially affect reproducibility. If exactly 1 of the 2 scans did not fulfill the OSCAR-IB criteria, the entire pair was classified as scans not fulfilling the OSCAR-IB criteria (SNFO). If both test-retest scans were SNFO, the pair was excluded from analyses. This exclusion was applied because our central aim was to determine the differences in reproducibility between SFO and SNFO.

IEDs were classified as SFO only if all 4 scans were SFO. If one of the 4 scans was classified as SNFO, the IED was deemed SNFO, and if more than 1 of the 4 scans was classified as SNFO, all were excluded from the analyses for the same reasons given above. Similarly, when evaluating the probability of agreement (POA) with combined GCIPL and pRNFL thresholds (GCIPL: <4/≥4; pRNFL: <6/≥6 μm), all 8 scans had to be classified as SFO for the IEDs to be considered SFO. If one of the 8 scans was classified as SNFO, the POA was deemed SNFO, and if more than 1 scan was classified as SNFO, all scans were excluded from the analysis.
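As an illustration, the pair- and set-level classification rules above reduce to counting failing scans; a minimal sketch (function names and structure are ours, not from the study):

```python
def classify_pair(scan1_ok, scan2_ok):
    """Classify a test-retest pair from per-scan OSCAR-IB results.

    A scan counts as 'ok' only if it fulfills all OSCAR-IB criteria,
    including a signal strength of 7 or higher.
    """
    n_fail = (not scan1_ok) + (not scan2_ok)
    if n_fail == 0:
        return "SFO"       # both scans pass QC
    if n_fail == 1:
        return "SNFO"      # exactly one scan fails
    return "excluded"      # both scans fail: dropped from analyses


def classify_ied_set(scans_ok):
    """Classify an IED set (4 scans: 2 per eye) or a combined POA set (8 scans)."""
    n_fail = sum(not ok for ok in scans_ok)
    if n_fail == 0:
        return "SFO"
    if n_fail == 1:
        return "SNFO"
    return "excluded"
```

The same rule applies at every level: one failing scan demotes the unit to SNFO, and more than one removes it from the analysis.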

Reporting of OCT methods and results adhered to the Advised Protocol for OCT Study Terminology and Elements recommendations.34,35

Modified OSCAR-IB Criteria for Macular Scans

The OSCAR-IB criteria were originally developed for optic disc scan QC in clinical trials. In this study, we applied the following overlapping, albeit slightly modified, OSCAR-IB rules for macular scan evaluation, focusing on the GCIPL (Figures 1 and 2):

  • 1. (O) Obvious Problems: includes floaters, motion artifacts, and out-of-register artifacts.

  • 2. (S) Signal Strength: requires a signal strength of 7 or higher.

  • 3. (C) Centration: ensures that the macula is centered in the thickness and deviation maps of the GCIPL.

  • 4. (A) Algorithm Failure: assesses failures in the GCIPL algorithm.

  • 5. (R) Retinal Pathology: identifies retinal pathologies affecting GCIPL segmentation.

  • 6. (I) Illumination: evaluates fundus illumination quality.

  • 7. (B) Beam Placement: ensures that the beam is centered with homogeneous retinal reflectivity in the outer nuclear layer (ONL).
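As a data-structure sketch, the modified checklist can be represented as a mapping from criterion letters to short descriptions, with a per-scan summary of violations (names and return format are illustrative, not from the study):

```python
# Letters of the modified OSCAR-IB checklist applied to each macular scan.
OSCAR_IB = {
    "O": "obvious problems (floaters, motion, out-of-register artifacts)",
    "S": "signal strength below 7",
    "C": "macula not centered in thickness/deviation maps",
    "A": "GCIPL algorithm failure",
    "R": "retinal pathology affecting GCIPL segmentation",
    "I": "poor fundus illumination",
    "B": "beam not centered / inhomogeneous ONL reflectivity",
}


def qc_result(failed):
    """Summarize one scan's QC: which criteria failed and whether it passes.

    `failed` is an iterable of criterion letters violated by the scan.
    """
    failed = set(failed)
    unknown = failed - OSCAR_IB.keys()
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    return {"fulfills_oscar_ib": not failed, "n_failed": len(failed)}
```

Tracking the count of failed criteria per scan is what allows reliability to be stratified by the number of violations, as in Table 3.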

Figure 1. Illustrations of Modified OSCAR-IB Violations of OCT Macular Scans Evident on Routine OCT Reports/Readouts.


(A) Obvious problems (O) and illumination (I): presence of a floater (black arrow) affecting the illumination (I) of the fundus. (B) Obvious problems (O): blurry fundus with motion artifacts (black arrows), reducing clarity. (C) Macula centration (C): the inner circle of the macular grid is misaligned with the fovea in the thickness map (black arrow). (D) Macula centration (C) and obvious problems (O): the macula is not centered in the deviation map, with its inferior portion excluded from the square (downward black arrow). Additional issues include motion artifacts (O) at the top (upward black arrow) and incomplete fundus illumination (I). OCT = optical coherence tomography.

Figure 2. Illustrations of Modified OSCAR-IB Violations of OCT Macular Images.


(A) Obvious problems (O): presence of an out-of-register artifact, causing the scan to shift superiorly and resulting in a loss of information in the temporal region (white arrow). An out-of-register artifact occurs when retinal misalignment during image acquisition causes truncation of the superior, inferior, or lateral edges.36 (B) Algorithm failure (A): misidentification of the GCIPL, evident from the incorrect boundary delineation (white arrow). In addition, the image shows a lack of beam centering, as indicated by the uneven reflectivity across the scan. (C) Retinal pathology (R): presence of an epiretinal membrane affecting the segmentation of the GCIPL (white arrow). (D) Beam placement (B): the beam is not centered, resulting in suboptimal retinal reflectivity. The left arrow points to a dark gray region of the ONL, while the other points to a lighter gray region; a scan is rejected if ONL reflectivity varies excessively across the scan. Cirrus-acquired OCT scans with reduced ONL reflectivity often show decreased reflectivity across all layers, likely due to differences in image quality compared with Spectralis-acquired OCT scans. This also contributes to algorithm failure. GCIPL = ganglion cell-inner plexiform layer; OCT = optical coherence tomography; ONL = outer nuclear layer.

Statistical Analyses

Comparisons of demographic and clinical variables were performed using the χ2 test for categorical variables, analysis of variance (ANOVA) for normally distributed continuous variables, and the Kruskal-Wallis test for non-normally distributed continuous variables. The mean absolute differences (MADs) and the mean absolute percentage differences (MAPDs), along with their SDs, were calculated for all groups to summarize the variability of GCIPL and pRNFL thicknesses.
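As a worked illustration of these summary statistics (assuming, since the text does not spell out the denominator, that each percentage difference is taken relative to the pair mean):

```python
import numpy as np


def mad_mapd(test, retest):
    """Mean absolute difference (µm) and mean absolute percentage
    difference (%) across test-retest thickness pairs."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    abs_diff = np.abs(test - retest)
    pair_mean = (test + retest) / 2.0
    mad = abs_diff.mean()                        # in µm
    mapd = (abs_diff / pair_mean).mean() * 100   # in %
    return mad, mapd
```

For example, pairs (80, 82) and (100, 98) µm give a MAD of 2.0 µm and a MAPD of about 2.2%.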

The intravisit reproducibility of GCIPL and pRNFL thicknesses was assessed using intravisit intraclass correlation coefficients (ICCs) and coefficients of variation (COVs). The ICC was calculated using a two-way mixed-effects model with absolute agreement, with values greater than 0.9 indicating excellent reproducibility. The COV was defined as the SD divided by the average thickness of each set, expressed as a percentage. Bland-Altman analyses were performed to compare the differences in thickness measurements, illustrating the mean differences (agreement at the cohort level) and the limits of agreement (LOAs) with a 95% CI, reflecting variability and agreement at the individual level.36
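The COV and Bland-Altman quantities can be sketched as follows; this assumes per-pair COVs are averaged across the cohort, which the text leaves unspecified (the ICC, from a two-way mixed-effects model, is omitted here for brevity):

```python
import numpy as np


def cov_percent(test, retest):
    """COV: SD of each test-retest pair divided by the pair mean,
    expressed as a percentage and averaged across pairs."""
    pairs = np.stack([np.asarray(test, float), np.asarray(retest, float)])
    sd = pairs.std(axis=0, ddof=1)   # per-pair SD
    mean = pairs.mean(axis=0)        # per-pair mean thickness
    return float((sd / mean).mean() * 100.0)


def bland_altman(test, retest):
    """Mean difference (cohort-level agreement) and 95% limits of
    agreement (individual-level agreement)."""
    diff = np.asarray(retest, float) - np.asarray(test, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```

A single pair of 80 and 82 µm, for instance, yields a COV of about 1.75%.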

IED consistency was evaluated by analyzing the difference-in-differences (DiDs) between eyes and the POA. DiDs were calculated by taking the absolute difference between the IEDs measured in 2 separate tests. The POA assessed whether the IEDs consistently met or exceeded specific thresholds (GCIPL <4/≥4 μm and/or pRNFL <6/≥6 μm), across both test and retest measurements.
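Assuming IEDs are taken as absolute values, the DiD and POA computations amount to:

```python
import numpy as np


def did(ied_test, ied_retest):
    """Difference-in-differences: absolute difference between the IEDs
    measured at test and at retest."""
    return abs(ied_test - ied_retest)


def poa(ied_test, ied_retest, threshold):
    """Probability of agreement: fraction of sessions in which the test
    and retest IEDs fall on the same side of the threshold
    (e.g., 4 µm for GCIPL, 6 µm for pRNFL)."""
    t = np.abs(np.asarray(ied_test, float)) >= threshold
    r = np.abs(np.asarray(ied_retest, float)) >= threshold
    return float((t == r).mean())
```

For instance, test/retest IEDs of (3, 2), (7, 8), and (5, 3) µm at a 4-µm threshold agree in 2 of 3 sessions, giving a POA of about 67%.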

To evaluate the impact of scan quality on the use of IEDs for detecting UONI, we analyzed a subset of PwMS with known previous unilateral clinical ON or no known history of ON (non-ON). We compared the sensitivity and specificity of IEDs in sessions where both eyes met OSCAR-IB criteria (SFO) vs sessions where 1 or both eyes did not (SNFO).

All analyses were conducted separately for SFO and SNFO. Correlation analyses were performed to evaluate potential relationships between the MADs/MAPDs and the number of criteria not fulfilled in SNFO for GCIPL and pRNFL measurements, between MADs/MAPDs and mean signal strength, between the MAPDs and thickness values, and between MADs/MAPDs and 100% and 2.5% contrast VA. These correlations were adjusted for age and sex using a linear mixed-effects regression model.

Statistical analyses were performed using STATA version 18 (StataCorp, College Station, TX). Statistical significance was defined as p < 0.05.

Standard Protocol Approvals, Registrations, and Patient Consents

Johns Hopkins University Institutional Review Board approved the study, and written informed consent was obtained from all participants.

Data Availability

Data related to this study are available upon reasonable request and subject to appropriate inter-institutional data sharing agreements.

Results

Baseline Characteristics of Study Participants

A total of 509 participants were enrolled, with 507 people (991 eyes) included in the test-retest analyses (Figure 3).

Figure 3. Flowchart of Inclusion/Exclusion of OCT Scans and Test-Retest Sessions.


CONSORT flowchart illustrating participant inclusion and exclusion criteria. Scans were grouped as follows: test-retest pairs (2 scans for 1 eye in a single session), IED test-retest sets of 4 (2 scans for both eyes in a single session), and sets of 8 (2 macular and optic disc scans for both eyes in a single session). For macular scans, 802 eyes had a single test-retest session and 156 had multiple sessions (132 eyes with 2 sessions, 20 with 3, and 4 with 4). For optic disc scans, 784 eyes had a single session and 148 had multiple sessions (126 eyes with 2 sessions, 16 with 3, and 6 with 4). For IED test-retest sessions, 457 participants had a single session for macular scans and 72 had multiple sessions (60 participants with 2 sessions, 10 with 3, and 2 with 4). For optic disc scans, 366 participants had a single session and 66 had multiple sessions (56 participants with 2 sessions, 8 with 3, and 2 with 4). For IED combined test-retest sessions, 347 participants had a single session and 62 had multiple sessions (52 participants with 2 sessions, 8 with 3, and 2 with 4). GCIPL = ganglion cell-inner plexiform layer; IED = inter-eye difference; pRNFL = peripapillary retinal nerve fiber layer.

The cohort consisted of 395 PwMS, 65 HCs, and 47 PwONDs (Table 1). Among the PwMS, there were 330 with RRMS, 17 with PPMS, and 48 with SPMS. The PwOND group included people with aquaporin-4 seropositive neuromyelitis optica spectrum disorder (n = 14), myelin oligodendrocyte glycoprotein antibody–associated disease (n = 12), previous idiopathic ON (n = 10), neurosarcoidosis (n = 3), acute idiopathic ON (n = 2), stiff person syndrome (n = 2), chronic relapsing inflammatory optic neuropathy (n = 1), cobalamin C deficiency (n = 1), epilepsy (n = 1), and transverse myelitis (n = 1).

Table 1.

Baseline Demographics and Clinical Characteristics

MS RRMS PPMS SPMS ONDs HCs p Value (MS vs ONDs vs HCs) p Value (RRMS vs PPMS vs SPMS)
Participants, n (eyes) 395 (772) 330 (647) 17 (32) 48 (93) 47 (90) 65 (129)
Visits, mean (eye visits) 1.22 (1.22) 1.23 (1.23) 1.24 (1.25) 1.15 (1.15) 1 (1) 1.14 (1.14)
Age, y mean (SD) 46.33 (12.12) 44.33 (11.65) 57.04 (7.99) 56.24 (9.53) 44.53 (20.09) 37.24 (14.74) <0.001 a <0.001 a
Female, n (%) 312 (79) 271 (82.1) 9 (52.9) 32 (66.7) 31 (66) 38 (58.5) 0.001 b 0.001 b
Ethnicity, n (%) <0.001 b 0.09b
White American 310 (78.5) 257 (77.9) 17 (100) 36 (75) 25 (53.2) 42 (64.6)
Black American 72 (18.2) 60 (18.2) 0 (0) 12 (25) 13 (27.7) 12 (18.5)
Other 13 (3.3) 13 (3.9) 0 (0) 0 (0) 9 (19.1) 11 (16.9)
Optic neuritis history, eyes (%) 225 (29.1) 203 (31.4) 2 (6.3) 20 (21.5) 0.002 b
Disease duration y, mean (SD) 10.5 (7.5) 9.8 (7.1) 10.4 (6.8) 16 (8.4) <0.001 c
Monocular 100% VA, mean (SD) 57.4 (9.8) 58.5 (8.5) 50.5 (16.8) 52.9 (11.9) 48.5 (17.1) 59.5 (8) <0.001 c <0.001 c
Monocular 2.5% VA, mean (SD) 28.7 (12) 29.8 (11.4) 24.9 (12.7) 23 (13.6) 20.1 (15.3) 33.5 (10.1) <0.001 c <0.001 c

Abbreviations: HCs = healthy controls; MS = multiple sclerosis; ONDs = other neurologic disorders; PPMS = primary progressive multiple sclerosis; RRMS = relapsing-remitting multiple sclerosis; SPMS = secondary progressive multiple sclerosis; VA = visual acuity.

a

One-way ANOVA.

b

χ² test.

c

Kruskal-Wallis test. Statistically significant differences are given in bold.

The mean age was highest in PwMS, followed by PwONDs, and lowest in HCs (p < 0.001). Among the PwMS subtypes, the highest mean age was observed in the PPMS and SPMS groups, relative to the RRMS cohort (p < 0.001). Female participants predominated in all groups, with PwMS showing a higher proportion of female participants compared to HCs (p = 0.001). As expected, SPMS had the longest disease duration compared with RRMS and PPMS (p < 0.001).

A total of 1,143 macular scan pairs (SFO = 1,100; SNFO = 42) and 1,108 optic disc scan pairs (SFO = 1,003; SNFO = 105) were analyzed. The IED test-retest analysis included 543 macular scan sets (SFO = 520; SNFO = 23) and 510 optic disc scan sets (SFO = 450; SNFO = 60), with 483 sets (SFO = 425; SNFO = 58) analyzed using the combined thresholds.

Effect of Fulfilling vs Not Fulfilling OSCAR-IB Criteria on OCT Measures and Reliability

GCIPL

The mean signal strength of the macular scans from which GCIPL thicknesses are derived was 9.48 (SD 0.76) in the SFO, as compared to 7.95 (SD 1.58) in SNFO. For macular scans, the percentage of scan pairs classified as SNFO was similar across races: 0.94% in Black Americans, 1.05% in White Americans, and 1.41% in other races.

The MADs of the GCIPL thicknesses (Table 2) were minimal in SFO (0.41 μm; SD 0.53 μm), relative to SNFO (7.98 μm; SD 14.92 μm), in which they were substantially higher (p < 0.001). The ICCs for GCIPL thickness were high in SFO, supporting excellent reliability (0.998), whereas the ICCs in SNFO were extremely low, indicating poor reliability (0.353). The COVs for GCIPL thickness were excellent in SFO (0.40%), whereas they were poor in SNFO (10.14%). In subgroup analyses, reliability measures were consistent across PwMS, PwONDs, and HCs in SFO (eTable 1). In SNFO, OCT reliability seemed to be substantially better in HCs, as compared with PwMS and PwONDs (eTable 1), although this analysis was extremely underpowered. Among 10 eyes with recent acute ON (mean time since ON = 2.2 months, range = 0–5 months), 9 scan pairs were classified as SFO, while 1 pair was excluded because both scans failed the OSCAR-IB criteria. The overall test-retest variability among acute ON eyes was low, with a COV of 0.64%. Among eyes with a history of non-acute ON (mean = 127 months, range = 6–439 months), the GCIPL COV was also comparably low at 0.62% (n = 317).

Table 2.

OCT GCIPL/pRNFL Thicknesses and Reliability Measures for SFO and SNFO

GCIPL SFO GCIPL SNFO GCIPL p Value pRNFL SFO pRNFL SNFO pRNFL p Value
Scan pairs, n 1,100 42 1,003 105
Thickness, mean µm (SD) 74.33 (9.99) 69.10 (10.61) 0.002 a 85.68 (12.74) 81.50 (12.57) <0.001 a
MADs, µm (SD) 0.41 (0.53) 7.98 (14.92) <0.001 a 1.41 (1.21) 3.61 (5.93) <0.001 a
COV % 0.40 10.14 1.18 3.24
ICC (95% CI) 0.998 (0.997–0.998) 0.353 (0.071–0.587) 0.989 (0.988–0.991) 0.852 (0.790–0.897)

Abbreviations: COV = coefficient of variation; GCIPL = ganglion cell-inner plexiform layer; ICC = intraclass correlation coefficient; MADs = mean absolute differences; n = number; pRNFL = peripapillary retinal nerve fiber layer; SFO = scans fulfilling OSCAR-IB criteria; SNFO = scans not fulfilling OSCAR-IB criteria.

a

Mann-Whitney test. Statistically significant differences are given in bold.

As the number of failed OSCAR-IB criteria for SNFO increased (Table 3), MADs and COVs increased dramatically (COV = 4.02% when 1 OSCAR-IB criterion was not fulfilled vs 27.91% when 4 criteria were not fulfilled) while the ICCs decreased substantially (0.86 when 1 OSCAR-IB criterion was not fulfilled vs −0.16 when 4 criteria were not fulfilled).

Table 3.

Effect of the Number of OSCAR-IB Criteria Not Fulfilled on GCIPL/pRNFL Thickness Reliability

Criteria failed, n GCIPL scan pairs GCIPL MADs, µm (SD) GCIPL ICC (95% CI) GCIPL COV, % pRNFL scan pairs pRNFL MADs, µm (SD) pRNFL ICC (95% CI) pRNFL COV, %
1 10/42 3.4 (5.40) 0.862 (0.536–0.964) 4.02 23/105 2.86 (1.89) 0.956 (0.899–0.981) 2.41
2 17/42 4.88 (11.05) 0.656 (0.288–0.854) 6.66 48/105 2.31 (1.76) 0.972 (0.950–0.984) 2.03
3 10/42 10.30 (16.63) 0.284 (−0.462–0.766) 13.27 28/105 5.21 (9.35) 0.678 (0.412–0.838) 4.90
4 5/42 23 (26.77) −0.162 (−0.476–0.632) 27.91 4/105 9.50 (13.89) 0.323 (−0.477–0.929) 8.52
5 0/42 2/105 9 (11.31) −1.132 (N/A) 7.89
6 0/42
7 0/42

Abbreviations: COV = coefficient of variation; GCIPL = ganglion cell-inner plexiform layer; ICC = intraclass correlation coefficient; MADs = mean absolute differences; n = number; pRNFL = peripapillary retinal nerve fiber layer.

N/A indicates that CIs could not be calculated for certain negative ICC values.

Scans failing 1 OSCAR-IB criterion often failed others, limiting the analysis of individual effects on reliability. The criterion most strongly influencing GCIPL reliability (eTable 2) was retinal pathology affecting GCIPL segmentation (R) (n = 2/42, COV = 33.15%, ICC = −0.557). However, only 2 scans violated this criterion. The OSCAR-IB criteria most commonly violated and affecting GCIPL reliability were poor illumination (I) (n = 27/42, COV = 12%, ICC = 0.178), algorithm failure (A) (n = 20/42, COV = 17.06%, ICC = 0.195), and misalignment of beam (B) (n = 25/42, COV = 14.86%, ICC = 0.213). Figure 4A shows GCIPL differences in the same eye with and without algorithm failure (A). The least influential criterion based on GCIPL COVs was signal strength (S) (n = 11/42, COV = 1.62%, ICC = 0.944).

Figure 4. Effect of OSCAR-IB Violations on GCIPL and pRNFL Thicknesses and IEDs.


Examples of OSCAR-IB failures affecting GCIPL (A) and pRNFL (B) thicknesses from test and retest scans. (A) The test macular scan for OD shows an algorithm failure (A) (white arrow) and misalignment of the beam (B), leading to an inaccurate thickness measurement of 45 µm. (B) The retest optic disc scan for OS shows an algorithm failure (A) (white arrows), leading to an inaccurate thickness measurement of 61 µm. In both scenarios, the abnormal measurements result in falsely significant IEDs, which could lead to inaccurate interpretation and accordingly incorrect identification of UONI. GCL + IPL = ganglion cell-inner plexiform layer; OD = oculus dexter (right eye); OS = oculus sinister (left eye); RNFL = retinal nerve fiber layer.

pRNFL

The mean signal strength of optic disc scans was 8.75 (SD 0.95) in SFO and 6.90 (SD 1.33) in SNFO. For optic disc scans, the percentage of scan pairs classified as SNFO was similar across races: 4.46% in Black Americans, 3.83% in White Americans, and 2.86% in other races. The MADs (Table 2) were minimal in SFO (1.41 μm; SD 1.21 μm) and higher in SNFO (3.61 μm; SD 5.93 μm; SFO vs SNFO, p < 0.001). The ICCs for pRNFL thickness were high in SFO (0.989), while the ICCs in SNFO were lower, indicating reduced reliability (0.852). The COVs for pRNFL thickness were lower in SFO (1.18%), relative to SNFO (3.24%). Reliability measures were consistent across subgroups in SFO and SNFO (eTable 3). In the 10 acute ON eyes, 9 scan pairs were classified as SFO and 1 as SNFO. The pRNFL COV in this group was 1.23%. Among eyes with a history of non-acute ON, the pRNFL COV was 1.42% (n = 303).

When the number of failed OSCAR-IB criteria of optic disc scans exceeded 2, pRNFL MADs and COVs increased substantially and the ICCs dropped sharply, indicating a substantial reliability decline (Table 3).

The criterion with the greatest effect on pRNFL reliability (eTable 2) was the presence of obvious problems (O) (n = 30/105, COV = 5.45%, ICC = 0.503). Figure 4B shows pRNFL differences in the same scan with and without algorithm failure (A). The least influential OSCAR-IB criteria on pRNFL reliability were retinal pathology (R) (n = 8/105, COV = 2.55%, ICC = 0.954) and beam misalignment (B) (n = 46/105, COV = 2.55%, ICC = 0.962).

Bland-Altman Analyses

Bland-Altman analyses (eFigure 1) revealed minimal test-retest mean differences in SFO at 0.03 μm for GCIPL thickness and 0.05 μm for pRNFL thickness, indicating excellent agreement at the cohort level. In contrast, the mean differences for SNFO were −4.74 μm for GCIPL thickness and −0.58 μm for pRNFL thickness, consistent with poor agreement at the cohort level, particularly for GCIPL thickness.

In SFO, the LOA for GCIPL thickness ranged from −1.29 to 1.35 μm, and for pRNFL thickness, the LOA ranged from −3.99 to 4.14 μm, demonstrating excellent individual-level agreement in both cases. In contrast, in SNFO, the LOA for GCIPL thickness were considerably wider, ranging from −36.63 to 27.15 μm, whereas for pRNFL thickness, the LOA ranged from −14.16 to 13.00 μm, consistent with extremely poor individual-level agreement for both measures, especially for GCIPL thickness.

Inter-Eye Difference Reliability

Inter-eye absolute DiDs were lower in SFO, with values of 0.64 μm (SD 0.67 μm) for GCIPL thickness and 2.00 μm (SD 1.72 μm) for pRNFL thickness, as compared to 10.17 μm (SD 17.03 μm) and 4.78 μm (SD 7.51 μm), respectively, in SNFO (eTable 4).

POA of IEDs (eTable 4) was higher in SFO (GCIPL: 95.58% at <4/≥4 μm; pRNFL: 86.89% at <6/≥6 μm) than in SNFO (GCIPL: 47.83% at <4/≥4 μm; pRNFL: 71.67% at <6/≥6 μm) (p < 0.001 for all). Combined IED thresholds (GCIPL: <4/≥4 μm; pRNFL: <6/≥6 μm) showed higher POA in SFO (71.53%) than in SNFO (56.90%) (p = 0.019).

For IEDs to detect UONI, sensitivity for GCIPL IEDs ≥4 μm decreased from 65.8% in SFO (n = 312, ON = 120) to 41.7% in SNFO (n = 31, ON = 12), and specificity declined from 87.5% to 79.0% (eTable 5). For pRNFL IEDs ≥6 μm, sensitivity decreased from 61.3% in SFO (n = 261, ON = 93) to 51.7% in SNFO (n = 70, ON = 29), and specificity dropped from 89.9% to 73.2%.
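The sensitivity and specificity above follow from a standard 2 × 2 tabulation of IED threshold status against ON history; a minimal sketch with illustrative names:

```python
def sens_spec(ied_positive, has_on_history):
    """Sensitivity and specificity of an IED threshold for detecting
    previous unilateral ON. Both arguments are parallel sequences of
    booleans, one entry per session."""
    tp = sum(p and d for p, d in zip(ied_positive, has_on_history))
    fn = sum((not p) and d for p, d in zip(ied_positive, has_on_history))
    tn = sum((not p) and (not d) for p, d in zip(ied_positive, has_on_history))
    fp = sum(p and (not d) for p, d in zip(ied_positive, has_on_history))
    return tp / (tp + fn), tn / (tn + fp)
```

Run separately on SFO and SNFO sessions, this makes explicit how failing QC erodes both detection of true UONI and rejection of false positives.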

Influence of OSCAR-IB Violations, Absolute Retinal Layer Thickness Measurements, Age, and Visual Function on GCIPL and pRNFL Reliability

Regression analyses were conducted to evaluate the effects of OSCAR-IB violations and individual clinical factors on OCT reliability (eTable 6). Each additional OSCAR-IB criterion failure was associated with an increase of 5.90 μm in GCIPL MADs [standard error (SE) = 2.39, p = 0.01]. For pRNFL SNFO, no significant relationship was observed between MADs and the number of failed criteria.

GCIPL and pRNFL MAPDs (%) showed very slight increases with lower OCT signal strength when analyzing SFO and SNFO together (β = −0.91, SE = 0.21, p < 0.001 for GCIPL; β = −0.25, SE = 0.11, p = 0.02 for pRNFL), and a similar trend was observed for MADs (μm). However, no significant trends were observed when examining SFO and SNFO separately.

Similarly, age was associated with greater GCIPL test-retest variability (β = 0.09, p = 0.002) when analyzing SFO and SNFO together, but not pRNFL variability. When analyzing SFO and SNFO separately, the association with age was no longer significant for GCIPL variability, raising the possibility that older age may modestly contribute to lower OCT scan quality or increased variability, possibly because of retinal and/or other age-related factors.

No significant relationships were observed between retinal thicknesses and MAPDs. In addition, no significant associations were observed between MAPDs and either 100% or 2.5% contrast VA for GCIPL and pRNFL measurements when considering SFO and SNFO together, and no significant correlation was found in SNFO between VA and either pRNFL or GCIPL MAPDs. In SFO, higher GCIPL MAPDs (%) were associated with slightly lower 100% and 2.5% contrast VA (β = −0.01, p < 0.001), while no significant association was found for pRNFL MAPDs.

Discussion

This large study demonstrates that OCT measurements of GCIPL and pRNFL thicknesses are highly reliable only when scans meet the OSCAR-IB QC criteria, especially for GCIPL thicknesses. In SFO, GCIPL thickness reliability seems to be superior to that of pRNFL thickness, consistent with previous studies. Once a violation of OSCAR-IB criteria occurred, pRNFL thicknesses became unreliable and GCIPL thicknesses became extremely unreliable. Our findings indicate that without proper QC of optic nerve head and, in particular, macular OCT scans, there may be a high likelihood of false-positive identification of UONI, which could result in erroneous diagnoses of MS. It is therefore critical that, in clinical practice, OCT be used to demonstrate DIS for the diagnosis of MS only when there is sufficient expertise to properly QC and interpret the acquired OCT images.

SFO showed consistent and reliable results with minimal variability for GCIPL and pRNFL thicknesses. Studies assessing GCIPL reliability using Cirrus HD-OCT have reported COVs and ICCs in the region of 0.67%–0.82% and 0.987–0.996, respectively.14,37 Similarly, pRNFL thickness assessments have also shown comparable COVs (1.2%–1.9%) and ICCs (0.975–0.994).14,18,38 Although these studies did not specifically include PwMS, the high agreement with our findings underscores the reliability of OCT measurements in general when QC criteria are fulfilled. OCT test-retest variability for GCIPL and pRNFL thicknesses was low during both the acute (within 6 months of ON onset) and non-acute (≥6 months after ON onset) phases of ON. Notably, this cohort included only 10 acute ON eyes, highlighting the need for further studies to assess OCT reliability early during acute ON.

SNFO showed significantly poorer reliability than SFO. GCIPL thickness, in particular, as well as pRNFL thickness in SNFO, had high MADs, low ICCs, and high COVs. Our results suggest that lower scan quality substantially reduces the sensitivity and specificity of GCIPL and pRNFL IEDs for identifying UONI, which may potentially result in false-negative or false-positive identification of UONI. However, these findings should be interpreted with caution due to the small sample size of the subset included in the IED sensitivity and specificity analysis, and the inclusion of PwMS with potential ophthalmologic or neurologic comorbidities.

Our findings highlight the influence of OSCAR-IB QC violations on the reliability of GCIPL thicknesses. GCIPL reliability showed a steep step-wise decline as the number of violated criteria increased. In contrast, pRNFL reliability remained relatively high in the presence of minor OSCAR-IB QC violations, although it dropped sharply when 3 or more criteria were violated. These findings highlight that while minor compromises in quality may have limited impact (especially for pRNFL measurements), multiple QC violations notably impair scan reliability.

It is important to note that when one OSCAR-IB criterion was violated, other QC criteria were often also violated, and insufficient data therefore limited our ability to assess the effects and importance of individual QC criteria, relative to one another, on scan reliability. Further studies are needed to evaluate the effects of each OSCAR-IB criterion individually on test-retest reliability. The retinal pathology (R) criterion had a notable influence on GCIPL reliability (COV = 33.15%), although it was violated in only 2 scan pairs. This low number likely reflects that a retinal pathology rarely affects segmentation in only 1 scan of a pair but not the other, highlighting the consistency with which retinal pathologies alter segmentation across repeated scans. High COVs for GCIPL measures were also observed for algorithm failures (A) and beam misalignment (B) on macular OCT scans. Notably, artifacts from ERMs27 and GCIPL segmentation errors28 have been reported to be the most common artifacts encountered in macular OCT scans, with reported error rates ranging from 6%27 to 28.2%,25 and these artifacts are also associated with lower GCIPL reliability. The criterion with the least influence on GCIPL reliability was signal strength (S), likely because all SNFO had a signal strength ≥6 despite no set signal strength cutoff for SNFO, limiting the range over which this criterion could exert an effect. MAPDs for both GCIPL and pRNFL measurements decreased as mean signal strength increased. However, these trends were not observed when analyzing SFO and SNFO separately, suggesting that other factors may influence variability. Although not all OSCAR-IB criteria had the same impact on the reliability of OCT measures, each criterion did influence reliability, reinforcing the need for full OCT QC adherence in both research and clinical care settings.

While strict QC compliance is essential for research, some flexibility (such as for signal strength) has been suggested to be potentially acceptable in clinical assessments focused on cross-sectional IEDs. However, the findings of this study suggest that even minor violations of OSCAR-IB criteria can affect OCT reliability, justifying a low-tolerance approach for the use of OCT in routine clinical practice, both cross-sectionally and longitudinally. Overall, the findings of this study largely support the revised 2024 McDonald criteria recommendations that OCT should only be used in clinical practice to support the diagnosis of MS, provided that OSCAR-IB QC criteria have been applied to OCT scans.

For pRNFL reliability, the OSCAR-IB criteria with the greatest effect were obvious problems (O) (COV = 5.45%) and centration of scans (C) (COV = 5.98%). This finding aligns with the literature, which indicates that displacement of the optic disc center substantially affects overall pRNFL thickness measurements.21,39 Failure to meet the retinal pathology (R) and/or beam misalignment (B) criteria in optic disc (pRNFL) scans resulted in elevated COVs (2.55%), although lower than for the same violations in macular (GCIPL) scans. These results support existing literature indicating that precise beam alignment is crucial because off-axis placement can lead to erroneous pRNFL thickness measurements.40 The differing effects of the retinal pathology (R) criterion on GCIPL and pRNFL reliability may be explained by the prevalence of retinal pathologies, such as ERMs, microcystoid macular changes, and macular holes, which predominantly affect the macular region rather than the optic disc. In this cohort, retinal pathology had a minimal effect on the pRNFL segmentation algorithm, as compared to the GCIPL segmentation. Consistent with this, a study found that, in eyes with ERMs, the ICC for repeated pRNFL measurements was notably higher (ICC = 0.973) than for GCIPL measurements (ICC = 0.881).41 The ICC for GCIPL in that study was higher than in ours, partly because of their larger sample size and the inclusion of pathologies that may not affect the segmentation algorithm. By contrast, our study classified only pathologies that interfered with GCIPL segmentation as SNFO, while those that did not interfere were classified as SFO.

In addition, inter-eye GCIPL DiDs were higher in SNFO than in SFO, with a similar but smaller trend for pRNFL DiDs. POA evaluated whether the differences between eyes consistently met or exceeded the GCIPL and pRNFL IED thresholds (GCIPL: <4/≥4 µm; pRNFL: <6/≥6 µm) proposed in the 2024 revised McDonald criteria for identifying UONI to demonstrate DIS. POA was higher in SFO than in SNFO for pRNFL thickness and, in particular, for GCIPL measurements. Low-quality macular scans have a greater influence on GCIPL IEDs than low-quality optic disc scans have on pRNFL IEDs, although both resulted in notable changes in GCIPL and pRNFL IEDs, respectively. If QC violations are detected, it is advisable to acquire and assess an alternative scan. Because either GCIPL or pRNFL IEDs can identify UONI, a macular scan may be used if the optic disc scan fails QC, and vice versa.
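The DiD and POA statistics discussed here reduce to simple operations on paired test/retest IEDs. A hedged sketch, with hypothetical data and function names (not the study's analysis code):

```python
def did(ied_test, ied_retest):
    """Difference-in-differences: absolute change in the inter-eye
    difference (IED, μm) between the test and retest acquisitions."""
    return abs(ied_test - ied_retest)

def poa(ieds_test, ieds_retest, threshold):
    """Probability of agreement: fraction of test/retest IED pairs that
    fall on the same side of the diagnostic threshold."""
    same_side = sum((a >= threshold) == (b >= threshold)
                    for a, b in zip(ieds_test, ieds_retest))
    return same_side / len(ieds_test)

# Hypothetical GCIPL IEDs (μm) from repeated scans, at the ≥4 μm threshold
agreement = poa([5.0, 3.1, 4.2, 1.0], [4.6, 4.3, 3.9, 0.8], threshold=4)
```

Because POA depends only on which side of the threshold each repeat falls, IEDs near the cutoff are the most vulnerable to the added noise of SNFO.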

Our study has several limitations worthy of discussion. The cohort primarily included PwMS, most of whom were White Americans with fewer Black Americans, limiting generalizability to other racial groups and clinical settings. Our study focused on the influence of OSCAR-IB criteria on the test-retest reliability of GCIPL and pRNFL thicknesses using Cirrus HD-OCT. Together, Cirrus HD-OCT and Spectralis OCT represent most OCT platforms used in the assessment of PwMS,12 but assessment of a broader range of platforms could improve the generalizability of our findings regarding the impact of QC on measurement reliability. It is important to note that eyes were excluded solely on the basis of OCT QC criteria, without systematic recording of the reasons for poor acquisition; poor fixation from vision loss, lens/media opacities, and numerous other factors may have reduced quality in both the test and retest scan pairs, resulting in their exclusion from analyses. Documenting these challenges could provide further insights into the reasons underlying QC failures of scans, and these should be addressed in future studies. In addition, IED thresholds are well validated but inconsistently applied, especially for pRNFL thicknesses, where both ≥5 µm (Spectralis) and ≥6 µm (Cirrus) have been used.10,11 This being said, the 2024 McDonald criteria recommend a device-agnostic pRNFL IED of ≥6 µm for identifying UONI. Moreover, all scans were acquired at the Johns Hopkins MS Center, where experienced technicians routinely optimize scan quality during acquisition. This monocentric setting may not reflect practices at less experienced sites, and further studies are needed to assess the applicability of IED criteria in broader clinical settings. Future studies should also explore custom-built segmentation techniques to improve OCT scan quality, reduce the proportion of low-quality scans, and increase the number of scans meeting QC criteria for UONI assessment.
In this context, artificial intelligence–based QC strategies may help ensure reliable measurements even in less experienced centers.

In conclusion, this study confirms Cirrus HD-OCT as a valuable diagnostic tool for assessing IEDs and identifying UONI for DIS as part of the 2024 revised McDonald diagnostic criteria for MS, when no better explanation can be found. However, the reliability of GCIPL and pRNFL measurements is high only when scans meet the necessary QC standards. Both the number and the type of violated OSCAR-IB criteria influence the reliability of GCIPL and pRNFL measurements. When QC criteria are not met, GCIPL reliability seems to be more greatly affected than pRNFL reliability, although GCIPL measurements are more reliable than pRNFL measurements when QC criteria are met. Careful QC of the entire macular and optic disc scans (not just a printout of the meridian) is essential, emphasizing the need for OCT technicians to have expertise in verifying scan QC. If this is not feasible, clinicians or other responsible personnel should ensure that OCT scan QC is assessed and met before proceeding with the interpretation of OCT-derived GCIPL and pRNFL IEDs in the identification of UONI for demonstrating DIS in the diagnosis of MS. We recommend performing QC during and immediately after OCT acquisition, in order to detect and correct artifacts and/or segmentation errors in real time. This approach is more likely to ensure high-quality OCTs, improve efficiency and workflow, and reduce the need to recall patients for repeated imaging.

Glossary

ANOVA

analysis of variance

COV

coefficient of variation

DiDs

difference-in-differences

ERMs

epiretinal membranes

GCIPL

ganglion cell-inner plexiform layer

HCs

healthy controls

ICC

intraclass correlation coefficient

IED

inter-eye difference

LOA

limits of agreement

MAD

mean absolute difference

MAPD

mean absolute percentage difference

MS

multiple sclerosis

n

sample size

non-ON

no known history of optic neuritis

OCT

optical coherence tomography

ONDs

other neurologic disorders

ONL

outer nuclear layer

pRNFL

peripapillary retinal nerve fiber layer

POA

probability of agreement

PPMS

primary progressive multiple sclerosis

PwMS

people with multiple sclerosis

PwONDs

people with other neurological disorders

RRMS

relapsing-remitting multiple sclerosis

SFO

scans fulfilling OSCAR-IB criteria

SNFO

scans not fulfilling OSCAR-IB criteria

SPMS

secondary progressive multiple sclerosis

Author Contributions

A. Bacchetti: drafting/revision of the manuscript for content, including medical writing for content; major role in the acquisition of data; study concept or design; analysis or interpretation of data. T-Y. Lin: drafting/revision of the manuscript for content, including medical writing for content; study concept or design. B. McCormack: drafting/revision of the manuscript for content, including medical writing for content; study concept or design. O. Ezzedin: major role in the acquisition of data. R. Doosti: major role in the acquisition of data. G. Ahmadi: major role in the acquisition of data. N. Pellegrini: major role in the acquisition of data. E. Johnson: major role in the acquisition of data. S. Davis: major role in the acquisition of data. E. Lawrence: major role in the acquisition of data. G. Otero-Duran: major role in the acquisition of data. E. Lievers: Major role in the acquisition of data. M. Inserra: major role in the acquisition of data. S. Park: major role in the acquisition of data. D. Bonair: major role in the acquisition of data. A. Kim: major role in the acquisition of data. A. Gulati: major role in the acquisition of data. K.C. Fitzgerald: drafting/revision of the manuscript for content, including medical writing for content; study concept or design. E.S. Sotirchos: drafting/revision of the manuscript for content, including medical writing for content; study concept or design. P.A. Calabresi: drafting/revision of the manuscript for content, including medical writing for content; study concept or design. S. Saidha: drafting/revision of the manuscript for content, including medical writing for content; study concept or design.

Study Funding

This work was supported by the National Institutes of Health (grant R01NS082347) and the National Multiple Sclerosis Society (grants RG-1606-08768 and RG-1907-34405).

Disclosure

P.A. Calabresi is PI on a grant from Genentech to Johns Hopkins University (JHU) and has received consulting fees from Novartis, Lilly, and Biogen for serving as a scientific advisor. S. Saidha has received consulting fees from Medical Logix for the development of CME programs in neurology and has served on scientific advisory boards for Biogen, EMD Serono, Novartis, Genentech Corporation, Amgen, Horizon Therapeutics, Clene Pharmaceuticals, and ReWind Therapeutics. He has performed consulting for Novartis, Genentech Corporation, JuneBrain LLC, Innocare Pharma, Kiniksa Pharmaceuticals, Lapix Therapeutics, and Setpoint Medical. He is the PI of investigator-initiated studies funded by Genentech Corporation, Biogen, and Novartis. He previously received support from the Race to Erase MS Foundation. He has received equity compensation for consulting from JuneBrain LLC and Lapix Therapeutics. He was also the site investigator of trials sponsored by MedDay Pharmaceuticals and Clene Pharmaceuticals and is the site investigator of trials sponsored by Novartis, as well as Lapix Therapeutics. E.S. Sotirchos has consulted for Alexion, Amgen, TG Therapeutics, and Roche/Genentech; is the site principal investigator for studies funded by Alexion, Roche/Genentech, UCB, and Ad Scientiam; and is the principal investigator for an investigator-initiated study funded by Astoria Biologica. The other authors report no relevant disclosures. Go to Neurology.org/NN for full disclosures.

References

  • 1.Reich DS, Lucchinetti CF, Calabresi PA. Multiple sclerosis. N Engl J Med. 2018;378(2):169-180. doi: 10.1056/NEJMra1401483 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Graham SL, Klistorner A. Afferent visual pathways in multiple sclerosis: a review. Clin Exp Ophthalmol. 2017;45(1):62-72. doi: 10.1111/ceo.12751 [DOI] [PubMed] [Google Scholar]
  • 3.Ikuta F, Zimmerman HM. Distribution of plaques in seventy autopsy cases of multiple sclerosis in the United States. Neurology. 1976;26(6 PT 2):26-28. doi: 10.1212/wnl.26.6_part_2.26 [DOI] [PubMed] [Google Scholar]
  • 4.Toussaint D, Périer O, Verstappen A, Bervoets S. Clinicopathological study of the visual pathways, eyes, and cerebral hemispheres in 32 cases of disseminated sclerosis. J Clin Neuroophthalmol. 1983;3(3):211-220. [PubMed] [Google Scholar]
  • 5.Montalban X. Revisions of the McDonald criteria. ECTRIMS. 2024. 18-20. ectrims.eu/mcdonald-diagnostic-criteria
  • 6.Vidal-Jordana A, Rovira A, Arrambide G, et al. Optic nerve topography in multiple sclerosis diagnosis: the utility of visual evoked potentials. Neurology. 2021;96(4):e482-e490. doi: 10.1212/WNL.0000000000011339 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Bsteh G, Hegen H, Altmann P, et al. Diagnostic performance of adding the optic nerve region assessed by optical coherence tomography to the diagnostic criteria for multiple sclerosis. Neurology. 2023;101(8):e784-e793. doi: 10.1212/WNL.0000000000207507 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Coric D, Balk LJ, Uitdehaag BMJ, Petzold A. Diagnostic accuracy of optical coherence tomography inter-eye percentage difference for optic neuritis in multiple sclerosis. Eur J Neurol. 2017;24(12):1479-1484. doi: 10.1111/ene.13443 [DOI] [PubMed] [Google Scholar]
  • 9.Patil SA, Joseph B, Tagliani P, et al. Longitudinal stability of inter-eye differences in optical coherence tomography measures for identifying unilateral optic nerve lesions in multiple sclerosis. J Neurol Sci. 2023;449:120669. doi: 10.1016/j.jns.2023.120669 [DOI] [PubMed] [Google Scholar]
  • 10.Nolan-Kenney RC, Liu M, Akhand O, et al. Optimal intereye difference thresholds by optical coherence tomography in multiple sclerosis: an international study. Ann Neurol. 2019;85(5):618-629. doi: 10.1002/ana.25462 [DOI] [PubMed] [Google Scholar]
  • 11.Bsteh G, Hegen H, Altmann P, et al. Validation of inter-eye difference thresholds in optical coherence tomography for identification of optic neuritis in multiple sclerosis. Mult Scler Relat Disord. 2020;45:102403. doi: 10.1016/j.msard.2020.102403 [DOI] [PubMed] [Google Scholar]
  • 12.Balcer LJ, Balk LJ, Brandt AU, et al. The international multiple sclerosis visual system consortium: advancing visual system research in multiple sclerosis. J Neuroophthalmol. 2018;38(4):494-501. doi: 10.1097/WNO.0000000000000732 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Xu SC, Kardon RH, Leavitt JA, Flanagan EP, Pittock SJ, Chen JJ. Optical coherence tomography is highly sensitive in detecting prior optic neuritis. Neurology. 2019;92(6):e527-e535. doi: 10.1212/WNL.0000000000006873 [DOI] [PubMed] [Google Scholar]
  • 14.Wadhwani M, Bali SJ, Satyapal R, et al. Test-retest variability of retinal nerve fiber layer thickness and macular ganglion cell-inner plexiform layer thickness measurements using spectral-domain optical coherence tomography. J Glaucoma. 2015;24(5):e109-e115. doi: 10.1097/IJG.0000000000000203 [DOI] [PubMed] [Google Scholar]
  • 15.Parravano M, Oddone F, Boccassini B, et al. Reproducibility of macular thickness measurements using Cirrus SD-OCT in neovascular age-related macular degeneration. Invest Ophthalmol Vis Sci. 2010;51(9):4788-4791. doi: 10.1167/iovs.09-4976 [DOI] [PubMed] [Google Scholar]
  • 16.Wong E, Yoshioka N, Kalloniatis M, Zangerl B. Cirrus HD-OCT short-term repeatability of clinical retinal nerve fiber layer measurements. Optom Vis Sci. 2015;92(1):83-88. doi: 10.1097/OPX.0000000000000452 [DOI] [PubMed] [Google Scholar]
  • 17.Almidani L, Sabharwal J, Shahidzadeh A, et al. Biological and methodological variability in retinal nerve fiber layer OCT: the framingham heart study. Ophthalmol Sci. 2024;4(6):100549. doi: 10.1016/j.xops.2024.100549 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Mwanza JC, Chang RT, Budenz DL, et al. Reproducibility of peripapillary retinal nerve fiber layer thickness and optic nerve head parameters measured with cirrus HD-OCT in glaucomatous eyes. Invest Ophthalmol Vis Sci. 2010;51(11):5724-5730. doi: 10.1167/iovs.10-5222 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Yohannan J, Cheng M, Da J, et al. Evidence-based criteria for determining peripapillary OCT reliability. Ophthalmology. 2020;127(2):167-176. doi: 10.1016/j.ophtha.2019.08.027 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Li C, Yuan Y, Kong X, et al. Segmentation errors and off-center artifacts in SS-OCT: insight from a population-based imaging study. Curr Eye Res. 2023;48(10):949-955. doi: 10.1080/02713683.2023.2223869 [DOI] [PubMed] [Google Scholar]
  • 21.Shin JW, Shin YU, Uhm KB, et al. The effect of optic disc center displacement on retinal nerve fiber layer measurement determined by spectral domain optical coherence tomography. PLoS One. 2016;11(10):e0165538. doi: 10.1371/journal.pone.0165538 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Lee SY, Kwon HJ, Bae HW, et al. Frequency, type and cause of artifacts in swept-source and cirrus HD optical coherence tomography in cases of glaucoma and suspected glaucoma. Curr Eye Res. 2016;41(7):957-964. doi: 10.3109/02713683.2015.1075219 [DOI] [PubMed] [Google Scholar]
  • 23.Tewarie P, Balk L, Costello F, et al. The OSCAR-IB consensus criteria for retinal OCT quality assessment. PLoS One. 2012;7(4):e34823. doi: 10.1371/journal.pone.0034823 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Schippling S, Balk LJ, Costello F, et al. Quality control for retinal OCT in multiple sclerosis: validation of the OSCAR-IB criteria. Mult Scler. 2015;21(2):163-170. doi: 10.1177/1352458514538110 [DOI] [PubMed] [Google Scholar]
  • 25.Asrani S, Essaid L, Alder BD, Santiago-Turla C. Artifacts in spectral-domain optical coherence tomography measurements in glaucoma. JAMA Ophthalmol. 2014;132(4):396-402. doi: 10.1001/jamaophthalmol.2013.7974 [DOI] [PubMed] [Google Scholar]
  • 26.Li A, Thompson AC, Asrani S. Impact of artifacts from optical coherence tomography retinal nerve fiber layer and macula scans on detection of glaucoma progression. Am J Ophthalmol. 2021;221:235-245. doi: 10.1016/j.ajo.2020.08.018 [DOI] [PubMed] [Google Scholar]
  • 27.Awadalla MS, Fitzgerald J, Andrew NH, et al. Prevalence and type of artefact with spectral domain optical coherence tomography macular ganglion cell imaging in glaucoma surveillance. PLoS One. 2018;13(12):e0206684. doi: 10.1371/journal.pone.0206684 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Alshareef RA, Dumpala S, Rapole S, et al. Prevalence and distribution of segmentation errors in macular ganglion cell analysis of healthy eyes using cirrus HD-OCT. PLoS One. 2016;11(5):e0155319. doi: 10.1371/journal.pone.0155319 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Thompson AJ, Banwell BL, Barkhof F, et al. Diagnosis of multiple sclerosis: 2017 revisions of the McDonald criteria. Lancet Neurol. 2018;17(2):162-173. doi: 10.1016/S1474-4422(17)30470-2 [DOI] [PubMed] [Google Scholar]
  • 30.Lublin FD, Reingold SC, Cohen JA, et al. Defining the clinical course of multiple sclerosis: the 2013 revisions. Neurology. 2014;83(3):278-286. doi: 10.1212/WNL.0000000000000560 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Syc SB, Warner CV, Hiremath GS, et al. Reproducibility of high-resolution optical coherence tomography in multiple sclerosis. Mult Scler. 2010;16(7):829-839. doi: 10.1177/1352458510371640 [DOI] [PubMed] [Google Scholar]
  • 32.Lang A, Carass A, Al-Louzi O, et al. Combined registration and motion correction of longitudinal retinal OCT data. Proc SPIE Int Soc Opt Eng. 2016;9784:97840X. doi: 10.1117/12.2217157 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Sriram P, Graham SL, Wang C, Yiannikas C, Garrick R, Klistorner A. Transsynaptic retinal degeneration in optic neuropathies: optical coherence tomography study. Invest Ophthalmol Vis Sci. 2012;53(3):1271-1275. doi: 10.1167/iovs.11-8732 [DOI] [PubMed] [Google Scholar]
  • 34.Aytulun A, Cruz-Herranz A, Aktas O, et al. APOSTEL 2.0 recommendations for reporting quantitative optical coherence tomography studies. Neurology. 2021;97(2):68-79. doi: 10.1212/WNL.0000000000012125 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Cruz-Herranz A, Balk LJ, Oberwahrenbrock T, et al. The APOSTEL recommendations for reporting quantitative optical coherence tomography studies. Neurology. 2016;86(24):2303-2309. doi: 10.1212/WNL.0000000000002774 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307-310. [PubMed] [Google Scholar]
  • 37.Francoz M, Fenolland JR, Giraud JM, et al. Reproducibility of macular ganglion cell-inner plexiform layer thickness measurement with cirrus HD-OCT in normal, hypertensive and glaucomatous eyes. Br J Ophthalmol. 2014;98(3):322-328. doi: 10.1136/bjophthalmol-2012-302242 [DOI] [PubMed] [Google Scholar]
  • 38.Leung CK, Cheung CY, Weinreb RN, et al. Retinal nerve fiber layer imaging with spectral-domain optical coherence tomography: a variability and diagnostic performance study. Ophthalmology. 2009;116(7):1257-1263.e12632. doi: 10.1016/j.ophtha.2009.04.013 [DOI] [PubMed] [Google Scholar]
  • 39.Gabriele ML, Ishikawa H, Wollstein G, et al. Optical coherence tomography scan circle location and mean retinal nerve fiber layer measurement variability. Invest Ophthalmol Vis Sci. 2008;49(6):2315-2321. doi: 10.1167/iovs.07-0873 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Balk LJ, de Vries-Knoppert WA, Petzold A. A simple sign for recognizing off-axis OCT measurement beam placement in the context of multicentre studies. PLoS One. 2012;7(11):e48222. doi: 10.1371/journal.pone.0048222 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Lee HJ, Kim MS, Jo YJ, Kim JY. Thickness of the macula, retinal nerve fiber layer, and ganglion cell layer in the epiretinal membrane: the repeatability study of optical coherence tomography. Invest Ophthalmol Vis Sci. 2015;56(8):4554-4559. doi: 10.1167/iovs.15-16949 [DOI] [PubMed] [Google Scholar]

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Data Availability Statement

Data related to this study are available upon reasonable request, subject to appropriate inter-institutional data-sharing agreements.


Articles from Neurology® Neuroimmunology & Neuroinflammation are provided here courtesy of American Academy of Neurology
