Abstract
INTRODUCTION
Widespread use of retinal optical coherence tomography angiography (OCT‐A) imaging requires methodological and analytical consensus to ensure reproducible and accurate results in epidemiological studies on Alzheimer's disease and related dementias (ADRD).
METHODS
A consensus framework for assessment of fovea‐centered 3 × 3‐mm and 6 × 6‐mm OCT‐A image quality was developed, and reproducibility was reported. Agreement was assessed for overall image quality and image quality relevant for vessel density (VD) quantification and foveal avascular zone (FAZ) metrics. An analytic framework was also developed.
RESULTS
Intergrader agreements for overall 3 × 3‐mm image quality and quantitative assessment of VD and FAZ ranged between 71% and 87% (n = 52 images). Intragrader agreements ranged between 82% and 93% (n = 27 images). Intergrader agreements were similar for 6 × 6‐mm images. Three analytic scenarios were developed to account for bias that may result from image quality and ocular comorbidities.
DISCUSSION
These recommendations provide a framework for working with OCT‐A imaging in epidemiological studies on ADRD.
Highlights
Multicentric consensus on quality criteria for use of optical coherence tomography angiography (OCT‐A) images in population studies was achieved, considering commonly used protocols acquired with commercially available devices (Heidelberg, Zeiss, and Topcon).
Considerable inter‐ and intrarater agreement was achieved for assessment of OCT‐A image quality.
A framework for harmonized data analyses was designed. Standardized analyses may accelerate the development of scalable retinal imaging biomarkers for ADRD.
Keywords: accelerated cognitive ageing, Alzheimer's disease and related dementias, epidemiology, harmonization, imaging, microvascular dysfunction, neurodegenerative disease, optical coherence tomography angiography, population‐based study, retinal imaging
1. BACKGROUND
There is a clinical need for scalable biomarkers for Alzheimer's disease and related dementias (ADRD), given the dramatic increase in prevalence anticipated over the coming 25 years. 1 , 2 Scalable biomarkers are needed to mitigate the burden of ADRD on public health systems. Biomarkers can be used to optimize referral to memory clinics and to identify individuals at risk for ADRD at an early stage, when interventions with disease‐modifying treatments may be optimally timed. 2 , 3
Retinal imaging may provide scalable biomarkers for ADRD, which may be complementary to blood biomarkers. 4 The retina is part of the central nervous system and allows for in vivo imaging of microvascular structures in a non‐invasive, rapid, and accurate manner. 5 Imaging of the microvasculature may add predictive value to blood biomarkers because microvascular changes reflect biological pathways that are alternative or complementary to those captured by blood biomarkers of amyloid beta and phosphorylated tau (e.g., p‐tau217) or neurodegeneration (e.g., neurofilament light chain or glial fibrillary acidic protein). 2 , 6
A promising and novel technique for imaging the retinal microvasculature is retinal optical coherence tomography angiography (OCT‐A). 7 , 8 OCT‐A–based measures of the superficial and deep retinal capillary networks can be quantified in vivo, non‐invasively, and with high accuracy (up to a semi‐histological resolution). 7 Common measures extracted from OCT‐A images are capillary density measures (e.g., vessel density [VD] and vessel skeleton density) and foveal avascular zone (FAZ) area or perimeter based measures. 7 Lower capillary density and larger FAZ area reflect worse (systemic) capillary health. 7 , 8
The increasing availability of OCT‐A imaging data in population‐based studies will enable large‐scale analyses of OCT‐A imaging as a biomarker for ADRD in the near future. Indeed, the acquisition of OCT‐A images is ongoing in multiple large cohort studies, including the Maastricht Study (the Netherlands), Rotterdam Study (the Netherlands), Rhineland Study (Germany), Framingham Heart Study (United States), 9 and South Indian Genetics of Diabetic Retinopathy (SIGNATR) Study (India). Such cohorts allow the large‐scale study of chronic diseases with a microvascular origin, including ocular diseases (such as glaucoma, 10 diabetic retinopathy, 11 and age‐related macular degeneration 12 ) and non‐ocular diseases (such as neurodegenerative diseases, 5 Alzheimer's disease, 13 stroke, 14 and heart failure 15 ). The biological grounding for studying non‐ocular diseases is that retinal capillary health (in part) reflects systemic capillary health. 15
A standardized approach for working with OCT‐A data in population studies is needed to promote the use of optimal and similar methods across cohorts internationally. 11 Such an approach should ideally consider bias that may arise due to variations in OCT‐A image quality across studies, for example, information bias and selection bias. 11 , 16 For example, if less healthy individuals more often have poor image quality, and are therefore more likely to be excluded from analyses, an underestimation of an association may occur. 16
No methodological framework for working with OCT‐A images in population studies has yet been widely adopted. One study developed criteria for grading OCT‐A image quality in neurological studies (OSCAR‐MP criteria), 17 but further development of criteria is required for use in population studies. Limitations of this previous study are that (1) no criteria specific for commonly used OCT‐A measures such as VD and FAZ area were developed, (2) criteria were only developed on one type of scanning protocol from one OCT‐A imaging device, and (3) performance was only evaluated among patients with neurological diseases (and not in the general population). 17
In view of the above, researchers from the European Eye Epidemiology consortium (the Maastricht Study, Rotterdam Study, and Rhineland Study) initiated a shared effort to develop recommendations for working with OCT‐A images in population‐based studies. Researchers from ongoing collaborations were also invited, including researchers from the Framingham Heart Study, SIGNATR, and the Adolphe de Rothschild Foundation Hospital. The overarching aim was to develop a reliable and reproducible framework for assessment of OCT‐A image quality and for analyses with OCT‐A–derived variables in population‐based studies.
RESEARCH IN CONTEXT
Systematic review: PubMed was searched until December 1, 2024, for studies on optical coherence tomography angiography (OCT‐A) and image quality. One study proposed quality criteria, but did not consider several commonly used, commercially available imaging devices and protocols. Also, no methodological framework for the harmonization of analyses was provided.
Interpretation: This study provides expert‐based recommendations for quality assessment of OCT‐A images acquired with Heidelberg, Zeiss, and Topcon devices, for both 3 × 3‐mm and 6 × 6‐mm images. Criteria were developed using multicentric data from the Netherlands, United States, Germany, France, and India.
Future directions: Recommendations provided in this study can contribute to the harmonization of studies on Alzheimer's disease and related dementias. Future studies are needed to compare the quantitative impact of differences in software programs commonly used to calculate OCT‐A features.
2. METHODS
2.1. Part 1: design of quality criteria for grading fovea‐centered OCT‐A images
We aimed to develop a minimum set of image quality criteria that could be assessed simply and quickly, with high reproducibility (i.e., high inter‐ and intragrader agreement), across large datasets from population‐based studies. We considered both the quality of the overall image and, more specifically, the usability of an image for determining capillary density and FAZ measures, as explained below.
2.1.1. Study population and design
Prospectively collected data from the following four observational cohort studies were used: Maastricht Study (the Netherlands), 18 Rotterdam Study (the Netherlands), 19 Framingham Heart Study (United States), 9 , 20 and SIGNATR (India). 21 General characteristics of the study populations among which OCT‐A imaging was conducted are shown in Table 1. Mean age ± standard deviation and % men are: 68 ± 8 years and 51% men in Maastricht Study (n = 2567 [collected by May 1, 2024]), 75 ± 7 years and 43% men in Rotterdam Study (n = 2136 [collected by July 17, 2024]), 75 ± 7 years and 41% men in Framingham Heart Study (n = 962 [collected by August 26, 2024]), and 57 ± 10 years and 54% men in SIGNATR (n = 348 [collected before July 1, 2024]). In addition, data collected from five healthy individuals (1 man, 4 women, age range 30–62 years) at the Rhineland Study, and four healthy women (age range 28–39 years) from Adolphe de Rothschild Foundation Hospital were used. For all cohort studies, medical ethical approval was obtained. The use of images for the purpose of this paper was in line with the consent provided by participants. All participants provided informed consent. More details on cohort studies are presented in the supporting information.
TABLE 1.
Characteristics of the studies and research centers from which OCT‐A images were used.
| | Maastricht Study | Rotterdam Study | Framingham Heart Study | Rhineland Study | SIGNATR | Adolphe de Rothschild Foundation Hospital |
|---|---|---|---|---|---|---|
| Study characteristics | ||||||
| Country | The Netherlands | The Netherlands | United States | Germany | India | France |
| Design | Population study (oversampling of type 2 diabetes) | Population study | Population study | Population study | Type 2 diabetes study | Clinical patients from the ophthalmology hospital department |
| Total sample with OCT‐A images a | 2567 | 2136 | 962 | 3472 | 348 | NA |
| Mean age (SD) | 68 (8) | 75 (7) | 75 (7) | 59 (13) | 57 (10) | NA |
| Men | 51% | 43% | 41% | 43% | 54% | NA |
| OCT‐A image characteristics | ||||||
| Device brand | Heidelberg Engineering | Topcon | Zeiss | Heidelberg | Zeiss | Zeiss |
| Model | Spectralis | Triton | Angioplex | Spectralis | Angioplex | Angioplex |
| Protocol | 3 × 3‐mm | 3 × 3‐mm | 3 × 3‐mm | 3 × 3‐mm | 6 × 6‐mm | 3 × 3‐mm; 6 × 6‐mm |
| Software | Heidelberg Eye Explorer 1.11.2.0 | IMAGEnet6 v1.04E | Cirrus Review 11.5.2.54532 | Heidelberg Eye Explorer 1.11.2.0 | Cirrus Review 11.5.2.54532 | Cirrus Review 11.5.2.54532 |
| Number of B‐scans for 3 × 3‐mm protocol | 512 | 320 | 245 | 512 | NA | 245 |
| Number of B‐scans for 6 × 6‐mm protocol | NA | NA | NA | NA | 350 | 350 |
| Images used for image quality criteria development and validation | ||||||
| Number of images used | ||||||
| Preliminary discussion | 3 | 3 | 3 | 3 | 3 | 3 |
| Criteria development | 10 | 10 | 10 b | 10 | 10 | 10 b |
| Intergrader agreement c | 12 | 10 | 10 b | 10 | 10 | 10 b |
| Intragrader agreement c | 9 | 4 | 3 | 6 | NA | NA |
Abbreviations: NA, not applicable; OCT‐A, optical coherence tomography angiography; SD, standard deviation.
aNumbers available up to August 2024.
bIndicates the same images were used.
cFor inter‐ and intragrader agreement, the same images were used.
2.1.2. OCT‐A image acquisition
Four OCT‐A devices were used to acquire fovea‐centered images: the Spectralis (Heidelberg Engineering) was used in the Maastricht Study and Rhineland Study; the Triton (Topcon) was used in the Rotterdam Study; the Cirrus 5000 (Angioplex, Carl Zeiss Meditec, Inc.) was used in SIGNATR and at the Adolphe de Rothschild Foundation Hospital; and the Cirrus 6000 (AngioPlex; Carl Zeiss Meditec, Inc.) was used in the Framingham Heart Study. All these are commercially available devices and therefore allow development of criteria that can be applied broadly. Fovea‐centered OCT‐A scans with dimensions of 3 × 3‐mm were available in all studies except for SIGNATR. 6 × 6‐mm OCT‐A images were available only in the SIGNATR cohort and at the Adolphe de Rothschild Foundation Hospital. We used superficial retinal layer or full‐thickness slab OCT‐A images (showing both superficial and deep capillary networks) for all assessments. Exemplary OCT‐A images from all centers are shown in Figures S1 and S2 in supporting information.
The following number of B‐scans were acquired (software version) for 3 × 3‐mm OCT‐A images per manufacturer protocol from each device: 512 (Heidelberg Eye Explorer 1.11.2.0) in the Maastricht Study and Rhineland Study, 320 (IMAGEnet6 v1.04E) in the Rotterdam Study, 245 (Cirrus Review 11.5.2.54532) in the Framingham Heart Study, and 245 (Cirrus Review 11.5.2.54532) at Adolphe de Rothschild Foundation Hospital. The number of B‐scans acquired (software version) for 6 × 6‐mm OCT‐A images was 350 (Cirrus Review 11.5.2.54532) in both SIGNATR and at Adolphe de Rothschild Foundation Hospital.
2.1.3. Multidisciplinary team
Our recommendations were developed by a multicenter, multidisciplinary team consisting of ophthalmologists, optometrists, and researchers with backgrounds in clinical epidemiology, optical physics, and medical image data science. This approach enables consideration of key technical, biological, and epidemiological factors relevant to the analyses. Ophthalmologists include: A.H.K. and L.S. (Framingham Heart Study), W.D.R. (Rotterdam Study), H.H. (Rhineland Study), and S.B. (Adolphe de Rothschild Foundation Hospital); clinical epidemiological researchers are: F.C.T.v.d.H. (Maastricht Study and Adolphe de Rothschild Foundation Hospital), V.A.V. (Rotterdam Study), L.S. (Framingham Heart Study), and A.C. (Framingham Heart Study); the researcher in the field of optical physics is T.B. (Maastricht Study); and the medical image data scientists are D.A.J. and L.S.B. (Rotterdam Study). Contributions of all other authors are listed in the Author Contributions section.
2.1.4. Development of image quality criteria
Criteria were initially developed using 3 × 3‐mm OCT‐A scans because this protocol was the most commonly used across cohorts and provides the highest resolution on all devices. Overall image quality was assessed using a three‐level grading system recently described in the Framingham Heart Study (“excellent,” “usable,” “unusable”). 9 In addition, usability of the image for quantification of commonly used VD and FAZ measures 7 was graded separately using a two‐level system (“usable,” “unusable”). For simplicity, we refer to all commonly used measures of capillary density (vessel density and vessel skeleton density) as VD throughout this article. Our rationale for quality assessment of the overall image is to exclude images that are very unlikely to be useful for any kind of quantitative analysis (“unusable”). We also aim to label images that are of the best possible quality (“excellent”). We realize that investigators may currently have measures (or may develop unique measures in the future) that can be derived from portions of the OCT‐A images. For example, FAZ‐based measures are currently used by some investigators as one metric of capillary health in OCT‐A images. It is possible that FAZ measures can be useful in images that are of generally poor quality if the FAZ is spared from artefacts. Therefore, our rationale for a separate assessment of FAZ measures is to be as inclusive as possible of images that contain useful FAZ data but may otherwise not be useful for global assessment of capillary density measures. In the future, we anticipate that investigators may create similar quality criteria for novel OCT‐A measures that can be incorporated into this analysis paradigm as well.
Development of quality criteria took place via a three‐stage process during bimonthly teleconferences involving researchers from all centers over a period of 6 months. In the first stage, a preliminary categorization of images was made, classifying into “excellent,” “usable,” and “unusable,” similar to criteria that were recently implemented and validated in the Framingham Heart Study. 9 For this purpose, individual researchers from each center independently provided three deidentified images from their data that were most representative of these quality scores. In the second stage, the entire group reviewed all such exemplary images for each category of image quality to reach consensus. A definition for each category of image quality was adopted, similar to that used in the Framingham Heart Study by Collazo Martinez et al. 9
Fifty images from all centers were used for the second stage, with each center providing 10 randomly selected images. In the third stage, we evaluated the reliability of the definitions from the second stage by quantifying inter‐ and intragrader agreement among five graders using 52 and 27 images, respectively. The same images from the Adolphe de Rothschild Foundation Hospital and the Framingham Heart Study were used for both the second and third stages (20 images), and different (randomly selected) images from the Maastricht Study, Rhineland Study, and Rotterdam Study were used in the second and third stages (30 for the second stage and 32 for the third stage). OCT‐A images were presented in a random order (i.e., not per center or imaging device). For assessment of intragrader agreement, 12 of the 27 images that were presented a second time were randomly rotated by 90 or 180 degrees to reduce the chance that graders would recognize the image. Agreement was quantified as percentage agreement. 16 For the three‐level classification (“excellent,” “usable,” “unusable”), we made the following comparisons: (1) “unusable” versus “usable” or “excellent”; (2) “usable” versus “unusable” or “excellent”; and (3) “excellent” versus “unusable” or “usable.”
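To make these dichotomized comparisons concrete, the following minimal Python sketch computes mean pairwise percentage agreement for each category versus the other two. The data layout, grader names, and pairwise averaging scheme are illustrative assumptions only and do not reproduce the consortium's actual grading workflow.

```python
# Minimal sketch (hypothetical data layout): pairwise percentage agreement for the
# three-level grading, dichotomized as described above. Assumes one row per image
# and one column of gradings per grader ("unusable", "usable", "excellent").
from itertools import combinations
import pandas as pd

def pairwise_percentage_agreement(df: pd.DataFrame, target: str) -> float:
    """Mean percentage agreement across all grader pairs for 'target' vs. the rest."""
    binary = df.eq(target)  # dichotomize: target category vs. the other two
    pair_scores = [
        (binary[g1] == binary[g2]).mean() * 100
        for g1, g2 in combinations(binary.columns, 2)
    ]
    return sum(pair_scores) / len(pair_scores)

# Hypothetical example with three graders and five images
grades = pd.DataFrame({
    "grader_1": ["excellent", "usable", "unusable", "usable", "excellent"],
    "grader_2": ["excellent", "usable", "usable", "usable", "excellent"],
    "grader_3": ["usable", "usable", "unusable", "usable", "excellent"],
})
for category in ["unusable", "usable", "excellent"]:
    print(category, round(pairwise_percentage_agreement(grades, category), 1))
```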
2.1.5. Validation of image quality criteria on a 6 × 6‐mm OCT‐A image
We recognize that different scan protocols may be used across different studies. Therefore, we incorporated one larger scan pattern that was available among study sites to assess the generalizability of our image quality criteria. Intergrader agreement among the same five graders was assessed using 21 randomly selected 6 × 6‐mm images from SIGNATR and the Adolphe de Rothschild Foundation Hospital. Intergrader agreement was quantified as percentage agreement. 16
FIGURE 1.

Exemplary images per category of optical coherence tomography angiography image quality (3 × 3‐mm images; full thickness slab, that is, displaying both deep and superficial networks). Please see Panel 1 for an explanation of quality criteria.
2.1.6. Secondary analyses
For comprehensiveness and comparison to the literature, 17 we also calculated agreement using Cohen kappa as a secondary outcome. 22 , 23 We did not use Cohen kappa as the main measure of agreement because percentage agreement depends on grader agreement only, whereas Cohen kappa depends on both grader agreement and the distribution of samples across categories. As the use of small sample sizes increases the risk of unequal distributions across subgroups, Cohen kappa may provide a less robust measure of intergrader agreement than percentage agreement. 22 , 23
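As an illustration of the difference between the two measures, the sketch below evaluates a single pair of graders after dichotomizing gradings into “unusable” versus the rest. The gradings are hypothetical and scikit-learn's cohen_kappa_score is used only for illustration; the software actually used in the study is not specified here.

```python
# Minimal sketch: percentage agreement versus Cohen kappa for one grader pair
# on a dichotomized ("unusable" vs. rest) grading; data are hypothetical.
from sklearn.metrics import cohen_kappa_score

grader_a = ["usable", "usable", "excellent", "unusable", "usable", "usable"]
grader_b = ["usable", "excellent", "excellent", "unusable", "usable", "usable"]

a_bin = [g == "unusable" for g in grader_a]
b_bin = [g == "unusable" for g in grader_b]

percent_agreement = 100 * sum(x == y for x, y in zip(a_bin, b_bin)) / len(a_bin)
kappa = cohen_kappa_score(a_bin, b_bin)

print(f"percentage agreement: {percent_agreement:.1f}%")  # depends only on agreement
print(f"Cohen kappa: {kappa:.2f}")  # also depends on how gradings are distributed
```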
2.2. Part 2: framework for data analyses
We aimed to develop a framework for working with OCT‐A images in population‐based studies. Requirements were that this framework should provide insight into how suboptimal OCT‐A image quality may affect effect estimates (while considering both information and selection bias) and specify how to deal with common ocular comorbidities. 16
3. RESULTS
3.1. Part 1: design of quality criteria for grading of fovea‐centered OCT‐A images
3.1.1. Criteria development
After preliminary discussions (stage one), exemplary images were selected and definitions for grading image quality were developed (stage two). Figure 1 shows exemplary OCT‐A images for each category of image quality, and Panel 1 shows image quality descriptions. Figure 2 shows an example of a vessel broken by full width (explained in Panel 1). Figures S3 and S4 in supporting information show examples of excellent, usable, and unusable images for each imaging device.
FIGURE 2.

Exemplary 3 × 3‐mm optical coherence tomography angiography image showing a vessel broken by full width (inset; full thickness slab, that is, displaying both deep and superficial networks). Other movement artefacts are also visible.
PANEL 1: “Excellent” image quality
Excellent discrimination of the “capillary vasculature” throughout the entirety of the OCT‐A scan and clear demarcation of the FAZ edges. The fovea is positioned in the center of the image (i.e., the geometric center of the fovea is within 1 foveal diameter of the image field‐of‐view center).
“Usable” image quality for quantification of vessel density
Discrimination of “capillary vasculature” may be less distinct than “excellent” but still of sufficient quality for acquisition of reliable data in the majoritya of the image. Some image artefacts can be accepted (e.g., non‐continuousb arterioles/venules). The fovea is positioned in the center of the image as above.
“Usable” image quality for quantification of FAZ area
Discrimination of FAZ edges may be less distinct than “Excellent” but still of suitable quality to assess FAZ area or perimeter (there should be no [motion] artefacts in the ring). The inner capillary ring is manually traceable with minimal subjective interpolation needed.
“Unusable” image quality
Discrimination of the “capillary vasculature” is not distinct in the largest parta of the image, and FAZ edges are not clear (demarcation of the edges may not be clear due to motion artefacts, projection artefacts, or any other reason).
Footnotes:
aLargest part of the image refers to a subjective assessment that at least 80% of the image, assessed upon visual inspection, is of “usable” quality. This cut‐off was chosen as some researchers expressed their wish for a clear cut‐off, but acknowledged that purely objective quantification may be too labor intensive for practical implementation in large studies.
bNon‐continuous indicates “large vessel that is broken by full width” (an example is provided in Figure 2).
3.1.2. Assessment of inter‐ and intragrader agreement for 3 × 3‐mm OCT‐A scans
Agreements for grading are shown in Table 2. For the intergrader agreement, a total of 52 OCT‐A images were assessed. The mean intergrader agreement for the overall image quality classified as “unusable,” “usable,” and “excellent” image quality was 82.3%, 70.8%, and 87.1%, respectively. Concerning the VD and FAZ area, the mean intergrader agreements for “unusable” were 80.0% and 81.9%, respectively.
TABLE 2.
Inter‐ and intragrader agreement for “unusable,” “usable,” and “excellent” image quality assessment of 3 × 3‐mm OCT‐A images.
| Intergrader agreement, n = 52 images | | | |
|---|---|---|---|
| | Unusable | Usable | Excellent |
| | Mean % agreement (SD) | Mean % agreement (SD) | Mean % agreement (SD) |
| Overall | 82.3 (8.5) | 70.8 (7.9) | 87.1 (4.8) |
| VD | 80.0 (4.3) | – | – |
| FAZ area | 81.9 (8.5) | – | – |
| Intragrader agreement, n = 27 images | | | |
|---|---|---|---|
| | Unusable | Usable | Excellent |
| | Mean % agreement (SD) | Mean % agreement (SD) | Mean % agreement (SD) |
| Overall | 91.1 (6.2) | 82.2 (7.1) | 81.9 (18.5) |
| VD | 89.6 (9.9) | – | – |
| FAZ area | 92.6 (2.6) | – | – |
Note: Inter‐ and intragrader agreements among five graders are shown.
Abbreviations: FAZ, foveal avascular zone; OCT‐A, optical coherence tomography angiography; SD, standard deviation; VD, vessel density.
For the intragrader agreement, a total of 27 OCT‐A images were assessed twice by each grader. The mean intragrader agreements for overall “unusable,” “usable,” and “excellent” image quality were 91.1%, 82.2%, and 81.9%, respectively. Concerning the VD and FAZ area, the mean intragrader agreements for “unusable” were 89.6% and 92.6%, respectively.
3.1.3. Assessment of intergrader reliability for 6 × 6‐mm OCT‐A scans
A total of 21 OCT‐A images were assessed. Agreements for grading are shown in Table 3. The mean intergrader agreement for the overall image quality classified as “unusable,” “usable,” and “excellent” image quality was 73.3%, 57.1%, and 83.8%, respectively. Concerning the VD and FAZ area, the mean intergrader agreements for “unusable” were 60.0% and 84.8%, respectively.
TABLE 3.
Intergrader agreement for “unusable,” “usable,” and “excellent” image quality assessment of 6 × 6‐mm OCT‐A images.
| Intergrader agreement, n = 21 images | | | |
|---|---|---|---|
| | Unusable | Usable | Excellent |
| | Mean % agreement (SD) | Mean % agreement (SD) | Mean % agreement (SD) |
| Overall | 73.3 (9.3) | 57.1 (9.5) | 83.8 (9.8) |
| VD | 60.0 (23.1) | – | – |
| FAZ area | 84.8 (6.7) | – | – |
Note: Intergrader agreements among five graders are shown.
Abbreviations: FAZ, foveal avascular zone; OCT‐A, optical coherence tomography angiography; SD, standard deviation; VD, vessel density.
3.1.4. Secondary analyses
Results expressed as mean Cohen kappa are shown in Tables S1 and S2 in supporting information. Intergrader agreements for overall “unusable,” “usable,” and “excellent” image quality assessment for 3 × 3‐mm OCT‐A scans, expressed as mean Cohen kappa, were 0.64, 0.42, and 0.35, respectively (n = 52 images). Intergrader agreements for “unusable” VD and FAZ area of 3 × 3‐mm OCT‐A scans were 0.60 and 0.64, respectively. Intragrader agreements for overall “unusable,” “usable,” and “excellent” image quality assessment for 3 × 3‐mm OCT‐A scans were 0.77, 0.61, and 0.28, respectively (n = 27 images). Intragrader agreements for “unusable” VD and FAZ area of 3 × 3‐mm OCT‐A scans were 0.78 and 0.83, respectively.
Intergrader agreements for overall “unusable,” “usable,” and “excellent” image quality assessment for 6 × 6‐mm OCT‐A scans were 0.17, 0.06, and 0.28, respectively (n = 21 images). Intergrader agreements for “unusable” VD and FAZ area of 6 × 6‐mm OCT‐A scans were 0.30 and 0.63, respectively.
3.2. Part 2: recommendations on epidemiological analyses using OCT‐A imaging data
We developed a three‐scenario framework that provides insight into selection and information bias due to suboptimal OCT‐A image quality and ocular comorbidities (Panel 2). 10 , 11 , 12 We also provide recommendations for addressing potential selection bias.
PANEL 2: Three suggested analytic scenarios for OCT‐A data analyses in population studies
| | Description |
|---|---|
| Scenario 1 | Minimal set of least exclusive image quality criteria to allow maximum study sample size. |
| Scenario 2 | More stringent image quality criteria than scenario 1, using at least two of the following four options: (1) use only the best‐quality image from one eye of each subject; (2) exclude images with signal strength below manufacturer‐recommended thresholds; (3) adjust for signal strength as a covariate in statistical models; (4) exclude images of poor quality as defined by the OSCAR‐MP criteria. |
| Scenario 3 | Consider the impact of ocular comorbidities on image quality, using both of the following strategies: (1) exclude individuals with ocular comorbidities; (2) enter ocular comorbidities as covariates in the statistical model. |
3.2.1. Scenario one
In the first scenario, which we expect to be the most commonly used, we recommend using a minimum set of OCT‐A image quality criteria that are the least strict in terms of image quality assessment. This will maximize the size of the study population for analyses while minimizing selection bias.
For this scenario, we propose to use the image quality grading criteria developed within this paper and implemented in a recently published population‐based study from our consortium. 9 For this analysis, we propose to include data with “excellent” and “usable” gradings (i.e., excluding “unusable” images, as defined in Panel 1). We note that using manufacturer‐determined image quality indices such as “signal strength” or “signal‐to‐noise‐ratio” thresholds is also necessary but is not a substitute for the subjective grading criteria that we propose in this scenario or any other scenario below.
For this scenario we recommend using OCT‐A images acquired from both eyes (if available) rather than restricting the analyses to only the right or left eye arbitrarily. This will maximize the inclusion of participants and statistical power. 16 Researchers may combine results from both eyes by averaging OCT‐A results from the left and right eye. This approach may reduce the impact of measurement error. The biological grounding for this approach is that systemic risk factors that contribute to the pathobiology of many ocular diseases are likely to have similar effects in both eyes although the magnitude of the effect may vary in each eye. 9 , 15 An alternative approach to averaging data from both eyes is a multilevel analysis. 24 Multilevel analysis accounts for the correlation between eyes within one individual.
For example, this type of analysis may be used when identifying which (potentially modifiable) factors are determinants of vascular density and/or FAZ area; or when examining to what extent retinal capillary damage is associated with neural health (e.g., presence of dementia) or ocular health (e.g., presence of glaucoma). Although combining data from both eyes may be a suitable strategy for many research questions, we acknowledge that in certain situations (dependent on the research question) researchers may choose to consider data from only one eye. This may be the case when substantial differences are expected between eyes; for example, if the microvasculature has been severely disrupted in one eye due to a disorder that does not affect both eyes equally and at the same time (this could be the case in eyes with history of retinal detachment for example). This may also be the case if the order of image acquisition between the eyes of one individual is thought to introduce a bias.
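For illustration, the sketch below contrasts the two analytic approaches for combining data from both eyes described above: averaging the left and right eye per participant versus a multilevel (mixed‐effects) model with a random intercept per participant. All variable names and values are hypothetical, and statsmodels is used only as one possible implementation.

```python
# Minimal sketch (hypothetical data): averaging both eyes vs. a multilevel model
# with a random intercept per participant to account for between-eye correlation.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per eye, with a participant identifier,
# an OCT-A outcome (e.g., vessel density), and a participant-level exposure.
eyes = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "vessel_density": [0.42, 0.44, 0.39, 0.40, 0.45, 0.46, 0.37, 0.38],
    "age": [68, 68, 75, 75, 62, 62, 71, 71],
})

# Approach 1: average the two eyes and fit an ordinary linear model.
averaged = eyes.groupby("participant_id", as_index=False).mean()
ols_fit = smf.ols("vessel_density ~ age", data=averaged).fit()

# Approach 2: multilevel model with a random intercept per participant
# (toy sample size; real analyses would use many more participants).
mixed_fit = smf.mixedlm("vessel_density ~ age", data=eyes,
                        groups=eyes["participant_id"]).fit()

print(ols_fit.params["age"], mixed_fit.params["age"])
```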
3.2.2. Scenario two
In a second scenario, we recommend the use of more stringent image quality criteria that reduce the chance of measurement error (i.e., information bias) but may increase the risk of selection bias.
For this scenario, we suggest four possible analytical strategies. We recommend use of these strategies after excluding “unusable” images, as defined in Panel 1. First, we suggest using only the best quality image from one eye of each subject. Second, we suggest excluding all images with signal strength measures below the manufacturer recommended thresholds. If possible, we suggest using the highest possible thresholds for signal strength. Although such thresholds are likely arbitrary from one device to another, in general, signal strength measures correlate with image quality and are a good adjunct to the other quality control measures proposed here. Third, we suggest adjusting for signal strength by including it as a covariate in statistical models. 25 Adjustment for signal strength may improve precision of the estimate but may also lead to overadjustment, as signal strength is associated with OCT‐A image quality, but not necessarily with the exposure or outcome variable. 25 , 26 Last, a fourth strategy is to exclude images of poor quality as defined by the OSCAR‐MP criteria. 17 We recommend researchers use at least two, and preferably more, of these proposed strategies. The robustness of research findings will be demonstrated if similar results are obtained when using multiple strategies. For example, findings derived from the largest possible set of data can be further validated by restricting the analysis to “excellent” images as illustrated in Collazo Martinez et al. 9
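Two of these strategies (a signal‐strength exclusion threshold and covariate adjustment for signal strength) are sketched below. The column names, threshold value, and linear model are illustrative assumptions rather than a prescribed implementation.

```python
# Minimal sketch (hypothetical columns and threshold): restricting to images above a
# signal-strength threshold, and adjusting for signal strength as a covariate.
import pandas as pd
import statsmodels.formula.api as smf

images = pd.DataFrame({
    "vessel_density": [0.41, 0.44, 0.38, 0.46, 0.35, 0.43],
    "age": [70, 66, 74, 61, 78, 69],
    "signal_strength": [9, 8, 6, 10, 5, 7],
})

# Strategy 2: restrict to images at or above a (device-specific, assumed) threshold.
MIN_SIGNAL_STRENGTH = 7
restricted = images[images["signal_strength"] >= MIN_SIGNAL_STRENGTH]
fit_restricted = smf.ols("vessel_density ~ age", data=restricted).fit()

# Strategy 3: keep all usable images but adjust for signal strength as a covariate.
fit_adjusted = smf.ols("vessel_density ~ age + signal_strength", data=images).fit()

print(fit_restricted.params["age"], fit_adjusted.params["age"])
```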
3.2.3. Scenario three
In a third scenario, we recommend evaluating the impact of ocular comorbidities on image quality measures because ocular comorbidities may negatively impact retinal capillary health and may indirectly predispose to poor quality OCT‐A imaging.
For this scenario, we also recommend excluding “unusable” images (as defined in Panel 1). In addition, we recommend conducting two analyses that provide insight into the impact of ocular comorbidities on OCT‐A measures. Ocular comorbidities that should, at minimum, be considered are age‐related macular degeneration, glaucoma, retinopathy of any etiology, myopia, cataract, and corneal disease. We recommend conducting additional analyses in which (1) individuals with ocular comorbidities are excluded and (2) ocular comorbidities are entered into the model as covariates. In our opinion it is important to conduct both analyses as they have different methodological underpinnings. 16 Exclusion of individuals with ocular comorbidities may reduce measurement error but may induce selection bias. 16 Adjustment for ocular comorbidities may reduce the chance of measurement error, but may (also) induce overadjustment bias (this may occur when ocular and systemic diseases with a shared pathobiology are entered in a model). 26 We recommend researchers use both proposed strategies.
3.2.4. Evaluation of scenarios and additional recommendations for addressing selection bias
A comparison of results from the different scenarios above will reveal the potential impact of selection bias due to poor OCT‐A image quality. If results are similar across all analytical scenarios, the impact of information and selection bias due to OCT‐A measurement quality may be minimal.
In addition to the above, we provide the following three strategies to address selection bias. A first approach is to compare the characteristics of the individuals included and excluded in the different scenarios. 27 If general characteristics of these populations differ, some degree of selection bias may have occurred. A second approach is to analyze associations between fully observed variables (e.g., age or sex) and the outcome(s) in the entire study population and in the selected populations (i.e., populations with sufficient OCT‐A image quality); comparison of these results will provide insight into whether selection bias may have occurred. 27 A third approach is inverse probability weighting, which allows estimation of associations while accounting for selectively missing data. This method uses weights, developed from a model predicting non‐missingness in the analytic sample, which are entered into the statistical model. More details are provided in Chesnaye et al. 28
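To illustrate the inverse probability weighting approach, a minimal sketch is shown below: the probability of having usable OCT‐A data is modeled from fully observed covariates, and its inverse is used as a weight in the analysis of the selected sample. All variable names and values are hypothetical, and the weighted linear model is only one possible choice of outcome model.

```python
# Minimal sketch (hypothetical data): inverse probability weighting for selectively
# missing OCT-A data.
import pandas as pd
import statsmodels.formula.api as smf

cohort = pd.DataFrame({
    "has_usable_octa": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "age": [62, 70, 81, 66, 79, 84, 72, 63, 68, 61],
    "sex_male": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "vessel_density": [0.45, 0.42, None, 0.44, None, 0.38, 0.40, None, 0.43, 0.47],
    "memory_score": [27, 25, None, 26, None, 22, 24, None, 25, 28],
})

# Step 1: model the probability of non-missingness from fully observed covariates.
selection = smf.logit("has_usable_octa ~ age + sex_male", data=cohort).fit(disp=False)
cohort["ipw"] = 1.0 / selection.predict(cohort)

# Step 2: weighted analysis in the subset with usable OCT-A data.
analytic = cohort[cohort["has_usable_octa"] == 1]
weighted_fit = smf.wls("memory_score ~ vessel_density", data=analytic,
                       weights=analytic["ipw"]).fit()
print(weighted_fit.params)
```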
3.2.5. Reporting
We recommend reporting results of all three analytical scenarios and addressing selection bias using one or more of the above strategies.
4. DISCUSSION
In this article researchers from the European Eye Epidemiology consortium and collaborators used an expert‐based approach to develop a framework for assessment of OCT‐A image quality and conducting analyses with OCT‐A–derived variables in population‐based studies. There are three main findings: First, the quality grading system developed using 3 × 3‐mm OCT‐A images and adapted from one recent study 9 showed substantial inter‐ and intragrader agreement for assessment of “unusable” overall image quality, VD, and FAZ measures when applied to consortium data from different populations and devices. Second, the grading system showed substantial intergrader agreement for assessment of “unusable” FAZ area on 6 × 6‐mm scans, but lower agreement for assessment of overall quality and VD. Third, a methodological framework consisting of three scenarios to quantify the impact of selection bias, information bias, and ocular comorbidities was developed.
This is the most comprehensive framework for working with OCT‐A images in population‐based cohort studies to date. This work has an important added value to the already published OSCAR‐MP criteria in several ways. 17 First, we developed image quality criteria that are valid on commercial devices from three common OCT‐A manufacturers (i.e., Heidelberg Engineering, Topcon, and Zeiss). In the previous study, only images acquired with a Heidelberg Engineering device were considered. Second, we developed criteria to determine whether images could be used for the assessment of VD and FAZ metrics. Third, we evaluated grading criteria on 6 × 6‐mm OCT‐A images. Fourth, this study demonstrated the validity of the grading system in the general population. The previous study only showed validity among patients with neurological diseases. Last, this study evaluated intragrader agreement for image quality assessment.
Intergrader agreement for the rejection of images with “unusable” quality was similar when using our criteria (Cohen kappa 0.64) and the OSCAR‐MP criteria (Cohen kappa 0.67). This is likely because the two grading systems are partly similar. 17 Both require that the majority of an image be free of artefacts for it to be graded “usable” (or “excellent”). Our system differs from the OSCAR‐MP criteria on several points, including: (1) centering of the image is not considered in our criteria when the variable of interest is the FAZ area; this choice is designed to be more inclusive of data for particular analyses where overall image quality may not be relevant and thereby increase sample size; (2) the source of the error (i.e., type of artefact) is not specified in our paradigm; and (3) no separate grading of retinal pathology is required. These features of our analysis paradigm are designed to enable more rapid image assessment, as fewer details need to be documented.
Validation of the criteria on 6 × 6‐mm images showed that our grading system, which was developed on 3 × 3‐mm field‐of‐view images, performed worse for the assessment of “unusable” overall image quality and of “unusable” VD on 6 × 6‐mm images. This highlights the need for scan protocol–specific quality criteria. A possible explanation for the difference we observed is that the scan resolution of 6 × 6‐mm images is generally lower than that of 3 × 3‐mm images, which makes it difficult to assess capillary‐level detail (6 × 6‐mm images cover a larger retinal area than 3 × 3‐mm images, while the same number of, or fewer, B‐scans are used to capture them). 7 , 29 This lower scan resolution is less likely to significantly impact the assessment of FAZ measures, as the “absence” of vasculature is used to assess FAZ area. 7 , 29 Indeed, intergrader agreement for usability of FAZ area was similar for 3 × 3‐mm and 6 × 6‐mm images (82% and 85%, respectively).
Although substantial percentage agreement was observed for “excellent” quality images (82%–87%), relatively low agreement was observed when Cohen kappa was calculated (0.28–0.35). This may be due to an imbalanced distribution of image quality gradings, as both rater agreement and prevalence enter the calculation of Cohen kappa. 22 , 23 Indeed, “excellent” images are relatively less common than “usable” or “unusable” images. This phenomenon has previously been described and is known as the “prevalence paradox.” 22
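The effect can be illustrated with hypothetical counts: two grader pairs with identical observed agreement (90%) but a different prevalence of “excellent” gradings yield very different kappa values. The counts below are invented purely for illustration.

```python
# Illustrative sketch (hypothetical counts) of the "prevalence paradox": identical
# observed agreement, different prevalence of "excellent", different Cohen kappa.
from sklearn.metrics import cohen_kappa_score

# Balanced case: "excellent" and "other" gradings are about equally common.
balanced_a = ["excellent"] * 45 + ["excellent"] * 5 + ["other"] * 5 + ["other"] * 45
balanced_b = ["excellent"] * 45 + ["other"] * 5 + ["excellent"] * 5 + ["other"] * 45

# Imbalanced case: "excellent" is rare, yet observed agreement is still 90%.
rare_a = ["excellent"] * 3 + ["excellent"] * 5 + ["other"] * 5 + ["other"] * 87
rare_b = ["excellent"] * 3 + ["other"] * 5 + ["excellent"] * 5 + ["other"] * 87

for name, a, b in [("balanced", balanced_a, balanced_b), ("imbalanced", rare_a, rare_b)]:
    agreement = 100 * sum(x == y for x, y in zip(a, b)) / len(a)
    print(name, f"agreement={agreement:.0f}%", f"kappa={cohen_kappa_score(a, b):.2f}")
# Prints ~0.80 for the balanced case and ~0.32 for the imbalanced case.
```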
Our findings have implications for future research. First, the recommendations can be used to promote consistency in statistical analyses across population studies. Second, during data collection, exemplary images from this paper can be used to guide image quality assessments with frequent real‐time feedback to imaging technicians. For example, recent work from the Framingham Heart Study demonstrated the best OCT‐A data retention rates by implementing a quality control process contemporaneous with data collection. 9 Third, our recommendations can be used for the development of artificial intelligence models for fully automatic assessment of OCT‐A image quality. 30 Fourth, given the relatively lower intergrader agreement for assessment of VD in 6 × 6‐mm images, developers of artificial intelligence algorithms may aim to develop automatic models for this task. 30 Open availability of a “benchmark” dataset containing images acquired using different protocols and on different devices would assist the development of such algorithms. Fifth, future researchers with an interest in deriving measures other than VD and FAZ from OCT‐A images may expand the proposed framework with a two‐level grading (“usable,” “unusable”) for each additional metric. Sixth, future studies may develop fully automatic artificial intelligence algorithms for assessing OCT‐A image quality in separate retinal regions or sectors, for example, per sector of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid.
This study has several strengths. First, OCT‐A images collected in multiple countries and using different protocols and commercially available devices were used for the development of the image quality criteria. This implies that images of individuals from different ethnicities were considered (e.g., individuals of European and Indian descent). 16 It also implies that the proposed image criteria can be used on images acquired from different devices with differences in resolution. In addition, a relatively simple and practical grading system was developed, allowing for rapid assessment of OCT‐A image quality. This is particularly relevant when large numbers of images are involved, such as in population studies. We also considered a three‐level approach for assessment of overall image quality, allowing us to distinguish between “excellent” and “usable” images. Advantages of this approach are that (1) a target image quality standard is provided during data acquisition and (2) in statistical analyses, it is possible to analyze only those images free of measurement errors.
Our study does have limitations. We used an expert‐based assessment to determine the usability of OCT‐A images. This method presumes that image graders have some baseline level of expertise in reading OCT‐A images, which takes time and experience to acquire. While this is a limitation of our study, it is also a limitation of any study that proposes to use OCT‐A data, especially at scale. After intense debate, our group settled on a subjective cut‐off that 80% of the image be free of artefacts. Because this threshold was reached by consensus for the purpose of the current study, future studies may aim to provide quantitative insight into the impact of using different thresholds. Knowledge of the quantitative impact of artefacts on associations of OCT‐A metrics would be particularly helpful when studying risk factors for capillary deterioration, or associations with ocular or systemic diseases. We did not consider individuals of all ethnicities in this study (e.g., no individuals of African or Chinese descent were included); whether findings are also valid in other populations requires further study. 16 We did not quantify intragrader agreement for the assessment of 6 × 6‐mm OCT‐A images. 16 There was some overlap in the 3 × 3‐mm images used for development and validation of the image quality criteria, so a learning effect may have occurred; the impact of any learning bias was minimized by presenting images in a randomized order and by rotating or inverting images that were presented twice to the same grader. We did not evaluate the impact of prior experience in grading OCT‐A images on image quality grading in this study; however, all investigators in the consortium are currently collecting or have collected thousands of OCT‐A images in population studies. Last, we developed image quality criteria using either superficial or full‐thickness images, and future studies may seek to further stratify these criteria on images depicting deep networks separately. However, with the exception of projection artefacts (Collazo Martinez et al. 9 ), there is no reason to believe that image artefacts will differentially impact superficial and deep layers.
In conclusion, in this study, an expert‐based approach was used to develop practical recommendations for quality assessment and analytical use of OCT‐A images in population‐based studies. These recommendations provide a framework for future studies and aim to promote the harmonization of analyses across studies. Uniform analyses across population studies may accelerate the identification and development of scalable retinal imaging biomarkers for ADRD.
AUTHOR CONTRIBUTIONS
Frank C.T. van der Heide drafted the initial manuscript. Frank C.T. van der Heide and Amir H. Kashani contributed to the design, coordination, analyses, and interpretation of the data; revised the manuscript critically for important intellectual content; and provided final approval of the version to be published. Frank C.T. van der Heide is also the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analyses. Tos T.J.M. Berendschot, Sophie Bonnin, Lucia Sobrin, Danilo Andrade De Jesus, Ana Collazo Martinez, Wishal D. Ramdas, Victor A. de Vries, Haifan Huang, and Amir H. Kashani contributed to the development of image quality criteria, revised the manuscript critically for important intellectual content, and provided final approval of the version to be published. Tos T.J.M. Berendschot initiated the development of a joint statement. All other authors contributed to data collection, revised the manuscript critically for important intellectual content, and provided final approval of the version to be published.
CONFLICT OF INTEREST STATEMENT
No potential conflicts of interest relevant to this article were reported. Author disclosures are available in the supporting information.
CONSENT STATEMENT
All human subjects provided informed consent.
Supporting information
Supporting Information
Supporting Information
ACKNOWLEDGMENTS
Maastricht Study: The authors would like to acknowledge ZIO foundation (Vereniging Regionale HuisartsenZorg Heuvelland) for their contribution to the Maastricht Study. The researchers are indebted to all participants for their willingness to participate in the study. Rotterdam Study: The authors are grateful to the Rotterdam Study participants, the staff from the Rotterdam Study, and the participating general practitioners and pharmacists. Rhineland Study: The authors would like to acknowledge the efforts of all study staff and participants in obtaining the OCT‐A data. Framingham Heart Study: The authors would like to acknowledge the efforts of all study staff and participants in obtaining the OCT‐A data, including Shu Jie Ting, Anoush Shahidzadeh, Jared Zucker, Brinda Vaidya, and Tim Kowalczyk. SIGNATR: The authors would like to acknowledge the SIGNATR Study Team: Gayatri Susarla, A. Rizza, Ashley Li, Sam Han, Rehana Khan, Weilin Chan, Ines Lains, Atitaya Apivatthakakul, Kim Brustoski, Vikas Khetan, Robert Igo, Sudha K. Iyengar, and Sinnakaruppan Mathavan. We would also like to acknowledge the participants who enrolled in the SIGNATR Study. Adolphe de Rothschild Foundation Hospital: The authors would like to acknowledge Anne‐Caroline Le Fur, Ozlem Erol, Justine Pineau, and Justine Lafolie, from the French Image Reading Center, for their contributions to data collection. The Maastricht Study: This study was supported by the European Regional Development Fund via OP‐Zuid, the Province of Limburg, the Dutch Ministry of Economic Affairs (grant 31O.041), Stichting De Weijerhorst (Maastricht, the Netherlands), the Pearl String Initiative Diabetes (Amsterdam, the Netherlands), the Cardiovascular Center (CVC, Maastricht, the Netherlands), CARIM School for Cardiovascular Diseases (Maastricht, the Netherlands), CAPHRI School for Public Health and Primary Care (Maastricht, the Netherlands), NUTRIM School for Nutrition and Translational Research in Metabolism (Maastricht, the Netherlands), Stichting Annadal (Maastricht, the Netherlands), Health Foundation Limburg (Maastricht, the Netherlands), Perimed (Järfälla, Sweden), and by unrestricted grants from Janssen‐Cilag B.V. (Tilburg, the Netherlands), Novo Nordisk Farma B.V. (Alphen aan den Rijn, the Netherlands), and Sanofi‐Aventis Netherlands B.V. (Gouda, the Netherlands). Rotterdam Study: The Rotterdam Study is supported by the Algemene Nederlandse Vereniging ter Voorkoming van Blindheid, Oogfonds, Stichting voor Ooglijders, Stichting voor Blindenhulp, Henkes stichting, Rotterdams Stichting voor Blindenbelangen, and Landelijke Stichting voor Blinden en Slechtzienden. Additional support was given by the Erasmus Medical Center, Erasmus University, Netherlands Organization for the Health Research and Development (ZonMw), the Research Institute for Diseases in the Elderly, the Ministry of Education, Culture and Science, the Ministry for Health, Welfare and Sports, the European Commission (DG XII), and the Municipality of Rotterdam. Rhineland Study: The Rhineland Study (P.I. Breteler) is primarily supported by DZNE core funding. The DZNE is funded by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the German State of North Rhine‐Westphalia. Framingham Heart Study: National Institutes of Health R01AG066524 (A.H.K., S.S.), FHS contract 75N92019D00031, and P30AG066546. 
SIGNATR: This work was supported by: National Eye Institute under R01 EY027134 and Government of India Department of Biotechnology under Grant BT/PR22701/MED/15/166/2016.
Group Collaborators list
For the Maastricht Study: Carroll A. Webers, MD PhD3; Freekje van Asten, MD PhD3
For the Framingham Heart Study: Alexandra Beiser, PhD13,14; Sudha Seshadri, MD PhD14,15
For the Rotterdam Study: Caroline Klaver, MD PhD10
For the Rhineland Study: Monique Breteler, MD PhD11,16; Robert Finger, MD PhD11,17
For the SIGNATR Study: Rehana Kahn, OD8,9; Renee Liu, BA5; Gayatri Susarla, MD5; A.N. Rizza8; Ashley Li, BS5; Samuel Han, BS5; Weilin Chan, MD5; Ines Lains, MD, PhD5; Janine Yang, MD5; Kim Brustoski, PhD18; Vikas Khetan, MD8; Sudha K Iyengar, PhD18; Penelope Benchek, PhD18; Sinnakaruppan Mathavan, PhD8; Ricky Chan, PhD18
Affiliations
3University Eye Clinic Maastricht, P. Debyelaan 25, 6229 HX, Maastricht University Medical Center+, Maastricht, the Netherlands
5Department of Ophthalmology, Harvard Medical School, 243 Charles st MA 02114, Massachusetts Eye and Ear, Boston, USA
8Vision Research Foundation, no 1 college road 60000, Sankara Nethralaya, Chennai, Tamil Nadu, India
9School of Optometry and Vision Science, Kensington NSW 2033 University of New South Wales, Sydney, Australia
10Departments of Ophthalmology and Epidemiology, Erasmus University Medical Center, Molenwaterplein 40 3015 GD, Rotterdam, The Netherlands
11Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Venusberg‐campus 1/99, 53127, Bonn, Germany
13Department of Biostatistics, Boston University School of Public Health, Boston, MA, USA
14Department of Neurology, Boston University School of Medicine, 677 Huntington Avenue MA02155, Boston, MA, USA
15Glenn Biggs Institute for Alzheimer's & Neurodegenerative Diseases, 8300 Floyd Curl Dr TX 78229, UT Health San Antonio, San Antonio, TX, USA
16Institute for Medical Biometry, Informatics and Epidemiology, Faculty of Medicine, University of Bonn, Venusberg‐campus 1/99, 53127, Bonn, Germany
17Department of Ophthalmology, Medical Faculty Mannheim, University of Heidelberg, Grabengasse 1, 69117 Heidelberg, Germany
18Department of Population and Quantitative Health Sciences, Case Western Reserve University, Wood building 2109 Adelbert Rd OH44106, Cleveland, OH, USA
Kashani AH, Berendschot TTJM, Bonnin S, et al. Retinal optical coherence tomography angiography imaging in population studies for study of microvascular dysfunction in Alzheimer's disease and related dementias. Alzheimer's Dement. 2025;21:e70252. 10.1002/alz.70252
Amir H. Kashani, Tos T. J. M. Berendschot, Sophie Bonnin, Lucia Sobrin, Danilo Andrade De Jesus, Haifan Huang, Luisa Sanchez Brea, Ana Collazo Martinez, Victor A. de Vries, and Frank C. T. van der Heide contributed equally to this work.
For group Collaborators list see ACKNOWLEDGMENTS Section
Contributor Information
Amir H. Kashani, Email: akashan1@jhmi.edu.
Frank C. T. van der Heide, Email: fvanderheide@for.paris.
DATA AVAILABILITY STATEMENT
Data are available for any researcher who meets the criteria for access to confidential data; the corresponding author may be contacted to request data.
REFERENCES
- 1. GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health. 2022;7(2):e105‐e125. doi: 10.1016/S2468-2667(21)00249-8
- 2. Mielke MM, Anderson M, Ashford JW, et al. Recommendations for clinical implementation of blood‐based biomarkers for Alzheimer's disease. Alzheimers Dement. 2024;20(11):8216‐8224. doi: 10.1002/alz.14184
- 3. Zhang Y, Chen H, Li R, Sterling K, Song W. Amyloid beta‐based therapy for Alzheimer's disease: challenges, successes and future. Signal Transduct Target Ther. 2023;8(1):248. doi: 10.1038/s41392-023-01484-7
- 4. van der Heide FCT, van Sloten TT, Willekens N, Stehouwer CDA. Neurovascular coupling unit dysfunction and dementia: retinal measurements as tools to move towards population‐based evidence. Front Endocrinol. 2022;13:1014287. doi: 10.3389/fendo.2022.1014287
- 5. Kashani AH, Asanad S, Chan JW, et al. Past, present and future role of retinal imaging in neurodegenerative disease. Prog Retin Eye Res. 2021;83:100938. doi: 10.1016/j.preteyeres.2020.100938
- 6. Jali V, Zhang Q, Chong JR, et al. Diagnosis of cognitive impairment and dementia: blood plasma and optical coherence tomography. Brain Commun. 2025;7(1):fcae472. doi: 10.1093/braincomms/fcae472
- 7. Kashani AH, Chen CL, Gahm JK, et al. Optical coherence tomography angiography: a comprehensive review of current methods and clinical applications. Prog Retin Eye Res. 2017;60:66‐100. doi: 10.1016/j.preteyeres.2017.07.002
- 8. Javed A, Khanna A, Palmer E, et al. Optical coherence tomography angiography: a review of the current literature. J Int Med Res. 2023;51(7):3000605231187933. doi: 10.1177/03000605231187933
- 9. Collazo Martinez AD, Ting SJ, Shahidzadeh A, et al. OCT angiography‐derived retinal capillary perfusion measures in the Framingham Heart Study. Ophthalmol Sci. 2024;5:100696.
- 10. Werner AC, Shen LQ. A review of OCT angiography in glaucoma. Semin Ophthalmol. 2019;34(4):279‐286. doi: 10.1080/08820538.2019.1620807
- 11. Courtie E, Kirkpatrick JRM, Taylor M, et al. Optical coherence tomography angiography analysis methods: a systematic review and meta‐analysis. Sci Rep. 2024;14(1):9643. doi: 10.1038/s41598-024-54306-3
- 12. Spaide RF, Fujimoto JG, Waheed NK, Sadda SR, Staurenghi G. Optical coherence tomography angiography. Prog Retin Eye Res. 2018;64:1‐55. doi: 10.1016/j.preteyeres.2017.11.003
- 13. Ibrahim Y, Xie J, Macerollo A, et al. A systematic review on retinal biomarkers to diagnose dementia from OCT/OCTA images. J Alzheimers Dis Rep. 2023;7(1):1201‐1235. doi: 10.3233/ADR-230042
- 14. Zhang JF, Wiseman S, Valdes‐Hernandez MC, et al. The application of optical coherence tomography angiography in cerebral small vessel disease, ischemic stroke, and dementia: a systematic review. Front Neurol. 2020;11:1009. doi: 10.3389/fneur.2020.01009
- 15. Kellner RL, Harris A, Ciulla L, et al. The eye as the window to the heart: optical coherence tomography angiography biomarkers as indicators of cardiovascular disease. J Clin Med. 2024;13(3):829. doi: 10.3390/jcm13030829
- 16. Ahlbom A. Modern Epidemiology, 4th edition. TL Lash, TJ VanderWeele, S Haneuse, KJ Rothman. Wolters Kluwer, 2021. Eur J Epidemiol. 2021;36(8):767‐768. doi: 10.1007/s10654-021-00778-w
- 17. Wicklein R, Yam C, Noll C, et al. The OSCAR‐MP consensus criteria for quality assessment of retinal optical coherence tomography angiography. Neurol Neuroimmunol Neuroinflamm. 2023;10(6):e200169. doi: 10.1212/NXI.0000000000200169
- 18. Schram MT, Sep SJ, van der Kallen CJ, et al. The Maastricht Study: an extensive phenotyping study on determinants of type 2 diabetes, its complications and its comorbidities. Eur J Epidemiol. 2014;29(6):439‐451. doi: 10.1007/s10654-014-9889-0
- 19. Ikram MA, Brusselle G, Ghanbari M, et al. Objectives, design and main findings until 2020 from the Rotterdam Study. Eur J Epidemiol. 2020;35(5):483‐517. doi: 10.1007/s10654-020-00640-5
- 20. Tsao CW, Vasan RS. Cohort profile: the Framingham Heart Study (FHS): overview of milestones in cardiovascular epidemiology. Int J Epidemiol. 2015;44(6):1800‐1813. doi: 10.1093/ije/dyv337
- 21. Susarla G, Rizza AN, Li A, et al. Younger age and albuminuria are associated with proliferative diabetic retinopathy and diabetic macular edema in the South Indian GeNetics of DiAbeTic Retinopathy (SIGNATR) Study. Curr Eye Res. 2022;47(10):1389‐1396. doi: 10.1080/02713683.2022.2091148
- 22. Byrt T, Bishop J, Carlin JB. Bias, prevalence and kappa. J Clin Epidemiol. 1993;46(5):423‐429. doi: 10.1016/0895-4356(93)90018-v
- 23. Delgado R, Tibau XA. Why Cohen's Kappa should be avoided as performance measure in classification. PLoS One. 2019;14(9):e0222916. doi: 10.1371/journal.pone.0222916
- 24. Hofoss D, Veenstra M, Krogstad U. Multilevel analysis in health services research: a tutorial. Ann Ist Super Sanita. 2003;39(2):213‐222.
- 25. Yu JJ, Camino A, Liu L, et al. Signal strength reduction effects in OCT angiography. Ophthalmol Retina. 2019;3(10):835‐842. doi: 10.1016/j.oret.2019.04.029
- 26. Schisterman EF, Cole SR, Platt RW. Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology. 2009;20(4):488‐495. doi: 10.1097/EDE.0b013e3181a819a1
- 27. Hernan MA, Hernandez‐Diaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15(5):615‐625. doi: 10.1097/01.ede.0000135174.63482.43
- 28. Chesnaye NC, Stel VS, Tripepi G, et al. An introduction to inverse probability of treatment weighting in observational research. Clin Kidney J. 2022;15(1):14‐20. doi: 10.1093/ckj/sfab158
- 29. Ho J, Dans K, You Q, Nudleman ED, Freeman WR. Comparison of 3 mm x 3 mm versus 6 mm x 6 mm optical coherence tomography angiography scan sizes in the evaluation of non‐proliferative diabetic retinopathy. Retina. 2019;39(2):259‐264. doi: 10.1097/IAE.0000000000001951
- 30. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103(2):167‐175. doi: 10.1136/bjophthalmol-2018-313173