Mohammadpour 2022.
Study characteristics
Patient Sampling | Prospective diagnostic test accuracy study including 217 eyes of 212 people aged 17–49 years who were referred to the Keratoconus Clinic or were refractive surgery candidates at the Refractive Surgery Unit. Exclusion criteria: a history of ocular surgery, corneal cross‐linking, or ring implantation; corneal hydrops or scarring; signs and symptoms of dry eye or ocular diseases other than keratoconus; connective tissue diseases; systemic diseases affecting the eyes; corneal haze; pregnancy; and contact lens use in the previous month.
Patient characteristics and setting | The study included people with diagnosed keratoconus or suspected keratoconus.
Index tests | The algorithm combines Placido and Scheimpflug technologies to provide complete information on the anterior and posterior corneal surfaces. Sirius (Costruzione Strumenti Oftalmici, Florence, Italy) takes 25 Scheimpflug images and 1 Placido image in < 1 second. Height, slope, and curvature data are then calculated with an arc‐step method. This system provides comprehensive information on the entire cornea and classifies keratoconus via a neural network process in the Phoenix software (a generic illustration of this type of classifier follows this section). The study compared existing algorithms that had already been validated.
Target condition and reference standard(s) | Participants were grouped based on the clinical diagnosis of 2 independent experienced corneal specialists (M. Mohammadpour, K. Amanzadeh), through slit‐lamp biomicroscopy, retinoscopy, corrected distance visual acuity (CDVA) measurement with a Snellen chart, and evaluation of the Pentacam Refractive 4 Maps. The specialists were blinded to classification reports. Diagnostic discrepancies were resolved by a third expert examiner (A. Moghaddasi) for a definitive diagnosis.
Flow and timing | All cases were included in the reference standard and index test. All data were included in a 2 × 2 table (see the worked 2 × 2 example following this section).
Comparative | Not applicable
Notes | The study authors received no financial support for the research, authorship, or publication of the article.
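The neural network classification described under "Index tests" is performed by the proprietary Phoenix software, whose internals are not reported in this record. Purely as an illustration of the general technique (a feed‐forward classifier mapping tabular corneal indices to diagnostic classes), the sketch below trains a small multilayer perceptron on synthetic data. The features, class labels, and architecture are assumptions for illustration and do not represent the Sirius/Phoenix implementation.

```python
# Illustrative only: a generic neural-network classifier for tabular corneal
# indices. This is NOT the Phoenix software; the features, classes, and
# architecture are hypothetical placeholders, and the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for tomographic/topographic indices (e.g. keratometry,
# thinnest pachymetry, posterior elevation) and three illustrative classes
# (0 = normal, 1 = keratoconus suspect, 2 = keratoconus).
X, y = make_classification(
    n_samples=300, n_features=4, n_informative=3, n_redundant=0,
    n_classes=3, random_state=0,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Standardise inputs, then fit a small feed-forward network.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```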
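The "Flow and timing" row notes that all data were entered into a 2 × 2 table, the standard layout for diagnostic test accuracy analysis. As a worked example, the sketch below derives sensitivity, specificity, and predictive values from such a table; the counts used are hypothetical and are not results from Mohammadpour 2022.

```python
# Deriving standard diagnostic test accuracy metrics from a 2 x 2 table.
# The counts below are hypothetical examples, not data from Mohammadpour 2022.

def dta_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy metrics from a 2 x 2 diagnostic test accuracy table.

    tp: index test positive, reference standard positive (true positive)
    fp: index test positive, reference standard negative (false positive)
    fn: index test negative, reference standard positive (false negative)
    tn: index test negative, reference standard negative (true negative)
    """
    return {
        "sensitivity": tp / (tp + fn),                # true positive rate
        "specificity": tn / (tn + fp),                # true negative rate
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # overall agreement
    }

# Hypothetical counts for illustration only.
print(dta_metrics(tp=80, fp=5, fn=10, tn=105))
```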
Methodological quality
Item | Authors' judgement | Risk of bias | Applicability concerns
DOMAIN 1: Patient selection | | |
Was a consecutive or random sample of patients enrolled? | Unclear | |
Was a case‐control design avoided? | No | |
Did the study avoid inappropriate exclusions? | No | |
Could the selection of patients have introduced bias? | | High risk |
Are there concerns that the included patients and setting do not match the review question? | | | High
DOMAIN 2: Index test (All tests) | | |
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | |
If a threshold was used, was it pre‐specified? | Unclear | |
Was the model designed in an appropriate manner? | Yes | |
Could the conduct or interpretation of the index test have introduced bias? | | Low risk |
Are there concerns that the index test, its conduct, or interpretation differ from the review question? | | | Low concern
DOMAIN 3: Reference standard | | |
Is the reference standard likely to correctly classify the target condition? | Yes | |
Were the reference standard results interpreted without knowledge of the results of the index tests? | Yes | |
Could the reference standard, its conduct, or its interpretation have introduced bias? | | Low risk |
Are there concerns that the target condition as defined by the reference standard does not match the question? | | | Low concern
DOMAIN 4: Flow and timing | | |
Did all patients receive the same reference standard? | Yes | |
Were all patients included in the analysis? | Yes | |
Could the patient flow have introduced bias? | | Low risk |
DOMAIN 5: Comparative | | |
Were the different AI tests developed and interpreted without knowledge of each other? | | |
Are the proportions and reasons for missing data similar for all index tests? | | |