PLOS ONE
. 2020 Apr 2;15(4):e0230589. doi: 10.1371/journal.pone.0230589

Reproducibility and repeatability of corneal topography measured by Revo NX, Galilei G6 and Casia 2 in normal eyes

Adam Wylęgała 1,2,*, Robert Mazur 1,2, Bartłomiej Bolek 1,2, Edward Wylęgała 1,2
Editor: Andrzej Grzybowski3
PMCID: PMC7117679  PMID: 32240192

Abstract

Purpose

To test the repeatability and reproducibility of the corneal topography module of a posterior segment spectral domain optical coherence tomograph, the Revo NX (a new device), and to compare keratometry values obtained with a Scheimpflug tomographer (Galilei G6) and a swept source OCT (Casia 2).

Methods

In this prospective study, healthy subjects with nonoperated eyes had their central corneal thickness (CCT) and anterior and posterior K1/K2 corneal power measured with the new device. Two operators each made three measurements on the new device (six in total) to assess intraobserver repeatability and interoperator reproducibility, followed by one measurement each on the Casia 2 and Galilei G6. Bland-Altman plots were used to assess the agreement between the devices for each analyzed variable.

Results

94 eyes (94 patients) were studied. The devices produced significantly different mean CCT values, the highest for Galilei (569.13 ± 37.58 μm), followed by Casia (545.00 ± 36.15 μm) and Revo (537.39 ± 35.92 μm). The mean anterior K1 was 43.21 ± 1.37 D for Casia 2, 43.21 ± 1.55 D for Revo NX and 43.19 ± 1.39 D for Galilei G6; the differences were insignificant (p = 0.617). The posterior K1 was -5.77 ± 0.25 D for Revo NX, -5.98 ± 0.22 D for Casia 2 and -6.09 ± 0.28 D for Galilei G6 (p < 0.0001). The Revo NX showed intraclass correlation coefficients ranging from 0.975 for posterior K2 to 0.994 for anterior K1, and 0.998 for CCT.

Conclusions

Revo NX is independent of the user and offers a high level of repeatability for the anterior and posterior cornea. The wide range of differences between the devices suggests they should not be used interchangeably.

Introduction

The most important refractive part of the optic system of the eye is the cornea. Due to the difference between the refractive indexes of air and the tear film, it has the highest refractive power of all the structures in the eye. Measuring corneal topography is hence of major importance before performing cataract and refractive procedures, or when monitoring the progression of a disease [1–3]. Furthermore, measurements of central corneal thickness are vital in the diagnosis of glaucoma, Fuchs endothelial cell dystrophy and corneal graft rejection. Modern tomographers can measure many parameters, including anterior and posterior curvatures, pachymetry, refractive power and corneal thickness, and provide high quality images. The introduction of Optical Coherence Tomography (OCT) to ophthalmology opened a new way to quantify and visualize structures in the anterior and posterior segments of the eye.

Anterior segment OCT utilizes a coherence interferometer to generate two- or three-dimensional images of the anterior segment of the eye [4]. Currently there are two types of OCT devices that can image the anterior segment. Devices such as the Casia 2 are dedicated to imaging and analyzing the anterior segment only, while devices like the Zeiss Cirrus OCT, designed to obtain images of the posterior segment, can also image the anterior segment after an optional anterior module is attached [5]. Revo NX is of the latter type. Contrary to most OCT machines with an add-on lens, which measure only the anterior curvature [6], it is capable of generating anterior and posterior corneal surface maps with the respective keratometry. Measuring posterior corneal power is vital in keratoconic patients and in IOL calculation [7]. The major benefits of using a combined system are the lower price and higher resolution, while the drawbacks are the lack of collimated light at the cornea, which makes the measurements distance dependent, and a field of view half that of a dedicated anterior segment device.

There are two forms of measuring the precision of a device: repeatability and reproducibility. Repeatability means the variability of results measured at short intervals, while reproducibility is defined as the variability of results measured under different circumstances, e.g., exams taken by different operators [8]. Accurate quantification of corneal power is of utmost importance in the age of premium IOLs and refractive surgery. Although pachymetry can be used to monitor corneal edema or to adjust IOP for corneal thickness, previous studies comparing older devices concluded that corneal parameters produced by different devices should not be used interchangeably. In this study, we evaluated the correlation and efficiency of anterior segment measurements of healthy eyes taken with three devices.

The goal of this paper is to assess both the repeatability of the spectral domain OCT (Revo NX) and its agreement with a rotating Scheimpflug camera (Galilei G6) and an anterior segment swept source OCT (Casia 2).

Methods

This study was approved by the bioethical committee of the Silesian Medical University and adhered to the tenets of the Declaration of Helsinki. Before the examination, the participants signed informed consent and were informed of the experimental procedure. We included 94 eyes of 35 males and 59 females aged 32.34 ± 10.21 years in this prospective study. Subjects were students, interns, and workers of the hospital with no corneal conditions, including ectatic disease such as keratoconus (KC). Recruitment started in September 2018 and lasted until the end of January 2019. Participants who had worn any type of contact lenses less than 72 hours prior to the measurements were not included in the study, nor were those who had undergone any ophthalmic surgery, e.g. cataract or refractive.

Study devices

Revo NX, software version 9.0.0 (Optopol Technology Ltd, Zawiercie, Poland), is a high speed (110,000 A-scans/s) spectral-domain OCT operating at a center wavelength of 830 nm, with 5 μm axial and 18 μm transverse imaging resolution. It can visualize the posterior segment of the eye, measure axial length with an add-on lens, and create maps of the cornea and images of the anterior segment. The device automatically acquires 16 B-scans over an 8 mm corneal diameter. Keratometry values in this study were calculated in the central 3 mm zone; anterior, posterior and true corneal power and CCT are averaged over the same central 3 mm zone. The device uses a refractive index of 1.3375 to convert the radius of curvature expressed in mm to power in D. To calculate the posterior K, refractive indices of 1.336 (aqueous) and 1.376 (cornea) are used.
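The radius-to-diopter conversions described above follow the standard keratometric formulas; a minimal sketch (the radii used below are illustrative examples, not values from the study, and device internals are not published):

```python
def anterior_k(radius_mm: float, n_k: float = 1.3375) -> float:
    """Anterior corneal power in diopters from the radius of curvature (mm),
    using the standard keratometric index n_k = 1.3375."""
    return (n_k - 1.0) * 1000.0 / radius_mm

def posterior_k(radius_mm: float, n_aqueous: float = 1.336,
                n_cornea: float = 1.376) -> float:
    """Posterior corneal power from the refractive-index step at the
    cornea/aqueous interface; negative because the index decreases."""
    return (n_aqueous - n_cornea) * 1000.0 / radius_mm

# An anterior radius of 7.8 mm gives about 43.27 D, close to the mean
# anterior K reported in this study:
print(anterior_k(7.8))
# A posterior radius of 6.9 mm gives about -5.80 D:
print(posterior_k(6.9))
```

Note how the small index difference (1.336 vs 1.376) makes the posterior power an order of magnitude weaker than the anterior, which is why posterior measurement errors matter proportionally more.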

The Galilei G6 Dual Scheimpflug Analyzer (Ziemer, Port, Switzerland) combines 20-ring Placido-based topography with a dual rotating Scheimpflug camera [9]. Scheimpflug technology is considered the gold standard in corneal measurements. Simulated keratometry (SimK) is calculated from the 0.5 to 2.0 mm annular (semichord) zone and is expressed in diopters using a refractive index of 1.3375. The posterior mean K is calculated using a refractive index of 1.376 for the cornea and 1.336 for the aqueous humor, over an area 4 mm in diameter (2 mm radius, or semichord).

A different technology is used by the Casia 2 (Tomey Corporation, Nagoya, Japan), a swept source anterior segment OCT (AS-OCT). It uses a swept laser with a 1310 nm wavelength, longer than in SD-OCT devices, providing higher penetration but lower resolution, and a 50,000 A-scans/s high-speed detector; contrary to SD-OCT, it has no spectrometer and uses a CMOS camera instead. Corneal power is calculated using a refractive index of 1.3375, and keratometry values are calculated over a 3.2 mm diameter.

Measurement technique

All devices were placed in one darkened room, and all measurements for a given subject were taken on the same day by two trained operators. One eye of each subject was randomly chosen. Every participant had six Topo scan measurements on the Revo NX (three carried out by each operator), to measure repeatability and reproducibility, followed by one corneal map measurement on the Galilei G6 and one Corneal Map scan on the Casia 2. For every device, anterior and posterior K1 and K2 values were recorded, as well as apical CCT. Only well-centered measurements with high quality indexes were included in the study.

Statistical analysis

Statistical analysis was conducted using Statistica software ver. 13.1 (Dell Inc., Tulsa, OK, USA), released by StatSoft (Krakow, Poland). Numerical results for repeatability and reproducibility comprise six quantities, computed for each observer separately and for the entire dataset: mean, standard deviation (SD), within-subject standard deviation (Sw), test-retest repeatability (TRT), within-subject coefficient of variation (CoV), and intraclass correlation coefficient (ICC). Agreement between the three devices was analyzed using Bland-Altman plots. The normality of the data was tested using the Shapiro-Wilk test, and the paired Student t-test was used to assess the differences between the devices. Statistical data in the form of an Excel spreadsheet, as well as a detailed description with the mathematical equations used, are available in the Mendeley data repository: https://data.mendeley.com/datasets/kvs6258sdp/draft?a=0f60c172-fbcc-4185-904f-b6866a939314
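The repeatability quantities named above follow standard definitions (Sw as the pooled within-subject SD, TRT as 2.77 × Sw, CoV as Sw over the grand mean, and a one-way random-effects ICC). A minimal sketch of these computations, assuming each row of `data` holds one subject's repeated measurements; this mirrors the textbook formulas, not Statistica's internal implementation:

```python
import math

def repeatability_stats(data: list[list[float]]) -> dict:
    n = len(data)      # number of subjects
    k = len(data[0])   # repeated measurements per subject
    grand_mean = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    # within-subject variance, pooled across subjects
    s2_within = sum(
        sum((x - m) ** 2 for x in row)
        for row, m in zip(data, subj_means)
    ) / (n * (k - 1))
    sw = math.sqrt(s2_within)
    trt = 2.77 * sw                  # repeatability limit: 1.96 * sqrt(2) * Sw
    cov = 100.0 * sw / grand_mean    # within-subject CoV, in percent
    # one-way random-effects ICC(1,1)
    msb = k * sum((m - grand_mean) ** 2 for m in subj_means) / (n - 1)
    icc = (msb - s2_within) / (msb + (k - 1) * s2_within)
    return {"Sw": sw, "TRT": trt, "CoV": cov, "ICC": icc}
```

With perfectly repeated measurements per subject, Sw is 0 and the ICC is 1; as within-subject scatter grows relative to between-subject spread, the ICC falls toward 0.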

Results

Interoperator repeatability and reproducibility

The operator's impact on the measurements was insignificant, with interoperator intraclass correlation coefficients for both anterior and posterior K1 and K2 ranging from 0.975 to 0.994 (Fig 1 and Table 1).

Fig 1. Mean interoperator repeatability between operator A and B and reproducibility of Revo NX.


Table 1. Intraoperator repeatability of Revo NX; each operator performed three measurements.

Parameter Operator Mean SD. SW TRT. CoV.[%] ICC.
Anterior K1 A 42.50 1.49 0.12 0.33 0.28 0.994
B 42.47 1.50 0.14 0.39 0.33 0.991
Anterior K2 A 43.19 1.54 0.16 0.46 0.38 0.989
B 43.17 1.56 0.14 0.40 0.33 0.992
Posterior K1 A -5.75 0.23 0.03 0.10 -0.60 0.978
B -5.76 0.23 0.03 0.09 -0.56 0.981
Posterior K2 A -6.00 0.26 0.04 0.11 -0.63 0.979
B -6.01 0.26 0.04 0.11 -0.69 0.975
Central corneal thickness A 531.05 32.54 1.47 4.07 0.28 0.998
B 530.73 32.68 1.50 4.16 0.28 0.998

SD = standard deviation, SW = within-subject standard deviation, TRT = test-retest repeatability, CoV = within-subject coefficient of variation, ICC = intraclass correlation coefficient.

The Revo NX showed a high level of reproducibility, with interoperator differences that were statistically insignificant, intraclass correlation coefficients ranging from 0.977 to 0.991, and standard deviations from 0.23 D for posterior K1 to 1.55 D for anterior K2. The difference between operators in standard deviation was 0.01 D for anterior K1 and 0.02 D for anterior K2, while the posterior K1 and K2 standard deviations showed no difference (Fig 1 and Table 2). CCT showed an even higher ICC of 0.998, with a mean CCT of 530.89 ± 32.55 μm.

Table 2. Revo NX reproducibility based on six measurements from both operators.

Parameter Mean SD. SW. TRT. CoV.[%] ICC.
Anterior K1 42.48 1.49 0.14 0.40 0.34 0.991
Anterior K2 43.18 1.55 0.16 0.46 0.38 0.989
Posterior K1 -5.75 0.23 0.04 0.10 -0.61 0.977
Posterior K2 -6.00 0.26 0.04 0.11 -0.65 0.978
Central corneal thickness 530.89 32.55 1.62 4.47 0.30 0.998

SD = standard deviation, SW = within-subject standard deviation, TRT = test-retest repeatability, CoV = within-subject coefficient of variation, ICC = intraclass correlation coefficient.

Comparison

Differences in mean anterior K1 between Galilei G6 (43.19 ± 1.39 D), Casia 2 (43.21 ± 1.37 D) and Revo NX (43.21 ± 1.55 D) were statistically insignificant (Fig 2). However, the analysis showed statistically significant differences in anterior K2 between Casia 2 (44.17 ± 1.38 D), Revo NX (43.98 ± 1.53 D) and Galilei G6 (44.15 ± 1.37 D); the differences were significant between Casia 2 and Revo (p < 0.001) and between Revo and G6 (p = 0.004) (Fig 2). Differences in anterior keratometry between Casia 2 and Galilei G6 were not significant (p = 0.21 and p = 0.46 for K1 and K2, respectively). The devices were not interchangeable for the measurement of posterior K1 and K2. Posterior K1 showed significant differences between Galilei G6 (-6.09 ± 0.28 D), Revo (-5.77 ± 0.25 D) and Casia (-5.98 ± 0.22 D), with p < 0.0001 for all comparisons (Table 3). The mean posterior K2 was -6.03 ± 0.27 D for Revo NX, -6.29 ± 0.24 D for Casia 2 and -6.53 ± 0.39 D for Galilei (p < 0.0001) (Fig 3). The highest mean apical CCT, 569.13 ± 37.58 μm, was noted by Galilei G6, followed by Casia (545.00 ± 36.15 μm), while Revo NX demonstrated the smallest CCT of 537.39 ± 35.92 μm (Table 4). All comparisons were significant (p < 0.0001) (Fig 4).
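The Bland-Altman agreement analysis behind these comparisons reduces to the mean paired difference (bias) and its 95% limits of agreement. A minimal sketch (the example lists are illustrative, not patient data from this study):

```python
import math

def bland_altman(a: list[float], b: list[float]):
    """Bias, SD of the paired differences, and 95% limits of agreement
    for two sets of paired measurements (e.g. two devices)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, sd, loa
```

Applied to the CCT difference reported here between Galilei G6 and Revo NX (mean 31.74 μm, SD 10.32 μm), the 95% limits of agreement would be roughly 31.74 ± 1.96 × 10.32, i.e. about 11.5 to 52.0 μm, a clinically wide span that supports the non-interchangeability conclusion.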

Fig 2. Bland-Altman plots showing the agreement between anterior K1 and K2 obtained by the Galilei G6, Casia 2 and Revo NX in 94 normal eyes. The mean difference is represented by the solid blue line whereas the dotted lines represent ±1.96 SD.

Table 3. Mean values measured by the Galilei G6, and mean differences (with standard deviations) of the Casia 2 and Revo NX relative to the Galilei G6.

Value Galilei G6 Casia 2 Revo NX
Mean SD. Difference vs G6 SD. of the difference Difference vs G6 SD. of the difference
CCT 569.13 37.02 23.93 9.58 31.74 10.32
K1 anterior 43.15 1.39 -0.04 0.28 -0.06 0.46
K2 anterior 44.15 1.38 -0.03 0.35 0.16 0.53
K1 posterior -6.10 0.28 -0.11 0.21 -0.33 0.24
K2 posterior -6.53 0.39 -0.23 0.29 -0.50 0.31

Differences were always calculated as (Galilei G6) − (Casia 2 or Revo NX). SD = standard deviation.

Fig 3. Bland-Altman plots showing the agreement between posterior K1 and K2 obtained by the Galilei G6, Casia 2 and Revo NX in 94 normal eyes. The mean difference is represented by the solid blue line whereas the dotted lines represent ±1.96 SD.

Table 4. The mean, SD, range, difference of the means, limits of agreement (LoA) with 95% CIs, and ICC of K1, K2, and CCT between the Revo NX, Casia 2 and Galilei G6.

Mean SD Range Difference of the means SD.for the diff. Lower endpoint of 95% CI Upper endpoint of 95% CI ICC
K1 Anterior Galilei 43.155 1.390 39.43–46.1
K1 Anterior Meridian 3mm REVO 43.210 1.557 39.5–46.7 -0.055 0.457 -0.149 0.038 0.952
K1 Anterior CASIA 43.191 1.374 39.5–46.55
K1 Anterior Meridian 3mm REVO 43.210 1.557 39.5–46.7 -0.019 0.440 -0.109 0.071 0.956
K2 Anterior CASIA 44.177 1.385 40.62–47.17
K2 Anterior Meridian 3mm REVO 43.987 1.531 40.2–47.2 0.190 0.552 0.077 0.303 0.921
K2 Anterior Galilei 44.150 1.376 40.3–47.46
K2 Anterior Meridian 3mm REVO 43.987 1.531 40.2–47.2 0.163 0.535 0.054 0.273 0.927
K1 Posterior CASIA -5.986 0.217 -0.93
K1 Posterior Meridian 3mm REVO -5.771 0.248 -1.1 -0.215 0.084 -0.233 -0.198 0.595
K2 Posterior CASIA -6.296 0.247 -1.16
K2 Posterior Meridian 3mm REVO -6.027 0.271 -1.5 -0.269 0.087 -0.287 -0.251 0.532
K1 Posterior Galilei -6.096 0.280 -1.55
K1 Posterior Meridian 3mm REVO -5.771 0.248 -1.1 -0.325 0.236 -0.374 -0.277 0.166
K2 Posterior Galilei -6.530 0.392 -2.12
K2 Posterior Meridian 3mm REVO -6.025 0.272 -1.5 -0.505 0.308 -0.568 -0.441 0.016
CCT Central Power Galilei 569.489 37.024 482–637
CCT Central Power CASIA 545.559 35.697 474–612 23.930 9.579 21.957 25.903 0.774
CCT Central Power CASIA 545.000 36.155 474–612
CCT Central Power REVO 537.389 35.928 467–605 7.611 3.717 6.833 8.390 0.973
CCT Central Power Galilei 569.128 37.577 482–637
CCT Central Power REVO 537.389 35.928 467–605 31.739 10.322 29.577 33.901 0.653

Fig 4. Bland-Altman plots showing the agreement between central corneal thickness obtained by the Galilei G6, Casia 2 and Revo NX in 94 normal eyes. The mean difference is represented by the solid blue line whereas the dotted lines represent ±1.96 SD.

Discussion

In clinical medicine, the quantities measured in vivo are constantly changing and their true values are unknown. When a new method or device is brought to the market, it is compared with a current well-established method, the so-called gold standard. The differences between the current and the new method must not be large enough to influence clinical decisions. Bland and Altman proposed an easily interpreted graphical plot to determine the usefulness of a new method [8,9].

In this study, we compared the repeatability and interoperator reproducibility of the new corneal topography module of the Revo NX SD-OCT with the Galilei G6 Scheimpflug camera and the Casia 2 SS-OCT in normal eyes. As concluded in many previous studies, measurements from two keratometric systems cannot be used interchangeably [3,10–12]. There are two types of devices capable of measuring anterior and posterior keratometry: OCT-based systems and Scheimpflug cameras. Some OCT systems use swept-source technology, featuring lower resolution but a faster acquisition rate [13]; others rely on spectral domain detection, producing a smaller acquisition window but higher image quality [14]. The biggest advantage of AS-OCT over a Scheimpflug-based system is that the numeric values are accompanied by high quality images that are superior in terms of resolution [15].

Crawford et al. compared the Galilei, another Scheimpflug camera, the Pentacam (Oculus, Wetzlar, Germany), and the Orbscan II (Bausch & Lomb, Rochester, USA). The authors showed a good level of repeatability for all devices, with the Galilei exhibiting the best repeatability [12]. Similarly, Meyer and colleagues compared the Orbscan II, Galilei and Pentacam in keratoconic eyes and observed that the Orbscan II had the least repeatable measurements; furthermore, there was no significant difference between the Pentacam and Galilei [11]. Another study, by Salouti et al., showed no agreement between corneal diameters measured by the Galilei, Orbscan and EyeSys (EyeSys Corneal Analysis System, Houston, TX, USA); the authors concluded that these differences come from the different measurement methods [16]. This is further complicated because manufacturers rarely disclose their measurement methods. Kannengießer evaluated IOL topographies using the Casia, Pentacam and TMS-2N (Tomey, Nagoya, Japan) and concluded that the Casia shows a high level of variation compared with the other machines [17].

The Casia showed higher dioptric values than the Pentacam for both the anterior K1 and the posterior surface [18]; the authors speculated that these differences are due to the various methods applied in the devices. As shown in Table 3, Casia 2 demonstrated higher keratometry values than the Scheimpflug device, while Revo NX showed higher values only in anterior K2. Repeatability values in a similar study were 0.61%, 0.82%, and 0.80% for SD-OCT, Pentacam, and ultrasound, respectively [19]. Furthermore, Savini et al. showed high agreement between videokeratographs and a Scheimpflug device; however, the span of agreement was considerable, around 1 D [20].

CCT was highest in the Scheimpflug device, which is consistent with previous studies. One study reported a mean difference of 13.6 μm between the Pentacam and Casia [18]; another found mean CCTs of 544 μm and 533 μm for the Pentacam and Casia, respectively [10]. In our study, the mean CCT was 537.39 ± 35.9 μm for the Revo NX, 545.56 ± 35.7 μm for the Casia and 569.37 ± 37.0 μm for the Galilei G6. Another work examined the comparison and repeatability of AS-OCT and a Scheimpflug-based system: the mean CCT was highest in the ultrasound device, followed by the Scheimpflug-based system and SD-OCT, while interoperator reproducibility was lowest in the ultrasound-based technology. The authors linked the high ultrasound thickness with tear film dislocation, partially caused by the anesthetic drops. Scheimpflug imaging systems tend to provide higher CCT values than SS-OCT [19], probably because the tear film is included in the CCT [10]. Different methods yield different results due to the various reference models used, such as the assumed speed of sound or refractive index.

Previous studies compared the agreement between older types of devices; in our study we compared the latest swept source OCT and a dual Scheimpflug system combined with Placido disc topography with a high-speed spectral domain OCT. We believe that the lack of agreement shown in our paper, compared with previous studies showing high agreement, is related to the better precision of modern devices. The Orbscan II, for instance, showed ICCs of 0.984 and 0.981 for the flat and steep axes respectively, while the Galilei (a newer device) had ICCs of 0.991 and 0.994 [12]; the Revo NX showed 0.991 and 0.989. It is important to note that our study group was more than three times larger.

Measuring the anterior corneal surface is easier than measuring the posterior [21]. To measure the latter, sophisticated mathematical algorithms have to be implemented, which is why there are significant differences between the recordings of the devices. Secondly, the very strong reflection at the air/cornea interface makes it difficult to correctly identify edges. Thirdly, posterior surface evaluation is hindered by the errors of the front surface. Moreover, the size of the posterior measurement zone differs between the three devices: Casia 2 measures over a 3.2 mm diameter, Galilei G6 over 4 mm, and Revo NX within 3 mm. The refractive indices used for the posterior surface can also vary between devices. Anterior keratometry can be reported as simulated keratometry, where values are calculated from an annular zone (semichord), or as true keratometry, where values are measured within the full circle; there is no posterior simulated keratometry [22].

Limitations of this study

Our sample did not include eyes with corneal conditions such as keratoconus or post-transplant eyes, where different results might be observed. Secondly, the volunteers were relatively young.

In conclusion, the Revo NX provides reliable and repeatable results, and the interoperator reproducibility of its measurements is high. The agreement between devices is low, owing to the different methods utilized; results should therefore not be compared between devices.

Supporting information

S1 File

(DOCX)

Acknowledgments

The authors gratefully acknowledge Prof. Achim Langenbucher from the Institute of Experimental Ophthalmology, Saarland University, Homburg/Saar, Germany, for his insightful comments about the design of our experiment. We also thank Optopol Technology Ltd. for providing the Revo NX equipment with the corneal topography module used in this study. Optopol Technology Ltd. played no further role in this study.

Data Availability

https://data.mendeley.com/datasets/kvs6258sdp/draft?a=0f60c172-fbcc-4185-904f-b6866a939314

Funding Statement

Optopol Technology Ltd. provided the Revo NX equipment with the corneal topography module used in this study. AW received a speaker's honorarium from Carl Zeiss and works as a consultant for Carl Zeiss Meditec. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Goebels S, Eppig T, Wagenpfeil S, Cayless A, Seitz B, Langenbucher A. Staging of Keratoconus Indices Regarding Tomography, Topography, and Biomechanical Measurements. Am J Ophthalmol. 2015;159: 733-738.e3. doi:10.1016/j.ajo.2015.01.014
  • 2. Wylegała E, Nowińska A. Usefulness of anterior segment optical coherence tomography in Descemet membrane detachment. Eur J Ophthalmol. 2009;19: 723–728. doi:10.1177/112067210901900506
  • 3. Goebels S, Pattmöller M, Eppig T, Cayless A, Seitz B, Langenbucher A. Comparison of 3 biometry devices in cataract patients. J Cataract Refract Surg. 2015;41: 2387–2393. doi:10.1016/j.jcrs.2015.05.028
  • 4. Wylegała E, Teper S, Nowińska AK, Milka M, Dobrowolski D. Anterior segment imaging: Fourier-domain optical coherence tomography versus time-domain optical coherence tomography. J Cataract Refract Surg. 2009;35: 1410–1414. doi:10.1016/j.jcrs.2009.03.034
  • 5. Ang M, Baskaran M, Werkmeister RM, Chua J, Schmidl D, Aranha dos Santos V, et al. Anterior segment optical coherence tomography. Prog Retin Eye Res. 2018;66: 132–156. doi:10.1016/j.preteyeres.2018.04.002
  • 6. Kiraly L, Stange J, Kunert KS, Sel S. Repeatability and Agreement of Central Corneal Thickness and Keratometry Measurements between Four Different Devices. J Ophthalmol. 2017;2017: 1–8. doi:10.1155/2017/6181405
  • 7. Mäurer S, Seitz B, Langenbucher A, Eppig T, Daas L. Imaging the Cornea, Anterior Chamber, and Lens in Corneal and Refractive Surgery. In: OCT—Applications in Ophthalmology. 2018. doi:10.5772/intechopen.78293
  • 8. Martin Bland J, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;327: 307–310. doi:10.1016/S0140-6736(86)90837-8
  • 9. Bland JM, Altman DG. Statistical Methods in Medical Research. Stat Methods Med Res. 1999;8: 161–179. doi:10.1177/096228029900800205
  • 10. Eppig T, Schröder S, Langenbucher A, Rubly K, Seitz B, Mäurer S. Comparison of Corneal Tomography: Repeatability, Precision, Misalignment, Mean Elevation, and Mean Pachymetry. Curr Eye Res. 2018;43: 1–8. doi:10.1080/02713683.2017.1377258
  • 11. Meyer JJ, Gokul A, Vellara HR, Prime Z, McGhee CNJ. Repeatability and Agreement of Orbscan II, Pentacam HR, and Galilei Tomography Systems in Corneas With Keratoconus. Am J Ophthalmol. 2017;175: 122–128. doi:10.1016/j.ajo.2016.12.003
  • 12. Crawford AZ, Patel DV, McGhee CNJ. Comparison and Repeatability of Keratometric and Corneal Power Measurements Obtained by Orbscan II, Pentacam, and Galilei Corneal Tomography Systems. Am J Ophthalmol. 2013;156: 53–60. doi:10.1016/j.ajo.2013.01.029
  • 13. Kanellopoulos AJ, Asimellis G. Comparison of high-resolution Scheimpflug and high-frequency ultrasound biomicroscopy to anterior-segment OCT corneal thickness measurements. Clin Ophthalmol. 2013;7: 2239–2247. doi:10.2147/OPTH.S53718
  • 14. Wylęgała A. Principles of OCTA and Applications in Clinical Neurology. Curr Neurol Neurosci Rep. 2018;18: 96. doi:10.1007/s11910-018-0911-x
  • 15. Karnowski K, Kaluzny BJ, Szkulmowski M, Gora M, Wojtkowski M. Corneal topography with high-speed swept source OCT in clinical examination. Biomed Opt Express. 2011;2: 2709–2720.
  • 16. Salouti R, Nowroozzadeh MH, Zamani M, Ghoreyshi M, Salouti R. Comparison of horizontal corneal diameter measurements using Galilei, EyeSys and Orbscan II systems. Clin Exp Optom. 2009;92: 429–433. doi:10.1111/j.1444-0938.2009.00407.x
  • 17. Kannengießer M, Zhu Z, Langenbucher A, Janunts E. Evaluation of free-form IOL topographies by clinically available topographers. Z Med Phys. 2012;22: 215–223. doi:10.1016/j.zemedi.2012.04.002
  • 18. Szalai E, Berta A, Hassan Z, Módis L. Reliability and repeatability of swept-source Fourier-domain optical coherence tomography and Scheimpflug imaging in keratoconus. J Cataract Refract Surg. 2012;38: 485–494. doi:10.1016/j.jcrs.2011.10.027
  • 19. Piotrowiak I, Soldanska B, Burduk M, Kaluzny BJ, Kaluzny J. Measuring Corneal Thickness with SOCT, the Scheimpflug System, and Ultrasound Pachymetry. ISRN Ophthalmol. 2012;2012: 1–5. doi:10.5402/2012/869319
  • 20. Barboni P, Savini G, Carbonelli M, Hoffer KJ. Agreement Between Pentacam and Videokeratography in Corneal Power Assessment. J Refract Surg. 2009;25: 534–538. doi:10.3928/1081597x-20090512-07
  • 21. Li H, Leung CKS, Wong L, Cheung CYL, Pang CP, Weinreb RN, et al. Comparative Study of Central Corneal Thickness Measurement with Slit-Lamp Optical Coherence Tomography and Visante Optical Coherence Tomography. Ophthalmology. 2008;115: 796-801.e2. doi:10.1016/j.ophtha.2007.07.006
  • 22. Shin MC, Chung SY, Hwang HS, Han KE. Comparison of two optical biometers. Optom Vis Sci. 2016;93: 259–265. doi:10.1097/OPX.0000000000000799

Decision Letter 0

Andrzej Grzybowski

8 Jan 2020

PONE-D-19-32488

Reproducibility, repeatability and agreement of corneal topography measured by Revo NX, Galilei G6 and Casia 2.

PLOS ONE

Dear MD Wylęgała,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Feb 22 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Andrzej Grzybowski

Academic Editor

PLOS ONE

Additional Editor Comments:

Thank you for submitting this interesting manuscript. Please revise the ms according to the reviewer's comments.

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure:

"Adam Wylęgała is The Kosciuszko Foundation Scholar."

a. Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

b. Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

3. Please provide details of the obtained participant consent in the ethics statement on the online submission form. Currently this information is only available in the methods section of your manuscript.

4. Please carefully proofread your manuscript for typographical errors. For example, in the Methods section “… had been informed of the experimental procedure.We included 94 eyes of 35 Males and 59 Females …” should be written as “… had been informed of the experimental procedure. We included 94 eyes of 35 males and 59 females …”

5. Please provide further details regarding how participants were recruited, including the participant recruitment date.

6. We note that you have a patent relating to material pertinent to this article.

a. Please provide an amended statement of Competing Interests to declare this patent (with details including name and number), along with any other relevant declarations relating to employment, consultancy, patents, products in development or modified products etc. Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” If there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared.

We note that the Revo NX equipment used in this study was provided by Optopol Technology Ltd. Please state this information in your competing interests statement and clarify whether Optopol Technology Ltd. played any further role in the study.

b. This information should be included in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In general, this study is very interesting and timely! I strictly recommend publication of this manuscript after a thorough revision! Here are some comments and recommendations for improvement of the manuscript:

Title:

Title should be reformulated: What do the authors mean with agreement? There should be a statement that measurements were performed in 'normal' eyes

Abstract:

What does 'correct measurements' mean?

Use corneal power of the anterior/posterior surface instead of anterior/posterior keratometry! Keratometry always means the anterior surface radius converted to diopters using an artificial keratometer index! Please consider this throughout the manuscript!

How were K values averaged? separately for flat and steep meridian?

What do measures such as 0.975 for repeatability stand for? Cronbach's alpha? Is this a value for the steep or the flat meridian?

What is the repeatability for thickness measurement?

Anterior corneal power should be around 49 diopters instead of 43 diopters; maybe the authors converted the front surface radius with a keratometry index instead of real air-cornea interface data?

Introduction:

Please mention the benefits AND DRAWBACKS of combined posterior/anterior segment OCTs and dedicated anterior segment OCTs! The major drawback of combined systems is the lack of collimated light at the cornea, which means that the measurement results depend on the measurement distance. Another major drawback of all combined systems is the small diameter of the measurement volume, here 8 mm for the Revo instead of 16 mm for the Casia.

Methods:

males and females instead of Males and Females

Did the authors check all volunteers for ectatic diseases such as KC?

6 measurements FROM each operator...

Why didn't the authors record corneal thickness at the thinnest point?

Instead of simply evaluating the data of the steep and flat meridians, data should be assessed using vector decomposition, e.g. the classical Humphrey notation. This is necessary due to the fact that, besides the flat and steep meridians, the orientation of axes between measurements and devices may vary!

5 µm axial resolution refers to air or medium?

Please clarify: Keratometry...calculated in the 3 mm zone: Where exactly? At a ring with diameter of 3 mm or including all data from the central 3 mm zone? This is important because classical keratometry does not consider the entire 3 mm zone but distinct measurement points at a diameter of around 3 mm depending on the device.

Conversion of radius of curvature to dioptric power using a keratometer index is not appropriate for anterior surface power. Instead, evaluate keratometric power OR anterior and posterior surface power. Keratometric power somehow mimics the behaviour of the entire cornea.

Why is Galilei using posterior mean K instead of evaluating both meridians separately?

What does 'Thus, unlike the (anterior) SimK, it includes the central 1 mm in diameter' mean?

standard deviation instead of Standard deviation...

Results:

Interoperator repeatability and reproducibility??? Surely you refer to reproducibility only when comparing the results of different operators...

For all results I recommend to consider vector components instead of flat/steep meridional data, due to the fact that otherwise different orientations of astigmatism are ignored. Why is corneal thickness missing in tables 1 and 2?

Discussion:

is launched to the market instead of brought to the market

tomographic systems instead of keratometric systems...

Scheimpflug camera instead of Sheimpflug camera.

Please state that it is simply a question of using a specific eye model or assuming an average speed of sound or refractive index if ultrasound, Scheimpflug or OCT (with different wavelengths) yield different results for central corneal thickness...

Scheimplfug should read Scheimpflug.

'Measuring anterior corneal surface is easier than measuring the posterior[21]. In order to measure the latter, sophisticated mathematic algorithms have to be implemented, which is why there is a significant difference between the recordings of the devices.' might be half of the truth!! At the front surface there is a very strong reflex due to the air-cornea interface, which makes exact and reliable detection of the edge difficult. On the other hand, back surface evaluation (not measurement!!) requires inverse raytracing, which means that all errors of front surface measurement affect back surface measurements!

Figures:

The resolution of the figures is weak. The authors should use other image formats with a higher resolution for the upload of the revised version!

In general, in times where topographers and tomographers are more and more used for diagnostics and surgery planning such evaluations of systems which are newly on the market are more than welcome and very important for the reader! My congratulations to this interesting manuscript!

Reviewer #2: The goal of this study is to compare the repeatability and inter-operator reproducibility of a new corneal topographer module of the Revo NX SD-OCT with the Galilei G6 Scheimpflug camera and the Casia 2 SS-OCT in normal eyes.

I suggest several things that could make the presentation clearer.

1. Put the description of the devices before the description of the measurement techniques. Just switch the order.

2. There are several places in the text and in the footnotes of the tables in which punctuation (a comma) is needed. Also, the references in the footnotes do not totally match what they are referring to. For example in Table 1, Standard deviation Sw = within-subject standard.

3. This section below is not clear. Clarify what the 6 measurements are. Also put commas and an “and” in to clarify the paragraph.

Every participant had 6 measurements (for each operator) starting with 3 Topo scan program on the Revo NX carried out by each operator to measure repeatability and reproducibility, followed by one corneal map measurement on Galilei G6, and? Corneal Map scan on Casia 2. For every device, anterior and posterior K1 and K2 values were recorded as well as apical CCT. Only measurements well centered and with high quality indexes were included in the study.

I can only count 5 measurements if you are referring to devices: 3 topo scan, one Galilei GT and one Casia 2.

I can only count 5 measurements if you are referring to variables: anterior and posterior K1 and K2 and CCT.

4. This section needs clarification. You don’t need to capitalize mean and standard deviation. Use a comma to separate items in a list. Note that “Within-subject standard deviation” is just hanging at the end of the paragraph. It is not a sentence.

Numerical results for repeatability and reproducibility contain six quantities computed for observers separately and respectively for the entire dataset: mean, standard deviation, within-subject standard deviation, test-retest repeatability, within-subject coefficient of variation, and intraclass correlation coefficient were calculated for repeatability and reproducibility of the Revo NX. Within-subject standard deviation.

5. The statistical methods needs more elaboration. Define how each of these measurements were computed:

• within-subject standard deviation (Is this the root mean square error from the model?)

• test-retest repeatability

• Cov% (this appears to be with-subject standard deviation divided by the mean rather than the standard deviation divided by the mean) Is this computed using the rmse from the model?)

• ICC—was this computed based on a random effects model? How many measurements were included? If this was done for each operator, was it the 3 measures each?

6. Interoperator and Reproducibility Section and Table 1 issues:

• It is not clear which devices are being compared in the table. Is it all three? Or just the new device? Based on the comments below the table, it appears to be an assessment of the Revo NX. Labels for the table and section would help the reader.

• It is labeled as “interoperator repeatability,” yet the table contains information for each operator (A and B). Therefore, this should be labeled as “intra-operator.”

• Column in table says observator. Observator is not an English word. Use “operator” to match the title of the table.

• I don’t understand how there is an ICC for each operator. If the two operators are being compared, there should be only one ICC.

• OR Are you saying that the 3 measurements for each operator are being compared separately? If that is the case, the title should be ‘Intra-operator.’

• How many measurements were used? This needs to be specified in the table.

• From the text below the table, it appears that the way the operators are being compared is by taking the difference between values in the table. To carry out a true INTER-operator assessment, the measurement from both operators need to be in the model. For example, the data would need to be laid out like this.

Operator Measurement K1 K2…

A 1

A 2

A 3

B 1

B 2

B 3

• Note that the computations from the model need to use the number of repetitions used. In the models that appear in Table 1, I assume that there were 3 measurements per operator. So, in the computation of ICC the number of measurements must be used.

o BET=(MSB-MSE)/3 ; *** NOTE: this denominator must match number of observers/measurements ;
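The ANOVA-based computation the reviewer sketches (between-subject variance "BET" = (MSB − MSE)/k, with k repetitions per subject) can be illustrated with a minimal Python sketch; the data and the name `icc_oneway` are hypothetical, not from the manuscript:

```python
def icc_oneway(data, k):
    """One-way random-effects ICC from ANOVA mean squares.

    Follows the reviewer's note: the between-subject variance estimate
    ("BET") is (MSB - MSE)/k, where k is the number of repeated
    measurements per subject. data[i] holds the k readings for subject i.
    """
    n = len(data)
    row_means = [sum(row) / k for row in data]
    grand = sum(row_means) / n
    # Between- and within-subject mean squares from one-way ANOVA
    MSB = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    MSE = sum(sum((y - m) ** 2 for y in row)
              for row, m in zip(data, row_means)) / (n * (k - 1))
    var_between = (MSB - MSE) / k   # the reviewer's "BET"
    return var_between / (var_between + MSE)
```

With three repetitions per operator, k = 3 in the denominator, exactly as the reviewer's note requires.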

7. Table 2 issues

• I am not sure what is being compared in this table. At least the table title specifies that it is for the Revo NX. Define the reproducibility. Is this the result of having both operators in the same model?

• The footnote also needs to be cleaned up.

8. Table 3

• What is the rationale for comparing the Galilei G6 to the other two devices?

• If Galilei G6 is the “gold standard,” it would be helpful to state that in the methods.

9. Comparison section of results

• This section is comparing pair-wise differences between devices for paired t-tests. What is a little confusing is whether just one measurement per device is being used. It appears that 3 measures per operator were used for the Revo NX and one measurement for each of the other devices (based on my understanding of the methods). So, which of the three Revo NX measures are being compared with the other devices, and for which operator? Or was this done separately for each operator and only results for one operator presented?

• This is just one example of how the lack of explanation and clarity shows up in this paper.

10. Plots

• Plots are hard to read. They seem blurry. A better rendering should be done for the manuscript.

• Note that all the plots say “P=0.000.” This should be corrected to match the text that describes the paired t-test or use P<0.001 for labels on plots if the values are actually that low.

• Bland-Altman plots are a visual indication of agreement. In addition to the paired t-tests, the limits of agreement should be mentioned. These are presented in the plots. One would state within which limits one would consider devices to be interchangeable, regardless of the statistical significance of the tests.
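For reference, the limits of agreement the reviewer asks for are conventionally the mean paired difference ± 1.96 times its standard deviation. A minimal sketch, using made-up paired CCT readings rather than study data:

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired
    readings a and b from two devices (illustrative helper)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)        # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

One would then judge interchangeability by whether this interval is clinically acceptable, regardless of the p-value of the paired t-test.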

11. Overall, this paper may offer new information that is useful. However, clearing up some of the questions above would help readers understand it better.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Sandra Stinnett

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 2;15(4):e0230589. doi: 10.1371/journal.pone.0230589.r002

Author response to Decision Letter 0


25 Feb 2020

Reviewer #1:

Comment: In general, this study is very interesting and timely! I strictly recommend publication of this manuscript after a thorough revision! Here are some comments and recommendations for improvement of the manuscript:

Title:

Comment:

Title should be reformulated: What do the authors mean with agreement? There should be a statement that measurements were performed in 'normal' eyes

Response: We changed the title in accordance with the reviewer's wish.

Reproducibility, and repeatability of corneal topography measured by Revo NX, Galilei G6 and Casia 2 in normal eyes.

Abstract:

Comment:

What does 'correct measurements' mean?

Response: It means that the quality index was high enough. We decided to omit this word.

Comment

Use corneal power of the anterior/posterior surface instead of anterior/posterior keratometry! Keratometry always means the anterior surface radius converted to diopters using an artificial keratometer index! Please consider this throughout the manuscript!

Response:

We changed keratometry to corneal power both in the abstract and in the text.

Comment:

How were K values averaged? separately for flat and steep meridian?

What do measures such as 0.975 for repeatability stand for? Cronbach's alpha? Is this a value for the steep or the flat meridian?

Responses:

A value such as 0.975 stands for the intraclass correlation coefficient; 0.975 is the lowest, for the posterior K2, while 0.994 is for the anterior K1.

The Revo NX showed intraclass correlation coefficients ranging from 0.975 for the posterior K2 surface to 0.994 for the anterior K1.

Comment:

What is the repeatability for thickness measurement?

Response:

We did calculate it (columns: mean, SD, Sw, TRT, CoV%, ICC):

Central corneal thickness   A   531.05   32.54   1.47   4.07   0.28   0.998

                            B   530.73   32.68   1.50   4.16   0.28   0.998

Comment:

Anterior corneal power should be around 49 diopters instead of 43 diopters; maybe the authors converted the front surface radius with a keratometry index instead of real air-cornea interface data?

Response:

Thank you for this comment. It is true that the anterior corneal power is around 49 D; however, the commercially available software used in our study automatically converts the front surface radius using a keratometry index. The reason the anterior surface result gives 43 D instead of 49 D is the refractive index used: the software in the compared devices uses 1.3375 instead of 1.376 (the refractive index of the cornea), because the devices have to be comparable to standard keratometers, which use the index 1.3375. This index is demanded by regulation EN ISO 19980:2012 (Ophthalmic instruments: Corneal topographers), points 3.11 and 3.12. The reference device provides all corneal power maps, but the aim of the study was to compare common parameters.
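The index effect described in this response can be illustrated by converting a typical anterior radius with each refractive index via the standard thin-surface formula K = (n − 1)/r; the radius value below is an assumed typical figure, not taken from the study:

```python
# Radius-to-power conversion used by keratometers/topographers: K = (n - 1) / r
r = 0.0078  # anterior corneal radius in metres (7.8 mm, a typical value)

K_keratometric = (1.3375 - 1) / r  # keratometer index, per EN ISO 19980  -> ~43.3 D
K_anterior = (1.376 - 1) / r       # true corneal refractive index       -> ~48.2 D
```

The same radius thus reads roughly 43 D with the keratometer index but roughly 48 D with the true air-cornea index, which is the gap the reviewer noticed.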

Introduction:

Comment:

Please mention the benefits AND DRAWBACKS of combined posterior/anterior segment OCTs and dedicated anterior segment OCTs! The major drawback of combined systems is the lack of collimated light at the cornea, which means that the measurement results depend on the measurement distance. Another major drawback of all combined systems is the small diameter of the measurement volume, here 8 mm for the Revo instead of 16 mm for the Casia.

Response:

Thank you for this comment. We decided to include this statement in our paper:

The major benefits of using a combined system are the lower price and higher resolution, while the drawback is the lack of collimated light at the cornea, which leads to the measurements being distance dependent. Furthermore, the field of view is half that of the dedicated anterior segment device.

Methods:

Comment

males and females instead of Males and Females

Response:

We changed the Males and Females to males and females.

Comment:

Did the authors check all volunteers for ectatic diseases such as KC?

Response: All volunteers were checked for ectatic disease during scanning.

Comment:

6 measurements FROM each operator...

Response:

Yes, this was corrected.

Comment:

Why didn't the authors record corneal thickness at the thinnest point?

Instead of simply evaluating the data of the steep and flat meridians, data should be assessed using vector decomposition, e.g. the classical Humphrey notation. This is necessary due to the fact that, besides the flat and steep meridians, the orientation of axes between measurements and devices may vary!

Response: This is indeed a very insightful comment. We based our study on other papers; for our future papers we are going to use your method.

Comment

5 µm axial resolution refers to air or medium?

Response:

It is 5 µm in tissue, so in medium.

Comment:

Please clarify: Keratometry...calculated in the 3 mm zone: Where exactly? At a ring with diameter of 3 mm or including all data from the central 3 mm zone? This is important because classical keratometry does not consider the entire 3 mm zone but distinct measurement points at a diameter of around 3 mm depending on the device.

Response:

At the crossing of two diameters of a ring; both diameters are 3 mm.

Comment:

Conversion of radius of curvature to dioptric power using a keratometer index is not appropriate for anterior surface power. Instead, evaluate keratometric power OR anterior and posterior surface power. Keratometric power somehow mimics the behaviour of the entire cornea.

Response:

According to the norm, the calculations of the Gullstrand model eye, and common practice in corneal topographers, the anterior surface is presented as power in dioptres instead of millimetres.

Comment:

Why is Galilei using posterior mean K instead of evaluating both meridians separately?

Response:

We tried to contact the Ziemer company but received no reply, so we cannot deliberate on why it is so.

Comment:

What does 'Thus, unlike the (anterior) SimK, it includes the central 1 mm in diameter' mean?

standard deviation instead of Standard deviation...

Response:

We decided to delete this sentence

Comment:

Results:

Interoperator repeatability and reproducibility??? Surely you refer to reproducibility only when comparing the results of different operators...

Response:

Interoperator was changed to intraoperator repeatability, as it refers to the differences within one operator, while reproducibility was interoperator, meaning we compared results from operators A and B.

Comment

For all results I recommend to consider vector components instead of flat/steep meridional data, due to the fact that otherwise different orientations of astigmatism are ignored. Why is corneal thickness missing in tables 1 and 2?

Discussion:

Response:

We have once more recalculated the data and included the CCT in Tables 1 and 2.

Comment:

is launched to the market instead of brought to the market

tomographic systems instead of keratometric systems...

Scheimpflug camera instead of Sheimpflug camera.

Response:

We have checked it, and the phrase “bring to market” is commonly used. We corrected the last two.

https://www.theguardian.com/business/2020/jan/13/polluting-vehicles-could-be-pulled-from-uk-sale-say-carmakers

“Carmakers are rushing to bring to market new electric cars with zero exhaust emissions – including Volkswagen’s ID.3, Vauxhall’s Corsa-e and an electric Fiat 500 – this year, but production will initially be limited as factories gear up. At the same time, they are keen to hang on to their profitable but polluting sales of internal combustion engines.”

Comment:

Please state that it is simply a question of using a specific eye model or assuming an average speed of sound or refractive index if ultrasound, Scheimpflug or OCT (with different wavelengths) yield different results for central corneal thickness...

Response:

We have included the following statement: Different methods yield different results due to the various reference models used, such as average speed of sound or refractive index.

Comment:

Scheimplfug should read Scheimpflug.

Response: We corrected it.

Comment:

'Measuring anterior corneal surface is easier than measuring the posterior[21]. In order to measure the latter, sophisticated mathematic algorithms have to be implemented, which is why there is a significant difference between the recordings of the devices.' might be half of the truth!! At the front surface there is a very strong reflex due to the air-cornea interface, which makes exact and reliable detection of the edge difficult. On the other hand, back surface evaluation (not measurement!!) requires inverse raytracing, which means that all errors of front surface measurement affect back surface measurements!

Response: We included this statement:

Secondly, the very strong reflex at the air/cornea interface makes it difficult to correctly identify edges. Thirdly, posterior surface evaluation is hindered by the errors of the front surface.

Figures:

The resolution of the figures is weak. The authors should use other image formats with a higher resolution for the upload of the revised version!

In general, in times where topographers and tomographers are more and more used for diagnostics and surgery planning such evaluations of systems which are newly on the market are more than welcome and very important for the reader! My congratulations to this interesting manuscript!

The goal of this study is to compare the repeatability and inter-operator reproducibility of a new corneal topographer module of the Revo NX SD-OCT with the Galilei G6 Scheimpflug camera and the Casia 2 SS-OCT in normal eyes.

I suggest several things that could make the presentation clearer.

Comment 1. Put the description of the devices before the description of the measurement techniques. Just switch the order.

Response 1: Thank you for this comment we have changed that.

Comment 2. There are several places in the text and in the footnotes of the tables in which punctuation (a comma) is needed. Also, the references in the footnotes do not totally match what they are referring to. For example in Table 1, Standard deviation Sw = within-subject standard.

Response 2 We have changed the punctuation both in the footnotes and in the text. We also changed the abbreviations and unified them across the text and in the footnotes.

3. This section below is not clear. Clarify what the 6 measurements are. Also put commas and an “and” in to clarify the paragraph.

Every participant had 6 measurements (for each operator) starting with 3 Topo scan program on the Revo NX carried out by each operator to measure repeatability and reproducibility, followed by one corneal map measurement on Galilei G6, and? Corneal Map scan on Casia 2. For every device, anterior and posterior K1 and K2 values were recorded as well as apical CCT. Only measurements well centered and with high quality indexes were included in the study.

I can only count 5 measurements if you are referring to devices: 3 topo scan, one Galilei GT and one Casia 2.

I can only count 5 measurements if you are referring to variables: anterior and posterior K1 and K2 and CCT.

Response 3: Every participant had 8 measurements: 6 on the Revo NX (3 for operator A and 3 for operator B), 1 on the Galilei, and 1 on the Casia.

Comment 4: This section needs clarification. You don’t need to capitalize mean and standard deviation. Use a comma to separate items in a list. Note that “Within-subject standard deviation” is just hanging at the end of the paragraph. It is not a sentence.

Numerical results for repeatability and reproducibility contain six quantities computed for observers separately and respectively for the entire dataset: mean, standard deviation, within-subject standard deviation, test-retest repeatability, within-subject coefficient of variation, and intraclass correlation coefficient were calculated for repeatability and reproducibility of the Revo NX. Within-subject standard deviation.

Response 4:

Thank you for this comment; we have corrected this sentence:

“Numerical results for repeatability and reproducibility contain six quantities, computed for observers separately and for the entire dataset respectively: mean, standard deviation (SD), within-subject standard deviation (Sw), test-retest repeatability (TRT), within-subject coefficient of variation (CoV), and intraclass correlation coefficient (ICC) were calculated for repeatability and reproducibility of the Revo NX.”

5. The statistical methods needs more elaboration. Define how each of these measurements were computed:

• within-subject standard deviation (Is this the root mean square error from the model?)

• test-retest repeatability

• Cov% (this appears to be with-subject standard deviation divided by the mean rather than the standard deviation divided by the mean) Is this computed using the rmse from the model?)

• ICC—was this computed based on a random effects model? How many measurements were included? If this was done for each operator, was it the 3 measures each?

Response 5:

We have decided to add description of the data as a supporting material.

Numerical results for repeatability and reproducibility, respectively, contain six quantities computed for observers separately and for the entire dataset:

• Mean

• Standard deviation

• Sw

• TRT

• CoV[%]

• ICC

Mean is the arithmetic mean of input values.

Standard deviation is the sample standard deviation, i.e. with N − 1 in the denominator, where N is the sample size.

Sw = within-subject standard deviation, is the root mean square of the sample standard deviations of values measured on a single object, i.e.

Sw = ((σ1² + ... + σM²)/M)^(1/2),

where M is the number of objects (eyes) and σk equals the sample standard deviation of values measured on the k-th object.

TRT = test-retest repeatability, is defined as 2.77·Sw.

CoV = within-subject coefficient of variation, is defined as Sw/Mean, or 100·Sw/Mean when reported as %.

ICC = intraclass correlation coefficient, is defined as the ratio of appropriate estimated variances. For this, it is assumed that there is a set of measured values yij for the i-th object in the j-th repetition; i = 1, 2, ..., N (N = the number of objects) and j = 1, 2, ..., Ni (different numbers of repetitions are permitted for different objects). The measured values are modelled by the equation below:

yij = μ + σA·ei + σB·eij,

where μ is the average value, the e's are independent realizations of a standard normal random variable, and σA² and σB² are, respectively, the interclass and intraclass variances. ICC is given by:

ICC = sA²/(sA² + sB²),

where sA2, sB2 are estimated values of the variances σA2, σB2 according to the equations:

ΣiΣj(yij - yi)2 = (M - N)sB2,

Σi(yi - y)2 = (N - 1)(sA2 + sB2/H),

where yi denotes the mean value for i-th object: yi = Σjyij/Ni, while y denotes the overall mean: y = Σiyi/N. In the above, M = ΣiNi equals the total number of measurements and H = N/Σi(1/Ni) is the harmonic mean of Ni's.
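The definitions above translate directly into code. The following is a minimal sketch, not the authors' actual analysis script; the function name `repeatability_stats` and the input layout (one array of replicate measurements per eye) are assumptions made for illustration:

```python
import numpy as np

def repeatability_stats(measurements):
    """Mean, SD, Sw, TRT, CoV[%] and ICC for a list of per-eye
    replicate arrays, following the moment equations above."""
    groups = [np.asarray(g, dtype=float) for g in measurements]
    values = np.concatenate(groups)
    mean = float(values.mean())
    sd = float(values.std(ddof=1))               # sample SD (N - 1 denominator)
    # Sw: root mean square of the per-object sample SDs
    sw = float(np.sqrt(np.mean([g.std(ddof=1) ** 2 for g in groups])))
    trt = 2.77 * sw                              # test-retest repeatability
    cov = 100.0 * sw / mean                      # within-subject CoV [%]
    # ICC from the two moment equations
    N = len(groups)                              # number of objects (eyes)
    M = values.size                              # total number of measurements
    H = N / sum(1.0 / len(g) for g in groups)    # harmonic mean of the N_i
    group_means = np.array([g.mean() for g in groups])
    s_b2 = sum(float(((g - g.mean()) ** 2).sum()) for g in groups) / (M - N)
    s_a2 = float(((group_means - group_means.mean()) ** 2).sum()) / (N - 1) - s_b2 / H
    icc = s_a2 / (s_a2 + s_b2)
    return {"Mean": mean, "SD": sd, "Sw": sw, "TRT": trt, "CoV%": cov, "ICC": icc}
```

With identical replicates per eye this yields Sw = 0 and ICC = 1, as expected from the formulas.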

Comment 6. Interoperator and Reproducibility section and Table 1 issues:

• It is not clear which devices are being compared in the table. Is it all three, or just the new device? Based on the comments below the table, it appears to be an assessment of the Revo NX. Labels for the table and section would help the reader.

• It is labeled as “interoperator repeatability,” yet the table contains information for each operator (A and B). Therefore, this should be labeled as “intra-operator.”

• A column in the table says “observator.” Observator is not an English word; use “operator” to match the title of the table.

• I don’t understand how there is an ICC for each operator. If the two operators are being compared, there should be only one ICC.

• OR Are you saying that the 3 measurements for each operator are being compared separately? If that is the case, the title should be ‘Intra-operator.’

• How many measurements were used? This needs to be specified in the table.

• From the text below the table, it appears that the operators are being compared by taking the difference between values in the table. To carry out a true INTER-operator assessment, the measurements from both operators need to be in the same model. For example, the data would need to be laid out like this:

Operator   Measurement   K1   K2 …
A          1
A          2
A          3
B          1
B          2
B          3

• Note that the computations from the model need to use the number of repetitions. In the models that appear in Table 1, I assume that there were 3 measurements per operator, so the number of measurements must be used in the computation of the ICC.

o BET = (MSB − MSE)/3 ; *** NOTE: this denominator must match the number of observers/measurements ;
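The reviewer's point can be illustrated with a one-way random-effects ICC computed from a subjects × measurements table, where the between-subject variance component is estimated as (MSB − MSE)/k. This is a minimal sketch under that assumption (the helper name `icc_oneway` is hypothetical, not code from the manuscript):

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC for an n-subjects x k-measurements
    table; between-subject variance component = (MSB - MSE)/k."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    # between-subject and residual (within-subject) mean squares
    msb = k * float(((row_means - grand) ** 2).sum()) / (n - 1)
    mse = float(((data - row_means[:, None]) ** 2).sum()) / (n * (k - 1))
    var_between = (msb - mse) / k    # the reviewer's BET = (MSB - MSE)/k
    return var_between / (var_between + mse)
```

The denominator k must equal the number of measurements per subject actually entered into the model, which is exactly the reviewer's note above.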

Response 6:

• This table is about the repeatability of the Revo NX, and we have changed the title accordingly.

• We changed the title from inter- to intra-operator.

• We changed Observator to Operator.

• We applied two ICCs because the values are calculated from the 3 measurements by each operator. Table 2 shows the ICC of the 6 measurements from the two operators.

• Yes, that is correct; we updated the title accordingly.

• Yes, this has been updated in the title.

• We agree; however, this information is provided in Table 2.

• We have included the number of repetitions in the title.

Comment 7.

Table 2 issues

• I am not sure what is being compared in this table. At least the table title specified that it is for Revo NX. Define the reproducibility. Is this the result of having both operators in the same model?

• The footnote also needs to be cleaned up.

Response 7.

• We changed the title to “Table 2. Revo NX reproducibility based on six measurements from both operators”.

• The footnote has been cleaned up.

Comment 8. Table 3

• What is the rationale for comparing the Galilei G6 to the other two devices?

• If Galilei G6 is the “gold standard,” it would be helpful to state that in the methods.

Response 8: The Galilei is based on Scheimpflug technology, which has long been used and is considered a gold standard in keratometric measurements. We added: “Scheimpflug technology is considered the gold standard in corneal measurements.”

Comment 9. Comparison section of results

• This section compares pair-wise differences between devices using paired t-tests. What is a little confusing is whether just one measurement per device is being used. It appears that 3 measures per operator were used for the Revo NX and one measurement for each of the other devices (based on my understanding of the methods). So, which of the three Revo NX measures is being compared with the other devices, and for which operator? Or was this done separately for each operator, with only the results for one operator presented?

• This is just one example of the lack of explanation and clarity that shows up in this paper.

Response 9:

• The mean of the 6 Revo NX measurements was used for the comparison between devices.

• We are sorry for this; we hope that the implemented changes are sufficient and make the manuscript easier to read.

Comment 10. Plots

• Plots are hard to read. They seem blurry. A better rendering should be done for the manuscript.

• Note that all the plots say “P=0.000.” This should be corrected to match the text that describes the paired t-test or use P<0.001 for labels on plots if the values are actually that low.

• Bland-Altman plots are a visual indication of agreement. In addition to the paired t-tests, the limits of agreement should be mentioned. These are presented in the plots. One would state within which limits one would consider devices to be interchangeable, regardless of the statistical significance of the tests.

Response 10:

• We have replaced the figures with higher-resolution versions.

• We have included a new Table 4 with the LoA values and ICC.
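The 95% limits of agreement the reviewer asks for are computed from the paired device differences as bias ± 1.96·SD. A minimal sketch (the function name `limits_of_agreement` and the example values are illustrative assumptions, not data from the study):

```python
import numpy as np

def limits_of_agreement(a, b):
    """95% Bland-Altman limits of agreement for paired measurements
    from two devices: bias +/- 1.96 * SD of the differences."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))     # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd
```

One then judges interchangeability by whether these limits fall within a clinically acceptable range, regardless of the statistical significance of the paired t-test.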

Comment 11. Overall, this paper may offer new information that is useful. However, clearing up some of the questions above would help readers understand it better.

Decision Letter 1

Andrzej Grzybowski

4 Mar 2020

Reproducibility, and repeatability of corneal topography measured by Revo NX, Galilei G6 and Casia 2 in normal eyes.

PONE-D-19-32488R1

Dear Dr. Wylęgała,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Andrzej Grzybowski

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for preparing a high quality revision and including all reviewers comments. Thank you also for submitting your interesting paper to our journal.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for this detailed revision of the manuscript! In its present version, this manuscript is ready for publication!

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Acceptance letter

Andrzej Grzybowski

9 Mar 2020

PONE-D-19-32488R1

Reproducibility, and repeatability of corneal topography measured by Revo NX, Galilei G6 and Casia 2 in normal eyes.

Dear Dr. Wylęgała:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Andrzej Grzybowski

Academic Editor

PLOS ONE


