. 2024 Jun 7;19(6):e0305036. doi: 10.1371/journal.pone.0305036

Comparison of Foraging Interactive D-prime and Angular Indication Measurement Stereo with different methods to assess stereopsis

Sonisha Neupane 1,*, Jan Skerswetat 1, Peter J Bex 1
Editor: Amithavikram R Hathibelagal2
PMCID: PMC11161055  PMID: 38848392

Abstract

Purpose

Stereopsis is a critical visual function; however, clinical stereotests are time-consuming, coarse in resolution, and suffer from memorization artifacts, poor repeatability, and low agreement with other tests. Foraging Interactive D-prime (FInD) Stereo and Angular Indication Measurement (AIM) Stereo were designed to address these problems. Here, their performance was compared with 2-Alternative-Forced-Choice (2-AFC) paradigms (FInD Stereo only) and clinical tests (Titmus and Randot) in 40 normally-sighted and 5 binocularly impaired participants (FInD Stereo only).

Methods

During FInD tasks, participants indicated which cells in three 4*4 charts of bandpass-filtered targets (1,2,4,8c/° conditions) contained depth, compared with 2-AFC and clinical tests. During the AIM task, participants reported the orientation of depth-defined bars in three 4*4 charts. Stereoscopic disparity was adaptively changed after each chart. Inter-test agreement, repeatability and duration were compared.

Results

Test duration was significantly longer for 2-AFC (mean = 317 s; 79 s per condition) than for FInD (216 s; 18 s per chart), AIM (179 s; 60 s per chart), Titmus (66 s) or Randot (97 s). Estimates of stereoacuity differed across tests and were higher by a factor of 1.1 for AIM and 1.3 for FInD. No effect of stimulus spatial frequency was found. Agreement among tests was generally low (R² = 0.001 to 0.24) and was highest between FInD and 2-AFC (R² = 0.24; p < 0.01). Stereoacuity deficits were detected by all tests in binocularly impaired participants.

Conclusions

Agreement among all tests was low. FInD and AIM inter-test agreement was comparable with other methods. FInD Stereo detected stereo deficits and may only require one condition to identify these deficits. AIM and FInD are response-adaptive, self-administrable methods that can estimate stereoacuity reliably within one minute.

Introduction

Stereopsis is a critical function of the human visual system and is a cornerstone of perception across many species. Impairment of stereopsis often indicates the presence of a visual disorder during development or neurodegenerative disease [1, 2]. Assessing and monitoring stereopsis is therefore critical in detecting and managing a range of disorders. However, booklet-based clinical methods for measuring stereoacuity have several problems, including: low sensitivity to changes of stereoacuity; coarse resolution [3]; poor agreement across tests [4, 5]; poor test-retest repeatability, especially when stereoacuity is low [6]; memorization artefacts; monocular cues [7, 8]; and assumptions based on testing distance and interpupillary distance, with little flexibility in testing distance.

Poor agreement across tests [4, 5, 9] could arise from differences in the task, stimulus properties and display technology. Stereoacuity thresholds can also depend on the direction of the disparity (crossed/near or uncrossed/far depth), but most tests only measure one direction (typically crossed disparity). Different tests use different stimuli, but stereoacuity depends on spatial structure [10, 11], eccentricity [12] and may be differentially affected by degraded image quality (e.g., from refractive error, cataract, or amblyopia) [13].

Many adaptive computer-based methods, such as Alternative Forced Choice (AFC) tasks, address the above problems of inaccuracy and imprecision. Although adaptive procedures can generate more sensitive measures of threshold performance, these tests are time-consuming [14, 15] and the repeated administration of below threshold stimuli can be frustrating for naïve participants.

To address these problems, we adapted two novel computer-based methodologies for the assessment of stereoacuity: Foraging Interactive D-prime (FInD) [16] and Angular Indication Measurement (AIM) [17]. Both methods are 1) computer-based, thus deployable on a range of devices; 2) randomized, to avoid memorization artifacts across repeated tests [18]; 3) stimulus-agnostic, thus able to display local (random dot-defined) or global (ring) targets, with broad-band or narrow-band stimuli, and with crossed, uncrossed, or mixed disparity; 4) self-administered, removing the need for a clinical examiner and enabling home testing; 5) response-adaptive, allowing accurate measurement of stereoacuity in people with low or high stereopsis and precise measurement of small differences in stereoacuity; 6) simple, with user-friendly tasks and interfaces that can be completed by participants with a range of ages or cognitive functional levels [19]; 7) generalizable, using the same fundamental task for the assessment of multiple visual functions (e.g., acuity, color, contrast, motion and form sensitivity, among others), minimizing the number of tasks the participant is required to learn.

The two studies reported here were performed as proof-of-concept studies for 2 novel methods to measure stereoacuity: In Study One, we introduce FInD Stereo and its features. We compare estimates of stereoacuity, test duration, inter-test reliability, and repeatability of FInD with standard 2-AFC methods, and clinically used tests (Randot and Titmus) in stereo-typical and atypical participants. We also use the FInD method to compare different spatial properties of stereo-inducing stimuli and examine their effect on stereoacuity. In Study Two, we introduce AIM Stereo and its features, then we compare AIM Stereo against the above-mentioned clinical tests using the same outcome measures.

Study 1 FInD Stereo—Methods

The study was approved by the Institutional Review Board at Northeastern University (14-09-16) and followed the guidelines of the Declaration of Helsinki. The recruitment period was from November 29, 2021 to October 30, 2022. Informed written consent was obtained from all participants prior to the start of the experiment. The participants were the staff of the research laboratory and undergraduate students who completed the study for course credit.

Participants

20 normally sighted and 5 binocularly impaired (3 with self-reported amblyopia, 1 with strabismus and 1 with strabismus and amblyopia) adults participated in Study 1. One participant was excluded due to poor vision. Participants’ details for Study 1 are provided in Table 1.

Table 1. Demographic and optometric summary of the cohorts for Study 1.

 | Binocularly normal | Binocularly impaired
N | 19 | 5
Age range [years] | 19–35 | 18–55
Visual Acuity (OU) | >20/25 | 20/10–20/115
Spherical Equivalent [Median (Range)] | −0.13 D (−2.00 D to +1.00 D) | +0.50 D (−4.50 D to +3.00 D)
Residual astigmatism [Median (Range)] | 0.00 D (0.00 D to +1.00 D) | −0.25 D (0.00 D to +4.50 D)

Binocularly impaired participants’ details:
OD VA | OS VA | OU VA | Clinical detail
20/13 | 20/33 | 20/15 | Left esotropia
20/13 | 20/13 | 20/10 | Alternating strabismus
20/115 | 20/115 | | Bilateral amblyopia (high astigmatism)
20/14 | 20/48 | 20/21 | Anisometropic amblyopia
20/12 | 20/100 | 20/10 | Anisometropic amblyopia

Top) Summary of demographics for Study 1 for both groups. Bottom) Details of all binocularly impaired participants. Residual spherical equivalent and astigmatism as determined via autorefraction.

Stimuli and procedure

Stimuli were generated using Mathworks MATLAB software (Version 2021b) with the Psychtoolbox [2022] and were presented on a gamma-corrected 32” 4K LG monitor (maximum luminance 250 cd/m², screen resolution 3840 × 2160, 60 Hz refresh) at 80 cm viewing distance. A chinrest was used to maintain the viewing distance. Red-blue anaglyph glasses were used to present the stimuli to each eye dichoptically. Horizontal disparity was used to induce stereo-depth.

In Study 1, participants performed six different tests: two FInD stereoacuity tasks, two 2-AFC stereoacuity tasks, and two clinical tests (Randot and Titmus) in randomized order. The time taken for participants to complete the self-administered FInD and 2-AFC tests was recorded by the test computer, the time taken to complete the examiner-administered clinical tests was recorded with a stopwatch.

FInD Stereo

FInD [16] is a self-administered paradigm in which stimuli are displayed over one or more charts, each containing a grid of N cells (here N = 16). A random subset of cells (a uniform random deviate between 0.66N and N−1) contains a signal stimulus, varying from easy- to hard-to-detect intensity levels; the remaining cells contain null stimuli, all in random positions (Fig 1). The participant’s task is to select the cells that contain a signal. Three charts per condition, i.e. one initial and two adaptively changed charts, were deployed, each comprising 4*4 cells; each cell subtended 4°*4° with a 0.01° (1 pixel) black (≈0 cd/m²) border that also served as a fusion lock. Each cell contained either a signal (stereoscopic disparity ≠ 0) or a null (stereoscopic disparity = 0) stimulus. The stimuli were either rings (2.5° radius, 1.5 arcmin line width; Fig 1A) or depth-defined Gaussian-shaped (σ = 1°) dips within a noise carrier (Gaussian luminance distribution, element size 1 pixel; Fig 1B). The ring and dip stimuli investigate different aspects of stereopsis: the ring stimuli consist of sparse contour features, referred to as ‘local’ stereopsis, whereas the dip stimuli are defined by dense noise elements, referred to as ‘global’ stereopsis. These stimulus types have been used in different populations, with some evidence for separate processing mechanisms [23]. Using standard chart-based tests, stereoacuity estimates are similar for local and global stereopsis tests [24]. The rings and the noise carrier of the dip stimuli were band-pass filtered with an isotropic raised log cosine filter:

H(\omega) = \begin{cases} 0, & \log_2(\omega) < \log_2(\omega_{peak}) - 1 \\ 0.5\left(1 + \cos\left(\pi\left(\log_2(\omega) - \log_2(\omega_{peak})\right)\right)\right), & \left|\log_2(\omega) - \log_2(\omega_{peak})\right| \le 1 \\ 0, & \log_2(\omega) > \log_2(\omega_{peak}) + 1 \end{cases} \quad [1]

where ω is spatial frequency and the peak spatial frequency ω_peak was either 1, 2, 4, or 8 cycles/°. The Michelson contrast of the stimuli was scaled to 100%, with mean luminance 125 cd/m².
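The stimuli were generated in MATLAB with Psychtoolbox; as an illustrative sketch (not the authors' code), the raised log-cosine filter gain of Eq 1 can be written in Python as follows, with frequency in cycles/°:

```python
import numpy as np

def raised_log_cosine(freq, peak):
    """Gain of the isotropic raised log-cosine filter (Eq 1).

    Gain is 1 at the peak spatial frequency, falls as a raised cosine
    in log2-frequency, and is 0 more than one octave from the peak.
    """
    freq = np.asarray(freq, dtype=float)
    octaves = np.abs(np.log2(freq) - np.log2(peak))  # distance from peak in octaves
    gain = 0.5 * (1.0 + np.cos(np.pi * octaves))
    gain[octaves > 1.0] = 0.0                        # zero outside +/- 1 octave
    return gain

# Gain is 1 at the peak (4 c/deg here), ~0.5 half an octave away,
# and 0 at one octave (8 c/deg) and beyond.
print(raised_log_cosine([4.0, 4.0 * 2**0.5, 8.0], peak=4.0))
```

In practice this gain would multiply the amplitude spectrum of the ring or noise-carrier image in the Fourier domain.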

Fig 1. FInD Stereo paradigm.


FInD Depth charts for A) Ring and B) Dip stimuli. Participants clicked the cells where they perceived targets in depth (i.e., in front (Ring) or behind (Dip) relative to the background). The range of disparities presented on each chart spanned easy (d’ = 4.5) to difficult (d’ = 0.1) and was adaptively calculated based on the participant’s responses to previous charts. Depth profiles for the ring and dip stimuli are shown at the bottom of the figure for visualization. C) The responses of the participant (blue circles, error bars indicate 95% binomial standard deviation) were used to calculate d’ as a function of stereoscopic disparity, and a decision function was used to estimate the probability of a Yes response (red curve). Green dashed lines indicate 95% confidence intervals at each stereoscopic disparity.

The magnitude of signal stereoscopic disparity on each chart was log-scaled from easy (d’ = 4.5) to difficult (d’ = 0.1), adaptively for each participant. On the first chart, the disparity range was scaled to span 0.005° (0.3 arcmin) to 0.5° (30 arcmin), which also served as the minimum and maximum possible disparities, in evenly spaced log steps to cover the broad typical stereoacuity range for binocularly healthy adults [25]. On subsequent charts, the disparity range was based on the fit of Eq 2 to the data from all previous charts. For ring stimuli, stereoscopic disparity was created by horizontally displacing the ring in each eye by half the required disparity in opposite directions. For the Gaussian dip stimuli, stereoscopic disparity was created by generating spatial offsets in the noise carrier in opposite directions in each eye using spatial image warping as in [26] with a Gaussian profile (σ = 1°). Ring stimuli had crossed disparity and dip stimuli had uncrossed disparity.
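For concreteness, the geometry above can be sketched in Python (hypothetical helper names; the pixel pitch of ~44.3 arcsec/pixel follows from the display figures given in the Discussion):

```python
import numpy as np

# Display geometry assumed from the text: 3840-pixel-wide display
# subtending ~47.26 deg at 80 cm.
PX_PER_DEG = 3840 / 47.26

def disparity_to_eye_offsets(disparity_arcsec):
    """Per-eye horizontal shift in pixels: each eye's image is displaced
    by half the required disparity, in opposite directions."""
    half_deg = 0.5 * disparity_arcsec / 3600.0
    shift = half_deg * PX_PER_DEG
    return +shift, -shift                      # e.g. left eye, right eye

def first_chart_disparities(n_signal):
    """Evenly log-spaced disparities spanning 0.3-30 arcmin
    (18-1800 arcsec), the seeding range used for the first chart."""
    return np.logspace(np.log10(18.0), np.log10(1800.0), n_signal)
```

For example, an 88.6 arcsec disparity corresponds to roughly a one-pixel shift per eye on this display, which is why the Discussion flags display resolution as a limit on the smallest measurable disparities.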

Participants had unlimited time to click on cells that contained a target with depth, and to avoid cells whose target contained no depth. Fig 1 shows the experimental procedure. Once the participant had clicked a cell, a black circle appeared outside the target to indicate that the cell had been selected; participants could click an unlimited number of times to select or deselect (black circle disappeared) a response. Once they were satisfied with their selections, participants clicked on an icon to proceed to the next chart. The response in each cell was then classified as a Hit, Miss, False Alarm, or Correct Rejection to calculate d’ as a function of stereoscopic disparity, and the probability of a Yes response as a function of signal intensity was calculated as:

p(\text{Yes}) = 1 - \Phi\left(\Phi^{-1}(1 - F) - \frac{d'_{max} \times (S/\theta)^{\gamma}}{\sqrt{(d'_{max})^{2} - 1 + (S/\theta)^{2\gamma}}}\right) \quad [2]

where p(Yes) is the probability of a Yes response, Φ is the normal cumulative distribution function, F is the false alarm rate, S is stimulus intensity, θ is threshold, d’max is the saturating value of d’ (fixed at 5), and γ is the slope. The fit to data from all completed charts was used to select the individualized range of stereoscopic disparities (from d’ = 0.1 to 4.5) for stimuli on subsequent charts.
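A sketch of this signal-detection computation is given below (illustrative Python, not the study's MATLAB implementation; `d_prime_from_rates` is a hypothetical helper for the Hit/False-Alarm classification described above). Note that the transducer is constructed so that d′ = 1 exactly at S = θ:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Phi_inv(p):
    """Inverse normal CDF by bisection (adequate for illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def d_prime_from_rates(hit_rate, fa_rate):
    """d' from Hit and False-Alarm rates pooled across cells."""
    return Phi_inv(hit_rate) - Phi_inv(fa_rate)

def d_prime_model(S, theta, gamma, d_max=5.0):
    """Saturating d' transducer inside Eq 2; equals 1 at S = theta."""
    r = (S / theta) ** gamma
    return d_max * r / sqrt(d_max**2 - 1.0 + r**2)

def p_yes(S, theta, gamma, F, d_max=5.0):
    """Eq 2: probability of a Yes response at stimulus intensity S.
    At S -> 0, d' -> 0 and p(Yes) -> F, the false-alarm rate."""
    return 1.0 - Phi(Phi_inv(1.0 - F) - d_prime_model(S, theta, gamma, d_max))
```

Fitting θ and γ to the observed Yes/No responses (e.g. by maximum likelihood) then yields the threshold estimate used to seed the next chart.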

2-AFC Stereo

To compare the stereoacuity estimates of FInD with the gold-standard psychophysical paradigm [27], the same participants completed two-alternative forced choice (2-AFC) tasks. The stimuli were the same as in FInD, with signal and null stimuli presented either side-by-side for rings or sequentially for dips. The ring stimuli had crossed disparity and the dip stimuli had uncrossed disparity. Fig 2 illustrates the experimental procedure, and Fig 2A and 2B show the ring and dip stimuli. For the spatial 2-AFC ring procedure, the two stimuli were presented side-by-side for 1.25 sec; for the temporal 2-AFC procedure, the two dip stimuli were presented sequentially for 0.50 sec each, separated by a blank field for 0.50 sec. Participants indicated which of the two stimuli contained a depth target and had unlimited time to respond. The different spatial frequency stimuli were randomly interleaved within separate runs for ring and dip stimuli. A 3-down-1-up algorithm [28] adjusted the stereoscopic disparity on each trial, with a total of 40 trials for each of the 4 spatial frequency conditions, resulting in 160 trials for ring and 160 trials for dip stimuli per participant. The raw data were fitted with a cumulative normal function from which threshold was calculated as the 75% correct point.
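The 3-down-1-up rule can be sketched as follows (illustrative Python with a simulated observer; the step factor is an assumption, not taken from the study). The rule itself converges near 79% correct, while the reported thresholds come from the 75% point of a cumulative-normal fit to all trials:

```python
def three_down_one_up(observer, start, n_trials=40, step_factor=1.5, floor=1.0):
    """Run a 3-down-1-up staircase on disparity (arcsec).

    observer: function disparity -> True/False (correct/incorrect).
    Three consecutive correct responses decrease disparity (harder);
    any incorrect response increases it (easier).
    """
    level, streak, history = start, 0, []
    for _ in range(n_trials):
        history.append(level)
        if observer(level):
            streak += 1
            if streak == 3:
                level = max(floor, level / step_factor)
                streak = 0
        else:
            level = level * step_factor
            streak = 0
    return history

# With an always-correct simulated observer the track descends steadily,
# one step for every three trials.
track = three_down_one_up(lambda d: True, start=1000.0)
```

With a real observer, the track oscillates around the disparity yielding ~79% correct responses, and the trial-by-trial data feed the psychometric fit.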

Fig 2. AFC paradigm.


A) Spatial 2-AFC Ring task: target and null stimuli were presented side-by-side for 1.25 sec. Participants clicked on the left or right side of the screen to indicate whether the left or right stimulus was presented in depth. B) Temporal 2-AFC Dip task: target and null stimuli were presented sequentially for 0.50 sec each, separated by a blank screen for 0.50 sec. Participants clicked the left or right mouse button to indicate whether the first or second stimulus contained depth. C) The proportion of correct trials as a function of stereoscopic disparity (blue circles, error bars indicate 95% binomial standard deviation) for each spatial frequency was fit with a cumulative Gaussian function (red line); green dashed lines indicate 95% confidence limits at each disparity.

Clinical tests -Titmus and Randot

The Titmus stereo test (Stereo Optical Company, Inc., USA) has 3 sections to measure stereoacuity at 40 cm: fly, Wirt circles (0.7° ⌀), and animals. The fly section measures gross stereoacuity at 59 arcmin, with pass or fail scoring. The circles and animals sections measure the stereo-threshold between 800–40 arcsec and 400–100 arcsec, respectively. The Randot stereo test (Stereo Optical Company, Inc., USA) also has 3 sections at 40 cm: circles, forms, and animals, which measure the stereo-threshold between 400–20 arcsec, at 500 arcsec, and between 400–100 arcsec, respectively. The stereo-threshold was taken as the highest stereoacuity the participant could achieve on any section, which was found with the circles stimuli for most participants.

Statistical analysis

Experiment duration and threshold estimates were analyzed with Matlab’s anovan and multcompare functions for ANOVA and planned comparisons between tests. Duration data were skewed for FInD and Titmus (Study 1) and AIM (Study 2), and log-transformation was applied to convert durations to normally distributed data. Threshold estimates were log-transformed to convert the stereo-values to log-stereoacuity. The threshold data for ring stimuli (FInD and 2-AFC) and the clinical tests (both Studies 1 and 2) remained skewed, and further transformation did not normalize them; hence, Wilcoxon signed rank and Kruskal-Wallis tests were performed for these threshold data. The corrplot function was used to compare correlations across tests and stimulus conditions using Kendall’s rank correlation coefficients. We used customized Bland-Altman plots to analyze the repeatability of all tests.
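This analysis route can be sketched in Python with SciPy, standing in for the MATLAB functions named above (the function `compare_tests` and its branching logic are illustrative, not the study's code):

```python
import numpy as np
from scipy import stats

def compare_tests(thresh_a, thresh_b):
    """Log-transform paired stereo-thresholds (arcsec), pick a paired
    parametric or non-parametric test depending on normality of the
    differences, and report rank agreement between the two tests."""
    log_a = np.log10(np.asarray(thresh_a, float))
    log_b = np.log10(np.asarray(thresh_b, float))
    normal = stats.shapiro(log_a - log_b).pvalue > 0.05  # normality check
    if normal:
        _, p = stats.ttest_rel(log_a, log_b)             # paired t-test
    else:
        _, p = stats.wilcoxon(log_a, log_b)              # paired non-parametric
    tau, _ = stats.kendalltau(log_a, log_b)              # inter-test agreement
    return p, tau
```

For group comparisons across more than two independent samples, `scipy.stats.kruskal` plays the role of the Kruskal-Wallis tests reported in the Results.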

Results—Study 1

Test duration

We measured the test duration of the control and binocularly impaired participants with the FInD, 2-AFC, and the clinical methods. The test durations from two runs and 4 spatial frequencies were averaged for each participant (Fig 3). There was not a significant difference in test duration between the control participants and the binocularly impaired participants (p = 0.48). Overall, both the control and binocularly impaired participants took the least time with the clinical tests (mean 88.9 sec) followed by FInD tests (215.9 sec) and 2-AFC methods (316.9 sec) (F(1,5) = 25.34, p = 0.002). There was not a significant test duration difference between dip and ring stimuli (p>0.05) or between Titmus and Randot Stereotest (p>0.05).

Fig 3. Boxplots of test duration for stereoacuity assessment compared among tests.


Total Test Duration for FInD (3 charts for each of 4 spatial frequencies) for ring and dip stimuli, 2-AFC control experiments (40 trials for 4 interleaved spatial frequencies) for Ring and Dip stimuli, and the clinical tests Titmus and Randot. Test durations are shown in seconds for control (left panel, blue) and binocularly impaired (right panel, magenta) participants. Data points show the results for individual participants expressed by a horizontally jittered kernel density, boxes indicate the 25–75% interquartile range, whiskers represent 1st and 99th percentiles.

Stereo threshold

Threshold stereoacuities at each test spatial frequency are shown in Fig 4 as log-stereoacuity for control (blue data points) and binocularly impaired (magenta data points) participants measured with the FInD Ring, FInD Dip, 2-AFC Ring, and 2-AFC Dip tasks. Stereoacuities for the Randot and Titmus tests are shown in Fig 4C and 4F. Each participant completed two assessments for each test, and the mean of the thresholds was used in the analyses. Values greater than 3000 arcsec (3.48 log arcsec) were removed from the data analysis, resulting in the removal of 24 thresholds from 3 control participants (total: 608 thresholds) and 24 thresholds from 2 binocularly impaired participants (total: 152). Stereoacuities for all control participants were 60 arcsec or less with the clinical tests, and stereo-thresholds measured with the FInD and 2-AFC tests were 1.3 and 1.6 times higher, respectively, than the clinical tests. The binocularly impaired participants had a wide range of stereoacuity in the clinical tests, and this was also observed with the other tests (compare control [blue] and binocularly impaired [magenta] data in Fig 4). Data transformation failed to convert the skewed data to normally distributed data. For computer-based tests, there was a significant difference in stereo-thresholds between groups (H(1,364) = 54.38, p<0.0001; Kruskal-Wallis) and across test types (H(3,362) = 56.76, p<0.0001; Kruskal-Wallis); however, there was not a significant effect of spatial frequency (H(3,362) = 1.98, p = 0.58; Kruskal-Wallis).

Fig 4. Boxplots of log stereo-thresholds.


Stereoacuity thresholds in log arcsec are shown for A) FInD Ring B) FInD Dip D)2-AFC Ring E) 2-AFC Dip for each test peak spatial frequency 1, 2, 4, 8 c/° and for C) Randot F) Titmus clinical tests. Individual thresholds from control participants are shown as blue data points in the boxes, which indicate the 25–75% interquartile range, whiskers represent 1st and 99th percentiles. Data from participants with impaired binocularity are plotted in magenta and are not included in the boxplot calculations.

Fig 5 shows the Kendall’s rank coefficients of determination (R²) between the Randot, Titmus, FInD, and 2-AFC stereo-tests for binocularly normal participants. 5 of 15 correlations were significantly different from zero (p<0.05) and are identified in red. The Randot and Titmus thresholds correlated with each other but did not significantly correlate with any computerized task. FInD Dip and Ring correlated with each other, and FInD Ring also correlated with both AFC-generated thresholds. The two 2-AFC results also correlated with each other.

Fig 5. Correlations between tests.


Kendall’s rank correlations (R²), expressed as linear functions (red lines), between log-stereoacuities generated by the Randot, Titmus, FInD Ring, FInD Dip, 2-AFC Ring, and 2-AFC Dip tests for binocularly normal participants are shown in the bottom-left triangle; p-values are indicated numerically in the top-right triangle (red numbers indicate significant difference from the null hypothesis). Histograms of the data distribution for each test are shown on the diagonal. FInD and 2-AFC data are averaged across spatial frequency conditions.

Repeatability results

Fig 6A and 6B show Bland-Altman analyses of test-retest repeatability for the control participants and the binocularly impaired participants, respectively. The results show that all test paradigms have comparable test-retest repeatability with little bias.
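The quantities plotted in a Bland-Altman analysis can be computed as follows (illustrative sketch; inputs are log-arcsec thresholds from the two runs of a test):

```python
import numpy as np

def bland_altman(run1, run2):
    """Mean bias and 95% limits of agreement between two runs of a test.

    Bias is the mean run2 - run1 difference; the limits of agreement
    are bias +/- 1.96 standard deviations of the differences."""
    diffs = np.asarray(run2, float) - np.asarray(run1, float)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```

The plot itself places the per-participant mean of the two runs on the x-axis and the difference on the y-axis, with horizontal lines at the bias and the two limits of agreement.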

Fig 6. Repeatability between tests.


Repeatability of the FInD Ring, FInD Dip, 2-AFC Ring, 2-AFC Dip, Titmus, and Randot tests from 2 runs for A) control participants and B) binocularly impaired participants. Blue dots represent control participants (6A) and magenta dots represent binocularly impaired participants (6B). Each panel consists of 6 figures, corresponding to the six tests. Each figure contains 1 data point per participant for Randot and Titmus, and 4 data points per participant (one for each of the 4 spatial frequencies tested) for FInD and 2-AFC.

Discussion–Study 1

There was a low correlation among most of these tests, consistent with previous studies showing that most tests currently used in clinics and research labs have low agreement. Matsuo et al. (2014) found a low correlation (0.43) between Titmus and TNO stereoacuity [29]. Similarly, in the same study, the correlation between the 3 rods test and Titmus/TNO/Distance Randot was less than 0.5, and 0.2 between the 3 rods test and Distance Randot [29, 30]. In another study, by McCaslin et al. (2020), the correlation between Randot and Asteroid was 0.54, whereas it was 0.66 between Randot and the Propixx stimulus [31]. Vancleef et al. (2017) also showed low agreement between TNO and other stereo-tests [32].

A surprising aspect was that the overall stereoacuity threshold estimates from the computer tests (2-AFC and FInD) were 1.3–1.6 times higher than those from the clinical tests. The stereo-thresholds of the control participants were within 55 arcsec (1.74 log arcsec) with Randot and Titmus, but within 1260 arcsec (3.1 log arcsec) with 2-AFC and FInD. There may be several reasons for this. First, the pixel limitations of the screen: the display subtended 47.26° and each pixel subtended 0.74 arcmin (44.3 arcsec), which is close to the lowest stereoacuity (50 arcsec) measured in our computer tests, although we enabled subpixel rendering in Psychtoolbox. Second, studies by Hess and colleagues have suggested that many normally sighted people have poor stereopsis and that standard clinical tests are unable to detect these stereo-anomalous populations, suggesting that monocular artefacts may lead to an overestimate of stereoacuity by clinical tests [25]. Third, differences in test protocol may contribute. Vancleef et al. (2017) found stereoacuity thresholds approximately 2 times higher for TNO than Randot [32], even greater than the difference we observe. They also hypothesized that anaglyph red-green 3D glasses (used for our computer tests) may reduce binocular fusion and cause binocular rivalry more than polarizing filters (used in the clinical tests). Other groups have reported that anaglyph glasses may increase stereoacuity error due to chromatic imbalance and rivalry [33]. The higher thresholds for the 2-AFC tests may also be related to their fixed presentation time, whereas presentation time was unlimited for FInD, TNO, and Randot. Fourth, clinical tests report the smallest disparity that was correctly detected, whereas FInD and 2-AFC report thresholds at a specified criterion level above the guessing rate (d’ = 1 or 75% correct, respectively), which may be higher than the lowest detectable disparity.
Lastly, we, and some of the naïve participants, noticed uncertainty about the reference depth of the display screen, even though a fusion box was present in all cases. This meant that both target and null stimuli in FInD cells and 2-AFC intervals sometimes appeared to be in depth relative to the display. This effect would increase the false alarm rate and decrease d’ in FInD paradigms, and increase errors in 2-AFC tasks.

These various sources of error may have collectively contributed to the differences among tests. To address uncertainty concerning the presence of depth, we developed AIM Stereo as an alternative forced choice paradigm that does not require a subjective estimate of a reference depth plane.

Methods- Study 2

23 normally sighted adults participated in Study 2. Two participants were excluded because they were unable to complete the experiment (one because of headache and another due to a technical error). Participants’ details for Study 2 are provided in Table 2.

Table 2. Demographic and optometric summary of cohort for Study 2 experiments.

Binocularly normal participants
N | 21
Age range | 18–22 years
Visual Acuity (OU) | 20/20–20/30
Residual refractive error | −0.75 D to +1.13 D (Median: +0.19 D; abs: +0.31 D)
Residual astigmatism | 0 to 1.25 D (Median: 0.25 D)

In Study 2, participants performed AIM Stereo, Randot, and Titmus tests in randomized order. The time taken for participants to complete the self-administered AIM tasks was recorded by the test computer. AIM Stereo was repeated twice on the same day.

AIM Stereo

AIM is a self-administered paradigm in which stimuli are displayed in a series of charts comprising a grid of cells, all of which contain a target [34]. For AIM Stereo, 3 charts of 4*4 cells were deployed; each cell contained 100 dots (0.14°) within a 6° ⌀ circular area, surrounded by a white response ring of 0.1° line width (see Fig 7A for example charts). Stereoscopic disparity was applied to dots within a 5° x 1.2° rectangular bar of random orientation within the noise background. The test was performed with a red-blue anaglyph display and glasses, with both crossed and uncrossed disparity signs. The stereoscopic disparity of each bar was selected to span a range from difficult (−2σ) to easy (+2σ) relative to a threshold estimate that was selected by the experimenter (1° to 1’) for chart one, and thereafter based on a fit to the data from previous charts (see Eq 3). Participants had unlimited time to report the orientation of the depth-defined bar via mouse click, guessing when unsure. The reported orientation was displayed by two black (≈0 cd/m²) feedback marks, and participants could adjust their report with further clicks. Once they had indicated the orientation of the bar in all cells, they could click on a ‘Next’ icon to proceed to the next chart. Orientation errors (i.e., the difference between indicated and actual bar orientation) as a function of horizontal disparity were fit with a cumulative Gaussian function to derive stereo-thresholds:

\theta_{err} = \theta_{min} + (\theta_{max} - \theta_{min}) \times \left(0.5 - 0.5\,\mathrm{erf}\!\left(\frac{\delta - \delta_{\tau}}{\sqrt{2}\,\gamma}\right)\right) \quad [3]

Fig 8. Test duration using AIM Stereo for run 1 and run 2.


Data are plotted as in Fig 3A.

where θ_err is the orientation error, θ_min is the minimum report error for a highly visible target, θ_max is the maximum mean error for guessing (here 45°), δ is the stimulus stereoscopic disparity, δ_τ is the threshold disparity, and γ is the slope. The threshold is thus derived from the midpoint between the minimum angular error and the 45° maximum mean error. Because AIM uses a continuous report paradigm, the model provides a personalized performance profile that includes threshold, slope, and minimum angular error parameters. See Fig 7B for an example psychometric function.
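Under the definitions just given, Eq 3 can be written directly (illustrative Python; the parameter values below are examples, not fits from the study):

```python
from math import erf, sqrt

def orientation_error(delta, delta_t, gamma, theta_min=2.0, theta_max=45.0):
    """Eq 3: expected orientation-report error (deg) at disparity delta.

    Error falls from theta_max (guessing) toward theta_min (clearly
    visible target) around the threshold disparity delta_t; the
    threshold corresponds to the midpoint between the two."""
    z = (delta - delta_t) / (sqrt(2.0) * gamma)
    return theta_min + (theta_max - theta_min) * (0.5 - 0.5 * erf(z))

# At delta = delta_t the predicted error is the midpoint,
# (theta_min + theta_max) / 2 = 23.5 deg with the defaults above.
```

In practice θ_min, δ_τ, and γ would be fit to the observed per-cell orientation errors, e.g. with a least-squares or maximum-likelihood routine.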

Fig 7. AIM- Stereo paradigm.


A) Participants viewed three AIM charts, each containing a 4*4 grid of 6° ⌀ cells with 100 dots, red to one eye and blue to the other, with a central disparity-defined 5°x1.25° rectangular bar of random orientation. Participants indicated the perceived orientation of the bar by clicking on the corresponding angle on the white ring surrounding each cell. Two black marks indicated the reported orientation, and participants could adjust it with unlimited further clicks. The range of disparities presented on subsequent charts was adaptively calculated from responses to previous charts. A visualization of the depth appearance of the stimuli is presented at the bottom of the figure. B) Angular error function (red line) from the AIM paradigm: the orientation error (y axis) of a representative participant’s bar-orientation reports (blue circles) as a function of the stereoscopic disparity of the bar (x axis). The dashed green lines are 95% confidence intervals of the fit.

Results- Study 2

Experiment duration

Fig 8 shows the test duration for the AIM task. For 3 charts with a 4*4 grid of cells (48 orientation reports), the average duration was 241 sec for the first run and 117 sec for the second run. Log-transformation was applied to convert the skewed duration data to normally distributed data before statistical analysis. The difference in test time between runs was statistically significant (p<0.0001; paired t-test), suggesting a learning effect with AIM Stereo. When using 3 charts, i.e., one initial and two adaptive steps, AIM took significantly longer to complete than the Titmus (66 sec) or Randot (97 sec) tests from Study 1 (F(2,56) = 66.41, p<0.0001). However, post-hoc analysis of AIM Stereo using fewer charts (i.e., the first chart only, or the first and second) shows that test time is substantially reduced (median 37 sec for the first chart, 73 sec for the first and second charts of the second run). The test time for the first and second charts of the second AIM run is similar to Titmus but significantly shorter than Randot (F(2,56) = 7.75, p<0.01).

Stereo threshold

Fig 9 shows log stereo-threshold for Randot, Titmus and AIM. The median (inter-quartile range) stereo-threshold with the Randot, Titmus and AIM were 1.40(0.18), 1.60(0) and 1.82(0.49) log arcsec respectively. The application of the data transformation failed to convert the skewed data to normally distributed data.

Fig 9. Log stereoacuities of Randot, Titmus, and AIM Stereo.


Data are plotted as in Fig 4.

For AIM, the thresholds for the 2 runs were not statistically different (z(96) = −0.336, p = 0.73; Wilcoxon signed-rank test) and were averaged. The median AIM stereo-threshold was significantly higher than the stereo-thresholds from Randot or Titmus (H(2,58) = 18.12, p = 0.0001; Kruskal-Wallis), by a factor of 1.1–1.3.

Fig 10 shows Kendall’s rank correlations (top-right triangle), histograms (diagonal) and linear functions (bottom-left triangle) for the Randot, Titmus, and AIM Stereo tests. All correlations among these tests were low and did not reach statistical significance (R2 ≤ 0.038, p > 0.05). The histograms indicate a difference in distributions between stereo tests.
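Kendall's rank correlation counts concordant versus discordant pairs of observations. A minimal tie-free sketch for illustration (in practice `scipy.stats.kendalltau` handles ties and p-values):

```python
def kendall_tau(x, y):
    """Kendall's tau-a for two equal-length sequences without ties:
    (concordant pairs - discordant pairs) / total number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # same sign of both differences -> pair is concordant
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Rank correlations are preferred here because the threshold distributions are skewed and clinical tests report discrete disparity steps.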

Fig 10. Kendall’s rank correlations and histograms between Randot, Titmus, and AIM Stereo.


Data are plotted as in Fig 5.

Fig 11 shows the repeatability of the AIM Stereo tests. There was a small bias (-0.01) and a tendency for an increase in estimated stereoacuity (decrease in stereo-threshold) on the second test, indicating a small learning effect. Overall, the stereo-thresholds tended to be stable over repetitions.
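A Bland-Altman analysis reduces to the mean between-run difference (the bias) and its 95% limits of agreement. A minimal sketch, assuming paired log-threshold estimates as input:

```python
from statistics import mean, stdev

def bland_altman(run1, run2):
    """Bias (mean run1 - run2 difference) and 95% limits of
    agreement (bias +/- 1.96 SD of the paired differences)."""
    diffs = [a - b for a, b in zip(run1, run2)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero with narrow limits, as reported above (-0.01), indicates good test-retest agreement without a systematic learning effect.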

Fig 11. Bland-Altman tests of AIM Stereo between the two runs.


Data are plotted as in Fig 6.

General discussion

We introduce and evaluate two new stereoacuity tests, FInD and AIM Stereo, which were developed to address problems with current clinical tests. The results show that both FInD and AIM are comparable to or faster than current clinical tests in duration (where all measure stereoacuity for a single spatial structure), are significantly faster than 2-AFC tests without loss of accuracy or precision, and are sensitive to binocular visual impairment.

Inter-Test agreement

There was good agreement between stereoscopic thresholds measured with FInD and with classic 2-AFC paradigms, suggesting that the faster speed of FInD did not come at the cost of accuracy. However, FInD and 2-AFC stereo thresholds were 1.3–1.6 times higher than those recorded by the clinical tests. We speculate that this difference could be attributed to an underestimate of stereoacuity by FInD due to properties of the apparatus and task employed in the present study: the spatial resolution of our display (44 arcsec pixels) was close to the highest measured stereoacuity (50 arcsec); the use of red/blue anaglyph stimuli could lead to rivalry that may impede binocular fusion [32]; the threshold criteria for FInD and 2-AFC were higher than the smallest-correct-disparity criterion of the clinical tests; and uncertainty concerning the absolute depth of the reference plane may have led to false alarms. Additionally, the difference could be related to an overestimate of stereoacuity by clinical tests due to the presence of monocular artefacts.

While there was uncertainty concerning the target depth relative to the display in FInD stimuli, this ambiguity was eliminated in AIM, in which a depth-defined bar is embedded in a disk of random dots. We speculate that this difference accounts for the differences between estimates of stereoacuity measured with clinical tests, which were 1.6 times lower than those measured with FInD but only 1.1–1.3 times lower than those measured with AIM. This finding suggests that the test protocol, threshold criteria and stimulus parameters play a role in estimates of stereoacuity, since anaglyph displays were employed by both AIM and FInD. The measurement of a psychometric function with FInD, AIM and 2-AFC paradigms also provides information beyond the threshold parameter, including the slope and minimum error angle, which may provide useful information concerning sensitivity and bias [17].

Repeatability

We investigated repeatability using Bland-Altman plots and found no systematic learning effect, as indexed by the bias, for FInD Stereo, 2-AFC, or clinical tests in either controls or binocularly impaired individuals (Fig 6). The same was found for AIM Stereo (Fig 11). AIM Stereo showed less test-retest variability than FInD Stereo.

Correlation analysis

In Study 1, the correlation (R2) between Randot and Titmus was only 0.18, whereas in Study 2 it was 0.04. The correlations between clinical tests and the computerized tests (FInD/2-AFC/AIM) ranged from 0.006 to 0.24. This variation frustrates efforts to compare results between studies that use different methods, and the same test must therefore be used to track recovery of stereoacuity.

Stimulus structure

Surprisingly, there was no significant effect of spatial frequency on stereo thresholds measured with FInD or with either spatial or temporal 2-AFC methods. This finding is not consistent with several previous studies [10, 11]; however, others have argued that the spatial scale of the disparity signal, which was constant in our stimuli, rather than the spatial frequency of the stimulus, determines the upper limit for stereoacuity [35]. Consequently, using fewer spatial frequency conditions will further shorten the testing time for the FInD Stereo tasks. Alongside other methodological differences between the tests used in the current study, differences in stimulus properties, i.e., stimulus size, method of dichoptic presentation, and spatial-frequency profile, may also have contributed to the threshold differences between stereo tests. One advantage of FInD, AIM, and 2-AFC tests is that they can be self-administered and thus can potentially be used remotely. This makes it easier to follow up any change of stereoacuity that may occur due to therapy or disease progression, e.g., amblyopia therapy, and it decreases chair time in the clinician’s office while making it easier to track a patient’s stereoacuity threshold during progression or remediation of disease.

Study limitations and future directions

A limitation of this study is the small number of binocularly impaired participants in the FInD experiment and the absence of any in the AIM experiment. Another limitation is the lack of phoria measurement. Phoria might affect the results by changing the depth plane relative to the background, particularly for the FInD paradigm, in which participants judged the presence of apparent depth in all cells. Previous studies have shown that, when phoria of more than 2 pd is considered, exophoric participants tend to have better stereoacuity with crossed disparities and esophoric participants with uncrossed disparities [36–40]. Since we did not measure phoria in the present study, we cannot be sure that phoria did not differ across tests, or over time within tests, either of which could affect the results. We also did not measure and compare crossed and uncrossed disparities within each FInD and AIM test, which is a potential future direction. Although beyond the scope of the current study, increasing the blank space between cells may help establish a reference plane against which to judge depth, which may increase the repeatability of both FInD and AIM tests. The current study introduced AIM Stereo and compared its results with conventional clinical stereovision tests as a proof of concept. We will compare AIM Stereo to other adaptive techniques, e.g., 2-AFC methods, in future studies. AIM’s approach offers two additional analysis features, namely a 3-parameter psychometric fit (threshold, slope, and minimum angular report error) and response error bias analysis [17], which may be suitable for detecting distortions. A future investigation will examine whether these features provide additional psychometric biomarkers of stereovision impairment.

Conclusions

In conclusion, this proof-of-concept study introduced the FInD and AIM Stereo methods and compared them with standard 2-AFC methods and clinical tests. The results reveal limitations across methods, including low agreement between tests, but show promising results for the FInD and AIM Stereo tests, which can be used as self-administered metrics to measure and monitor stereoacuity thresholds accurately, precisely, quickly, remotely, and over time. Different stimulus conditions did not significantly affect FInD Stereo thresholds, and FInD detected atypical stereovision in participants with impaired binocular vision. AIM and FInD Stereo combine the stimulus control of classic psychophysics paradigms with the speed and ease of use of clinical tests, while adding analysis features and removing the need for a test administrator.

Acknowledgments

Portions of this study were presented at the Vision Science Society conferences in 2022 and 2023.

Data Availability

All data files are available from the Zenodo database (accession number 10.5281/zenodo.10688863).

Funding Statement

This project was supported by National Institutes of Health (www.nih.gov) (grant R01 EY029713 to PJB). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Lin G, Al Ani R, Niechwiej-Szwedo E. Age-related deficits in binocular vision are associated with poorer inhibitory control in healthy older adults. Frontiers in Neuroscience. 2020;14:605267. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Hui L, Sen Xia H, Shu Tang A, Feng Zhou Y, Zhong Yin G, Long Hu X, et al. Stereopsis deficits in patients with schizophrenia in a Han Chinese population. Scientific reports. 2017;7(1):45988. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.O’Connor AR, Tidbury LP. Stereopsis: are we assessing it in enough depth? Clinical and Experimental Optometry. 2018;101(4):485–94. doi: 10.1111/cxo.12655 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Leske DA, Birch EE, Holmes JM. Real depth vs randot stereotests. American journal of ophthalmology. 2006;142(4):699–701. doi: 10.1016/j.ajo.2006.04.065 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Long J, Siu C. Randot stereoacuity does not accurately predict ability to perform two practical tests of depth perception at a near distance. Optometry and vision science. 2005;82(10):912–5. doi: 10.1097/01.opx.0000181231.20262.a5 [DOI] [PubMed] [Google Scholar]
  • 6.Adler P, Scally AJ, Barrett BT. Test–retest variability of Randot stereoacuity measures gathered in an unselected sample of UK primary school children. British journal of ophthalmology. 2012;96(5):656–61. doi: 10.1136/bjophthalmol-2011-300729 [DOI] [PubMed] [Google Scholar]
  • 7.Fawcett SL, Birch EE. Validity of the Titmus and Randot circles tasks in children with known binocular vision disorders. Journal of American Association for Pediatric Ophthalmology and Strabismus. 2003;7(5):333–8. doi: 10.1016/s1091-8531(03)00170-8 [DOI] [PubMed] [Google Scholar]
  • 8.Serrano-Pedraza I, Vancleef K, Read JCA. Avoiding monocular artifacts in clinical stereotests presented on column-interleaved digital stereoscopic displays. Journal of Vision. 2016;16(14):13–. doi: 10.1167/16.14.13 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Antona B, Barrio A, Sanchez I, Gonzalez E, Gonzalez G. Intraexaminer repeatability and agreement in stereoacuity measurements made in young adults. International journal of ophthalmology. 2015;8(2):374. doi: 10.3980/j.issn.2222-3959.2015.02.29 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Schor CM, Wood I. Disparity range for local stereopsis as a function of luminance spatial frequency. Vision research. 1983;23(12):1649–54. doi: 10.1016/0042-6989(83)90179-7 [DOI] [PubMed] [Google Scholar]
  • 11.Yang Y, Blake R. Spatial frequency tuning of human stereopsis. Vision research. 1991;31(7–8):1176–89. doi: 10.1016/0042-6989(91)90043-5 [DOI] [PubMed] [Google Scholar]
  • 12.Siderov J, Harwerth RS. Stereopsis, spatial frequency and retinal eccentricity. Vision research. 1995;35(16):2329–37. doi: 10.1016/0042-6989(94)00307-8 [DOI] [PubMed] [Google Scholar]
  • 13.Li Y, Zhang C, Hou C, Yao L, Zhang J, Long Z. Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex. BMC neuroscience. 2017;18(1):1–16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Vancleef K, Read JC, Herbert W, Goodship N, Woodhouse M, Serrano-Pedraza I. Two choices good, four choices better: For measuring stereoacuity in children, a four-alternative forced-choice paradigm is more efficient than two. PLoS One. 2018;13(7):e0201366. doi: 10.1371/journal.pone.0201366 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Tomac S, Altay Y. Near stereoacuity: development in preschool children; normative values and screening for binocular vision abnormalities; a study of 115 children. Binocular vision & strabismus quarterly. 2000;15(3):221–8. [PubMed] [Google Scholar]
  • 16.Bex P, Skerswetat J. FInD—Foraging Interactive D-prime, a rapid and easy general method for visual function measurement. Journal of Vision. 2021;21(9):2817–. [Google Scholar]
  • 17.Skerswetat J, He J, Shah JB, Aycardi N, Freeman M., Bex PJ. A new adaptive, self-administered, and generalizable method used to measure visual acuity. Optometry & Vision Science Forthcoming. 2024. [Google Scholar]
  • 18.McMonnies CW. Chart memory and visual acuity measurement. Clinical and Experimental Optometry. 2001;84(1):26–34. doi: 10.1111/j.1444-0938.2001.tb04932.x [DOI] [PubMed] [Google Scholar]
  • 19.Merabet LB, Manley CE, Pamir Z, Bauer CM, Skerswetat J, Bex PJ. Motion and form coherence processing in individuals with cerebral visual impairment. Developmental Medicine & Child Neurology. 2023. doi: 10.1111/dmcn.15591 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Brainard DH, Vision S. The psychophysics toolbox. Spatial vision. 1997;10(4):433–6. [PubMed] [Google Scholar]
  • 21.Kleiner M, Brainard D, Pelli D. What’s new in Psychtoolbox-3? 2007. [Google Scholar]
  • 22.Pelli DG, Vision S. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial vision. 1997;10:437–42. [PubMed] [Google Scholar]
  • 23.Chopin A, Silver MA, Sheynin Y, Ding J, Levi DM. Transfer of perceptual learning from local stereopsis to global stereopsis in adults with amblyopia: a preliminary study. Frontiers in Neuroscience. 2021;15:719120. doi: 10.3389/fnins.2021.719120 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Zhao L, Wu H. The difference in stereoacuity testing: contour-based and random dot-based graphs at far and near distances. Annals of Translational Medicine. 2019;7(9). doi: 10.21037/atm.2019.03.62 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Hess RF, To L, Zhou J, Wang G, Cooperstock JR. Stereo Vision: The Haves and Have-Nots. i-Perception. 2015;6(3):2041669515593028. doi: 10.1177/2041669515593028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Maiello G, Chessa M, Bex PJ, Solari F. Near-optimal combination of disparity across a log-polar scaled visual field. PLOS Computational Biology. 2020;16(4):e1007699. doi: 10.1371/journal.pcbi.1007699 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Green D, Swets J. Signal Detection Theory and Psychophysics (Peninsula Pub). 1989. [Google Scholar]
  • 28.Wetherill G, Levitt H. Sequential estimation of points on a psychometric function. British Journal of Mathematical and Statistical Psychology. 1965;18(1):1–10. doi: 10.1111/j.2044-8317.1965.tb00689.x [DOI] [PubMed] [Google Scholar]
  • 29.Matsuo T, Negayama R, Sakata H, Hasebe K. Correlation between depth perception by three-rods test and stereoacuity by distance randot stereotest. Strabismus. 2014;22(3):133–7. doi: 10.3109/09273972.2014.939766 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Kim Y-C, Kim S-H, Shim H-S. Comparison and correlation between distance static stereoacuity and dynamic stereoacuity. Journal of Korean Ophthalmic Optics Society. 2015;20(3):385–90. [Google Scholar]
  • 31.McCaslin AG, Vancleef K, Hubert L, Read JCA, Port N. Stereotest Comparison: Efficacy, Reliability, and Variability of a New Glasses-Free Stereotest. Transl Vis Sci Technol. 2020;9(9):29. doi: 10.1167/tvst.9.9.29 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Vancleef K, Read JC, Herbert W, Goodship N, Woodhouse M, Serrano‐Pedraza I. Overestimation of stereo thresholds by the TNO stereotest is not due to global stereopsis. Ophthalmic and Physiological Optics. 2017;37(4):507–20. doi: 10.1111/opo.12371 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Cornforth LL, Johnson BL, Kohl P, Roth N. Chromatic imbalance due to commonly used red-green filters reduces accuracy of stereoscopic depth perception. Optometry and Vision Science. 1987;64(11):842–5. doi: 10.1097/00006324-198711000-00007 [DOI] [PubMed] [Google Scholar]
  • 34.Skerswetat J, Boruta A, Bex PJ. Disability Glare Quantified Rapidly with AIM (Angular Indication Measurement) Glare Acuity. Investigative Ophthalmology & Visual Science. 2022;63(7):2558 – F0512–2558 –F0512. [Google Scholar]
  • 35.Wilcox LM, Hess RF. Dmax for stereopsis depends on size, not spatial frequency content. Vision Research. 1995;35(8):1061–9. doi: 10.1016/0042-6989(94)00199-v [DOI] [PubMed] [Google Scholar]
  • 36.Lam AK, Tse P, Choy E, Chung M. Crossed and uncrossed stereoacuity at distance and the effect from heterophoria. Ophthalmic and Physiological Optics. 2002;22(3):189–93. doi: 10.1046/j.1475-1313.2002.00030.x [DOI] [PubMed] [Google Scholar]
  • 37.Shippman S, Cohen KR. Relationship of heterophoria to stereopsis. Archives of ophthalmology. 1983;101(4):609–10. doi: 10.1001/archopht.1983.01040010609017 [DOI] [PubMed] [Google Scholar]
  • 38.Saladin JJ. Effects of heterophoria on stereopsis. Optometry and Vision Science: Official Publication of the American Academy of Optometry. 1995;72(7):487–92. [PubMed] [Google Scholar]
  • 39.Bosten J, Goodbourn P, Lawrance-Owen A, Bargary G, Hogg R, Mollon J. A population study of binocular function. Vision Research. 2015;110:34–50. doi: 10.1016/j.visres.2015.02.017 [DOI] [PubMed] [Google Scholar]
  • 40.Heravian J, HOSEINI YSH, Ehyaei A, Baghebani F, Mahjoob M, Ostadimoghaddam H, et al. Effect of induced heterophoria on distance stereoacuity by using the Howard-Dolman test. 2012. [Google Scholar]

Decision Letter 0

Amithavikram R Hathibelagal

8 Jan 2024

PONE-D-23-27405: Comparison of Foraging Interactive D-prime and Angular Indication Measurement Stereoacuity with different methods to assess stereopsis. PLOS ONE

Dear Dr. Neupane,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR:  Both the reviewers have expressed concerns and provided some useful suggestions. Please address the comments thoroughly.

==============================

Please submit your revised manuscript by Feb 22 2024 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Amithavikram R Hathibelagal, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Did you know that depositing data in a repository is associated with up to a 25% citation advantage (https://doi.org/10.1371/journal.pone.0230416)? If you’ve not already done so, consider depositing your raw data in a repository to ensure your work is read, appreciated and cited by the largest possible audience. You’ll also earn an Accessible Data icon on your published paper if you deposit your data in any participating repository (https://plos.org/open-science/open-data/#accessible-data).

3. Thank you for stating in your Funding Statement:

“This project was supported by National Institutes of Health (www.nih.gov)

(grant R01 EY029713 to PJB).”

Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now.  Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement.

Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.

4. We note that you have a patent relating to material pertinent to this article. Please provide an amended statement of Competing Interests to declare this patent (with details including name and number), along with any other relevant declarations relating to employment, consultancy, patents, products in development or modified products etc. Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

This information should be included in your cover letter; we will change the online submission form on your behalf.

5. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

6. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table1 & 2 in your text; if accepted, production will need this reference to link the reader to the Table.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: General Comments:

The study introduces two novel stereo measurement techniques, AIM and FlnD, and compares them with established tests such as 2AFC, Randot, and Titmus fly test. While the manuscript is technically sound, there needs a major restructuring with clarification and refinement for several points.

Clarity of Stimuli: The description of the FlnD noise stimulus needs further clarification. The term "noise" typically implies random dots without any signal. Consider renaming the target or providing a clear description of what subjects saw and responded to.

Viewing Distance: Ensure that details about the self-administrable tests, such as FlnD, AIM, and 2AFC, include information on how the viewing distance was maintained at 80 cm. Inconsistencies in viewing distance could contribute to data variation.

Aim of the study: Lines 137-144 outline the aims of the study but could be clarified. The text starts by stating there are 2 aims but then explains 5 aims. Consider explaining the features of FlnD and AIM in the introduction and focusing on the aims and objectives in this section.

Sections Organization: It is highly recommended that the FlnD and AIM results be combined as a single experiment for better coherence. The readers will be more interested in learning about the two paradigms rather than learning how the AIM came into being and to solve which problems of the FlnD paradigm. The author can discuss why AIM performed better than FlnD at the stereo thresholding in the discussion section.

Stats: Using stats to compare the results between the normal and binocular impaired individuals will lead to false results due to the huge uneven sample size

Specific Line-by-Line Comments:

Line 52: The classic sign for macular degeneration includes macular atrophy, drusens etc and the impact of these is loss of stereopsis. Stating stereopsis as a biomarker for degenerative diseases will be an error.

Line 115: Recheck the reference for the loss of stereoacuity due to refractive error and amblyopia as it refers to a paper related to retinal eccentricity.

Line 195-196: Provide consistency in units for the lower and upper bound min and max disparity, considering reporting in degrees.

Line 200-204: Clarify how the targets for the ring and noise stimuli were shifted towards opposite side and yet one produces crossed and another produces uncrossed disparities.

Line 275-276: Complete the sentence “Matlab’s anovan and multcompare functions for ANOVA and planned comparisons between tests”.

Line 290: Address the distribution of data in the statistical analysis section (if the data is normally distributed). Also, 2-way ANOVA was performed but looking at the fig 3, FlnD data, the mean and median are quite apart from each other.

Line 299: Clarify that whiskers usually represent 1st and 99th percentiles, not outliers.

Line 306: Explain the rationale for excluding values >3000 arcsec from analysis. Specially when they belong to the binocular impaired group, one would expect the stereo to be quite poor. If the data is skewed because of those subjects, a possibility would be to renumber stereo thresholds >3000 as 3000 and mention the reason for 3000 arcsec as the cutoff

Line 310-312: Discuss the potential implications of the observed variability

Line 314: p-value can never be exactly 0.

Line 345: “purple dot” – please keep colours consistent

Line 347: “Each figure contains 1 data point for each participant for Randot” shouldn’t the total number of dots be 20?

Line 406: Provide reasons for not including 2AFC in the comparison; consider combining the first and second experiments.

Line 457: Clarify the unit "5 X 1.25."

Line 457: It will be advisable to look at the learning in 2AFC and FlnD?

Line 489-499: Avoid repetition of introduction points and directly transition to conclusion statements.

Line 514: AIM was not tested on amblyopes in this manuscript.

Comments on the Figures:

Fig 3:

The different colours for the 6 tests are not required.

The horizontal jittering in 2AFC and clinical test seems to be huge. Please maintain uniform jittering.

Figure 4:

The normal can be given a single colour while the binocular impaired can be assigned a different colour and kept that uniform with figure 3. Also, number of subjects are not consistent.

Figure 5:

Consider combining with Figure 4 as panel E for easier comparison.

Figure 6:

Unequal number of subjects are presented here.

The graphs are repeated, for example randot Vs Titmus fly, or Titmus fly vs Randot will be same. Consider keeping only one to avoid confusion.

What additional inferences does the histogram generate?

Are these data points average of the 4 spatial frequency?

Consider including the Rsq value

Figure 7:

Align captions above the graphs for clarity.

Figure 11: Please remove repetitions of the graphs and mention the R sq and p values.

Reviewer #2: The study reports the inter-technique comparison and intra-technique repeatability of two new psychophysical measures of stereoacuity, relative to conventional clinical techniques and 2AFC psychophysical techniques. The two new tests were themselves quite repeatable but there was very little agreement between the various tests of stereoacuity. The time taken for the two new tests were shorter than conventional 2AFC techniques but significantly longer than the clinical tests of stereoacuity. These results are discussed in the context of a speed-accuracy trade-off in the assessment of stereoacuity.

The manuscript is easy to read and more or less free of grammatical errors. There are some confusing sentences dispersed throughout the manuscript (indicated below in the specific comments) that need to be fixed. The figures are all of very low resolution and the details were not visible upon magnification. This needs to be fixed as well. In addition, the following issues need to be addressed in the manuscript.

1. While I do not have any concern with the quality of the science pursued in this manuscript, I worry about the utility of these techniques in the future assessment of stereoacuity in clinic/research settings. This concern is mainly for the clinical application, and it primarily stems from the time it takes to complete the test vis-à-vis the present techniques of Titmus or RanDot stereoacuity. I appreciate that the two new techniques are far more scientifically rigorous and that they overcome several limitations that beset the present testing paradigms. However, is this motivation enough for an average clinician to switch over to this technique? These techniques take ~3 times longer to complete than the present techniques, and in a busy clinic this translates into significant chair time for the patient. I fear that the accuracy benefit obtained using this technique may be overridden by the time taken to complete the task.

There may be a solution to this though. As indicated in the discussion section of the manuscript (lines 506 – 509), testing only one spatial frequency stimulus or a single disparity sign may reduce the time to complete the task. Given that the target spatial frequency did not have an impact on stereoacuity anyway, can this not be implemented in the present version of the software and evidence provided that this indeed reduces the time to do the task? I understand that this involves additional data collection and more work for the authors, but this may be a significant step towards the holy grail of having a test that is both accurate and can be administered quickly.

2. The stimuli used in the two new stereoacuity techniques are not clear to me. There are bandpass-filtered ring and noise stimuli in which the participants had to appreciate depth. What was the pattern of depth though? I assume that in the ring stimuli, the “ring” appeared in depth depending on the stimulus disparity used, but what was seen in the “noise” stimuli? This is not clear from the explanation. Perhaps a figure showing the two stimuli may be useful to include here to help the readers understand the stimulus.

3. Other specific issues…

a. Abstract conclusion: Why state that only FInD detected stereo deficits? They were also detected by AIM, right? Please rephrase.

b. Abstract conclusion: The last statement is a rather superficial one. It should be replaced with a statement that describes the advantages/challenges of switching over to the new techniques proposed in this study vis-à-vis the existing ones. The new techniques take ~3 times longer to complete than the routine clinical tests. The motivation for the clinician to switch over to the new technique should come out clearly here.

c. Line 78: Add to this list that these tests are meant to be run at a constant viewing distance and are meant for a single IPD - the former limits their ability to check stereopsis at different viewing distances while the latter creates issues with test accuracy.

d. In general, the introduction is way too long (4.5 pages). This can be significantly shortened, without losing information.

e. Line 123: Expand HMDs.

f. Lines 137 - 144: This section is a bit confusing. The start makes it appear as if there are only two experiments, but the remainder of the manuscript reads as if there are five parts to the study. So, add a statement at the start to indicate that what ensues is a note on the organization of the manuscript.

g. Line 170: I am sure this was taken care of, but briefly mention that there was no leak of information between the red/blue filters, to reassure the readers. Also, how was the blur arising from chromatic aberrations of the eye handled in these stimuli?

h. Line 195: Should this be log arc sec?

i. Lines 286 – 287: Please include a measure of variance for all reported measures. Also, please fix p=0.000 to p<0.001.

j. Line 292: Use of Box plots suggests that data were not normally distributed. Was that the case? If so, the results need to be presented as median and IQR.

k. Line 517: Be explicit that the new tests are faster, relative to the 2AFC paradigms.

l. Like the introduction section, the discussion section could also be tightened. In several areas of this section, the results are repeated, and this redundancy can be removed.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Jun 7;19(6):e0305036. doi: 10.1371/journal.pone.0305036.r002

Author response to Decision Letter 0


27 Feb 2024

Northeastern University

105-107 Forsyth St,

Boston, MA, 02115

neupanesonisha@gmail.com

The Academic Editor

Investigative Ophthalmology & Vision Science

Feb 22, 2024

Dear Editor,

We thank you and the reviewers for your time and comments on our manuscript. In this letter, we provide a detailed response to the points raised by the reviewers.

We have highlighted the reviewers’ comments in bold, our responses are in blue plain text. We have also included the original and modified text where applicable and we have used italics for the lines from the manuscript.

We have revised the funding statement and conflict of interest as follows:

Funding Statement

This project was supported by National Institutes of Health (www.nih.gov)

(grant R01 EY029713 to PJB). There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests:

I have read the journal's policy and the authors of this manuscript have the following competing interests:

FInD and AIM technologies are disclosed as provisional patented (AIM) and pending patent (FInD) and held by Northeastern University, Boston, USA.

FInD title: Method for visual function assessment; Application PCT/US2021/049250

AIM title: Self-administered adaptive vision screening test using angular indication.

Application PCT/US2023/012959.

JS and PJB are founders and shareholders of PerZeption Inc, which has an exclusive license agreement for FInD and AIM with Northeastern University. SN declares that she has no conflict of interest.

This does not alter our adherence to PLOS ONE policies on sharing data and materials.

Yours sincerely,

Sonisha Neupane 

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

We have edited the manuscript according to PLOS ONE’s style requirements.

2. Did you know that depositing data in a repository is associated with up to a 25% citation advantage (https://doi.org/10.1371/journal.pone.0230416)? If you’ve not already done so, consider depositing your raw data in a repository to ensure your work is read, appreciated and cited by the largest possible audience. You’ll also earn an Accessible Data icon on your published paper if you deposit your data in any participating repository (https://plos.org/open-science/open-data/#accessible-data).

We have deposited the data and the MATLAB code in the Zenodo repository. The DOI for our data and code is 10.5281/zenodo.10688863.

3. Thank you for stating in your Funding Statement:

“This project was supported by National Institutes of Health (www.nih.gov)

(grant R01 EY029713 to PJB).”

Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now. Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement.

Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.

This project was supported by National Institutes of Health (www.nih.gov)

(grant R01 EY029713 to PJB). There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

4. We note that you have a patent relating to material pertinent to this article. Please provide an amended statement of Competing Interests to declare this patent (with details including name and number), along with any other relevant declarations relating to employment, consultancy, patents, products in development or modified products etc. Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

This information should be included in your cover letter; we will change the online submission form on your behalf.

I have read the journal's policy and the authors of this manuscript have the following competing interests:

FInD and AIM technologies are disclosed as provisional patented (AIM) and pending

patent (FInD) and held by Northeastern University, Boston, USA.

FInD title: Method for visual function assessment; Application PCT/US2021/049250

AIM title: Self-administered adaptive vision screening test using angular indication;

Application PCT/US2023/012959.

JS and PJB are founders and shareholders of PerZeption Inc, which has an exclusive

license agreement for FInD and AIM with Northeastern University. SN declares that she has no conflict of interest. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

5. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

We have deposited the data and the MATLAB code in the Zenodo repository. The DOI for our data and code is 10.5281/zenodo.10688863.

6. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table1 & 2 in your text; if accepted, production will need this reference to link the reader to the Table.

We have referred to Tables 1 and 2 in the text.

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: General Comments:

The study introduces two novel stereo measurement techniques, AIM and FInD, and compares them with established tests such as 2AFC, Randot, and the Titmus fly test. While the manuscript is technically sound, it needs major restructuring with clarification and refinement on several points.

We thank the reviewer for taking the time to evaluate our paper and have addressed each of the comments. To address the reviewer's request for clarification and to improve the legibility of the revision, we changed the structure of all sections. We hope the reviewer finds that, due to these changes, the overall legibility and quality of the manuscript have improved.

Clarity of Stimuli: The description of the FInD noise stimulus needs further clarification. The term "noise" typically implies random dots without any signal. Consider renaming the target or providing a clear description of what subjects saw and responded to.

We thank the reviewer for the suggestion and have added the stimulus depth figures and edited the text to make it clear. We have also changed the name from ‘noise’ to ‘dip’ e.g. FInD Dip to avoid confusion.

Viewing Distance: Ensure that details about the self-administrable tests, such as FInD, AIM, and 2AFC, include information on how the viewing distance was maintained at 80 cm. Inconsistencies in viewing distance could contribute to data variation.

We have included the following line in the Method Section.

Line 124: “A chinrest was used to maintain the viewing distance.”

Aim of the study: Lines 137-144 outline the aims of the study but could be clarified. The lines start out stating there are 2 aims but then explain 5 aims. Consider explaining the features of FInD and AIM in the introduction and focusing on the aims and objectives in this section.

We have edited the text to make it clear that two experiments were conducted to address five research aims.

Line 93-100:” The two experiments reported here were performed as proof-of-concept studies with five aims: First, we introduce FInD Stereo and its features. Second, we compare estimates of stereoacuity, test duration, and test-retest reliability of FInD with standard 2-AFC methods, and clinically used tests (Randot and Titmus) in stereo-typical and atypical participants. Third, we compare different spatial properties of stereo-inducing stimuli and examine their effect on stereoacuity. Fourth, we introduce AIM Stereo and its features. Fifth, we compare AIM Stereo against the above-mentioned clinical tests using the same outcome measures.”

Sections Organization: It is highly recommended that the FInD and AIM results be combined as a single experiment for better coherence. Readers will be more interested in learning about the two paradigms than in how AIM came into being and which problems of the FInD paradigm it was designed to solve. The authors can discuss why AIM performed better than FInD at stereo thresholding in the discussion section.

The FInD and AIM experiments were done at different times with different sets of participants. To reflect this, we have kept them as two experiments, but we have edited the text to improve the flow.

Stats: Using statistics to compare the results between the normal and binocularly impaired individuals will lead to false results due to the hugely uneven sample sizes.

We appreciate the comment and acknowledge this limitation of our study accordingly.

Line 523-524: A limitation of this study is the small number of binocularly impaired participants in the FInD experiment and none in the AIM experiment.

Specific Line-by-Line Comments:

Line 52: The classic signs of macular degeneration include macular atrophy, drusen, etc., and the impact of these is loss of stereopsis. Stating stereopsis as a biomarker for degenerative diseases would be an error.

We have removed the section stating stereopsis as a biomarker for degenerative disease.

Line 115: Recheck the reference for the loss of stereoacuity due to refractive error and amblyopia as it refers to a paper related to retinal eccentricity.

We have updated the reference.

Line 195-196: Provide consistency in units for the lower and upper bound min and max disparity, considering reporting in degrees.

We have changed the text to report them in degrees.

Line 153-156: On the first chart, the disparity range was scaled to span 0.005° (0.3 arcmin) to 0.5° (30 arcmin) (which were also the upper and lower bound min and max disparity) in evenly spaced log steps to cover the broad typical stereoacuity range for binocularly healthy adults.

Line 200-204: Clarify how the targets for the ring and noise stimuli were shifted towards opposite sides and yet one produces crossed and the other uncrossed disparities.

It depends on which eye sees which shifted target. If the rightward-shifted target is visible to the right eye and the leftward-shifted target to the left eye, uncrossed disparity is produced, whereas the reverse produces crossed disparity.
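For illustration, the geometry relating an angular disparity to the on-screen separation of the two half-images at the study's 80 cm viewing distance can be sketched as follows (a minimal Python sketch; the function name and sign convention are illustrative, not the authors' MATLAB implementation):

```python
import math

def disparity_to_screen_offset_cm(disparity_arcsec, viewing_distance_cm=80.0):
    """Total horizontal separation between the left- and right-eye
    half-images needed to render a given angular disparity.
    Small-angle geometry: offset ~= distance * tan(disparity).
    A positive offset with the right-shifted image shown to the right eye
    corresponds to uncrossed (far) disparity; reversing which eye sees
    which shift yields crossed (near) disparity."""
    disparity_rad = math.radians(disparity_arcsec / 3600.0)
    return viewing_distance_cm * math.tan(disparity_rad)

# Example: 30 arcmin (1800 arcsec), the upper bound of the first chart's range.
offset = disparity_to_screen_offset_cm(1800)
print(f"{offset:.3f} cm")  # roughly 0.7 cm at 80 cm viewing distance
```

This makes concrete why very small threshold disparities (a few arcsec) correspond to sub-millimeter image shifts on the display.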

Line 275-276: Complete the sentence “Matlab’s anovan and multcompare functions for ANOVA and planned comparisons between tests”.

We have changed the sentence accordingly.

Line 235-236: Threshold estimates were analyzed with Matlab’s anovan and multcompare functions for ANOVA and planned comparisons were used to study the variance between tests.

Line 290: Address the distribution of the data in the statistical analysis section (i.e., whether the data are normally distributed). Also, a 2-way ANOVA was performed, but looking at Fig 3, the FInD data's mean and median are quite far apart from each other.

Some of the outcomes are not normally distributed. For the data in Fig 3, we applied a log transformation to convert them to normally distributed data before performing the ANOVA. The log threshold data (Figure 4) could not be converted to a normal distribution after further transformation, and we have stated so in the manuscript.

Line 245-247: Log-transformation was applied before using ANOVA test to transform the skewed data to normally distributed data.

Line 275-276: The application of the data transformation failed to convert the skewed data to normally distributed data.
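As an aside for readers, the log-transform-then-ANOVA step can be sketched in Python (the authors used MATLAB's anovan; scipy's f_oneway is an analogous one-way test, and the simulated thresholds below are illustrative only, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative, simulated stereoacuity thresholds (arcsec) for three
# hypothetical tests; log-normal draws mimic the positive skew that
# threshold data typically show.
groups_raw = [rng.lognormal(mean=np.log(60), sigma=0.5, size=40) for _ in range(3)]

# Log-transform so the data better satisfy ANOVA's normality assumption.
groups_log = [np.log10(g) for g in groups_raw]

# One-way ANOVA on the transformed thresholds.
f_stat, p_val = stats.f_oneway(*groups_log)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")
```

If the transformed data still fail a normality check (as the authors report for the Figure 4 thresholds), a non-parametric alternative such as a Kruskal-Wallis test is the usual fallback.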

Line 299: Clarify that whiskers usually represent 1st and 99th percentiles, not outliers.

We have changed the text to clarify that.

Line 259-261: Data points show the results for individual observers expressed by a horizontally jittered kernel density, means are depicted in dark squares, boxes indicate the 25-75% interquartile range, whiskers represent 1st and 99th percentiles.

Line 306: Explain the rationale for excluding values >3000 arcsec from the analysis. Especially when they belong to the binocularly impaired group, one would expect the stereo thresholds to be quite poor. If the data are skewed because of those subjects, a possibility would be to recode stereo thresholds >3000 as 3000 and mention the reason for choosing 3000 arcsec as the cutoff.

The Titmus fly is equal to 3552’’ threshold but has variable disparity in

Attachment

Submitted filename: Response to Reviewers.docx

pone.0305036.s001.docx (48.8KB, docx)

Decision Letter 1

Amithavikram R Hathibelagal

16 Apr 2024

PONE-D-23-27405R1: Comparison of Foraging Interactive D-prime and Angular Indication Measurement Stereo with different methods to assess stereopsis (PLOS ONE)

Dear Dr. Neupane,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR: Please address the minor comments raised by one of the reviewers.

==============================

Please submit your revised manuscript by May 31 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Amithavikram R Hathibelagal, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Really appreciate the effort taken to incorporate the changes into the manuscript.

The explanation of the aims is still not clear. What do the authors mean by two experiments? Was it the development of FInD and AIM? The authors might want to avoid saying it's two experiments.

Figure 1 is impressive as it clearly explains what the subjects would have seen. However, that makes me wonder why the authors wanted to create these two stimuli. The authors might want to add a statement indicating the pros of the ring and dip stimuli.

Lines 244 to 247 are part of the statistical analysis. Also, in that section, it would be better to explain which variables were not normally distributed and that the authors log-transformed those variables.

Usually, the F statistic is associated with two degrees of freedom, not a single one as in F(5); this occurs in many more places further in the text.

Fig 5: Are the correlation values Pearson’s correlations? Also, correlation is denoted with the symbol r. In the last review, when I suggested reporting the r2 values, I meant only those instances where the correlation was higher and a regression line could be fit; the r2 values could have been reported then. In the same context, Line 39 in the abstract should say correlation instead of agreement.

Reviewer #2: The authors have addressed all the comments of the reviewers. The manuscript is much better organized now. I have nothing further to add to this review.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Shrikant R Bharadwaj

**********



PLoS One. 2024 Jun 7;19(6):e0305036. doi: 10.1371/journal.pone.0305036.r004

Author response to Decision Letter 1


20 May 2024

Comments to the Author

Reviewer #1: Really appreciate the effort taken to incorporate the changes into the manuscript.

We thank the reviewer for the appreciation.

The explanation of the aims is still not clear. What do the authors mean by two experiments? Was it the development of FInD and AIM? The authors might want to avoid saying it's two experiments.

Thank you for the suggestion. We originally described the development of FInD and AIM as two groups of experiments. However, to avoid confusion, we have reorganized the presentation into Study 1 and Study 2, one for each novel method, each comprising the experiments completed to investigate stereoacuity with that method. The two Studies are differentiated by the main novel method (FInD or AIM) and by different subject pools.

Line 91-99: The two studies reported here were performed as proof-of-concept studies for 2 novel methods to measure stereoacuity: In Study One, we introduce FInD Stereo and its features. We compare estimates of stereoacuity, test duration, inter-test reliability, and repeatability of FInD with standard 2-AFC methods, and clinically used tests (Randot and Titmus) in stereo-typical and atypical participants. We also use the FInD method to compare different spatial properties of stereo-inducing stimuli and examine their effect on stereoacuity. In Study Two, we introduce AIM Stereo and its features, then we compare AIM Stereo against the above-mentioned clinical tests using the same outcome measures.

Figure 1 is impressive as it clearly explains what the subjects would have been able to see. However, that makes me wonder why the authors wanted to create these two stimuli. The authors might want to add a statement indicating the pros of the ring and dip stimuli.

We have added a statement in the Methods to indicate that the ring and dip stimuli investigate different aspects of stereopsis.

Line 144-148: The ring and dip stimuli investigate different aspects of stereopsis. The ring stimuli consist of sparse contour features, generally referred to as ‘local’ stereopsis, whereas the dip stimuli are defined by dense noise elements, generally referred to as ‘global’ stereopsis. These stimulus types have been used in different populations and with some evidence for separate processing mechanisms.

Lines 244 to 247 are part of the statistical analysis. Also, in that section it would be better to explain which variables were not normally distributed and that the authors log-transformed those variables.

We have edited the statistical analysis section to include the following:

Line 241-248: Duration data were skewed for the FInD and Titmus data (Study 1) and the AIM data (Study 2), and a log-transformation was applied to convert durations to normally distributed data. Threshold estimates were log-transformed to convert the stereo-values to log-stereoacuity. The data for ring scotoma (FInD and 2AFC) and the clinical tests (Studies 1 and 2) remained skewed, and further transformation did not normalize them. Hence, Wilcoxon signed-rank and Kruskal-Wallis tests were performed for these threshold data.

We have also changed the following lines in results section to reflect this:

Line 289-293: For computer-based tests, there was a significant difference in stereo-thresholds between groups (H(1,364)=54.38, p<0.0001; Kruskal-Wallis) and across test types (H(3,362)=56.76, p<0.0001; Kruskal-Wallis). However, there was no significant effect of spatial frequency (H(3,362)=1.98, p=0.58; Kruskal-Wallis).

Line 457-460: For AIM, the thresholds for the 2 runs were not statistically different (z(96)=-0.336, p=0.73; Wilcoxon signed-rank test) and were averaged. The median AIM stereo-thresholds were significantly higher (H(2,58)=18.12, p=0.0001; Kruskal-Wallis) than the stereo-thresholds from Randot or Titmus, by a factor of 1.1-1.3.
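As an aside for readers less familiar with this analysis pattern, the pipeline described in the quoted passages (log-transform skewed durations and thresholds, then fall back to the Wilcoxon signed-rank test for paired runs and the Kruskal-Wallis test for group comparisons where transformation fails to normalize the data) can be sketched with SciPy. All values below are synthetic and purely illustrative; none correspond to the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic, right-skewed "duration" data (illustrative only, not the study data).
durations = rng.lognormal(mean=3.0, sigma=0.5, size=40)

# Log-transform skewed durations toward normality, as in the quoted methods.
log_durations = np.log10(durations)

# Paired comparison of two test runs: Wilcoxon signed-rank on log-thresholds.
run1 = rng.lognormal(2.0, 0.4, 30)
run2 = run1 * rng.lognormal(0.0, 0.1, 30)   # correlated repeat measurement
w_stat, w_p = stats.wilcoxon(np.log10(run1), np.log10(run2))

# Independent-groups comparison across three test types: Kruskal-Wallis.
groups = [rng.lognormal(m, 0.4, 30) for m in (2.0, 2.2, 2.1)]
h_stat, kw_p = stats.kruskal(*groups)

print(f"Wilcoxon p = {w_p:.3f}; Kruskal-Wallis H = {h_stat:.2f}, p = {kw_p:.3f}")
```

Note that Kruskal-Wallis operates on the ranks of the values themselves, so it is invariant to monotone transforms such as the log; the Wilcoxon signed-rank test ranks paired differences and is not, which is one reason the transformation step still matters before the paired comparisons.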

Usually, the F statistic is reported with two degrees of freedom, not a single one as in F(5); this occurs in several further places in the text.

We have included the F number with two degrees of freedom wherever applicable.

Fig 5: Are the correlation values Pearson's correlations? Also, correlation is denoted with the symbol r. In the last review, when I suggested reporting r2 values, I meant only those instances where the correlation was high enough that a regression line could be fit and the r2 value reported. In the same context, Line 39 in the abstract should say correlation instead of agreement.

We used Kendall's rank correlation and have noted that in the Statistical Analysis section. We have added it to the text and the figure legends of Figs 5 and 10.

Following the reviewer’s comments in the first round, we report r2 values in the revised manuscript. We use r2 uniformly throughout the manuscript without assigning a criterion for the use of r or r2.

Reviewer #2: The authors have addressed all the comments of the reviewers. The manuscript is much better organized now. I have nothing further to add to this review.

We thank the reviewer for the appreciation.

Attachment

Submitted filename: Response to Reviewers.docx

pone.0305036.s002.docx (30KB, docx)

Decision Letter 2

Amithavikram R Hathibelagal

23 May 2024

Comparison of Foraging Interactive D-prime and Angular Indication Measurement Stereo with different methods to assess stereopsis

PONE-D-23-27405R2

Dear Dr. Neupane,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Amithavikram R Hathibelagal, Ph.D.

Academic Editor

PLOS ONE

Acceptance letter

Amithavikram R Hathibelagal

29 May 2024

PONE-D-23-27405R2

PLOS ONE

Dear Dr. Neupane,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Amithavikram R Hathibelagal

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response to Reviewers.docx

    pone.0305036.s001.docx (48.8KB, docx)
    Attachment

    Submitted filename: Response to Reviewers.docx

    pone.0305036.s002.docx (30KB, docx)

    Data Availability Statement

    All data files are available from the Zenodo database (DOI: 10.5281/zenodo.10688863).
