PLOS One. 2025 Apr 16;20(4):e0316936. doi: 10.1371/journal.pone.0316936

English version of the Computer Vision Symptom Scale (CVSS17): Translation and Rasch analysis-based cultural adaptation

Mariano González-Pérez 1,¤,*, Carlos Pérez-Garmendia 2, Kathleen Hoang 3, Rosario Susi 4, Beatriz Antona 1, Ana-Rosa Barrio 1, Mark Rosenfield 5
Editor: Marianne Clemence
PMCID: PMC12002468  PMID: 40238743

Abstract

Background

Because the CVSS17 was originally developed in Spanish, the objective of this study was to adapt it linguistically and culturally into English while evaluating its psychometric properties.

Methods

After translating and adapting the CVSS17 to English, 441 participants (aged 18 to 65 years) from a general population, recruited from an on-line panel, completed the English version (CVSS17ENG). To determine the measurement properties of CVSS17ENG, we used the partial credit model. To assess convergent validity, coefficients of correlation between CVSS17ENG and the Ocular Comfort Index or Visual Discomfort Scale were calculated. A subset of 218 subjects was tested for test-retest reliability. In addition, differences between CVSS17ENG and CVSS17 were tested through Differential Item Functioning (a Rasch statistic used to check item bias).

Results

A total of 441 responses to CVSS17ENG (average age, 38.57 years; age range, 19–65; females, 50.24%) showed good fit to the Rasch model, good precision (person separation index = 2.73), and suboptimal targeting (-1.43). Residual principal component analysis suggested multidimensionality, but this was ruled out by a disattenuated correlation coefficient of 0.82, and no DIF according to sex or age was found. The Pearson correlation for CVSS17ENG-VDS was 0.54 (p < 0.01) and the Spearman correlation for CVSS17ENG-Ocular Comfort Index was 0.66 (p < 0.001). For test–retest reliability, the limits of agreement were 9.39 and -8.61. Rasch analysis results were similar for CVSS17 and CVSS17ENG, and there was no evidence of item bias based on gender or age.

Conclusion

The English version of CVSS17 demonstrates comparable performance to the original, indicating its suitability for clinicians and researchers to reliably assess Digital Eye Strain among English-speaking users of screen-based electronic devices.

Introduction

Known as Digital Eye Strain (DES) [1], the set of visual and ocular symptoms associated with prolonged use of screen-based digital devices is prevalent in optometric practice and extends beyond the workplace due to the widespread use of digital devices for both social and professional activities [2].

The first Rasch-based patient-reported outcome instrument (PRO instrument) for measuring DES, the Computer Vision Symptom Scale (CVSS17), was published in 2014 [3]. It consists of 17 items designed to gather information on 15 different symptoms. CVSS17 is available in hard copy and online at the CVSS17 website (cvss17.com), where anyone can access the questionnaire, complete it, and instantly obtain their CVSS17 scores, which can be categorized into five levels of severity [4].

CVSS17 has shown significant advantages over existing assessment tools, notably the widely utilized Computer Vision Syndrome Questionnaire (CVS-Q). Researchers in various clinical studies [5–8] have chosen CVSS17 to overcome the limitations identified in CVS-Q. While CVS-Q exhibits suboptimal item-person targeting [9], CVSS17 shows a higher measurement precision [4]. Furthermore, CVSS17 allows for independent scoring of the two specific components of DES [10]: 1) the vision-related symptoms, also known as Internal Factor Symptoms [11], and 2) the ocular-surface-related symptoms, also known as External Factor Symptoms [11]. Moreover, it provides scores that can be grouped into five different levels of severity [4]. These properties establish CVSS17 as an excellent option for comprehensive and detailed assessments in both research and clinical contexts.

As there are almost 600 million native English speakers worldwide (source: https://www.worlddata.info/languages/english.php), and English is the dominant language in scientific publications, an English version of the CVSS17 was necessary as the original was developed in Spanish. An in-house translated English version was provided in 2014 as supporting information. However, a good translation alone is not sufficient to ensure that the translated version preserves the psychometric properties of the original version. Therefore, a cross-cultural adaptation process was required, involving the development of versions of the original questionnaire that are equivalent to the original but are also adapted linguistically and culturally to a different context [12].

The objective of the present study was to adapt the CVSS17 to English and assess the performance of the new version by examining its main psychometric properties through Rasch analysis. For this purpose, an English version (CVSS17ENG) was developed in accordance with the instructions given in several guidelines [12–14] and previous cross-cultural adaptations [15–17].

Materials and methods

The original CVSS17 comprises 17 items that query about the frequency, intensity and discomfort associated with 15 different symptoms. A scoring table based on Rasch analysis assigns scores to each item, resulting in a final score ranging from 17 (least symptomatic) to 53 (most symptomatic) [3]. Final scores are categorized into five levels of severity, and the questionnaire can be subdivided into two sub-scales to independently evaluate either visual or ocular symptoms [4].
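As a concrete illustration of this scoring scheme, the sketch below sums Rasch-derived points looked up per item and response category. The table fragment, item identifiers, and point values shown are hypothetical; the actual scoring table is the one supplied with the questionnaire (S1 Table).

```python
# Hypothetical fragment of a Rasch-based scoring table:
# (item id, selected category) -> points contributed to the total score.
SCORING_TABLE = {
    ("A2", 1): 1, ("A2", 2): 2, ("A2", 3): 3,
    ("A4", 1): 1, ("A4", 2): 3,
}

def total_score(responses):
    """responses: dict mapping item id -> selected response category."""
    return sum(SCORING_TABLE[(item, cat)] for item, cat in responses.items())

print(total_score({"A2": 2, "A4": 2}))  # -> 5 with this toy table
```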

To accomplish our objectives, we conducted a two-stage investigation:

  • Stage 1. Translation and transcultural adaptation of CVSS17. The original CVSS17 questionnaire was translated and adapted into English to create the English version of CVSS17 (CVSS17ENG). The translated version was pretested on a small sample, assessing the survey questions and conducting a preliminary Rasch analysis.

  • Stage 2. Validity analysis, reliability analysis of the CVSS17ENG, and comparison with the original version. In this stage, we administered CVSS17ENG to a larger sample of the USA population and analyzed responses using the Partial Credit Model in Rasch analysis. The Spanish and English versions were compared to identify any Differential Item Functioning (DIF).

The study details are summarized in Fig 1.

Fig 1. Flow diagram showing the study details.


Stage 1. Translation and transcultural adaptation of CVSS17

In this stage, it was crucial to ensure the cultural equivalence and appropriateness of the translated items. To achieve this, we assembled a research team whose members (see supplementary information S1 File) had previous experience in cross-cultural adaptations of health symptom questionnaires. The team comprised the study coordinator (MG), clinicians who are native speakers of English or Spanish (KH, MR, MG), professional translators (AB and RS), and other professionals able to provide relevant insights on technical terms (RS). The translation and cross-cultural adaptation process involved five steps [13, 14]:

  1. Direct translation by two bilingual translators: The original CVSS17 version underwent an independent translation by two bilingual translators: a professional translator and a staff member from the State University of New York (SUNY). These translators were unfamiliar with the explored concepts and had English as their first language.

  2. Development of a consensus version based on the two direct translations: The team conducted a meeting to consolidate and merge the two direct translations into a unified forward translation.

  3. Back translation by two native Spanish translators: Two additional bilingual translators, whose first language was Spanish and who were blind to the original version, independently translated the consensus version back into Spanish. To avoid information bias, one of them was a professional translator naive to the concepts explored.

  4. Final review meeting: Translators and research group members convened to integrate the four prior translations, resulting in a pre-final version of CVSS17ENG. For CVSS17ENG, the panel decided to use the Rasch-based scoring table from the Spanish CVSS17, which is provided as supporting information (S1 Table).

  5. Pretesting of the consensus version: conducted to assess item difficulty in a small sample recruited by convenience from the University staff. A minimum sample size of 50 was chosen to ensure stable item calibrations and person measures within a 1-logit range, as recommended by Linacre [18]. The inclusion criteria were:

  • Aged from 18 to 65

  • English as the mother tongue

  • Use of screen-based electronic devices (SBED) at least four hours a day and/or over 20 hours a week

Exclusion criteria included:

  • Prior non-refractive visual surgery

  • Active visual or neurologic diseases, medication affecting vision, or any disability hindering questionnaire comprehension.

  • Employment or enrollment in optometry and/or vision sciences.

  • Unwillingness to participate in the study

Furthermore, the convergent validity of CVSS17ENG was preliminarily assessed in this sample by comparing its responses with those of the Visual Discomfort Scale (VDS) [19], which was administered alongside CVSS17ENG in random order.

Stage 2. Validity analysis, repeatability analysis and comparison with the original version

In the second stage, we conducted an online survey using the research platform provided by Prolific Academic Ltd., which is designed to assist researchers in enlisting participants for online research [20]. For the recruitment process, we first specified the demographic characteristics of the target participants: age, location, and language proficiency. In addition, we used Prolific's prescreening questions to filter participants based on the specific criteria relevant to the study (e.g., computer and/or digital device use, occupation, health conditions). All Prolific users fulfilling our criteria were invited to participate in the study. We compensated respondents who completed the survey once with 1.60 USD, and those who completed it twice for repeatability assessment received 4.27 USD.

In this stage, we administered the CVSS17ENG to participants specifically selected from the USA who fulfilled the same inclusion and exclusion criteria used in the development of the Spanish version.

The inclusion criteria for this stage were:

  • Age from 18 to 65

  • Normal or corrected-to-normal vision

  • SBED use for more than 4 hours a day and/or over 20 hours a week

The exclusion criteria for this stage were:

  • Employment or enrollment in optometry and/or vision sciences.

  • Unwillingness to participate in the study

All these criteria were analyzed as reported by users in the Prolific database.

A total of 5386 Prolific users meeting these criteria were invited, and our study included the first 220 women and 221 men who accepted the invitation, as a minimum of 400 people were required for precise calibration of the CVSS17ENG [18] and 200 subjects in each group (male-female, Spanish-English) are recommended for DIF Analysis [21].

To assess the repeatability and convergent validity of CVSS17ENG, two weeks after the first survey, CVSS17ENG was administered a second time, along with an online version of the Ocular Comfort Index (OCI) [22], to a subset of 218 subjects selected at random from the first set of participants. The intraclass correlation coefficient and within-subject standard deviation were then calculated. In addition, the Bland-Altman method was used to determine the mean difference and 95% agreement limits between sessions.

Assuming an expected correlation between CVSS17 and OCI above 0.3, and allowing for five percent missing responses, a minimum sample size of 90 subjects was required for the convergent validity assessment. Following the recommendations of McAlinden et al. [23], and assuming a 10% precision for studying test–retest repeatability between two CVSS17ENG administrations, a minimum sample size of 192 subjects was required.
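The sample-size figure of 90 is consistent with a standard Fisher z-transformation calculation (two-sided α = 0.05, power = 0.80), inflated for 5% missing responses. The paper does not state the exact method used, so the sketch below is an assumption:

```python
from math import ceil, log
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80, missing=0.0):
    """Minimum sample size to detect a correlation r (two-sided test)
    via Fisher's z transformation, inflated for expected missing data.
    A standard textbook formula, not necessarily the study's method."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for power = 0.80
    c = 0.5 * log((1 + r) / (1 - r))            # Fisher z of r
    n = ceil(((z_a + z_b) / c) ** 2 + 3)
    return ceil(n / (1 - missing))

print(n_for_correlation(0.3, missing=0.05))  # -> 90, matching the figure above
```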

This study was part of a project aimed at developing a new PRO instrument called EDVOS-CAT, and the CVSS17 was adapted to English for the validation phase of that instrument. The project protocol (Protocol number: 17.244-E) was approved by the Research Ethics Committee of Hospital Clínico San Carlos (Madrid, Spain). Recruitment for stage 1 began on March 1st, 2018, and ended on March 31st, 2018. For stage 2, it began on May 16th, 2022.

Participants were required to provide electronic informed consent before accessing the questionnaire. This was done by requiring participants to access the study welcome page (https://cvss17.com/english), which clearly communicated the purpose, research objectives, potential risks and discomforts, anticipated benefits, and formal acceptance of informed consent. Participants also had the option to download a copy of the informed consent form and contact information for the research group to address any questions they may have about participating in the study. Participants were required to acknowledge the information provided before accessing the questionnaire.

Analysis strategy

Descriptive data, repeatability assessment, factor analysis and coefficient of determination (R2) between CVSS17 and CVSS17ENG item measures were conducted using IBM SPSS Statistics package version 27.0 (Statistical Package for Social Sciences).

For the analysis of the psychometric properties of the CVSS17ENG and CVSS17 conducted in this study, we employed a unidimensional Item Response Theory (IRT) model: the Partial Credit Model (PCM), which is an extension of the Rasch model useful for polytomous items. PCM analyses were carried out using WINSTEPS (Version 4.7.0.0, Winsteps, Beaverton, Oregon, USA).

As the items in the CVSS17 are polytomous, it was necessary to select between the Partial Credit Model (PCM) and the Rating Scale Model (RSM) prior to analysis. The PCM is considered less restrictive, although it necessitates a larger sample size and may complicate communication with users, comparison with other similar instruments, and so forth [24]. The CVSS17 comprises items asking about various symptoms with different response options. For instance, some items inquire about the intensity of symptoms, whereas others ask about frequency. Moreover, according to the CVSS17 Rasch-based scoring system, these categories contribute differently to the overall scale score [3]. For instance, category 4 of item A32 adds four points, whereas category 4 of item A30 adds two points. Consequently, given that the RSM model assumes a uniform distribution of response categories across all items, the PCM was chosen to ensure the highest degree of accuracy.
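For readers unfamiliar with the PCM, the sketch below computes the category probabilities the model assigns for a polytomous item; unlike the RSM, each item carries its own threshold vector, which is what allows categories to contribute differently to the score. The threshold values used are illustrative only:

```python
from math import exp

def pcm_probs(theta, thresholds):
    """Partial Credit Model: probability of each response category for a
    person at ability `theta` (logits) on an item with its own step
    `thresholds` (one per category step, in logits)."""
    # Cumulative sums of (theta - tau_j); category 0 has an empty sum = 0.
    cums = [0.0]
    for tau in thresholds:
        cums.append(cums[-1] + (theta - tau))
    expos = [exp(c) for c in cums]
    total = sum(expos)
    return [e / total for e in expos]

# Illustrative item with three steps (four response categories).
probs = pcm_probs(theta=0.0, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```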

As in previous cross-cultural adaptations of questionnaires [15–17,22], Rasch analysis results were used to determine:

  1. Rating scale performance: assessed by examining the order of category thresholds. Disordering happens when response options deviate from the expected hierarchical order.

  2. Item fit statistics (Infit and Outfit mean square fit): indicate the degree to which items in a domain align with Rasch model expectations. According to the criteria proposed by Khadka et al. [25], the recommended range for infit and outfit mean square values was (0.7, 1.3).

  3. Dimensionality: the scale is deemed unidimensional when there is a single latent variable of interest and the measurement focuses on the level of this variable [26]. To assess the dimensionality of the CVSS17ENG, a Principal Component Analysis (PCA) of the residuals was conducted in WINSTEPS to calculate the raw variance explained by measures and the eigenvalue of the first contrast. Cases where less than 50% of the raw variance is explained by measures and/or the eigenvalue of the first contrast is higher than two are considered indicative of potential multidimensionality [25,27]. If this occurs, further analyses are required. For this study, we decided to calculate the disattenuated correlation coefficient between the item clusters obtained from the PCA analysis [28]; according to Kim et al. [29], values above 0.8 indicate no benefit in considering the test as multidimensional.

  4. Person separation index (PSI): The Rasch-based PSI, a reliability index -analogous to Cronbach’s α used in Classical Test Theory- ranging from 0 to 1, implies acceptable reliability when its value exceeds 0.8 [30].

  5. Levels of performance: the number of different levels of performance was calculated according to the method described by Wright [31].

  6. Targeting: The alignment between the difficulty of the items and the participants’ visual abilities was determined by calculating the difference between the average item difficulty and the average symptom level of the subjects [25,27].

  7. Differential item functioning (DIF): We examined each item to check whether subgroups (male vs. female; respondents aged < 40 years vs. ≥ 40 years) answered it differently. For this purpose, we conducted a DIF analysis using WINSTEPS, which is based on two methods [32]:

  • The Mantel-Haenszel method, which estimates the log odds of DIF size and significance from cross-tabulations of observations in the two groups.

  • The logit-difference (logistic regression) method, which estimates the difference in Rasch item difficulties between the two groups, holding all other factors constant.

According to Khadka et al. [33], DIF contrast (i.e., the difference in item difficulty between the two groups) was classified as no-DIF for contrasts less than 0.50 logits, minimal for contrasts between 0.50 and 1.0 logits, and notable for contrasts greater than 1.0 logits.
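These cut-offs can be summarized in a small helper function (a sketch; classification of the boundary values follows the wording quoted above):

```python
def classify_dif(contrast_logits):
    """Classify a DIF contrast (difference in item difficulty between two
    groups, in logits) using the Khadka et al. cut-offs quoted in the text."""
    c = abs(contrast_logits)
    if c < 0.50:
        return "no DIF"
    elif c <= 1.0:
        return "minimal DIF"
    return "notable DIF"

print(classify_dif(0.86))  # a contrast of 0.86 logits -> 'minimal DIF'
```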

Moreover, because testing for DIF is a useful way to validate questionnaire translations [34], we used DIF analysis to test whether the CVSS17ENG items were equivalent to the original. DIF for an item was considered a cross-cultural or translational-related issue for that particular translation [35].

For readers unfamiliar with these variables, a more detailed description of each of the assessed variables is provided elsewhere [25,27].

The CVSS17 has a distinctive feature in that it provides a reliable overall score for the set of DES symptoms, while also allowing the independent assessment of either the visual or the ocular symptoms attributable to computer use [4]. These subscales are called the Internal Symptom Factor (ISF), which comprises items A2, A22, A28, A30, A33, C21 and C24, and the External Symptom Factor (ESF), which encompasses items A4, A9, A17, A20, A21, A32, A33, B7, B8, C16 and C23.

To confirm that the CVSS17ENG preserves the domain structure proposed for the CVSS17, a PCA of the residuals was conducted for each subscale to verify its unidimensionality.

Results

English version of CVSS17 (CVSS17ENG)

The final version of the questionnaire arising from the pretest is available as supporting information (S2 File), along with its scoring table (S1 Table).

Pretesting.

Participants for this part of the study were 61 subjects (age, 37.62 ±  12.52 years; range, 23–64 years; females, 57.4%; presbyopes: 41.0%) who completed the CVSS17ENG. Item statistics for this preliminary Rasch analysis and the most relevant topics discussed in the Final Review Meeting are shown as supporting information (S1 File).

Association between CVSS17ENG and VDS.

Table 1 summarizes the main results of CVSS17ENG and VDS in the pretest.

Table 1. Summary of CVSS17ENG and Visual Discomfort Scale (VDS) results in the pretest sample, expressed as raw scores and logits.
| | CVSS17ENG raw score | CVSS17ENG (logits) | VDS raw score | VDS (logits) |
| Valid responses | 61 | 61 | 56 | 56 |
| Missing responses | 0 | 0 | 5 | 5 |
| Mean | 29.59 | -1.34 | 36.02 | -1.70 |
| Median | 30 | -0.96 | 33 | -1.88 |
| Standard deviation | 6.50 | 1.83 | 8.43 | 1.34 |
| Minimum | 17 | -7.01 | 24 | -4.83 |
| Maximum | 47 | 2.57 | 56 | 0.64 |
| Interquartile range | 25 to 34 | -2.20 to 0.15 | 30.25 to 42 | -2.35 to -0.67 |

A Kolmogorov-Smirnov test of the CVSS17ENG and VDS scores (expressed in logits) indicated a normal distribution of both sets of measures (P > 0.05). Accordingly, we calculated the Pearson correlation coefficient between the total scores for CVSS17ENG and VDS at 0.54 (p < 0.01) (Fig 2); this value can be considered evidence of convergent validity [25].
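For reference, the Pearson coefficient used here is the standard product-moment correlation, appropriate because both score distributions were normal; it can be sketched in a few lines of Python:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Toy example: a perfect linear relation gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # -> 1.0
```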

Fig 2. Scatter plot of correlation between CVSS17ENG and VDS.


Values are expressed in logits.

Stage 2 results

For Stage 2, we utilized the responses from 441 questionnaires. To exclude the responses of subjects who did not answer the questionnaire sincerely (e.g., providing random answers), as we did in the development of CVSS17, we eliminated questionnaires with an Outfit > 2.5. Consequently, the responses of 19 participants were excluded from our dataset.
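A minimal sketch of this person-level quality filter, with illustrative field names: respondents whose Outfit mean-square exceeds 2.5 (suggesting careless or random responding) are dropped before calibration.

```python
# Each record pairs a respondent id with their Outfit mean-square
# statistic from the Rasch analysis (field names are illustrative).
responses = [
    {"id": 1, "outfit": 0.9},
    {"id": 2, "outfit": 2.8},   # likely random responder
    {"id": 3, "outfit": 1.4},
]

# Keep only respondents with Outfit <= 2.5, as described in the text.
kept = [r for r in responses if r["outfit"] <= 2.5]
print([r["id"] for r in kept])  # -> [1, 3]
```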

Finally, 422 questionnaires (50.24% female, 39.10% presbyopes, mean age ±  SD was 38.57 ±  11.17, interquartile range for age was 30 to 46) were used in the analysis. The mean ±  standard deviation CVSS17ENG score was 28.37 ±  7.22 and interquartile range was (23, 33).

Item fit statistics.

Item fit statistics, item measure (difficulty, in logits) and point bi-serial correlation for CVSS17ENG are displayed in Table 2.

Table 2. CVSS17ENG item fit statistics.
Item Measure Infit MnSQ Outfit MnSQ Pt. Bis
A2 0.57 0.97 0.93 0.61
A4 -2.25 0.79 0.77 0.77
A9 0.21 0.79 0.78 0.76
A17 -0.41 0.97 0.96 0.72
A20 -0.12 1.03 1.03 0.70
A21 -0.87 0.99 1.00 0.68
A22 0.57 0.93 0.86 0.64
A28 0.71 1.25 1.13 0.52
A30 2.22 0.83 0.32 0.40
A32 0.60 1.06 1.08 0.65
A33 -1.6 1.06 1.04 0.66
B7 -0.91 1.05 1.06 0.55
B8 1.14 0.96 0.82 0.58
C16 -0.68 1.13 1.17 0.63
C21 0.32 0.96 1.01 0.63
C23 0.34 1.01 0.96 0.62
C24 0.15 1.22 1.20 0.57

Results are ordered by item Id (left column). Measure = item difficulty (in logits); MnSQ = mean Square; Pt. Bis = point bi-serial correlation.

All items fell within the interval (0.7, 1.3) recommended by other authors [25,27], with the exception of A30, which had an Outfit of 0.32.

Dimensionality.

Principal component analysis (PCA) of CVSS17ENG scores revealed that 49.8% of the raw variance was explained by CVSS17ENG measures. The first contrast eigenvalue was 2.19. To rule out multidimensionality [25,27], we examined the disattenuated correlation coefficient between the first and second contrast, which was 0.82. This indicates a 67% shared variance in person measures, so for practical purposes, both contrasts were considered as different branches of the same measures [29,36] and CVSS17ENG can be considered unidimensional [37].
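The 67% figure is simply the squared disattenuated correlation; a sketch of both computations follows (the reliability values passed to the helper are illustrative, not from the study):

```python
from math import sqrt

def disattenuated_r(r_xy, rel_x, rel_y):
    """Correlation between two cluster measures corrected for their
    imperfect reliabilities: r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / sqrt(rel_x * rel_y)

r_dis = 0.82                      # value reported for the CVSS17ENG clusters
shared_variance = r_dis ** 2      # proportion of shared person-measure variance
print(round(shared_variance, 2))  # -> 0.67, the "67% shared variance" above

# Illustrative use of the correction itself:
print(round(disattenuated_r(0.6, 0.8, 0.9), 3))
```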

Person separation index and performance levels.

The Person Separation Index for CVSS17ENG was 2.73, indicating a reliability of 0.88, distinguishing 3.97 score strata. Using the Wright method, a sample-independent method [31], CVSS17ENG was able to distinguish 5.7 symptom levels.
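Both figures follow from standard Rasch formulas: reliability = G²/(1 + G²) and number of distinct strata = (4G + 1)/3, for a separation index G. A quick numeric check:

```python
def reliability_from_separation(g):
    """Rasch reliability from the person separation index G: G^2 / (1 + G^2)."""
    return g ** 2 / (1 + g ** 2)

def strata(g):
    """Statistically distinct score strata discriminated by the scale: (4G + 1) / 3."""
    return (4 * g + 1) / 3

g = 2.73  # person separation index reported for CVSS17ENG
print(round(reliability_from_separation(g), 2))  # -> 0.88
print(round(strata(g), 2))                       # -> 3.97
```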

Targeting.

The estimated targeting value was -1.43 logits. The item-person map (Fig 3) shows that the CVSS17ENG items were too demanding for the symptom level in this sample, as expected given that we assessed a population-based sample, which included many individuals with a low level of symptoms.

Fig 3. Rasch item-person map, displaying the self-reported symptoms level of the patients in our study (left side) along with the corresponding item difficulty (right side).


Differential item functioning (DIF) by sex and age.

Minimal DIF (DIF contrast = 0.50 for women) was found for item A21 (“Did your eyes burn?”). No more items showed either minimal or notable DIF for sex, so we did not investigate this further as, according to Khadka et al. [25], one item with minimal DIF does not indicate a substantial reduction in the quality of a questionnaire.

According to age group (presbyopes vs. non-presbyopes), two items showed minimal DIF: item A30 (“Did the letters appear double?”), with a DIF contrast of 0.86 for presbyopes, and item B7 (“Watery eyes”), with a DIF contrast of 0.56 for non-presbyopes. As indicated by Khadka et al. [25], the identification of minimal DIF in two items does not imply a substantial deterioration in the questionnaire’s quality.

English version vs. Spanish version (based on DIF analysis and R2 calculation)

We compared the responses of the second stage participants with the data from a sample of Spanish people used in a prior study [4]. The main characteristics of all the samples used in the study are summarized in Table 3.

Table 3. Main characteristics of the samples used in the study.
| | CVSS17 | CVSS17ENG pre-test sample | CVSS17ENG validation sample | CVSS17ENG test-retest sample |
| n | 796 | 61 | 422 | 218 |
| Age: mean | 43.93 | 37.62 | 38.57 | 42.06 |
| Age: S.D. | 10.05 | 12.52 | 11.17 | 11.78 |
| Age: interquartile range | 36 to 52 | 27 to 50 | 30 to 46 | 33 to 52 |
| Presbyopes (%) | 64.45 | 40.98 | 39.10 | 54.63 |
| Female (%) | 58.04 | 57.38 | 50.24 | 46.51 |
| Location | Spain | USA | USA | USA |
| Mother tongue | Spanish | English | English | English |

DIF analysis showed minimal DIF in four items: in item A17, the DIF contrast was 0.53 for the English version; in item A28, it was 0.90 for the Spanish version; in item A32, it was 0.75 for the English version; and in item C24, it was 0.64 for the Spanish version. According to the mentioned guidelines [25], four items with minimal DIF are acceptable, making both versions equivalent. In addition, the coefficient of determination (R2) between the item measures derived from the DIF analysis was 0.856, meaning both versions shared 85.6% of their variability.

Psychometric properties of CVSS17ENG.

The main properties of CVSS17ENG compared with Rasch model expectations are shown in Table 4.

Table 4. Comparison among CVSS17ENG, the original CVSS17 and Rasch model expectations.
| Parameter | Rasch model expectation | CVSS17ENG | CVSS17 |
| Number of items | - | 17 | 17 |
| Response categories ordering | Ordered | Ordered | Ordered |
| Person separation index (reliability) | > 2.0 (> 0.80) | 2.73 (0.88) | 2.85 (0.89) |
| Item separation | > 3.0 | 8.91 | 8.61 |
| PCA analysis: eigenvalue of the first contrast | < 2.0 | 2.19 | 1.37 |
| Number of items with Infit outside (0.7, 1.3) | 0 | 0 | 0 |
| Number of items with Outfit outside (0.7, 1.3) | 0 | 0 | 0 |
| Number of items with DIF for gender of > 1.0 logits and p < 0.05 | None | None | None |
| Number of items with DIF for age group of > 1.0 logits and p < 0.05 | None | None | None |
| Targeting | < 1.0 | 1.43 | 0.89 |
| Intraclass correlation coefficient (ICC) | - | 0.79 (0.74, 0.84) | 0.85 (0.80, 0.89) |
| Coefficient of repeatability (from Bland-Altman plot) | - | 9.00 | 8.14 |

DIF =  differential item functioning; PCA =  principal component analysis of the residuals.

According to existing guidelines [25,27], these results suggest that overall quality of CVSS17ENG is good.

Convergent validity and repeatability.

The Spearman rho correlation between CVSS17ENG and the Ocular Comfort Index was calculated at 0.66 (p < 0.001) (Fig 4).

Fig 4. Scatter plot of the association between CVSS17ENG and Ocular Comfort Index (OCI).


OCI and CVSS17ENG scores are expressed in logits.

The correlation coefficient values obtained can be considered evidence of convergent validity [25,27]; the overall results of the convergent validity assessment are summarized in Table 5.

Table 5. Summary of convergent validity assessment results.
| | CVSS17ENG vs. VDS | CVSS17ENG vs. OCI |
| Sample location | USA | USA |
| Test used for comparison | Visual Discomfort Scale (VDS) | Ocular Comfort Index (OCI) |
| Domain assessed | Visual symptoms | Ocular symptoms |
| Coefficient of correlation | Pearson | Spearman rho |
| Coefficient value (r) | 0.54 | 0.66 |
| 95% confidence interval for r | (0.34, 0.69) | (0.58, 0.73) |

For the test–retest assessment, the time interval between the two administrations was 14.03 ± 0.11 days, the two-way single-measure intraclass correlation coefficient for test–retest repeatability was 0.794 (95% CI, 0.737–0.840), and the within-subject standard deviation was 7.15. The mean difference between sessions was 0.39, and the limits of agreement including 95% of the differences were 9.39 and -8.61 (Fig 5).
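A minimal sketch of the Bland-Altman computation on toy data (not the study's data), assuming the usual mean ± 1.96 SD limits of agreement:

```python
from statistics import mean, stdev

def bland_altman(first, second):
    """Mean difference and 95% limits of agreement between two
    administrations of the same questionnaire. A generic sketch, not
    the exact procedure used in the study."""
    diffs = [a - b for a, b in zip(first, second)]
    md = mean(diffs)
    sd = stdev(diffs)
    return md, md - 1.96 * sd, md + 1.96 * sd

# Toy paired scores for illustration
session1 = [28, 31, 25, 40, 33]
session2 = [27, 33, 24, 38, 34]
md, lo, hi = bland_altman(session1, session2)
print(round(md, 2))  # mean difference between sessions
```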

Fig 5. Bland-Altman plot for CVSS17ENG.


The dotted line represents the mean difference between scores obtained from completing the questionnaire on two occasions. The solid lines represent the lower and upper 95% limits of agreement. Scores are expressed in raw score units.

Domain structure assessment.

For ISF, raw variance explained by measures was 51% and the Eigenvalue for the first contrast was 1.95, so ISF can be considered as unidimensional.

For ESF, the raw variance explained by measures was 53% and the Eigenvalue for the first contrast was 2.21. To rule out multidimensionality, we examined the disattenuated correlation coefficient between the first and second contrast, which was 0.78, indicating a 61% shared variance in person measures. Therefore, for practical purposes, both contrasts were considered as different branches of the same measures [36] and the subscale was judged unidimensional [37].

Discussion

CVSS17ENG was developed through a standardized, validated process to ensure its equivalence with CVSS17 [17]. This will enable cross-cultural research in DES and will allow comparability of data obtained from different communities. Optometrists and ophthalmologists can now use this valid and reliable instrument, fostering a more consistent assessment of DES symptoms across language barriers.

The higher accuracy of Rasch-validated instruments [38] helps clinicians make informed decisions about patient care, based on robust and comparable data. Moreover, CVSS17ENG addresses a critical gap in existing PRO instruments for assessing DES in English-speaking populations, as it can independently measure the two dimensions of DES: ocular symptoms and visual symptoms. This could enhance the precision and efficacy of therapeutic strategies for DES, because it allows interventions to be tailored to the unique aspects of visual discomfort experienced by individuals. In addition, epidemiological studies on DES may now use CVSS17ENG to analyze differences in ocular and visual symptoms among different populations and study their association with possible risk factors.

It is important to note that DES should not be considered a disease or physiological condition. Instead, it is a set of symptoms that arise from prolonged use of screen-based digital devices. Unlike diseases, DES does not follow a binary presence/absence paradigm, but manifests along a spectrum of severity. Recognizing this subtle distinction is essential for accurate assessment and effective management. To determine the number of statistically distinct score levels (in our case, symptom scores) that a scale can discriminate, PCM analysis provides a reliability indicator, the Person Separation Index (PSI), which can be used to calculate the theoretical number of levels discriminated by the scale. Based on these data, the CVSS17ENG should be able to distinguish 3.97 levels [39]. However, this method may underestimate the effectiveness of the test if the person distribution is skewed towards the more “symptomatic” side of the scale [40], a common finding in PRO instruments for measuring symptoms. In such cases, the Wright method [40] helps to determine the true performance of the test. Our results showed that the CVSS17ENG can measure five different levels of performance. Unlike other existing questionnaires that categorize DES scores into dichotomous groups, the CVSS17ENG defines five Rasch-derived levels of symptoms, allowing for a more detailed analysis that recognizes and quantifies subtle variations in symptom severity. This unique feature enables the differentiation and subsequent statistical study of different levels of DES symptoms, improving diagnostic accuracy and providing a framework for studying the progression of DES in clinical settings. In addition, researchers will gain a better understanding of the impact of DES on individuals in different contexts and demographics, which would be particularly useful for epidemiological studies.

During the pre-test, we received no negative feedback regarding the comprehensibility of any item, and only two responses to a single item were missing.

The assessment of convergent validity is essential when investigating the validity of a new PRO instrument, as it involves comparing the measures of the new test with those of another test measuring a closely related concept. As DES comprises one group of symptoms related to ocular-surface complaints ("ocular symptoms") and another related to visual comfort ("visual symptoms"), we assessed the convergent validity of CVSS17ENG by comparing its responses with those of a questionnaire measuring visual discomfort, the VDS [41], and of another designed to assess dry eye, the OCI [42]. The correlation coefficient between the VDS and CVSS17 [3], 0.54, was close to that calculated for the VDS versus CVSS17ENG, 0.60. The correlation between the Ocular Surface Disease Index (OSDI) and CVSS17 [43], 0.65, was almost equal to that obtained for the OCI versus CVSS17ENG (0.66). These findings offer evidence of the convergent validity of CVSS17ENG and show that the shared variance of CVSS17ENG with other scales measuring related constructs is similar to that observed for CVSS17.
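For illustration, the two correlation coefficients used in this comparison can be computed as in the following sketch. The paired scores here are hypothetical; the study's actual responses are available in the Figshare repository listed under Data Availability.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

def spearman(x, y):
    """Spearman rank correlation (simple rank transform; the
    hypothetical data below contain no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical paired questionnaire scores for ten respondents
cvss17eng = [12, 25, 8, 31, 19, 22, 15, 28, 10, 34]
vds       = [10, 20, 9, 26, 15, 21, 12, 24, 11, 30]
oci       = [14, 27, 7, 33, 18, 20, 16, 29, 9, 35]

print(f"Pearson r (VDS):    {pearson(cvss17eng, vds):.2f}")
print(f"Spearman rho (OCI): {spearman(cvss17eng, oci):.2f}")
```

Pearson is appropriate for the interval-scaled VDS comparison, while Spearman suits the ordinal OCI comparison, matching the choices reported in the abstract.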

As recommended by Bradley and Massof [44], we used PCM results to compare item psychometric properties between CVSS17ENG and the original CVSS17. The results of this comparison confirmed the similarity between the two versions. Furthermore, we conducted a Rasch-based DIF analysis to compare the items of the Spanish version with those of the English version. This comparison is crucial for ensuring validity, as the absence of DIF indicates that both versions measure the same items in the same way. The presence of DIF in an item would imply that the measures obtained for the symptom assessed by that item are not equivalent across the two versions. Additionally, between-version DIF analysis is especially helpful in the translation review, as DIF in any item may indicate an inaccurate translation. Thus, the lack of significant DIF between the English and Spanish versions supports the validity of the CVSS17ENG and the quality of its translation.

The test-retest repeatability of CVSS17ENG was comparable to that of the Spanish version. Those seeking to maximize accuracy may do so by administering the questionnaire on two separate occasions instead of once.
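The limits of agreement reported for test-retest reliability follow the standard Bland-Altman construction (mean difference plus or minus 1.96 standard deviations of the differences). A minimal sketch, using hypothetical test-retest scores:

```python
from statistics import mean, stdev

def limits_of_agreement(test, retest):
    """Bland-Altman 95% limits of agreement:
    mean difference +/- 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(test, retest)]
    d, s = mean(diffs), stdev(diffs)  # stdev = sample SD (n - 1)
    return d - 1.96 * s, d + 1.96 * s

# Hypothetical scores for five respondents on two occasions
first  = [10, 11, 12, 13, 14]
second = [9, 12, 10, 15, 14]
lo, hi = limits_of_agreement(first, second)
print(f"LoA: ({lo:.2f}, {hi:.2f})")
```

Applied to the study's data, this construction yields the limits of -8.61 and 9.39 reported in the abstract.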

CVSS17ENG fitted the model (Infit: 1.00, Outfit: 0.95), but the Outfit value for item A30 ("Did the letters appear double?") was 0.32, suggesting limited measurement value and redundant information [45,46]. However, since no other CVSS17ENG item showed misfit, this can be considered acceptable [25,27]. Therefore, item A30 remained unchanged in the final version of CVSS17ENG.
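For readers less familiar with Rasch fit statistics, infit and outfit mean squares are built from standardized residuals (observed minus expected response, scaled by the model variance). A minimal sketch of the standard definitions; in practice the Rasch software computes these over all person-item responses, and the values in the test below are illustrative only:

```python
def outfit_mnsq(residuals, variances):
    """Outfit: unweighted mean of squared standardized residuals,
    z^2 = residual^2 / model variance. Sensitive to outliers."""
    z2 = [(r * r) / v for r, v in zip(residuals, variances)]
    return sum(z2) / len(z2)

def infit_mnsq(residuals, variances):
    """Infit: information-weighted mean square (weights = model
    variances), less sensitive to off-target responses."""
    return sum(r * r for r in residuals) / sum(variances)
```

Values near 1.0 indicate data that fit the model; values well below 1.0, as for item A30, indicate responses that are more predictable than the model expects, i.e. redundancy rather than noise.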

Principal Component Analysis returned an eigenvalue higher than two, which was not observed during CVSS17 development. This could be because we used BIGSTEPS instead of WINSTEPS, and it could also be explained by differences in sample composition. Nevertheless, the unidimensionality of CVSS17ENG was confirmed, for statistical purposes, through the disattenuated correlation coefficient between the first and second contrasts from the Principal Component Analysis.
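The disattenuated correlation applies the classical correction for attenuation: the observed correlation between person measures on the two item clusters is divided by the square root of the product of their reliabilities. A minimal sketch; the observed correlation and reliability values used in the example are hypothetical, chosen only to reproduce a value near the 0.82 reported in the abstract:

```python
from math import sqrt

def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Classical correction for attenuation:
    r_true = r_observed / sqrt(reliability_x * reliability_y)."""
    return r_xy / sqrt(rel_x * rel_y)

# Hypothetical inputs: observed r = 0.62 between the cluster
# measures, with cluster reliabilities of 0.70 and 0.82
print(round(disattenuated_correlation(0.62, 0.70, 0.82), 2))  # 0.82
```

A disattenuated value close to 1.0 indicates that the two contrasts measure essentially the same dimension, supporting unidimensionality despite the elevated eigenvalue.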

One limitation of the study is the lack of clinical information about the participants, such as whether they wore glasses or had any eye disease. As a result, it was not possible to evaluate the impact of such clinical data on CVSS17ENG performance.

Conclusions

The adaptation of the CVSS17 into English (CVSS17ENG) was rigorously conducted, ensuring linguistic equivalence and psychometric robustness.

Rasch analysis confirmed that CVSS17ENG is reliable, unidimensional, and performs equivalently to the original Spanish version.

Minimal DIF suggests cross-cultural equivalence, and convergent validity was established through correlations with the VDS and the OCI.

The study confirms that CVSS17ENG is a reliable tool for assessing DES in English-speaking populations in both research and clinical settings.

Supporting information

S1 File. Expert committee report, describing the final review meeting for the cross-cultural adaptation.

The name and role of each participant are provided, along with a summary of the discrepancies discussed in the session. Summary statistics for the Rasch analysis conducted on the pre-test responses are also displayed.

(PDF)

pone.0316936.s001.pdf (787.1KB, pdf)
S2 File. Hard copy version of the English version of Computer Vision Symptom Scale (CVSS17ENG).

PDF version of the CVSS17ENG, for those interested in distributing it as a hard copy.

(PDF)

pone.0316936.s002.pdf (93.6KB, pdf)
S1 Table. Scoring chart for the CVSS17ENG.

Each CVSS17ENG item is identified by a capital letter and a one- or two-digit number, and each response option for each item is identified by a number. This table is needed to determine the score each subject receives for each question when using the hard-copy version (respondents are scored automatically in the online version). For example, a respondent choosing option 6 for the first question receives three points; choosing option 3 for item A30 scores one point. The final score for each subject is obtained using the formula at the bottom of the table, expressed both in CVSS17 points and in logits.

(PDF)

pone.0316936.s003.pdf (92.6KB, pdf)
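The lookup logic described in the S1 Table caption can be sketched as follows. The two chart entries shown are taken from the caption's examples; the item code "A1" for the first question and all other entries are our assumptions, with the real chart in S1 Table.

```python
# Fragment of a hypothetical scoring chart; the full chart is in S1 Table.
# Keys are (item_id, chosen_option); values are the points awarded.
SCORE_CHART = {
    ("A1", 6): 3,   # option 6 on the first question scores three points
    ("A30", 3): 1,  # option 3 on item A30 scores one point
}

def raw_score(responses):
    """Sum the chart points for a dict of {item_id: chosen_option}."""
    return sum(SCORE_CHART[(item, option)]
               for item, option in responses.items())

print(raw_score({"A1": 6, "A30": 3}))  # 4
```

The raw total would then be converted to CVSS17 points or logits with the formula at the bottom of the table.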

Acknowledgments

The authors thank the staff at SUNY for their cooperation in the first stage of the study.

Data Availability

Data is available at https://doi.org/10.6084/m9.figshare.25909171.v1.

Funding Statement

BA, AB, MG-P and RS received a grant from the Instituto de Salud Carlos III through Project PI18/00374 (co-funded by European Regional Development Fund “A way to make Europe”). DXC Technology Company, Madrid provided support in the form of a salary for CP-G. Downtown Eyecare, New York provided support in the form of a salary for KH. ALAIN AFFLELOU Óptico provided support in the form of a salary for MG-P. The specific roles of these authors are articulated in the “author contributions” section. The funders did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Wolffsohn JS, Lingham G, Downie LE, Huntjens B, Inomata T, Jivraj S, et al. TFOS lifestyle: Impact of the digital environment on the ocular surface. Ocul Surf. 2023;28:213–52. doi: 10.1016/j.jtos.2023.04.004 [DOI] [PubMed] [Google Scholar]
  • 2.Sheppard AL, Wolffsohn JS. Digital eye strain: prevalence, measurement and amelioration. BMJ Open Ophthalmol. 2018;3(1):e000146. doi: 10.1136/bmjophth-2018-000146 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.González-Pérez M, Susi R, Antona B, Barrio A, González E. The Computer-Vision Symptom Scale (CVSS17): development and initial validation. Invest Ophthalmol Vis Sci. 2014;55(7):4504–11. doi: 10.1167/iovs.13-13818 [DOI] [PubMed] [Google Scholar]
  • 4.González-Pérez M, Susi R, Barrio A, Antona B. Five levels of performance and two subscales identified in the computer-vision symptom scale (CVSS17) by Rasch, factor, and discriminant analysis. PLoS One. 2018;13(8):e0202173. doi: 10.1371/journal.pone.0202173 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Antona B, Barrio AR, Gascó A, Pinar A, González-Pérez M, Puell MC. Symptoms associated with reading from a smartphone in conditions of light and dark. Appl Ergon. 2018;68:12–7. doi: 10.1016/j.apergo.2017.10.014 [DOI] [PubMed] [Google Scholar]
  • 6.De-Hita-Cantalejo C, Sánchez-González J-M, Silva-Viguera C, Sánchez-González MC. Tweenager computer visual syndrome due to tablets and laptops during the postlockdown COVID-19 Pandemic and the Influence on the Binocular and Accommodative System. J Clin Med. 2022;11(18):5317. doi: 10.3390/jcm11185317 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.De-Hita-Cantalejo C, García-Pérez Á, Sánchez-González J-M, Capote-Puente R, Sánchez-González MC. Accommodative and binocular disorders in preteens with computer vision syndrome: a cross-sectional study. Ann N Y Acad Sci. 2021;1492(1):73–81. doi: 10.1111/nyas.14553 [DOI] [PubMed] [Google Scholar]
  • 8.Molina-Aragonés J, Lemonche-Aguilera C, Cirilo S-S, López-Pérez C. Cuestionario CVSS17 y vigilancia de la salud de trabajadores profesionalmente expuestos a pantallas de visualización. Medicina y seguridad del trabajo. 2018;64(253):329–44. [Google Scholar]
  • 9.Seguí M del M, Cabrero-García J, Crespo A, Verdú J, Ronda E. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace. J Clin Epidemiol. 2015;68(6):662–73. doi: 10.1016/j.jclinepi.2015.01.015 [DOI] [PubMed] [Google Scholar]
  • 10.Coles-Brennan C, Sulley A, Young G. Management of digital eye strain. Clin Exp Optom. 2019;102(1):18–29. doi: 10.1111/cxo.12798 [DOI] [PubMed] [Google Scholar]
  • 11.Sheedy JE, Hayes JN, Engle J. Is all asthenopia the same?. Optom Vis Sci. 2003;80(11):732–9. doi: 10.1097/00006324-200311000-00008 [DOI] [PubMed] [Google Scholar]
  • 12.Arafat S, Chowdhury H, Qusar M, Hafez M. Cross cultural adaptation and psychometric validation of research instruments: a methodological review. J Behav Health. 2016;5(3):129. doi: 10.5455/jbh.20160615121755 [DOI] [Google Scholar]
  • 13.Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). 2000;25(24):3186–91. doi: 10.1097/00007632-200012150-00014 [DOI] [PubMed] [Google Scholar]
  • 14.Wild D, Grove A, Martin M, Eremenco S, McElroy S, Verjee-Lorenz A, et al. Principles of good practice for the translation and cultural adaptation Process for Patient-Reported Outcomes (PRO) Measures: report of the ISPOR task force for translation and cultural adaptation. Value Health. 2005;8(2):94–104. doi: 10.1111/j.1524-4733.2005.04054.x [DOI] [PubMed] [Google Scholar]
  • 15.Adnan TH, Mohamed Apandi M, Kamaruddin H, Salowi MA, Law KB, Haniff J, et al. Catquest-9SF questionnaire: validation of Malay and Chinese-language versions using Rasch analysis. Health Qual Life Outcomes. 2018;16(1):5. doi: 10.1186/s12955-017-0833-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.González-Pérez M, Pérez-Garmendia C, Barrio AR, García-Montero M, Antona B. Spanish Cross-cultural adaptation and rasch analysis of the Convergence Insufficiency Symptom Survey (CISS). Transl Vis Sci Technol. 2020;9(4):23. doi: 10.1167/tvst.9.4.23 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Gothwal VK, Muthineni VV, Khadka J, Lamoureux EL, Pesudovs K, Cataract Outcomes Study Group. Psychometric properties of an indian translation of the vision-related activity limitation item bank in Cataract. Optom Vis Sci. 2019;96(12):910–9. doi: 10.1097/OPX.0000000000001459 [DOI] [PubMed] [Google Scholar]
  • 18.Linacre J. Sample size and item calibration stability. Rasch Measurement Transactions. 1994;7(4):328. [Google Scholar]
  • 19.Conlon EG, Lovegrove WJ, Chekaluk E, Pattison PE. Measuring visual discomfort. Visual Cognition. 1999;6(6):637–63. doi: 10.1080/135062899394885 [DOI] [Google Scholar]
  • 20.What is Prolific and how does it work? 2023.
  • 21.Lei P-W, Li H. Small-Sample DIF Estimation Using SIBTEST, Cochran’s Z, and Log-Linear Smoothing. Applied Psychological Measurement. 2013;37(5):397–416. doi: 10.1177/0146621613478150 [DOI] [Google Scholar]
  • 22.Barrio AR, González-Pérez M, Heredia-Pastor C, Enríquez-Fuentes J, Antona B. Spanish Cross-cultural adaptation, rasch analysis and validation of the ocular comfort index (OCI) Questionnaire. Int J Environ Res Public Health. 2022;19(22):15142. doi: 10.3390/ijerph192215142 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.McAlinden C, Khadka J, Pesudovs K. Precision (repeatability and reproducibility) studies and sample-size calculation. J Cataract Refract Surg. 2015;41(12):2598–604. doi: 10.1016/j.jcrs.2015.06.029 [DOI] [PubMed] [Google Scholar]
  • 24.Linacre J. Comparing and choosing between Partial Credit Models (PCM) and Rating Scale Models (RSM). Rasch Measurement Transactions. 2000;14(3):768. [Google Scholar]
  • 25.Khadka J, McAlinden C, Pesudovs K. Quality assessment of ophthalmic questionnaires: review and recommendations. Optom Vis Sci. 2013;90(8):720–44. doi: 10.1097/OPX.0000000000000001 [DOI] [PubMed] [Google Scholar]
  • 26.Wu M, Adams R. Applying the rasch model to psycho-social measurement: a practical approach. 2007.
  • 27.Pesudovs K, Burr JM, Harley C, Elliott DB. The development, assessment, and selection of questionnaires. Optom Vis Sci. 2007;84(8):663–74. doi: 10.1097/OPX.0b013e318141fe75 [DOI] [PubMed] [Google Scholar]
  • 28.Kandel H. Development of item banks to measure refractive error-specific quality-of-life parameters. 2018.
  • 29.Kim SY, Lee W-C, Kolen MJ. Simple-structure multidimensional item response theory equating for multidimensional tests. Educ Psychol Meas. 2020;80(1):91–125. doi: 10.1177/0013164419854208 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Rencz F, Mitev AZ, Szabó Á, Beretzky Z, Poór AK, Holló P, et al. A Rasch model analysis of two interpretations of “not relevant” responses on the Dermatology Life Quality Index (DLQI). Qual Life Res. 2021;30(8):2375–86. doi: 10.1007/s11136-021-02803-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Wright B. Separation, reliability and skewed distributions: statistically different levels of performance. Rasch Measurement Transactions. 2001;14(4):786. [Google Scholar]
  • 32.Linacre J. Winsteps® Rasch measurement computer program user’s guide. 2012.
  • 33.Khadka J, Huang J, Mollazadegan K, Gao R, Chen H, Zhang S, et al. Translation, cultural adaptation, and Rasch analysis of the visual function (VF-14) questionnaire. Invest Ophthalmol Vis Sci. 2014;55(7):4413–20. doi: 10.1167/iovs.14-14017 [DOI] [PubMed] [Google Scholar]
  • 34.Petersen MA, Groenvold M, Bjorner JB, Aaronson N, Conroy T, Cull A, et al. Use of differential item functioning analysis to assess the equivalence of translations of a questionnaire. Qual Life Res. 2003;12(4):373–85. doi: 10.1023/a:1023488915557 [DOI] [PubMed] [Google Scholar]
  • 35.Watt T, Barbesino G, Bjorner J, Bonnema S, Bukvic B, Drummond R. Cross-cultural validity of the thyroid-specific quality-of-life patient-reported outcome measure, ThyPRO. Quality of Life Research. 2015;24. [DOI] [PubMed] [Google Scholar]
  • 36.Linacre J. Data variance: Explained, modeled and empirical. Rasch Measurement Transactions. 2003;17(3):942–3. [Google Scholar]
  • 37.Woudstra-de Jong JE, Manning-Charalampidou SS, Vingerling H, Busschbach JJ, Pesudovs K. Patient-reported outcomes in patients with vitreous floaters: a systematic literature review. Survey of Ophthalmology. 2023;8(5):875. doi: 10.1016/j.survophthal.2023.06.003 [DOI] [PubMed] [Google Scholar]
  • 38.Braithwaite T, Calvert M, Gray A, Pesudovs K, Denniston AK. The use of patient-reported outcome research in modern ophthalmology: impact on clinical trials and routine clinical practice. Patient Relat Outcome Meas. 2019;10:9–24. doi: 10.2147/PROM.S162802 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Boone W, Staver J, Yale M. Person reliability, item reliability, and more. Rasch Analysis in the Human Sciences. 2014, 217–34. [Google Scholar]
  • 40.Wright B. Separation, reliability and skewed distributions statistically different sample-independent levels of performance. Rasch Measurement Transactions. 2014;14(4):786. [Google Scholar]
  • 41.Borsting E, Chase C, Tosha C, Ridder WH 3rd. Longitudinal study of visual discomfort symptoms in college students. Optom Vis Sci. 2008;85(10):992–8. doi: 10.1097/OPX.0b013e31818883cd [DOI] [PubMed] [Google Scholar]
  • 42.Johnson ME, Murphy PJ. Measurement of ocular surface irritation on a linear interval scale with the ocular comfort index. Invest Ophthalmol Vis Sci. 2007;48(10):4451–8. doi: 10.1167/iovs.06-1253 [DOI] [PubMed] [Google Scholar]
  • 43.González-Pérez M. Desarrollo y validación de una escala para medir la sintomatología visual asociada al uso de videoterminales en el trabajo. 2015.
  • 44.Bradley C, Massof R. Validating translations of rating scale questionnaires using Rasch analysis. Taylor & Francis. 2017, p. 1–2. [DOI] [PubMed] [Google Scholar]
  • 45.Linacre J. What do infit and outfit, mean-square and standardized mean. Rasch measurement transactions. 2002;16(2):878. [Google Scholar]
  • 46.Smith R. Polytomous mean-square fit statistics. Rasch Measurement Transactions. 1996;10:516–7. [Google Scholar]

Decision Letter 0

Amir Pakpour

30 Apr 2024

PONE-D-24-02434

English version of the Computer Vision Symptom Scale (CVSS17): translation and Rasch analysis-based cultural adaptation

PLOS ONE

Dear Dr. González-Pérez,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 14 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Amir H. Pakpour, Ph.D.

Academic Editor

PLOS ONE

Journal requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Thank you for stating the following in the Acknowledgments Section of your manuscript: 

[This study was supported by the Instituto de Salud Carlos III through Project “PI18/00374” (co-funded by European Regional Development Fund “A way to make Europe”).]

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. 

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: 

 [BA, AB, MGP and RS received a grant by the Instituto de Salud Carlos III (https://www.isciii.es/Paginas/Inicio.aspx)  through Project “PI18/00374” (co-funded by European Regional Development Fund “A way to make

Europe”). The funders didn't play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript]

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. Thank you for uploading your study's underlying data set. Unfortunately, the repository you have noted in your Data Availability statement does not qualify as an acceptable data repository according to PLOS's standards.

At this time, please upload the minimal data set necessary to replicate your study's findings to a stable, public repository (such as figshare or Dryad) and provide us with the relevant URLs, DOIs, or accession numbers that may be used to access these data. For a list of recommended repositories and additional information on PLOS standards for data deposition, please see https://journals.plos.org/plosone/s/recommended-repositories.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: comments to improve the paper.

provide more details on the translation and cross-cultural adaptation process. specifically, describe the steps taken to ensure cultural equivalence and appropriateness of the translated items.

clarify the inclusion and exclusion criteria for the participants in the two stages of the study.

provide more information on the recruitment process, especially for the online panel used in stage 2. Justify sample size for each.

statistical analyses need major rework for explanations. the rationale for choosing the partial credit model (pcm) over the rating scale model (rsm) for rasch analysis should be explained in more detail. I am confused if this is irt or mirt?

the method used to assess unidimensionality should be described more clearly. the authors mention using the disattenuated correlation coefficient, but it would be helpful to provide a brief explanation of this approach. Use updated refs.

the authors should consider providing more details on the assessment of differential item functioning (dif) and the interpretation of the dif contrast values. I am lost in present manuscript.

the interpretation of the person separation index (psi) and the levels of performance should be clarified for readers who may be less familiar with rasch analysis.

the authors should explain the criteria used to determine acceptable fit statistics (e.g., the recommended range for infit and outfit mean square values).

provide more information on the methods used to assess convergent validity, specifically the rationale for choosing the visual discomfort scale (vds) and the ocular comfort index (oci) as comparison measures.

the authors should consider presenting the correlation coefficients between cvss17eng and vds/oci with their corresponding confidence intervals.

results and discussion it would be helpful to include a table summarizing the demographic characteristics of the participants in both stages of the study.

the results for the convergent validity assessment with vds and oci could be presented more clearly, perhaps with additional figures or tables. better visually.

the authors should consider discussing the clinical significance or implications of the observed differences in dif between the english and spanish versions.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Apr 16;20(4):e0316936. doi: 10.1371/journal.pone.0316936.r003

Author response to Decision Letter 1


17 Jun 2024

Response to Journal requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

According to your advice, we revised file naming and addressed some minor formatting issues, such as changing parentheses to brackets in the references.

2. Thank you for stating the following in the Acknowledgments Section of your manuscript:

[This study was supported by the Instituto de Salud Carlos III through Project “PI18/00374” (co-funded by European

Regional Development Fund “A way to make Europe”).]

We note that you have provided funding information that is not currently declared in your Funding Statement.

However, funding information should not appear in the Acknowledgments section or other areas of your manuscript.

We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

[BA, AB, MGP and RS received a grant by the Instituto de Salud Carlos III (https://www.isciii.es/Paginas/Inicio.aspx) through Project “PI18/00374” (co-funded by European Regional Development Fund “A way to make

Europe”). The funders didn't play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript]

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

Thank you for your advice. We have omitted this information from the acknowledgements section and are providing a new Funding Statement along with the cover letter. The new Funding Statement reads as follows: “This study was supported by a grant by the Instituto de Salud Carlos III (https://www.isciii.es/Paginas/Inicio.aspx) through Project “PI18/00374” (co-funded by the European Regional Development Fund “A way to make Europe”). The funders didn't play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript”

3. Thank you for uploading your study's underlying data set. Unfortunately, the repository you have noted in your DataAvailability statement does not qualify as an acceptable data repository according to PLOS's standards.

At this time, please upload the minimal data set necessary to replicate your study's findings to a stable, public repository (such as figshare or Dryad) and provide us with the relevant URLs, DOIs, or accession numbers that may be used to access these data. For a list of recommended repositories and additional information on PLOS standards for data deposition, please see https://journals.plos.org/plosone/s/recommended-repositories.

Thank you. According to your advice, we’ve uploaded to FigShare the minimal dataset necessary to replicate our study findings. The new dataset’s DOI (10.6084/m9.figshare.25909171) is provided in the corresponding submission section.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-textcitations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Thank you. We missed including it in the first version, so in the new version of the manuscript, a list of the Supporting Information captions is provided at the end of the manuscript in a section titled 'Supporting Information'.

1. Response to review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Comments to improve the paper.

• Provide more details on the translation and cross-cultural adaptation process. Specifically, describe the steps taken to ensure cultural equivalence and appropriateness of the translated items.

Thank you for your suggestion. Based on it, we have provided this information in lines 121-127.

• Clarify the inclusion and exclusion criteria for the participants in the two stages of the study.

According to your recommendation, we’ve clarified the inclusion criteria for stage 1 (lines 149-160) and for stage 2 (lines 177-187).

• Provide more information on the recruitment process, especially for the online panel used in stage 2.

Thank you for your suggestion. Based on it, we have provided more information about the online panel used in stage 2 (lines 169-174).

• Justify sample size for each

Thank you for your feedback. Regarding sample size justification:

The justification for the sample size used in Stage 1 is described in lines 146-150.

The justification for the sample size used in the validation of the CVSS17ENG and in the DIF analysis is described in lines 189-191.

The justification for the sample size used in the convergent validity and test-retest reliability assessments is provided in lines 198-202.

Statistical analyses need major rework for explanations.

• The rationale for choosing the Partial Credit Model (PCM) over the Rating Scale Model (RSM) for Rasch analysis should be explained in more detail.

Thank you for your suggestion. Based on your feedback, we have provided a more detailed explanation (lines 230-241) of the rationale for choosing the Partial Credit Model (PCM) over the Rating Scale Model (RSM).

• I am confused: is this IRT or MIRT?

Thank you for your observation. In the revised manuscript (lines 225-228), we have clarified that we used a unidimensional Item Response Theory (IRT) model.

• The method used to assess unidimensionality should be described more clearly. The authors mention using the disattenuated correlation coefficient, but it would be helpful to provide a brief explanation of this approach. Use updated references.

Thank you. Following your recommendation, we have added a brief explanation (lines 258-262) about the use of the disattenuated correlation coefficient for assessing unidimensionality when PCA suggests multidimensionality.

• The authors should consider providing more details on the assessment of differential item functioning (DIF) and the interpretation of the DIF contrast values. I am lost in the present manuscript.

Thank you for your observation. Based on your feedback, we have included in the new manuscript more details about the DIF assessment and its interpretation (lines 273-283).

• The interpretation of the person separation index (PSI) and the levels of performance should be clarified for readers who may be less familiar with Rasch analysis.

According to your suggestion, we have included in the new manuscript a comment in line 264, and a brief section in the discussion (lines 437-455).

• The authors should explain the criteria used to determine acceptable fit statistics (e.g., the recommended range for infit and outfit mean square values).

Based on your feedback, the recommended range for infit and outfit mean square values is now described in the methods section of the manuscript (lines 248-250).

• Provide more information on the methods used to assess convergent validity, specifically the rationale for choosing the Visual Discomfort Scale (VDS) and the Ocular Comfort Index (OCI) as comparison measures.

According to your observation, we discuss this in the new manuscript (lines 482-489).

• The authors should consider presenting the correlation coefficients between the CVSS17ENG and the VDS/OCI with their corresponding confidence intervals.

Thank you for the suggestion. According to it, we have included a new table (Table 5) in the manuscript (lines 411-415) that summarizes the results of the convergent validity assessment, including the confidence intervals of the correlation coefficients.

Results and discussion

• It would be helpful to include a table summarizing the demographic characteristics of the participants in both stages of the study.

Based on your feedback, we have modified Table 3, and it now provides the demographic characteristics of all the samples used in the study.

• The results for the convergent validity assessment with the VDS and OCI could be presented more clearly, perhaps with additional figures or tables, to improve the visual presentation.

According to your suggestion, we have included a new table (Table 5) in the manuscript (lines 411-415) that summarizes the results of the convergent validity assessment and complements the information displayed in Fig 2 and Fig 4. Thank you.

• The authors should consider discussing the clinical significance or implications of the observed differences in DIF between the English and Spanish versions.

Thank you very much for your suggestions to improve the paper. According to your feedback, we’ve added a paragraph in the discussion (lines 498-507) about the implications of the DIF analysis results.

Attachment

Submitted filename: Response to reviewers.docx

pone.0316936.s005.docx (35.8KB, docx)

Decision Letter 1

Marianne Clemence

9 Jan 2025

English version of the Computer Vision Symptom Scale (CVSS17): translation and Rasch analysis-based cultural adaptation

PONE-D-24-02434R1

Dear Dr. González-Pérez,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Marianne Clemence

Staff Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for addressing all raised concerns during the peer review.

Reviewer #2: "The authors have demonstrated a thorough and thoughtful approach in addressing all the comments and concerns raised by the previous reviewer. They have carefully considered each point of feedback and have made significant revisions, clarifications, and improvements to the manuscript. These revisions include refining the structure and clarity of the writing, enhancing the methodological explanations, and providing more robust justifications for the choices made in their study.

The authors have also strengthened the theoretical framework by incorporating additional references and discussing their research in the context of the latest developments in the field. Furthermore, they have addressed concerns regarding the data analysis by providing clearer explanations, revisiting certain analytical steps, and including supplementary material to ensure transparency and rigor.

In response to concerns regarding the presentation of results, the authors have reorganized the relevant sections, added clearer visuals (e.g., figures, tables), and provided more detailed explanations to ensure that their findings are presented in a way that is both accessible and comprehensive for readers. These changes significantly improve the overall readability and scholarly quality of the paper.

Upon reviewing the updated manuscript, I am confident that the authors have adequately resolved all previous issues and that the paper now meets the necessary standards for publication. The improvements made reflect a high level of academic diligence, and the paper is now far stronger in terms of clarity, methodological rigor, and overall contribution to the field. Therefore, I am pleased to recommend that the paper be accepted for publication."

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Dr. Ragni Kumari

**********

Acceptance letter

Marianne Clemence

PONE-D-24-02434R1

PLOS ONE

Dear Dr. González-Pérez,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Amir H. Pakpour

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Expert committee report, describing the final review meeting for the cross-cultural adaptation.

    The name and role of each participant are provided, along with a summary of the discrepancies discussed in the session. Summary statistics for the Rasch analysis conducted on the pre-test responses are also displayed.

    (PDF)

    pone.0316936.s001.pdf (787.1KB, pdf)
    S2 File. Hard copy version of the English version of Computer Vision Symptom Scale (CVSS17ENG).

    PDF version of the CVSS17ENG, for those interested in distributing it as a hard copy.

    (PDF)

    pone.0316936.s002.pdf (93.6KB, pdf)
    S1 Table. Scoring chart for the CVSS17ENG.

    Each CVSS17ENG item is identified by a capital letter and a one- or two-digit number. Additionally, each response option for each item is identified by a number. This table is necessary to determine the score that each subject gets for each question when using the hard-copy version (respondents are automatically scored in the online version). For example, if a respondent chooses option 6 for the first question, they will receive three points; choosing option 3 for item A30 gets one point. The final score for each subject is obtained using the formula at the bottom of the table, both in CVSS17 points and in logits.

    (PDF)

    pone.0316936.s003.pdf (92.6KB, pdf)

    Data Availability Statement

    Data is available at https://doi.org/10.6084/m9.figshare.25909171.v1.


    Articles from PLOS One are provided here courtesy of PLOS
