PLOS ONE
. 2023 Sep 15;18(9):e0291635. doi: 10.1371/journal.pone.0291635

Functional social support: A systematic review and standardized comparison of different versions of the DUFSS questionnaire using the EMPRO tool

Cristina M Lozano-Hernández 1,2,3,4,*, Yolanda Pardo Cladellas 5,6,7, Mario Gil Conesa 8, Olatz Garin 5,9, Montserrat Ferrer Forés 5,6,9, Isabel del Cura-González 2,3,10
Editor: Victor Manuel Mendoza-Nuñez
PMCID: PMC10503721  PMID: 37713437

Abstract

Background

Functional social support is one of the most established predictors of health, and the Duke-UNC Functional Social Support Questionnaire (DUFSS) is one of the most commonly used instruments to measure this parameter. The objective of this study is to systematically review the available evidence on the psychometric and administration characteristics of the different versions of the DUFSS and perform a standardized assessment through a specifically designed tool.

Methods

A systematic review was performed in the PubMed/MEDLINE, SCOPUS, WOS and SCIELO databases. All articles that contained information on the development process of the instrument, its psychometric properties and aspects related to its administration were included, without restrictions based on publication date, language, or the version of the questionnaire studied. The selection and extraction procedures were carried out by two researchers. The included articles were then peer-reviewed through a standardized assessment using the Evaluating the Measurement of Patient-Reported Outcomes (EMPRO) tool. PROSPERO registration number: CRD42022342977.

Results

A total of 54 articles were identified. After eliminating duplicates and screening articles based on the selection criteria, 15 studies that examined the DUFSS questionnaire remained, yielding 4 different versions: 3 articles obtained the 8-item version; 11 obtained the 11-item version; and a single article obtained two versions, the 14-item and the 5-item versions. At least 60% of the studies were conducted in a young adult population, predominantly female, with a medium-low socioeconomic level or with characteristics of social vulnerability. The EMPRO evaluation showed that the 11-item version (54.01 total score) was the only one that had been studied on all recommended attributes, and it had higher total scores than the other versions: 8 items (36.31 total score), 14 items (27.48 total score) and 5 items (23.81 total score). This difference appears in all attributes studied, with the highest scores in "reliability (internal consistency)" and "validity".

Conclusions

Of the 4 versions of the DUFSS questionnaire identified, the 11-item version was found to be optimal based on the EMPRO standardized tool. Although, a priori, its use could be prioritised in epidemiological studies over the other versions, this version should still be used with caution because some attributes have not been studied.

Background

Research on the relationship between social support and the state of health peaked in the 1970s [1]. Since then, several authors have shown the positive effects of social support on health outcomes [2–4]. Cobb and Cassell [5, 6] argue that the main protective role of social support lies in its moderating effect on life stress. Cohen and Gallant [2, 7] point out that social support impacts behaviour and the way people manage their health problems and self-care, determining their lifestyles.

There are several definitions of social support across different disciplines [8, 9]. Sociologist P. A. Thoits (1982) defines it as the degree to which a person’s basic social needs are satisfied through interaction with others, where basic needs are understood as affiliation, affection, belonging, identity, security and approval [10].

Therefore, there is no consensus regarding the definition of social support [11–13]. According to the conceptual model of Barrón A. [14], social support can be understood from three perspectives: structural, contextual and functional. Structural social support studies the so-called support network, which includes all the contacts of the individual; the size, frequency and density of networks are measured, but the availability of this resource is not, since the support network is not always a source of available social support [15]. Contextual social support addresses the environment and the circumstances that favour or hinder social support [14]. The functional perspective focuses on the subjective assessment that the person makes of their own social support based on its availability and accessibility [16]. The meta-analyses performed by Uchino et al. and DiMatteo et al. show how the study of social support in relation to health is carried out from one or several of these perspectives, depending on the authors [17, 18].

Functional social support has been described as a stronger predictor of health than the other perspectives of social support [19]. Functional social support has three aspects: 1) emotional support, focused on the closest and most intimate social relationships, sources of care and empathy, and composed of two dimensions (the confidential dimension and the affective dimension); 2) instrumental support, also called tangible or material support, because it refers to practical help in tasks, travel or financial aid; and 3) informational support, referring to support in decision-making or useful advice.

In 1983, Broadhead et al. [20] defined the characteristics of the association between functional social support and health. Based on this work and on the strategic recommendations issued by House and Kahn in 1985 regarding the study of social support [11], Broadhead developed the Duke-UNC Functional Social Support (DUFSS) questionnaire in 1988 [16]. This questionnaire was validated in the USA in the context of primary care, on a general population of mostly young adult women with a medium-high socioeconomic status.

The original version [16] comprised 14 items across four dimensions: “amount of support”, “confidential support”, “affective support” and “instrumental support”. After examining the test-retest reliability, an 11-item version was obtained. This version included the confidential dimension (CF) and the affective dimension (AF) and measured emotional support but omitted measures of the amount of support and instrumental support. The subsequent factor analysis revealed that 3 of the 11 items did not correspond to the resulting dimensions, indicating that further research was necessary and suggesting that the 8-item version be applied. The final 8-item version included a CF composed of items 3, 4, 5, 6 and 7, and an AF composed of items 1, 2 and 8.

Later, in 1991, De la Revilla et al. validated the 11-item version of Broadhead’s questionnaire in Spain, in the context of primary care and on a general population of mostly young adult women with a low socioeconomic status. As a result, they obtained a version with the same number of items (11) but a different distribution across its dimensions: a CF composed of items 7, 8, 6, 4, 1 and 10, and an AF composed of items 11, 9, 2, 3 and 5. This questionnaire has been widely used to study functional social support in national health surveys, as is the case in Spain [21], and in European surveys, such as the European Health Interview Survey [22]. Therefore, different versions of the DUFSS questionnaire exist, all of which have been validated in very different populations, making it difficult to choose the most appropriate version.

The concept of social support stands out for its subjective character and its nature as a patient-reported outcome (PRO). According to established recommendations (FDA, Valderas and Argimon), a proper validation of a PRO should address certain attributes: conceptual and measurement model, reliability, validity, responsiveness and interpretability. In the case of the DUFSS questionnaire, the quality of measurement of social support in each of the different validations is unclear, as no work to date has provided information on this.

The objective of this study was to systematically review the available evidence on the psychometric and administration characteristics of the different versions of the DUFSS questionnaire and perform a standardized assessment through a specifically designed tool.

Methods

Protocol and registration

A systematic review of the literature was carried out, and the results were reported in accordance with the guidelines of the Preferred Reporting Items of Systematic Reviews and Meta-Analyses Protocol (PRISMA). The protocol was registered in PROSPERO under registration number CRD42022342977.

Eligibility criteria

All articles that contained information on the development of the instrument, its psychometric properties or aspects related to the administration of the Duke-UNC Functional Social Support (DUFSS) questionnaire were included. To make the search as sensitive as possible, there were no restrictions based on publication date, format (paper or digital), language of the article, or the version or language of the questionnaire used. Regarding the study population, only studies conducted in populations under 18 years of age were excluded; there were no restrictions based on other population characteristics or settings.

Information sources

The search was performed on 21/02/2023 in the PubMed/MEDLINE, SCIELO, SCOPUS and WOS databases with the aim of searching a broad swath of databases.

Search strategy

To develop the search strategy, the different names used for this questionnaire were taken into account. The strategy was adapted to each of the databases and included the terms shown in Table 1. The reference lists of the included articles were manually searched, and authors were contacted to obtain additional data when necessary.

Table 1. Search strategy.

“Social support”
    AND
“Duke Unc” OR “DUFSS” OR “FSSQ”.
    AND
Questionnaire* OR instrument* OR scale* OR index* OR survey* OR batter* OR inventor* OR measur* OR rating*.
    AND
Valid* OR Chronbach* OR "psychometric properties" OR psychometr* OR Factor Analysis, Statistical[MeSH] OR develop* OR valid* OR translat*.

Selection and data collection process

Initially, two reviewers screened the titles according to the inclusion criteria. Then, the same two reviewers did the same for the abstracts. Once the duplicates had been removed and based on the selection criteria, the eligibility of the full articles was assessed. Discrepancies that arose at each of the selection stages were resolved by discussion and consensus between the two researchers, and a third reviewer was consulted when consensus could not be reached between the two previous reviewers.

Data items

The selected studies were grouped by version type according to the number of items in the version resulting from each study. The unit of analysis in EMPRO was each version type of the DUFSS. The following data were extracted: author and year; the version of the questionnaire (number of items and language); the characteristics of the population and the country; and the results of the factor analysis performed, whether exploratory and/or confirmatory, including the dimensionality of the questionnaire (unifactorial or bifactorial, with the items that make up each dimension). The evaluation and synthesis strategy consisted of stratifying the selected articles by version of the DUFSS questionnaire.

Synthesis methods

EMPRO tool

The EMPRO tool [23] was designed to measure the quality of PRO instruments. This tool demonstrated excellent reliability in terms of internal consistency (Cronbach’s alpha = 0.95) and inter-rater concordance (intraclass correlation coefficient: 0.87–0.94). It evaluates quality as a global concept with 39 items across eight attributes (Table 2): “conceptual and measurement model” (concepts and population to be evaluated); “reliability” (to what extent an instrument is free of random errors); “validity” (to what extent an instrument measures what it intends to); “responsiveness” (ability to detect changes over time); “interpretability” (assignment of meaning to instrument scores); “burden” (time, effort and other administration and response requirements); “alternative modes of administration” (self-administered or interviewer-administered, and route of administration); and “cultural and linguistic adaptations”. Responses to each item are given on a 4-point Likert scale, where 4 is “totally agree” and 1 is “totally disagree”. Other response options include “no information” and “not applicable”. Items answered as “no information” were assigned a score of 1 (the lowest possible score), provided that at least 50% of all items for the attribute were rated; items rated as “not applicable” (an option available for only 5 items) were not considered part of the attribute score.

Table 2. Attributes assessed using the evaluating the measurement of patient-reported outcomes (EMPRO) tool.
Attribute Definition Items included
Conceptual and measurement model The rationale for and description of the concept and the populations that a measure is intended to assess and the relationship between these concepts 1. Concept of measurement stated
2. Obtaining and combining items described
3. Rationality for dimensionality and scales
4. Involvement of target population
5. Scale variability described and adequate
6. Level of measurement described
7. Procedures for deriving scores
Reliability The degree to which an instrument is free from random error Internal consistency:
11. Data collection methods described
12. Cronbach’s alpha adequate (QA)
13. IRT estimates provided
14. Testing in different populations
Reproducibility:
15. Data collection methods described
16. Test–retest and time interval adequate
17. Reproducibility coefficients adequate (QA)
18. IRT estimates provided
Validity The degree to which the instrument measures what it purports to measure. 19. Content validity adequate
20. Construct/criterion validity adequate
21. Sample composition described
22. Prior hypothesis stated (QA)
23. Rational for criterion validity
24. Tested in different populations
Responsiveness An instrument’s ability to detect change over time 25. Adequacy of methods (QA)
26. Description of estimated magnitude of change
27. Comparison of stable and unstable groups
Interpretability The degree to which one can assign easily understood meaning to an instrument’s quantitative scores. 28. Rational of external criteria
29. Description of interpretation strategies
30. How data should be reported stated
Burden The time, effort, and other demands placed on those to whom the instrument is administered (respondent burden) or on those who administer the instrument (administrative burden) Respondent:
31. Skills and time needed
32. Impact on respondents
33. Not suitable circumstances
Administrative:
34. Resources required
35. Time required
36. Training and expertise needed
37. Burden of score calculation
Alternative modes of administration Alternative modes of administration used for the administration of the instrument 38. The metric characteristics and use of each alternative mode of administration
39. Comparability of alternative modes of administration
Cultural adaptation Cultural and linguistic adaptation of the instrument. 8. Linguistic equivalence (QA)
9. Conceptual equivalence
10. Differences between the original and the adapted versions

QA: quality assessed

Each attribute score ranges from 0 (the worst possible score) to 100 (the best possible score), and an overall score is obtained by averaging across attributes. The result is considered adequate if it reaches at least 50 points.

Standardized assessment

Each instrument was evaluated by two different experts using the EMPRO tool. Three experts in measuring patient reported outcomes (PROs) composed the review group: two were senior researchers who belonged to the EMPRO tool development working group, and the third was a junior researcher who had been previously trained as an EMPRO evaluator. The pairs of reviewers were composed of a senior and a junior researcher. To minimize the likelihood of bias, the experts were not authors, nor had they participated in the process of development or adaptation of the assigned instrument.

The EMPRO evaluation process consisted of two consecutive rounds. In the first round, each expert independently evaluated the instrument that had been assigned to them from the full-text articles identified. In the second round, each expert received the results of the rating assigned by their review partner. Discrepancies were resolved by discussion or by consulting a third reviewer.

Analytic strategy

To analyse the information, first, the identified studies were stratified according to the resulting item version; second, the published recommendations of the EMPRO tool were followed to calculate the scores [23, 24]. For an attribute to be scored, at least half of its items had to be rated from 1 to 4 (responses of “no information” were assigned a score of 1). The mean score was linearly transformed onto a scale ranging from 0 (the worst possible score) to 100 (the best possible score). The attributes “reliability” and “burden” each comprise two subattributes: “internal consistency” and “reproducibility”, and “respondent burden” and “administrative burden”, respectively. For reliability, the higher subscore of its two components was chosen to represent the attribute. The global score was calculated as the average of the first five attributes, since burden and alternative modes of administration are not metric characteristics but administration characteristics. Scores for the attribute of alternative modes of administration were not calculated because there were no validations of different forms of administration (the questionnaire was self-reported, in some cases via interview), and scores for the attribute of cultural and linguistic adaptations were not calculated because this was beyond the scope of this study.
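The attribute-level scoring rules described above can be summarised in a few lines of code. The sketch below is an illustrative reimplementation based on the published rules (50% rating threshold, "no information" scored as 1, linear 1–4 to 0–100 transformation); it is not the official EMPRO software, and the function name is our own.

```python
def empro_attribute_score(ratings):
    """Score one EMPRO attribute on a 0-100 scale.

    `ratings` holds one entry per applicable item: an int from 1 to 4,
    or None for "no information". Items rated "not applicable" are
    assumed to be excluded before calling.
    """
    rated = [r for r in ratings if r is not None]
    # At least 50% of the attribute's items must be rated, otherwise
    # the attribute cannot be scored.
    if len(rated) < len(ratings) / 2:
        return None
    # "No information" items receive the lowest possible score (1).
    values = [r if r is not None else 1 for r in ratings]
    mean = sum(values) / len(values)
    # Linear transformation from the 1-4 range to 0-100.
    return (mean - 1) / 3 * 100
```

Under these rules, an attribute rated entirely "totally agree" scores 100, one rated entirely "totally disagree" scores 0, and the global score would then be the average of the first five attribute scores, taking the higher subscore for reliability.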

Results

The search strategy (Table 1) yielded 54 studies: 52 were obtained from the databases, and the remaining 2 articles were obtained via manual search. After eliminating duplicates, 30 potentially eligible studies remained. After screening the titles and abstracts, 23 studies remained for full-text review. Eight studies were then excluded: 2 meeting abstracts and 6 studies that examined a different tool that also used the abbreviation DUFSS. Finally, a total of 15 articles were included in this review. The flow diagram is detailed in Fig 1.

Fig 1. PRISMA flow chart—systematic literature search.


Table 3 shows the main characteristics of each of the published validations of the DUFSS questionnaire. Of the 15 studies included, 73.3% examined the 11-item version, while the rest started from the 8-item and 14-item versions in equal parts (13.3% each). The studies that used the 11-item version as a reference were carried out in Spanish-speaking countries [15, 25–32] and in the European countries of Italy [33] and Portugal [34], while those that used the 8-item and 14-item versions were carried out in English-speaking countries [16, 35–37], mostly in the USA.

Table 3. Main characteristics of the DUFSS validations.

Resulting version Author/year Language version Version used Setting & Country Sample Results
8-item version Broadhead et al. [1988] [16] English 14-item Primary Care General Population
USA
n = 401
Women: 78%
Age: 35.7 (± *)
Medium-high socioeconomic status

AFE
CF: 3, 4, 5, 6 and 7
AF: 1, 2 and 8
Kathy B Isaacs et al. [2011] [35] English 8-items Specialized centre for pregnant women
USA
n = 186
Women: 100%
-
Low socioeconomic status
AFE
Unifactorial
H. M. Epino et al. [2012] [36] English 8-item Rural Primary Care
HIV-positive Rwanda
n = 603
Women: 62%
Age: 38 ± 10
Low socioeconomic status
AFE
Unifactorial
11-item version De la Revilla et al. [1991] [25] Spanish 11-item Primary Care
General Population Spain
n = 139
Women: 82%
Age: 46 ± 17.6
Low socioeconomic status
AFE
CF: 7,8,6,4,1 and 10
AF: 11,9,2,3 and 5
Bellón S. JA. et al. [1996] [15] Spanish 11-item Primary Care
General Population Spain
n = 656
Women: 72%
Age: 50.6 ± 18.9
Low socioeconomic status
AFE
CF: 1, 2, 6, 7,8,9 and 10
AF: 3, 4, 5 and 11
Alvarado BE. et al. [2005] [26] Spanish 11-item Municipal Population Register
Mothers of children between
6–18 m Colombia
n = 193
Women: 100%
-
AFE
CF: 4,5,10 and 11
AF: 6,7,8
Piña L. A. et al. [2007] [27] Spanish 11-item Rural specialized centre for HIV-positive individuals Mexico n = 67
Women: 32.40%
Age: 36.4 ± 10.6
Low socioeconomic status
AFE
Unifactorial
Ayala A. et al. [2012] [28] Spanish 11-item Municipal Population Register Noninstitutionalized seniors Spain n = 1012
Women: 56.30%
Age: 72.1 ± 7.8
-
AFE
CF: 7, 8, 6, 5, 4, 11 and 10; AF: 2, 1, 9 and 3
Cuellar-Flores L. et al. [2012] [29] Spanish 11-item Primary Care Caregivers Spain n = 128
Women: 85.90%
Age: 54.9 ± 15.1
-
AFE
CF: 2, 6, 7, 8, 9, 10 and 11; AF: 1, 3, 4 and 5
Mas-Exposito L. et al. [2013] [30] Spanish 11-item Specialized centre for people with mental illness Spain n = 241
Women: 32.40%
Age: 41.7 ± 11.6
Low socioeconomic status
AFE
CF: 4, 6,7,8, 10 and 11;
AF: 1, 2, 9, 3 and 5
Rivas Diez [2013] [31] Spanish 11-item Educational centres Chile n = 371
Women: 100%
Age: 37.6 ± 13.1
Medium-high socioeconomic status
AFE/AFC
CF: 4, 5, 6, 7, 8, 10 and 11
AF: 1, 2, 3 and 9
Specialized centre for victims of abuse Chile n = 97
Women: 100%
Age: 41.9 ± 10
Medium-low socioeconomic status
AFE/AFC
CF: 3, 5, 6, 7, 8, 9 and 10
AF: 1, 2, 4 and 11
Caycho R. T. et al. [2014] [33] Italian 11-item Specialized Centre for Peruvian Migrants Italy n = 150
Women: 58%
Age: 34.6 ± 10.3
Medium-low socioeconomic status
AFC
CF: 1, 4, 6, 7, 8 and 10;
AF: 2, 3, 5, 9 and 11
Mónica Aguilar-Sizer et al. [2021] [32] Spanish 11-item Educational centre for the general population Ecuador n = 535
Women: 75.5%
Age: 22 ± 5
AFE
CF: 6, 7 and 8
AF: 1, 2, 3, 4, 5, 9, 10 and 11
Martins, S. et al. [2022] [34] Portuguese 11-item Mothers and fathers of young children, educational centres Portugal n = 1,058
Women: 90.5%
Age: 35.7 ± 5.2
Medium-high socioeconomic status
AFC
CF: 1, 6, 7, 8, 9 and 10
AF: 3, 4 and 5
IF: 2, 11, 12 and 13
14-item version Rebecca Saracino et al. [2014] [37] English 14-item Specialized centres for patients with incurable and advanced diseases (AIDS and cancer) USA n = 253
Women: 69.6%
Age: 58.2 ± 11
-
AFE
Unifactorial
5-item/3-response version AFE Unifactorial

* Data not available. Factor analysis performed: AFE = Exploratory Factor Analysis (EFA); AFC = Confirmatory Factor Analysis (CFA)

CF (Confidential Factor), AF (Affective Factor) and IF (Instrumental Factor)

The original version of the questionnaire was designed as a tool to be used in the primary care setting. Thirty-three percent of the identified validations were performed in the field of primary care, and the rest were performed in educational or specific care settings, such as mental health care centres or among pregnant women. Regarding the sociodemographic characteristics of the populations in which the questionnaire was validated, 66.7% of validations were performed in a predominantly female population; 60% in young adults, whose average age did not exceed 40 years; and 60% in populations with a medium-low socioeconomic status or with characteristics of social vulnerability.

From a methodological point of view, only 20% of the included studies used confirmatory factor analysis to examine the factorial structure of the questionnaire and the dimensions into which its items are grouped. In the exploratory factor analyses, 73.3% of studies reported a two-dimensional structure consisting of the dimensions obtained by Broadhead (confidential and affective). In contrast, 20% of the studies reported a one-dimensional structure, and one study (6.7%), which examined the modified 11-item version, reported a three-dimensional structure (confidential, affective and instrumental).

Across the 15 studies, the characteristics of 4 different versions of the DUFSS questionnaire were evaluated: the 14-item, 11-item, 8-item and 5-item version. All versions use a Likert scale with 5 response options, except for the 5-item scale, which used both a 5-point Likert scale and a 3-point Likert scale.

Table 4 shows the items that make up each of the versions. The elimination of items in successive versions resulted in a renumbering of the remaining items; therefore, the 8-item and 5-item versions do not share any numbering with the original version. A study performed on the 11-item version included 2 new items that were not part of the original 14-item version.

Table 4. Different versions of the DUFSS questionnaire by number of items.

14-item version* 11-item version* 11-item version modified* 8-item version* 5-item version**
Item 1: visits with friends and relatives. Item 1 Item 1 Item 5 Item 5
Item 2: help around the house. Item 2 Item 2 Item 6 Item 6
Item 3: help with money in an emergency. Item 4 Item 4. Item 8 Item 9
Item 4: praise for a good job. Item 5 Item 5 Item 9 Item 12
Item 5: people who care what happens to me. Item 6 Item 6 Item 10 help when I need transportation.
Item 6: love and affection. Item 8 Item 8 Item 11
Item 7: telephone calls from people I know. Item 9 Item 11 Item 12
Item 8: chances to talk to someone about problems at work or with my housework. Item 10 Item 9 Item 14
Item 9: chances to talk to someone I trust about my personal and family problems. Item 11 Item 12
Item 10: chances to talk about money matters. Item 12 Item 10
Item 11: invitations to go out and do things with other people. Item 14 Item 14
Item 12: I get useful advice about important things in life. help with transportation and move
Item 13: help when I need transportation. help with my children’s care
Item 14: help when I’m sick in bed.

*5-response Likert-type

**3-response Likert-type

The results of the EMPRO evaluation of the psychometric values of each version are shown in Table 5. The highest score was obtained by the 11-item version (54.01 points), followed by the 8-item version (36.31 points), the 14-item version (27.48 points) and the 5-item version (23.81 points).

Table 5. EMPRO results: psychometric values of each version of the DUFSS questionnaire.

Attributes 14-Items 11-Items 8-Items 5-Items
Conceptual and measurement model 35.71 54.76 35.71 35.71
1. Concept of measurement stated +++I +++I +++I +++I
2. Obtaining and combining items described ++ +++ ++ ++
3. Rationality for dimensionality and scales +++ +++ ++I +++
4. Involvement of target population + ++ + +
5. Scale variability described and adequate + +++ +I +
6. Level of measurement described ++ ++ ++ ++
7. Procedures for deriving scores ++ ++ ++ ++
Reliability 41.66 70.83 44.44 25
Internal consistency 41.66 70.83 44.44 25
8. Data collection methods described +++ +++I ++I ++
9. Cronbach’s alpha adequate ++++ +++ +++I +++
10. IRT estimates provided NI +++ NI NI
11. Testing in different populations NI +++ NI NI
Reproducibility - 45.83 - -
12. Data collection methods described NI +++I NI NI
13. Test–retest and time interval adequate NI ++ NI NI
14. Reproducibility coefficients adequate + +++ NI NI
15. IRT* estimates provided NI NI NI NI
Validity 26.67 66.67 62.50 25
16. Content validity adequate + ++ +I +
17. Construct/criterion validity adequate ++ +++I ++I ++
18. Sample composition described +++ +++I ++++ +++I
19. Prior hypothesis stated ++ +++ +++I NI
20. Rational for criterion validity NA NA NA ++
21. Tested in different populations + +++ NA +
Responsiveness - 33.33 - -
22. Adequacy of methods NI +++ NI NI
23. Description of estimated magnitude of change NI ++ NI NI
24. Comparison of stable and unstable groups NI NI NI NI
Interpretability 33.33 44.44 38.89 33.33
25. Rational of external criteria ++ +++ ++I ++
26. Description of interpretation strategies NI ++ NI NI
27. How data should be reported stated +++ ++ +++ +++
Overall score 27.48 54.01 36.31 23.81
Burden
Burden: respondent 33.33 66.67 38.89 33.33
28. Skills and time needed +I +++I ++ +I
29. Impact on respondents +I ++I ++ +I
30. Not suitable circumstances +++ +++ ++I +++
Burden: administrative 33.33 50.00 66.67 66.67
31. Resources required NI +++ ++I +++
32. Time required NA NA NA NA
33. Training and expertise needed NA NA NA NA
34. Burden of score calculation +++ ++ +++I +++

Item scores: + = 1 point, I = ½ point, NI = no information, NA = not applicable.

*IRT (Item Response Theory)

Conceptual and measurement model

The 11-item version obtained the highest score for this attribute (54.76 points), while the rest of the versions obtained a score of 35.71 points. The aspects that were least addressed by the included studies were the description of the measurement scale (including its scores) and the participation of the sample in a previous pilot.

Reliability

Only studies of the 11-item version evaluated both reliability subattributes (internal consistency and reproducibility); the other studies merely described the data collection method and calculated Cronbach’s alpha coefficient, without further assessing internal consistency or reproducibility. The highest internal consistency score was obtained by the 11-item version (70.83 points), because Cronbach’s alpha was higher than 0.9 in all studies. In addition, most of the internal consistency quality criteria, with the exception of the IRT criterion, also had high scores (three or four crosses). Although most studies reported an adequate Cronbach’s alpha coefficient (≥0.70), they did not comprehensively measure the reliability of the instrument.
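For reference, the Cronbach's alpha criterion discussed above can be computed directly from item-level responses with the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the total score. The sketch below is generic and uses made-up illustrative data, not data from any of the included studies.

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of per-respondent item score rows.

    `responses` is a list of rows, one per respondent, each holding
    that respondent's score on every item of the scale.
    """
    n_items = len(responses[0])
    # Sample variance of each item across respondents.
    item_vars = [variance(row[i] for row in responses) for i in range(n_items)]
    # Sample variance of the respondents' total scores.
    total_var = variance(sum(row) for row in responses)
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)
```

With perfectly consistent items (every respondent giving identical answers across items) alpha equals 1; values of 0.70 or above are conventionally read as adequate, which is the threshold the EMPRO criterion applies.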

Validity

Only one study, of the 11-item version, used both EFA and CFA to examine content validity. The versions with the highest scores for this attribute were the 11-item version (66.67 points) and the 8-item version (62.50 points). In the included studies, differences were observed between the dimensions of the DUFSS, both in the distribution of its items and in the number of dimensions included in the tool. The relationship with previous hypotheses and related variables (convergent validity) was analysed in the 14-, 11- and 8-item versions, which improved their scores on the validity attribute.

Responsiveness

Regarding sensitivity to change, only the 11-item version was assessed for responsiveness, through comparison with other validated scales (33.33 points); however, this version does not offer comparative results between groups (longitudinal validity), because most of the included studies used a cross-sectional design. For the rest of the versions, not enough information was found to calculate scores.

Interpretability

The interpretability of the questionnaire was similar across all versions (with a range of 33.33 to 44.44 points); the 11-item version had the highest score, followed by the 8-item version. None of the 4 versions studied herein offer sufficient information on the measurement and interpretation strategies of the DUFSS questionnaire. Only Bellón et al. (11-item version) provided information by stating that the 15th percentile (score ≤32) of their sample was the cut-off point for differentiating "good" from "low" social support.
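The percentile-based cut-off strategy reported by Bellón et al. can be illustrated with standard tools. The sketch below computes a 15th-percentile cut-off on simulated total scores; the data are purely illustrative and do not reproduce their sample or their ≤32 threshold.

```python
import random
from statistics import quantiles

random.seed(0)
# Simulated total DUFSS scores: 11 items, each rated 1-5 on a Likert
# scale, so totals range from 11 to 55. Illustrative data only, not
# the Bellón et al. sample.
scores = [sum(random.randint(1, 5) for _ in range(11)) for _ in range(500)]

# 15th percentile as the cut-off separating "low" from "good" support.
cutoff = quantiles(scores, n=100)[14]  # index 14 = 15th of 99 cut points
low_support = [s for s in scores if s <= cutoff]
proportion_low = len(low_support) / len(scores)
```

As the Discussion notes, such a cut-off is only meaningful when the scores come from a representative sample of the general population; otherwise the percentile reflects the composition of the study sample rather than a population norm.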

Burden

Regarding response load and administration load, both attributes had scores ranging from 33.33 to 66.67. The 11-item version obtained a higher score for response load, whereas the 8- and 5-item versions obtained the highest scores for administration load.

Discussion

Main results

This systematic review identified 15 studies on the validation of the DUFSS that obtained different versions of the questionnaire according to the number of resulting items: 14, 11, 8 and 5 items. Validations carried out in Spanish-speaking countries predominate, in which, as in Italy and Portugal, the 11-item version validated by De la Revilla et al. was used, while in English-speaking countries, mostly in the USA, the 8-item and 14-item versions validated by Broadhead et al. were used (citation). Despite the fact that the questionnaire originated in the USA and that more than half of the studies were carried out in countries with different languages (Spanish, Portuguese, Italian and English) and in very specific populations (young adults, women, and populations with a medium-low socioeconomic level or with characteristics of social vulnerability), none of the studies analysed describe the process used to translate and culturally adapt the instrument to their study populations. This could call into question the content validity of the different versions of the questionnaire in relation to the original. Comparing this situation with that of similar studies, we found that the translations of the 12-item Multidimensional Scale of Perceived Social Support (MSPSS) [28] were assessed for their psychometric properties using the COSMIN tool [29], and that review concluded that the translated versions provided little evidence of content validity. This absence may explain the differences in the distribution of items resulting from the EFA and CFA (Table 4).

The quality assessment of the DUFSS questionnaire with the EMPRO tool showed that the 11-item version is the only one that has been evaluated on all the recommended attributes and the one that obtained the highest overall score. This version scored higher than the rest on all the attributes studied, with the best scores for "reliability (internal consistency)" and "validity"; its lowest score was for "responsiveness", an attribute for which the other versions had no information (N/A). This result probably reflects the fact that the 11-item version was examined by 73.3% of the studies analysed, while the other versions have fewer validations. The 8-item version obtained its best score in "validity" but lacked information on attributes such as "responsiveness" and "reliability (reproducibility)"; the 14-item version obtained its highest score for "reliability (internal consistency)", despite the lack of information on "reliability (reproducibility)"; the 5-item version presented no information for "responsiveness" or "reliability (reproducibility)", hence its low score. The complex nature of PRO instruments often raises important questions about how to interpret and communicate results in a way that is not misleading, so it is essential that validation studies make clear how results are to be interpreted. None of the 4 versions studied herein offers sufficient information on the measurement and interpretation strategies of the DUFSS questionnaire. One of the most frequent strategies for interpreting PROs is to calculate percentiles from population values [38]. This method is only valid if the data come from a representative sample of the general population, which was not the case in the included studies. Bellón et al. found that the 15th percentile was the cut-off point for differentiating "good" from "low" social support.
This percentile corresponded to a score ≤32, a cut-off that Fernández et al. [39] and Ruiz et al. [40] later applied in their studies. In contrast, authors such as Slade et al. [41] and Harley et al. [42], as well as the Spanish National Health Survey [43], expressed social support as a quantitative variable. This option seems more suitable, since it does not require extracting percentiles from the general population; however, the most appropriate approach would be to use population values or norms, reference samples, or standardized mean responses.
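As an illustration of the percentile-based interpretation strategy discussed above, the following sketch computes a 15th-percentile cut-off on simulated scores. The data and the assumed score range (11 to 55 for an 11-item, five-point Likert instrument) are hypothetical, not taken from the reviewed studies:

```python
import random

# Hypothetical DUFSS-11 total scores (assumed range 11-55 when the
# 11 items are scored on a 1-5 Likert scale); simulated, not study data.
random.seed(7)
scores = [random.randint(11, 55) for _ in range(200)]

# Percentile-based interpretation as described by Bellón et al.:
# the 15th percentile of the sample serves as the cut-off at or below
# which social support is classified as "low".
ranked = sorted(scores)
cutoff = ranked[int(0.15 * len(ranked))]  # simple 15th-percentile estimate
low_support = [s for s in scores if s <= cutoff]

print(f"15th-percentile cut-off: {cutoff}")
print(f"Classified as low support: {len(low_support)} of {len(scores)}")
```

As the text notes, such a cut-off is only meaningful when the sample is representative of the general population; with a convenience sample the same code would yield a threshold that does not generalise.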

Limitations and strengths

This work constitutes the first review of the social support construct using a standardized methodology, the EMPRO tool, for the evaluation of one of the most well-known instruments.

A limitation of this work is that systematic reviews depend on the information retrieved through the search strategy, so it is possible that we have not identified all the published articles on the questionnaire under study. However, given that social support is a construct of interest in multiple disciplines and that some validations might be indexed in non-health science databases, in addition to PubMed we also searched multidisciplinary databases such as SCOPUS and WOS. This, together with the carefully designed search strategy, the additional manual search of references, and the double independent review process, may have minimised this problem [16].

The most important limitation to note is that the results shown here do not suggest a single, unequivocal ranking or recommendation, as the EMPRO assessment includes several attributes whose relevance differs according to the purpose for which the questionnaire is applied. For example, if the purpose is monitoring patients, responsiveness is the key attribute. Although the EMPRO tool has shown excellent reliability in terms of internal consistency and inter-rater concordance, its scores depend on the quantity and quality of the information available for the assessment. Thus, psychometric properties for which no information is available are not taken into account and therefore penalise the assessment. In addition, EMPRO allows evaluators to answer "no information", and if half of the items lack information, the attribute score is not calculated. In this regard, it is important to highlight that the work carried out by Ayala et al. [28] on the 11-item version addresses most of the attributes proposed by EMPRO, which increases the score of this version.
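The "no information" rule described above can be sketched in code. This is an illustrative simplification, not the official EMPRO scoring algorithm: the 1-4 item rating scale and the linear 0-100 rescaling are assumptions based on the published EMPRO description, and only the rule stated in the text (no attribute score when half the items lack information) is taken from this review.

```python
from statistics import mean
from typing import Optional

def empro_attribute_score(ratings: list[Optional[int]]) -> Optional[float]:
    """Illustrative EMPRO-style attribute score.

    Each item is rated 1-4, or None for "no information". Following
    the rule described in the text, if half or more of the items lack
    information the attribute score is not calculated. Otherwise the
    mean rating is linearly rescaled to 0-100. (Simplified sketch;
    not the official EMPRO scoring algorithm.)
    """
    answered = [r for r in ratings if r is not None]
    if not answered or len(answered) <= len(ratings) / 2:
        return None  # too little information to score the attribute
    return (mean(answered) - 1) / 3 * 100

print(empro_attribute_score([4, 3, 3, None]))        # scored: enough items answered
print(empro_attribute_score([4, None, None, None]))  # None: insufficient information
```

This sketch makes the penalty mechanism concrete: a version for which studies report nothing about an attribute receives no score for it, regardless of how well it might actually perform.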

EMPRO ratings may be biased by the individual expertise of the evaluators, although the double and independent review conducted, as well as a comprehensive description of each item, may have attenuated this concern. Studies on metric properties from different country versions were considered in our EMPRO evaluation. Although these country versions can add noise in one sense, they also provide valuable information about the generalizability of the psychometric data to these measures.

The coexistence of different nomenclatures has been a challenge in the process of reviewing and selecting articles. The name of the questionnaire has been modified since the original authors called it the Duke-UNC Functional Social Support Questionnaire (DUFSS). Subsequently, various researchers have modified the original acronym, yielding a variety of names: DUKE-UNC, DUKE-UNC-11, DUKE-UNK, FSSQ and DUFSSQ. This increases confusion among researchers who want to use some version of the DUFSS; additionally, these alternate acronyms may resemble those of tools that measure different concepts, creating further potential for confusion, e.g., the Duke Social Support Index (DSSI) [44] or the Duke-UNC Health Profile (DUHP) [45]. To ensure that all validation articles on the DUFSS were located, we included the possible names it may acquire in the search strategy (as explained in the methodology); once we had the search results, we read the full text of any paper that raised doubts to confirm that it concerned the right questionnaire.

Applicability

This paper can shed light on the study of social support as a PRO in different domains and help to unravel the current complexity that exists around this questionnaire.

Knowing the different versions of the DUFSS questionnaire, together with relevant information about each one, will allow researchers who wish to study this subject to choose the version that best suits their interests and to be aware of the evaluation of its quality. In addition, improving knowledge about this PRO will allow progress to be made and will strengthen work in the field of epidemiology and public health on person-centred care.

Furthermore, in the educational field, this study has two applications: on the one hand, it provides training in specific tools for the study of PROs; on the other hand, it highlights the importance of consulting original sources and reviewing the work previously done on the research question.

Conclusions

There are 4 versions of the DUFSS questionnaire with different numbers of items: 14, 11, 8 and 5 items. All of them have been validated in very specific populations and not in the general population.

Among the 4 versions of the DUFSS questionnaire, the 11-item version has been the most studied, especially in Spanish-speaking countries. This version scored higher than the others because it was the version with the largest number of studies and was therefore more likely to address all the attributes considered by the EMPRO tool.

To state with certainty that the 11-item version is more appropriate than the other versions, more studies evaluating each of the other versions are needed. Although, a priori, we could prioritise its use in epidemiological studies over the other versions, this version should also be used with caution because some of its attributes have not been studied.

All versions of the DUFSS questionnaire should be used with caution, since many of the attributes studied have not shown sufficient quality in any of the versions analysed herein. It is necessary to conduct future studies on the DUFSS questionnaire to evaluate aspects such as its reproducibility and to perform complete factor analysis in the general population.

Supporting information

S1 Checklist. PRISMA 2020 checklist.

(DOCX)

S1 Dataset

(XLSX)

Acknowledgments

We thank JA López-Rodríguez for his help in reviewing this work.

Data Availability

All relevant data are within the paper and its Supporting Information files. The dataset corresponding to the evaluations of the different versions of the DUFSS questionnaire using the EMPRO tool has been published in Zenodo. The publication is under DOI: https://doi.org/10.5281/zenodo.8211219.

Funding Statement

The main author CMLH received a grant for the translation of this paper from the Foundation for Biosanitary Research and Innovation in Primary Care of the Community of Madrid (FIIBAP). This study was funded by the Fondo de Investigaciones Sanitarias ISCIII (Grant Numbers PI15/00276, PI15/00572, PI15/00996), REDISSEC (Project Numbers RD16/0001/0006, RD16/0001/ 0005 and RD16/0001/0004), and the European Regional Development Fund (“A way to build Europe”). Funders had no role in study design or in the decision to submit the report for publication. The publication of study results was not contingent on the sponsor’s approval or censorship of the manuscript.

References

  • 1.Kaplan BH, Cassel JC, Gore S. Social Support and Health. Med Care. 1977;15: 47–58. doi: 10.1097/00005650-197705001-00006 [DOI] [PubMed] [Google Scholar]
  • 2.Gallant MP. The influence of social support on chronic illness self-management: A review and directions for research. Health Education and Behavior. 2003;30: 170–195. doi: 10.1177/1090198102251030 [DOI] [PubMed] [Google Scholar]
  • 3.Lozano-Hernández CM, López-Rodríguez JA, Leiva-Fernández F, Calderón-Larrañaga A, Barrio-Cortes J, Gimeno-Feliu LA, et al. Social support, social context and nonadherence to treatment in young senior patients with multimorbidity and polypharmacy followed-up in primary care. MULTIPAP study. PLoS One. 2020;15: 1–15. doi: 10.1371/journal.pone.0235148 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Blazer DG. Social support and mortality in an elderly community population. Am J Epidemiol. 1982;115: 684–694. Available: https://academic.oup.com/aje/article/115/5/684/153301 doi: 10.1093/oxfordjournals.aje.a113351 [DOI] [PubMed] [Google Scholar]
  • 5.Cobb S. Social Support as a Moderator of Life Stress. Psychosomatic Medicine. 1976;38. [DOI] [PubMed] [Google Scholar]
  • 6.Cassell A, Edwards D, Harshfield A, Rhodes K, Brimicombe J, Payne R, et al. The epidemiology of multimorbidity in primary care: A retrospective cohort study. British Journal of General Practice. 2018;68: e245–e251. doi: 10.3399/bjgp18X695465 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Cohen S. Psychosocial models of the role of social support in the etiology of physical disease. Health Psychol. 1988;7: 269–97. doi: 10.1037//0278-6133.7.3.269 [DOI] [PubMed] [Google Scholar]
  • 8.Kahn RL, Antonucci TC. Convoys over the life course: Attachment, roles, and social support. Life-Span Development and Behavior. 1980;3: 253–283. Available: https://www.researchgate.net/publication/259253271 [Google Scholar]
  • 9.Thoits PA. Conceptual, methodological, and theoretical problems in studying social support as a buffer against life stress. J Health Soc Behav. 1982;23: 145–159. [DOI] [PubMed] [Google Scholar]
  • 10.Thoits PA. Conceptual, methodological, and theoretical problems in studying social support as a buffer against life stress. J Health Soc Behav. 1982;23: 145–159. doi: 10.2307/2136511 [DOI] [PubMed] [Google Scholar]
  • 11.House JS, Kahn RL. Measures and concept of social support. Social support and health. 1985. pp. 83–108. Available: http://www.isr.umich.edu/williams/All Publications/DRW pubs 1985/measures and concept of social support.pdf [Google Scholar]
  • 12.Gottlieb BH, Bergen AE. Social support concepts and measures. J Psychosom Res. 2010;69: 511–520. doi: 10.1016/j.jpsychores.2009.10.001 [DOI] [PubMed] [Google Scholar]
  • 13.Aranda B. C, Pando M. M. Conceptualización del apoyo social y las redes de apoyo social. Revista de Investigación en Psicología. 2014;16: 233. doi: 10.15381/rinvp.v16i1.3929 [DOI] [Google Scholar]
  • 14.Barrón Ana. Apoyo social: aspectos teóricos y aplicaciones. Siglo XXI de España General, editor. Madrid: Siglo XXI; 1996. [Google Scholar]
  • 15.Bellón JA, Delgado A, Luna J LP. Validez y fiabilidad del cuestionario de apoyo social funcional Duke-UNC-11. Aten Primaria. 1996;18: 153–63. [PubMed] [Google Scholar]
  • 16.Broadhead WE, Gehlbach SH, de Gruy FV, Kaplan BH. The Duke-UNC Functional Social Support Questionnaire: measurement of social support in family medicine patients. Med Care. 1988;26: 709–723. [DOI] [PubMed] [Google Scholar]
  • 17.DiMatteo MR. Social Support and Patient Adherence to Medical Treatment: A Meta-Analysis. Health Psychology. 2004;23: 207–218. doi: 10.1037/0278-6133.23.2.207 [DOI] [PubMed] [Google Scholar]
  • 18.Uchino BN, Cacioppo JT, Kiecolt-Glaser JK. The relationship between social support and physiological processes: A review with emphasis on underlying mechanisms and implications for health. Psychol Bull. 1996;119: 488–531. doi: 10.1037/0033-2909.119.3.488 [DOI] [PubMed] [Google Scholar]
  • 19.Broadhead WE, Gehlbach SH, de Gruy FV, Kaplan BH. Functional versus structural social support and health care utilization in a family medicine outpatient practice. Med Care. 1989;27: 221–233. doi: 10.1097/00005650-198903000-00001 [DOI] [PubMed] [Google Scholar]
  • 20.Broadhead WE, Kaplan BH, James SA, Wagner EH, Schoenbach VJ, Grimson R, et al. The epidemiologic evidence for a relationship between social support and health. Am J Epidemiol. 1983;117: 521–537. Available: https://academic.oup.com/aje/article/117/5/521/102512 doi: 10.1093/oxfordjournals.aje.a113575 [DOI] [PubMed] [Google Scholar]
  • 21.Instituto Nacional de Estadística, Ministerio de Sanidad SS e I. Encuesta Nacional de Salud 2017 ENSE 2017 Metodología. 2017. [Google Scholar]
  • 22.European Health Interview Survey—Access to microdata—Eurostat. [cited 25 Jul 2022]. Available: https://ec.europa.eu/eurostat/web/microdata/european-health-interview-survey
  • 23.Valderas JM, Ferrer M, Mendívil J, Garin O, Rajmil L, Herdman M, et al. Development of EMPRO: A tool for the standardized assessment of patient-reported outcome measures. Value in Health. 2008;11: 700–708. doi: 10.1111/j.1524-4733.2007.00309.x [DOI] [PubMed] [Google Scholar]
  • 24.Schmidt S, Garin O, Pardo Y, Valderas JM, Alonso J, Rebollo P, et al. Assessing quality of life in patients with prostate cancer: A systematic and standardized comparison of available instruments. Quality of Life Research. Springer International Publishing; 2014. pp. 2169–2181. doi: 10.1007/s11136-014-0678-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.de la Revilla Ahumada L, Bailón E, de Dios Luna J, Delgado A, Prados MA, Fleitas L. Validation of a functional social support scale for use in the family doctor’s office. Aten Primaria. 1991;8: 688–92. [PubMed] [Google Scholar]
  • 26.Alvarado BE, Zunzunegui MV, Delisle H. Validación de escalas de seguridad alimentaria y de apoyo social en una población afro-colombiana: aplicación en el estudio de prevalencia del estado nutricional en niños de 6 a 18 meses. Cad Saude Publica. 2005;21: 724–736. doi: 10.1590/S0102-311X2005000300006 [DOI] [PubMed] [Google Scholar]
  • 27.Piña LJ, Rivera IB. Validación del Cuestionario de Apoyo Social Funcional en personas seropositivas al VIH del noroeste de México. Ciencia y Enfermería. 2007;XIII: 53–63. doi: 10.4067/S0717-95532007000200007 [DOI] [Google Scholar]
  • 28.Ayala A, Rodríguez-Blázquez C, Frades-Payo B, Forjaz MJ, Martínez-Martín P, Fernández-Mayoralas G, et al. Psychometric properties of the Functional Social Support Questionnaire and the Loneliness Scale in non-institutionalized older adults in Spain. Gac Sanit. 2012;26: 317–324. doi: 10.1016/j.gaceta.2011.08.009 [DOI] [PubMed] [Google Scholar]
  • 29.Cuéllar-Flores I, Dresch V. Validación del cuestionario de Apoyo Social Funcional Duke-UNK-11 en personas cuidadoras. Ridep · No. 2012;34: 89–101. [Google Scholar]
  • 30.Mas-Expósito L, Amador-Campos JA, Gómez-Benito J, Lalucat-Jo L. Validation of the modified DUKE-UNC Functional Social Support Questionnaire in patients with schizophrenia. Soc Psychiatry Psychiatr Epidemiol. 2013;48: 1675–1685. doi: 10.1007/s00127-012-0633-3 [DOI] [PubMed] [Google Scholar]
  • 31.Rivas-Díez R. Apoyo social funcional en mujeres de la población general y en mujeres maltratadas chilenas: propiedades psicométricas del Duke-UNC-11 [Functional social support in the general population and in Chilean battered women: psychometric properties of the Duke-UNC-11]. 2013. [Google Scholar]
  • 32.Aguilar-Sizer M, Lima-Castro S, Arias-Medina P, Peña-Contreras EK, Cabrera-Vélez M, Bueno-Pacheco A. Propiedades psicométricas del Cuestionario de Apoyo Social Funcional Duke-UNK-11 en una muestra de adultos ecuatorianos. "EUREKA" Revista de Investigación Científica en Psicología. 2021;18: 55–71. Available: www.psicoeureka.com.py [Google Scholar]
  • 33.Caycho Rodríguez T, Domínguez Lara S, Villegas G, Sotelo N, Carbajal León C. Análisis psicométrico del Cuestionario de Apoyo Social Funcional DUKE-UNK-11 en inmigrantes peruanos en Italia. Pensamiento Psicológico. 2014;12: 25–35. doi: 10.11144/Javerianacali.PPSI12-2.apca [DOI] [Google Scholar]
  • 34.Martins S, Martins C, Almeida A, Ayala-Nunes L, Gonçalves A, Nunes C. The Adapted DUKE-UNC Functional Social Support Questionnaire in a Community Sample of Portuguese Parents. Res Soc Work Pract. 2022; 104973152210760. doi: 10.1177/10497315221076039 [DOI] [Google Scholar]
  • 35.Isaacs KB, Hall LA. A psychometric analysis of the functional social support questionnaire in low-income pregnant women. Issues Ment Health Nurs. 2011;32: 766–773. doi: 10.3109/01612840.2011.610561 [DOI] [PubMed] [Google Scholar]
  • 36.Epino HM, Rich ML, Kaigamba F, Hakizamungu M, Socci AR, Bagiruwigize E, et al. Reliability and construct validity of three health-related self-report scales in HIV-positive adults in rural Rwanda. AIDS Care—Psychological and Socio-Medical Aspects of AIDS/HIV. 2012;24: 1576–1583. doi: 10.1080/09540121.2012.661840 [DOI] [PubMed] [Google Scholar]
  • 37.Saracino R, Kolva E, Rosenfeld B, Breitbart W, et al. Measuring social support in patients with advanced medical illnesses: An analysis of the Duke–UNC Functional Social Support Questionnaire. Palliat Support Care. 2015;13: 1153–1163. doi: 10.1017/S1478951514000996 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Argimon Pallás JM. Métodos de investigación clínica y epidemiológica. Elsevier España SLU; 2019. [Google Scholar]
  • 39.Fernández Vargas AM, et al. Salud autopercibida, apoyo social y familiar de los pacientes con enfermedad pulmonar obstructiva crónica. Medifam. 2001;11: 530–539. [Google Scholar]
  • 40.Ruiz I, Olry A, Delgado CJ, Herrero MM, Muñoz N, Pasquau J, et al. Impacto del apoyo social y la morbilidad psíquica en la calidad de vida en pacientes tratados con antirretrovirales. Psicothema. 2005;17: 245–249. [Google Scholar]
  • 41.Slade P, O'Neill C, Simpson AJ, Lashen H. The relationship between perceived stigma, disclosure patterns, support and distress in new attendees at an infertility clinic. Human Reproduction. 2007;22: 2309–2317. [DOI] [PubMed] [Google Scholar]
  • 42.Harley K, Eskenazi B. Time in the United States, social support and health behaviors during pregnancy among women of Mexican descent. Soc Sci Med. 2006;62: 3048–3061. doi: 10.1016/j.socscimed.2005.11.036 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Instituto Nacional de Estadistica. Nota Técnica Encuesta Nacional de Salud. España 2017 Principales resultados. 2017. Available: https://www.mscbs.gob.es/estadEstudios/estadisticas/encuestaNacional/encuestaNac2017/ENSE2017_notatecnica.pdf [Google Scholar]
  • 44.Koenig HG, Westlund RE, George LK, Hughes DC, Blazer DG, Hybels C. Abbreviating the Duke Social Support Index for use in chronically ill elderly individuals. Psychosomatics. 1993;34: 61–69. [DOI] [PubMed] [Google Scholar]
  • 45.Parkerson GR, Gehlbach SH, Wagner EH, James SA, Clapp NE, Muhlbaier LH. The Duke-UNC Health Profile: An Adult Health Status Instrument for Primary Care. Med Care. 1981;19: 806–28. doi: 10.1097/00005650-198108000-00002 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Gian Mauro Manzoni

10 Feb 2023

PONE-D-22-28641 Functional Social Support: a systematic review and standardized comparison of different versions of the DUFSS questionnaire using the EMPRO tool. PLOS ONE

Dear Dr. Lozano-Hernández,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Mar 27 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Gian Mauro Manzoni, Ph.D., Psy.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that in order to use the direct billing option the corresponding author must be affiliated with the chosen institute. Please either amend your manuscript to change the affiliation or corresponding author, or email us at plosone@plos.org with a request to remove this option.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 

Additional Editor Comments:

#Reviewer 1:

The manuscript is very interesting. I have some concerns - but all fully addressable.

- Considering the topic of your research, the introduction is quite short and should be improved.

- The EMPRO tool is very interesting and I think it should be described in greater detail - allowing other researchers to use the same procedure for their studies (using supplementary material, maybe?).

- Considering that the search for studies was run on 4 April - and it is now December - I suggest running another search so as not to exclude newly published studies.

- I suggest the Authors enlarge the limitations and strengths section.

#Reviewer 2:

This article provides a thorough analysis and evaluation comparing the psychometric performance of different DUFSS versions. As such, it provides grounds for researchers to justify choosing one version over another. Overall, the article is well structured and consistent. Having a summary table reporting psychometrics for each version would help readers interpret results beyond the EMPRO tool, with which most readers are not familiar.

Abstract

1. Consider adding more details in the methods section, such as the search terms, how many people ran the selection and extraction procedure, what data was extracted, and what the inclusion criteria were (language restriction, type of publication, etc.).

2. The search identified 54 articles, of which 15 were retained. The current statement can be confusing, as it suggests 54 were retained and only 15 analysed.

3. In the results section, please provide more details on what explains the score difference and in what way the DUFSS-11 was better than the others. Furthermore, provide some indication of comparability between the studies in terms of what population was tested and what underlying conditions were present.

4. Consider revising the conclusion as recommendation for choosing one of the versions in an epidemiological study.

Background

5. Consider adding a small paragraph on the different expected psychometric measures and in what way they can inform on DUFSS performance.

6. In the wording of the study objectives, it is unclear what is meant by “a standardised way”; please consider specifying in what way.

Methods

7. Line 95 – Please give more details on which articles were eligible. Are they accepted, published online, in print; do they have to be peer reviewed? Are there any language restrictions? Are all language versions of the DUFSS to be included?

8. Line 98 – What is meant by “underage population”? Under the age of what?

9. Please consider moving lines 103-107 to the next section “search strategy”.

10. Discrepancies in what? Please reword to make clear whether the selection process was done based on titles, then on abstracts, then on full text, and whether discrepancies in selection choices were resolved during each step.

11. Line 116 – Please provide further details on what is meant by “main results”. It is useful to understand what psychometric measures were of interest and how they were extracted.

12. For compatibility between studies, were some measures recalculated based on the information made available?

13. Line 122 – Consider rewording the section heading to “EMPRO tool” and then “Experts”. Please provide further information about the EMPRO tool including its available psychometrics. What does this instrument measure and how valid is this measurement?

14. How was study quality assessed for the included articles?

Results

15. Consider providing more detailed results of extracted psychometrics from the studies in the result section for Conceptual and measurement model, reliability, validity, responsiveness, and interpretability as reported in the discussion.

15b. Adding a table with psychometric values for each version would help.

Discussion

16. Please consider synthesising the results in a paragraph, then to identify limitation of the actual study, and then discuss the results compared to other similar studies on similar instruments, then discuss the practical implications and then conclude.

17. Lines 293-301, please provide deeper insights on cultural and language differences between versions of a questionnaire. To what extend can we assume we are measuring the same thing? What does the literature say about this and in what way could the results between different versions be explained by cultural differences rather than version differences? Do the different versions have similar psychometric values between English and Spanish (to be reported in the results section)?

18. The main limitation is the reliance on the EMPRO tool. It is unclear whether a score can be affected only by good or bad psychometric performance, or also by the fact that some psychometric measures were not made available by researchers. It is also unclear why each criterion has the same weight in the overall score, and in what way that relates to the test's value. A test that performed very badly in reliability will remain bad even if it performs well on other psychometric characteristics. Another limitation is combining psychometrics of tests run in different languages. A further limitation is that the versions are compared only on psychometric performance, although some versions might offer other advantages, such as time needed to answer questions, understandability, etc.

19. Lines 311-319: This does not seem to be a limitation, as it does not seem to affect the results of this study. Misclassification of studies as being the DUFSS would be a limitation. What measures were taken to make sure that the retained articles truly tested the DUFSS?

20. Please consider adding a section on practical implication for research, education and public health.

Conclusion

21. In what way can the 5-item version compete when it was evaluated by only two studies? How can the authors justify this ranking based on a score that could change over time depending on future studies of the other versions? How can we go beyond saying that, with the currently available evidence, the DUFSS version that shows the best performance is the 11-item version?

Figures

22. Some text within the boxes has not been translated from Spanish.

Tables and supplement data

23. Extracted psychometric values for each version are not made available.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript is very interesting. I have some concerns - but fully addressable.

- Considering the topic of your research, the introduction is quite short and should be improved.

- The EMPRO tool is very interesting and I think it should be described in greater detail - allowing other researchers to use the same procedure for their studies (using supplementary material, maybe?).

- Considering that the search was conducted on 4th April - and it is now December - I suggest running the search again in order not to exclude newly published studies.

- I suggest the authors expand the limitations and strengths section.

Reviewer #2: This article provides a thorough analysis and evaluation to compare the psychometric performance of different DUFSS versions. As such, it provides grounds for researchers to justify choosing one version over another. Overall, the article is well structured and consistent. Having a summary table reporting the psychometrics for each version would help readers interpret results beyond the EMPRO tool, with which most readers are not familiar.

Abstract

1. Consider adding more details in the methods section, such as the search terms, how many researchers ran the selection and extraction procedure, what data were extracted, and what the inclusion criteria were (language restriction, type of publication, etc.).

2. The search method identified 54 articles, of which 15 were retained. The current statement can be confusing, as it suggests 54 were retained and only 15 analysed.

3. In the results section, please provide more details on what explains the score difference and in what way the DUFSS-11 was better than the others. Furthermore, provide some indication of comparability between the studies, in the sense of what population was tested and what underlying conditions were present.

4. Consider revising the conclusion as a recommendation for choosing one of the versions in an epidemiological study.

Background

5. Consider adding a small paragraph on the different expected psychometric measures and in what way they can inform on DUFSS performance.

6. In the wording of the study objectives, it is unclear what is meant by “a standardised way”; please consider specifying in what way.

Methods

7. Line 95 – Please give more details on which articles were eligible. Are they accepted, published online, in print? Do they have to be peer reviewed? Are there any language restrictions? Are all language versions of the DUFSS included?

8. Line 98 – What is meant by “underage population”? Under the age of what?

9. Please consider moving lines 103-107 to the next section “search strategy”.

10. Discrepancies in what? Please reword to make clear whether the selection process was done based on titles, then on abstracts, then on full text, and whether discrepancies in selection choices were resolved at each step.

11. Line 116 – Please provide further details on what is meant by “main results”. It is useful to understand which psychometric measures were of interest and how they were extracted.

12. For comparability between studies, were some measures recalculated based on the information made available?

13. Line 122 – Consider rewording the section heading to “EMPRO tool” and then “Experts”. Please provide further information about the EMPRO tool including its available psychometrics. What does this instrument measure and how valid is this measurement?

14. How was study quality assessed for the included articles?

Results

15. Consider providing more detailed results of the psychometrics extracted from the studies in the results section for conceptual and measurement model, reliability, validity, responsiveness, and interpretability, as reported in the discussion.

15b. Adding a table with psychometric values for each version would help.

Discussion

16. Please consider synthesising the results in a paragraph, then identifying the limitations of the present study, then discussing the results compared to other similar studies on similar instruments, then discussing the practical implications, and then concluding.

17. Lines 293-301 – please provide deeper insights into cultural and language differences between versions of a questionnaire. To what extent can we assume we are measuring the same thing? What does the literature say about this, and in what way could the results between different versions be explained by cultural differences rather than version differences? Do the different versions have similar psychometric values in English and Spanish (to be reported in the results section)?

18. The main limitation is the reliance on the EMPRO tool. It is unclear whether a score can be affected only by good or bad psychometric performance, or whether it can also be affected by the fact that some psychometric measures were not made available by the researchers. It is also unclear why each criterion carries a similar weight in the overall score, and in what way that relates to the test's value. A test that performed very badly on reliability will remain bad even if it performs well on other psychometric characteristics. Another limitation is combining psychometrics of tests run in different languages. A further limitation is that the versions are compared only on psychometric performance, but some versions might have other advantages over others, such as the time needed to answer the questions, understandability, etc.

19. Lines 311-319 – This does not seem to be a limitation, as it does not seem to affect the results of this study. Misclassification of studies as being the DUFSS would be a limitation. What measures were taken to make sure that the retained articles truly tested the DUFSS?

20. Please consider adding a section on practical implications for research, education and public health.

Conclusion

21. In what way can the 5-item version compete when it was evaluated by only two studies? How can the authors justify this classification based on a score that could change over time depending on future available studies on other versions? How can we go beyond saying that, with the currently available evidence, the DUFSS version that shows the best performance is the 11-item one?

Figures

22. Some text within the boxes has not been translated from Spanish.

Tables and supplement data

23. The extracted psychometric values for each version are not made available.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Paul Vaucher

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Sep 15;18(9):e0291635. doi: 10.1371/journal.pone.0291635.r002

Author response to Decision Letter 0


15 Apr 2023

Dear reviewers:

We greatly appreciate the comments and submissions made on this paper. We certainly feel that the quality of the document has been greatly enhanced.

Below are the details of each of the responses requested.

#Reviewer 1:

The manuscript is very interesting. I have some concerns - but fully addressable.

1. Considering the topic of your research, the introduction is quite short and should be improved.

We appreciate the suggestion. We have added information in the introduction that helps to better understand the concept of social support, its relationship to health and aspects of its psychometric measurement.

2. The EMPRO tool is very interesting and I think it should be described in greater detail - allowing other researchers to use the same procedure for their studies (using supplementary material, maybe?).

Thank you very much for the proposal. We have prepared a table on the attributes studied by EMPRO. It details each attribute with its name, definition, the number of items it comprises, and examples of scoring.

We have included it in the file "Tables and Figures" under the name "Table 2. Attributes assessed using the Evaluating the Measurement of Patient-Reported Outcomes (EMPRO) tool".

3. Considering that the search was conducted on 4th April - and it is now December - I suggest running the search again in order not to exclude newly published studies.

The search for this systematic review was conducted on 4 April. Following your recommendation, we have run the search again and found that there are no new articles on this topic.

4. I suggest the authors expand the limitations and strengths section.

Thank you for your suggestion. We have followed your instructions and have expanded different aspects of the limitations and strengths of this work.

#Reviewer 2:

This article provides a thorough analysis and evaluation to compare the psychometric performance of different DUFSS versions. As such, it provides grounds for researchers to justify choosing one version over another. Overall, the article is well structured and consistent. Having a summary table reporting the psychometrics for each version would help readers interpret results beyond the EMPRO tool, with which most readers are not familiar.

Abstract:

1. Consider adding more details in the methods section, such as the search terms, how many researchers ran the selection and extraction procedure, what data were extracted, and what the inclusion criteria were (language restriction, type of publication, etc.).

Following your recommendations, we have partially expanded the information in this section. However, taking into consideration the word limit established by the journal's rules for the abstract, the remaining details have been expanded in the body of the paper.

2. The search method identified 54 articles, of which 15 were retained. The current statement can be confusing, as it suggests 54 were retained and only 15 analysed.

Thank you for notifying us of this mistake. This may be due to a translation error. We have replaced the term "retrieved" with "identified".

3. In the results section, please provide more details on what explains the score difference and in what way the DUFSS-11 was better than the others. Furthermore, provide some indication of comparability between the studies, in the sense of what population was tested and what underlying conditions were present.

As requested, we have provided more detailed information in the results section of the abstract.

4. Consider revising the conclusion as a recommendation for choosing one of the versions in an epidemiological study.

We welcome this suggestion and have incorporated further contributions to the conclusions. As recommended in another point, we have also included a section on "applicability", which allows us to focus more on this aspect.

Background:

5. Consider adding a small paragraph on the different expected psychometric measures and in what way they can inform on DUFSS performance.

Thank you for the suggestion. We have included a paragraph on this issue in the introduction.

6. In the wording of the study objectives, it is unclear what is meant by “a standardised way”; please consider specifying in what way.

A standardised assessment of the different versions of the DUFSS questionnaire was performed using the Evaluating the Measurement of Patient-Reported Outcomes (EMPRO) tool. To clarify this, the objective has been modified to read as follows: The objective of this study is to systematically review the available evidence on the psychometric and administration characteristics of the different versions of the DUFSS and to evaluate them through the standardised EMPRO tool.

Methods:

7. Line 95 – Please give more details on which articles were eligible. Are they accepted, published online, in print? Do they have to be peer reviewed? Are there any language restrictions? Are all language versions of the DUFSS included?

Thank you for giving us the opportunity to clarify this part of the methodology. We have tried to broaden the search as much as possible, so there are few restrictions: only populations under 18 years of age were excluded. There is no restriction on print publication; the older articles are on paper and have been digitised by scanning, so they are now also available in digital format. We have also not restricted by language, although all the articles we found were written in English or Spanish. The language in which the questionnaire was used to collect the data was the original language of each country, and this was not considered an exclusion criterion. This information has been included in the main document. In addition, following this proposal, we have modified Table 3 and included a column detailing the language version of each article.

8. Line 98 – What is meant by “underage population”? Under the age of what?

Thank you for pointing out this term. We are referring to the population under 18 years of age. We have replaced the term in the manuscript.

9. Please consider moving lines 103-107 to the next section “search strategy”.

Thank you for this suggestion. The highlighted lines have been moved to the "Search strategy" section.

10. Discrepancies in what? Please reword to make clear whether the selection process was done based on titles, then on abstracts, then on full text, and whether discrepancies in selection choices were resolved at each step.

Thank you for allowing us to explain this point in more detail.

We have included a detailed description of the process in the main manuscript. The paragraph reads as follows:

Initially, two reviewers screened the titles according to the inclusion criteria. Then, the same two reviewers did the same for the abstracts. Once the duplicates had been removed and based on the selection criteria, the eligibility of the full articles was assessed. Discrepancies that arose at each of the selection stages were resolved by discussion and consensus between the two researchers, and a third reviewer was consulted when consensus could not be reached between the two previous reviewers.

11. Line 116 – Please provide further details on what is meant by “main results”. It is useful to understand which psychometric measures were of interest and how they were extracted.

Thank you for this proposal. We have included more details in this section on the data extracted from each study in the "data items" section.

12. For comparability between studies, were some measures recalculated based on the information made available?

The EMPRO tool is completed by evaluators who are experts in metric properties, considering both the reported results and the quality of the methods applied in each study, so it was not necessary to recalculate any measure.

13. Line 122 – Consider rewording the section heading to “EMPRO tool” and then “Experts”. Please provide further information about the EMPRO tool including its available psychometrics. What does this instrument measure and how valid is this measurement?

Thank you for this recommendation which will help for a better understanding.

We have made the indicated change of section.

14. How was study quality assessed for the included articles?

EMPRO contains several criteria aimed at assessing study quality in each of the following 5 attributes: conceptual and measurement model, reliability, validity, responsiveness and interpretability. Some of the EMPRO items assessing the results and methodological quality are: item 8-Linguistic equivalence; item 12-Cronbach’s alpha adequate; item 17-Reproducibility coefficients adequate; item 22-Prior hypothesis stated; and item 25-Adequacy of methods. Following the reviewer's comment, we have considered it convenient to include in Table 2, where the EMPRO items are specified, information on the items that assess results and those that focus more on the methodological quality of the studies.

The authors of the EMPRO (Valderas et al., 2008) defined the quality of a PRO measure as the ''degree of confidence that all possible biases have been minimised and that information about the process leading to its development and evaluation is clear and accessible''. The EMPRO combines 3 key aspects: (1) well-described and established attributes for assessment, (2) expert reviewers to conduct the assessment, and (3) scores that allow direct comparison between outcome measures. It is based on a comprehensive set of recommendations on the ideal attributes of PRO measures. The EMPRO is a valid and reliable tool that has proven useful for comparing the performance of generic PROs and disease-specific PROs, such as heart failure and shoulder disorders.

For this reason, we considered that the EMPRO items jointly assess both the metric properties and the methodological quality of the included studies, and they were therefore selected for this study.

Results:

15. Consider providing more detailed results of the psychometrics extracted from the studies in the results section for conceptual and measurement model, reliability, validity, responsiveness, and interpretability, as reported in the discussion.

We are grateful for this recommendation, which helps to make the results easier to understand. Following your advice, we have rewritten the results along the lines of the discussion section. This makes the results section more focused and allows us to follow the rest of your recommendations.

15b. Adding a table with psychometric values for each version would help.

Table 4 contains the results of the EMPRO assessment according to each version of the DUFSS questionnaire. For clarification purposes, the name of the table has been changed in the table document and in the results section of the main document. The name assigned to the table is: Table 4. EMPRO results: psychometric values of each version of the DUFSS questionnaire.

Discussion:

16. Please consider synthesising the results in a paragraph, then identifying the limitations of the present study, then discussing the results compared to other similar studies on similar instruments, then discussing the practical implications, and then concluding.

We have followed your indications and the discussion has been shortened. The structure you indicated has been followed, and aspects of similar studies have been included throughout the discussion.

17. Lines 293-301 – please provide deeper insights into cultural and language differences between versions of a questionnaire. To what extent can we assume we are measuring the same thing? What does the literature say about this, and in what way could the results between different versions be explained by cultural differences rather than version differences? Do the different versions have similar psychometric values in English and Spanish (to be reported in the results section)?

Thank you for focusing on this important aspect. We agree that we should pay more attention to the description of these differences in the results and include a more detailed section in the discussion. Therefore, in addition to having included a detailed column on the language versions of the included questionnaires in table 3, the results and the discussion have been written with a focus on this aspect.

18. The main limitation is the reliance on the EMPRO tool. It is unclear whether a score can be affected only by good or bad psychometric performance, or whether it can also be affected by the fact that some psychometric measures were not made available by the researchers. It is also unclear why each criterion carries a similar weight in the overall score, and in what way that relates to the test's value. A test that performed very badly on reliability will remain bad even if it performs well on other psychometric characteristics. Another limitation is combining psychometrics of tests run in different languages. A further limitation is that the versions are compared only on psychometric performance, but some versions might have other advantages over others, such as the time needed to answer the questions, understandability, etc.

The data provided in the work of Valderas et al. suggest that the EMPRO instrument is valid. This tool demonstrated excellent reliability in terms of internal consistency (Cronbach's alpha = 0.95) and inter-rater concordance (intraclass correlation coefficient: 0.87-0.94).
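For readers less familiar with the internal-consistency statistic cited here, Cronbach's alpha can be computed directly from item-level ratings. The sketch below is purely illustrative (the data are invented, not taken from the EMPRO study) and uses only the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    `items` is a list of k columns; each column holds the scores of one
    item across all respondents (invented data for illustration).
    """
    k = len(items)
    n = len(items[0])

    def pvar(xs):
        # Population variance of a list of numbers.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(pvar(col) for col in items)
    # Total score of each respondent across all items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvar(totals))
```

With perfectly correlated items (e.g. two items scored [1, 2, 3] by the same three respondents), the function returns 1.0, the theoretical maximum; less consistent items yield lower values, and 0.95, as reported for EMPRO, indicates excellent internal consistency.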

The expected associations between EMPRO scores and the proposed variables were observed, supporting the construct validity of the tool. However, these relationships are consistent with the hypothesis that EMPRO scores depend on the quantity and quality of the information provided for the assessment. Thus, psychometric measures that are not made available to the reader are not taken into account and therefore penalise the assessment. On the other hand, EMPRO offers the possibility to answer with "no information" or "not applicable": (a) items answered as "no information" are assigned a score of 1 (the lowest possible score), provided at least 50% of all items of an attribute were scored; (b) items scored as "not applicable" (an option that is only available as a response for 5 items) are not taken into account in the scoring of the attributes.
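The handling of "no information" and "not applicable" responses described above can be sketched as a small scoring function. Only that handling follows the text; the 1-4 item scale and the linear 0-100 rescaling are assumptions added for illustration:

```python
def empro_attribute_score(ratings):
    """Illustrative sketch of the EMPRO attribute-scoring rule.

    `ratings` holds one entry per item of an attribute: an integer 1-4
    (an assumed rating scale), "no information", or "not applicable".
    Returns a 0-100 score (assumed rescaling), or None if the attribute
    cannot be scored.
    """
    # "Not applicable" items are excluded from the attribute score.
    considered = [r for r in ratings if r != "not applicable"]
    scored = [r for r in considered if r != "no information"]
    # The attribute is scored only if at least 50% of its items were
    # rated; otherwise no score is produced.
    if not considered or len(scored) < 0.5 * len(considered):
        return None
    # Unanswered ("no information") items receive 1, the lowest score.
    values = [1 if r == "no information" else r for r in considered]
    mean = sum(values) / len(values)
    # Rescale the 1-4 mean linearly to 0-100 (an assumption).
    return 100 * (mean - 1) / 3
```

For example, an attribute rated [4, 4, "no information", 4] is scored (three of four items rated), with the missing item penalised as 1; one rated mostly "no information" returns None. This makes concrete why withholding psychometric results lowers, rather than merely omits, an attribute's score.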

Another aspect that has been evaluated is the burden of administering the questionnaire. In this study, the two shorter versions were found to score better on this attribute.

Following this recommendation, we have synthesised the limitations of the study to the most relevant aspects of the evaluation through the EMPRO tool and have included the aspects indicated.

19. Lines 311-319 – This does not seem to be a limitation, as it does not seem to affect the results of this study. Misclassification of studies as being the DUFSS would be a limitation. What measures were taken to make sure that the retained articles truly tested the DUFSS?

We agree with your point, as it did not affect the results and did not lead to a misclassification of the DUFSS. In order to ensure that all validation articles on the DUFSS were located, we included the possible names it may take in the search strategy (this part is explained in the methodology); and once we had the search results, we read the full text of any paper that raised doubts, in order to ensure that it was the right questionnaire.

Although this is not a limitation for the results of our work, we think it is interesting to point out that the names and acronyms used for this questionnaire differ depending on the author, and that there are other questionnaires with similar names that can be misleading.

20. Please consider adding a section on practical implications for research, education and public health.

Thank you for this suggestion, we think it is very appropriate to include this section in our manuscript. It will read as follows:

This paper can shed light on the study of social support as a PRO in different domains and help to unravel the current complexity that exists around this questionnaire.

Knowing the different versions of the DUFSS questionnaire, and providing relevant information about each one, will allow researchers who wish to study this subject to choose the version that best suits their interests and to be aware of the evaluation of its quality. In addition, improving knowledge about this PRO will allow progress to be made and give greater strength to work in the field of epidemiology and public health on person-centred care.

Furthermore, in the educational field, this study has two applications: on the one hand, it provides training in specific tools for the study of PROs; on the other hand, it highlights the importance of consulting the original sources and investigating the work previously done on the research question.

Conclusion:

21. In what way can the 5-item version compete when it was evaluated by only two studies? How can the authors justify this classification based on a score that could change over time depending on future available studies on other versions? How can we go beyond saying that, with the currently available evidence, the DUFSS version that shows the best performance is the 11-item one?

Following your previous indications, we have discussed this topic in the limitations section, leaving a related conclusion on the outcome in the conclusions section. The text we have included is as follows:

There are 4 versions of the DUFSS questionnaire with different numbers of items: 14, 11, 8 and 5 items. All of them have been validated in very specific populations and not in the general population.

Among the 4 versions of the DUFSS questionnaire, the 11-item version has been the most studied, especially in Spanish-speaking countries. This version scored higher than the others because it was the version with the largest number of studies and was therefore more likely to address all the attributes taken into account by the EMPRO tool.

In order to state with certainty that the 11-item version is more appropriate than the other versions, more studies are needed to evaluate each of the others. Although, a priori, we could prioritise its use over the other versions, it should still be used with caution because there are attributes that have not been studied.

Figures:

22. Some text within the boxes has not been translated from Spanish.

Thank you for notifying us of this error. The untranslated texts have been corrected.

Tables and supplement data:

23. The extracted psychometric values for each version are not made available.

The results are not available for each of the items. The studies included in this work have been grouped by version type based on the number of items obtained as a result of their validation of the DUFSS (14-item, 11-item, 8-item and 5-item versions). Once each version had been grouped and studied through EMPRO, the results are shown in Table 5. In order to clarify this aspect, the following paragraph has been included in the methodology:

The selected studies were grouped by version type according to the number of items that made up the version resulting from their study. The unit of analysis through EMPRO was each version type of the DUFSS.

Another aspect is the aforementioned quality of the methods applied in each study. The EMPRO tool is completed by evaluators who are experts in metric properties and who take into account both the outcome information and the methodological quality of the evaluated studies.

References:

Argimon Pallás, J. M. (2019). Métodos de investigación clínica y epidemiológica (5th ed.). Elsevier España.

Ayala, A., Rodríguez-Blázquez, C., Frades-Payo, B., Forjaz, M. J., Martínez-Martín, P., Fernández-Mayoralas, G., Rojo-Pérez, F., & Grupo Español de Investigación en Calidad de Vida y Envejecimiento. (2012). Psychometric properties of the Functional Social Support Questionnaire and the Loneliness Scale in non-institutionalized older adults in Spain. Gaceta Sanitaria, 26(4), 317–324. https://doi.org/10.1016/j.gaceta.2011.08.009

de la Revilla Ahumada, L., Bailón, E., de Dios Luna, J., Delgado, A., Prados, M. A., & Fleitas, L. (1991). Validation of a functional social support scale for use in the family doctor’s office. Atencion Primaria, 8(9), 688–692.

U.S. Department of Health and Human Services. (2006). Guidance for industry: Patient-reported outcome measures: Use in medical product development to support labeling claims: Draft guidance. Health and Quality of Life Outcomes, 4, 79. https://doi.org/10.1186/1477-7525-4-79

Valderas, J. M., Ferrer, M., Mendívil, J., Garin, O., Rajmil, L., Herdman, M., & Alonso, J. (2008). Development of EMPRO: A tool for the standardized assessment of patient-reported outcome measures. Value in Health, 11(4), 700–708. https://doi.org/10.1111/j.1524-4733.2007.00309.x

Broadhead, W. E., Gehlbach, S. H., de Gruy, F. V., & Kaplan, B. H. (1988). The Duke-UNC Functional Social Support Questionnaire: Measurement of social support in family medicine patients. Medical Care, 26(7), 709–723.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Marianne Clemence

2 Aug 2023

PONE-D-22-28641R1 Functional Social Support: a systematic review and standardized comparison of different versions of the DUFSS questionnaire using the EMPRO tool. PLOS ONE

Dear Dr. Lozano-Hernández,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers have re-assessed your manuscript and are satisfied that you have addressed their comments. However, there are concerns that your submission does not yet fully comply with our data availability policy (https://journals.plos.org/plosone/s/data-availability). Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods. Additionally, PLOS requires that authors comply with field-specific standards for preparation, recording, and deposition of data when applicable. We would expect, for example, that sharing the completed data extraction form may be sufficient for another researcher to replicate your analysis. This should be labelled clearly for readers to understand and ideally in English. In addition, you may wish to review our competing interest form available at https://journals.plos.org/plosone/s/competing-interests to ensure that your competing interest statement is compliant.

Please submit your revised manuscript by Sep 15 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Marianne Clemence

Staff Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Dear authors,

The article is relevant, complete, and valuable both to the research community in social epidemiology and for accounting for social factors in any clinical study. The changes made to the “Practical application” section seem to hit the spot in helping readers understand why this study is so important. Table 5 provides a clear, transparent view of the available information and the EMPRO results. I also particularly appreciated the detailed response to all our comments. This has been done thoroughly and argued very well, making the review process much easier at this stage.

--- Minor suggestions ---

1. Competing interests - Authors might consider relying on a statement of interests rather than a statement of conflicts of interest. Indeed, it is up to the reader to judge whether whatever is stated constitutes a conflict. For this reason, it is often recommended to state educational interests and academic positions in the field, grants obtained, paid consultation fees, participation in interest groups such as associations and foundations, contributions as a scientific advisor for working groups or think tanks, clinical activities related to the topic, etc. Readers tend to find it suspicious if no interests are declared. Citing the many interests that could inform an understanding of the underlying cognitive biases helps readers feel that the work is trustworthy.

2. Data availability: It could be seen as misleading to say that all data are made available without restriction, since the working documents are not made available. It would be valuable to make the individual study assessments performed with EMPRO available. If you want to do this, consider depositing them in a repository such as Zenodo.org.

3. Line 342, the citation to the reference for Broadhead et al. still needs to be added.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Paul Vaucher

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Sep 15;18(9):e0291635. doi: 10.1371/journal.pone.0291635.r004

Author response to Decision Letter 1


16 Aug 2023

Dear reviewers:

Thank you for the thorough review. We appreciate the improvements made to our manuscript.

In addition to including the contributions requested by the reviewers, considerations made by the editor regarding format and style have been reviewed and incorporated into the manuscript.

Each of the requested responses is detailed below.

Reviewer #2:

Dear authors,

The article is relevant, complete, and valuable both to the research community in social epidemiology and for accounting for social factors in any clinical study. The changes made to the “Practical application” section seem to hit the spot in helping readers understand why this study is so important. Table 5 provides a clear, transparent view of the available information and the EMPRO results. I also particularly appreciated the detailed response to all our comments. This has been done thoroughly and argued very well, making the review process much easier at this stage.

--- Minor suggestions ---

1. Competing interests - Authors might consider relying on a statement of interests rather than a statement of conflicts of interest. Indeed, it is up to the reader to judge whether whatever is stated constitutes a conflict. For this reason, it is often recommended to state educational interests and academic positions in the field, grants obtained, paid consultation fees, participation in interest groups such as associations and foundations, contributions as a scientific advisor for working groups or think tanks, clinical activities related to the topic, etc. Readers tend to find it suspicious if no interests are declared. Citing the many interests that could inform an understanding of the underlying cognitive biases helps readers feel that the work is trustworthy.

As recommended, a declaration of interests has been included.

2. Data availability: It could be seen as misleading to say that all data are made available without restriction, since the working documents are not made available. It would be valuable to make the individual study assessments performed with EMPRO available. If you want to do this, consider depositing them in a repository such as Zenodo.org.

Following your recommendation, the dataset corresponding to the evaluations of the different versions of the DUFSS questionnaire using the EMPRO tool has been published in Zenodo.

The publication is under DOI:

https://doi.org/10.5281/zenodo.8211219

3. Line 342, the citation to the reference for Broadhead et al. still needs to be added.

The requested reference has been included. In addition, the reference list has been revised and updated.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Victor Manuel Mendoza-Nuñez

4 Sep 2023

Functional Social Support: a systematic review and standardized comparison of different versions of the DUFSS questionnaire using the EMPRO tool.

PONE-D-22-28641R2

Dear Dr. Cristina M Lozano-Hernández,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Victor Manuel Mendoza-Nuñez, PhD

Academic Editor

PLOS ONE

**********

Acceptance letter

Victor Manuel Mendoza-Nuñez

7 Sep 2023

PONE-D-22-28641R2

Functional Social Support: a systematic review and standardized comparison of different versions of the DUFSS questionnaire using the EMPRO tool.

Dear Dr. Lozano-Hernández:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Victor Manuel Mendoza-Nuñez

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Checklist. PRISMA 2020 checklist.

    (DOCX)

    S1 Dataset

    (XLSX)


    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files. The dataset corresponding to the evaluations of the different versions of the DUFSS questionnaire using the EMPRO tool has been published in Zenodo. The publication is under DOI: https://doi.org/10.5281/zenodo.8211219.

