PLOS ONE. 2020 Jun 5;15(6):e0233998. doi: 10.1371/journal.pone.0233998

Confirmatory factor analysis and exploratory structural equation modelling of the factor structure of the Depression Anxiety and Stress Scales-21

Rapson Gomez 1, Vasileios Stavropoulos 2,*, Mark D Griffiths 3
Editor: Manuel Fernández-Alcántara
PMCID: PMC7274426  PMID: 32502165

Abstract

The Depression Anxiety and Stress Scales-21 (DASS-21) is theorized to have a simple-structure first-order three-factor oblique model, with factors for depression, anxiety, and stress. Recently, concerns have been raised over the value of using confirmatory factor analysis (CFA) for studying the factor structure of scales in general. However, such concerns can be circumvented using exploratory structural equation modeling (ESEM). Consequently, the present study used CFA and ESEM with target rotation to examine the factor structure of the DASS-21 in a community sample of adults. It compared a first-order CFA model, an ESEM model with target rotation, a bi-factor CFA (BCFA) model, and a bi-factor ESEM (BESEM) model with target rotation, each with group/specific factors for depression, anxiety, and stress. A total of 738 adults (males = 374, and females = 364; M = 25.29 years; SD = 7.61 years) completed the DASS-21. Although all the models examined showed good global fit values, one or more of the group/specific factors in the BCFA, ESEM with target rotation, and BESEM with target rotation models were poorly defined. As the first-order CFA model was the most parsimonious, with well-defined factors that were also supported in terms of their reliabilities and validities, this model was selected as the preferred DASS-21 model. The implications of the findings for the use and revision of the DASS-21 are discussed.

Introduction

The Depression Anxiety Stress Scales-21 (DASS-21) is a widely used questionnaire. It has three separate, but correlated, scales (with seven items each) for depression (assessing dysphoria, low self-esteem, and lack of incentive), anxiety (assessing somatic and subjective responses to anxiety and fear), and stress (assessing negative affectivity responses, such as nervous tension and irritability). Thus, the theoretically proposed factor model for the DASS-21 is a simple-structure first-order three-factor oblique model, with factors for depression, anxiety, and stress. Asparouhov and Muthén [1] (see also [2]) have developed an advanced approach for testing the factor structure of a measure, namely exploratory structural equation modeling (ESEM) with target rotation. To date, however, this technique has rarely been applied to evaluate the factor structure of the DASS-21 in community samples of adults. Consequently, the major aim of the present research was to use this approach to examine the factor structure of the DASS-21 in adults from the general community.

Initial development and validation of the DASS-21

In the initial development and validation study of the DASS-21, based on a student sample, Lovibond and Lovibond [3] reported adequate reliability as well as convergent and discriminant validity. Their analyses suggested that all items loaded on their allocated dimensions, with the three dimensions correlating moderately with each other. Using the same population, Lovibond and Lovibond [3] found better fit for a three-factor oblique structure than for a uni-dimensional structure (all items loading on a single dimension) or a two-factor oblique structure (depression items loading on one dimension, and anxiety and stress items loading on another). Therefore, the DASS-21 has been conceptualized as a three-factor oblique structure, with dimensions for depression, anxiety, and stress ([3]; see Fig 1). In this model, the depression, anxiety, and stress items load only on their allocated latent dimensions (simple structure), the latent dimensions are correlated with each other, there are no cross-loadings, all error variances are freely estimated, and all covariances between error variances are fixed to zero. Although some minor misspecifications in terms of cross-loadings have been reported, principal component analysis (PCA) and exploratory factor analysis (EFA) studies of the DASS-21 have generally supported the theorized three-factor model (e.g., [4–9]).

Fig 1. Conceptual representation of CFA, ESEM, BCFA, and BESEM models.


D = depression; A = anxiety; S = stress; G = general factor; ESEM = exploratory structural equation modeling.

Confirmatory factor analysis of the DASS-21

In addition to PCA and EFA, CFA has also been used to test the dimensionality of the DASS-21. Methodologically, EFA procedures allow unrestricted item-factor cross-loadings, so items are free to load on several dimensions concurrently. In contrast, CFA, which follows a model-based process, allows the researcher to test an a priori defined factor structure. In this approach (more specifically, the independent cluster model CFA, or ICM-CFA), items load only on their a priori designated target factors, with no cross-loadings [10,11]. A bi-factor model, or BCFA model, is also an a priori CFA model. In this model, there is one general factor and two or more specific factors. Typically, all the items in the measure load on the general latent factor, the items belonging to each group also load on their own specific latent factors, and none of the latent factors are correlated with each other (an orthogonal model). Thus, the general dimension reflects the common (shared) variance among all the items in the measure, and the specific dimensions reflect variance in the items that is not accounted for by the general factor.

Since the publication of the DASS-21, numerous CFA studies have supported the theorized three-factor model [8,12,13,14]. More recently, several studies have examined the structure of the DASS-21 in terms of a BCFA model with three specific factors, namely depression, anxiety, and stress [15,16,17]. Fig 1 also includes the path diagram for the DASS-21 BCFA model with the three specific factors. In brief, it has one general distress latent factor on which all the DASS-21 items load, a specific factor for depression on which only the depression items load, a specific factor for anxiety on which only the anxiety items load, and a specific factor for stress on which only the stress items load. In this model, the general and specific dimensions are not correlated (orthogonal model). Consequently, the general dimension reflects the shared variance among all 21 DASS-21 items, and the specific dimensions reflect variance in the items that is not accounted for by the general factor. Generally, the BCFA studies cited above have supported this model and, when examined, it has shown better global fit than the three-factor CFA model. However, although the BCFA model has shown better global fit for the DASS-21 than the CFA model, it is uncertain what underlies this finding. Related to this, there have been recent criticisms of the application of CFA and BCFA approaches for testing the factor structure of complex measures such as the DASS-21.
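To make the distinction between these two specifications concrete, the following minimal sketch shows how the three-factor oblique CFA and the orthogonal bifactor CFA could be written in R with the lavaan package. It is illustrative only (the present study fitted its models in Mplus), and the item names d1–d7, a1–a7, and s1–s7 are hypothetical placeholders rather than the actual DASS-21 item numbering.

```r
# Illustrative lavaan syntax (not the authors' Mplus code); item names are
# hypothetical placeholders for the 7 depression, 7 anxiety, and 7 stress items.
library(lavaan)

# Simple-structure first-order three-factor oblique CFA: each item loads only
# on its designated factor; factor correlations are freely estimated.
cfa_3f <- '
  depression =~ d1 + d2 + d3 + d4 + d5 + d6 + d7
  anxiety    =~ a1 + a2 + a3 + a4 + a5 + a6 + a7
  stress     =~ s1 + s2 + s3 + s4 + s5 + s6 + s7
'

# Bifactor CFA: a general distress factor indicated by all 21 items, plus
# specific factors for depression, anxiety, and stress; fitting this model
# with orthogonal = TRUE fixes all factor covariances to zero.
bcfa_3sf <- '
  general    =~ d1 + d2 + d3 + d4 + d5 + d6 + d7 +
                a1 + a2 + a3 + a4 + a5 + a6 + a7 +
                s1 + s2 + s3 + s4 + s5 + s6 + s7
  depression =~ d1 + d2 + d3 + d4 + d5 + d6 + d7
  anxiety    =~ a1 + a2 + a3 + a4 + a5 + a6 + a7
  stress     =~ s1 + s2 + s3 + s4 + s5 + s6 + s7
'
```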

Limitations of CFA and bi-factor approaches

The restriction in CFA that items load exclusively on their designated dimensions, with all loadings on non-designated dimensions (cross-loadings) constrained to zero [10,11], is viewed as highly restrictive because items are rarely pure indicators of their designated dimensions [18]. Indeed, cross-loadings for DASS-21 items have often been observed in EFA studies of the DASS-21 [6,8,9]. For example, the study by Oei et al. [8] found significant cross-loadings for seven of the DASS-21 items. Therefore, CFA may not accurately reflect the data and may show poor fit even when the theorized structure is substantively adequate [2,19]. In relation to BCFA, Bonifay, Lane, and Reise [20] have pointed out other concerns. One concern is the difficulty of interpreting the meaning of the specific factors in a bi-factor model: statistically, the specific factors are treated as 'nuisance' variables, whereas from a theoretical viewpoint they constitute substantively meaningful factors that are not accounted for by the general factor. According to Park et al. [21], the appropriate interpretation will depend on the reliability, stability, and incremental validity of the specific factors. Another concern is that a bi-factor model will always fit better than the corresponding first-order factor model because it can better accommodate nonsense response patterns in the data, thereby appearing superior even when this is not actually the case. A third concern is that, because more parameters are estimated in a bi-factor model, it is less parsimonious and potentially unnecessary compared to the corresponding CFA model.

To overcome the limitation of the CFA approach (i.e., all cross-loadings constrained to zero), the exploratory structural equation modeling (ESEM) with target rotation procedure was developed [1,2]. ESEM with target rotation is a synthesis of the EFA and CFA approaches, incorporating the advantages of EFA (allowing cross-loadings) and of CFA (model-based testing of an a priori defined structure). Fig 1 shows the ESEM with target rotation model as applied to the DASS-21. As shown, the items load on their targeted factors, while their cross-loadings on non-targeted factors are targeted at values close to zero. Available evidence has demonstrated the advantages of ESEM with target rotation over the EFA and CFA procedures [2,19]. The basic ESEM with target rotation model can be expanded to a bi-factor ESEM (BESEM) with target rotation model, thereby combining the advantages of the CFA approach (a priori model specification), the EFA approach (acknowledging the fallibility of items and allowing cross-loadings), and the bi-factor approach (allowing a general factor and uncorrelated specific factors) [18,19]. Therefore, compared to the EFA, CFA, and ESEM with target rotation approaches, the BESEM with target rotation approach offers a more sophisticated method for exploring different sources of construct-relevant multidimensionality [18]. This is especially relevant to the DASS-21 items because they have demonstrated multidimensionality, cross-loadings, and the presence of general and specific factors. Fig 1 includes the BESEM with target rotation model, as applied to the DASS-21 items, with specific factors for depression, anxiety, and stress. For a bi-factor model, it is also important to evaluate whether its factors are substantively meaningful. Morin et al. [18] have proposed an integrated test of multidimensionality procedure for this purpose. Additionally, Park et al. [21] have suggested evaluating the reliabilities and validities of the factors in this model, especially the incremental validities of the specific factors.
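As an illustration of how an ESEM specification differs from the CFA sketches above, the following minimal R sketch uses lavaan's efa() operator (available from lavaan 0.6-5 onwards), under which all 21 items receive loadings on all three factors within one exploratory block. Note that lavaan's default rotation for such blocks is oblique geomin, whereas the present study used target rotation (and the BESEM model) in Mplus, so this is only an approximation; dass_data is a hypothetical data frame holding the 21 ordinal item responses, and the item names are placeholders.

```r
library(lavaan)  # version >= 0.6-5 for the efa() operator

# Hypothetical item names for the 21 ordinal DASS-21 responses in dass_data.
dass_items <- paste0(rep(c("d", "a", "s"), each = 7), 1:7)

# ESEM-style model: the three factors form one exploratory block, so every
# item receives a loading on every factor (cross-loadings are estimated).
esem_3f <- paste0('
  efa("dass")*depression +
  efa("dass")*anxiety +
  efa("dass")*stress =~ ', paste(dass_items, collapse = " + "))

fit_esem <- cfa(esem_3f, data = dass_data, estimator = "WLSMV",
                ordered = dass_items, rotation = "geomin")
```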

As far as can be ascertained by the present authors, the ESEM with target rotation approach has been applied in at least two studies involving the DASS-21 [6,7]. Johnson et al. [6] examined the three-factor CFA, ESEM with target rotation, and BCFA models with three group factors (depression, anxiety, and stress) using ratings provided by individuals with Parkinson's disease. Although the findings indicated good fit for the ESEM with target rotation model, the anxiety factor was poorly defined, with several items loading significantly onto the depression or stress factors. Kyriazos et al. [7] used CFA, ESEM with target rotation, and BCFA to examine the factor structure of the DASS-21 (with group factors for depression, anxiety, and stress) in a community sample of adults. Although there was good support for all three models, the authors concluded that the CFA three-factor model was best supported. The primary reason for this was that, in the BCFA and ESEM with target rotation models, the factor loadings were unsatisfactory (low and/or negative item loadings on targeted factors). However, because this study did not use BESEM with target rotation, which, as has been argued, is potentially a more sophisticated approach for studying the factor structure of the DASS-21, the support claimed for the CFA model needs to be viewed cautiously. In this respect, the study by Johnson et al. [6] involving individuals with Parkinson's disease did use BESEM with target rotation to examine the structure of the DASS-21. Two different bi-factor models were examined: Henry and Crawford's bi-factor model (specific factors for depression, anxiety, and stress, and a general factor) [22], and Tully, Zajac, and Venning's bi-factor model (specific factors for depression and stress, and a general factor) [23]. Although the BESEM with target rotation model based on Henry and Crawford's bi-factor model showed slightly better fit than their eventually preferred ESEM with target rotation model, the authors concluded that the improvement was not sufficient to justify the loss of parsimony [22]. However, as this study examined individuals with Parkinson's disease, it cannot be assumed that this will also be the case for individuals from the general community. Consequently, further studies involving community samples are needed.

Aims of the present study

Given the aforementioned limitations and omissions, the major aim of the present study was to examine the structure of the DASS-21 items among adults from the general community using the BESEM with target rotation approach. For this, the integrated test of multidimensionality procedure proposed by Morin et al. [18] was closely followed. Consequently, the present study also tested the CFA three-factor model, the ESEM three-factor model with target rotation, the BCFA model with three specific factors, and the BESEM model with target rotation and three specific factors. The three factors in all models were depression, anxiety, and stress. These models were described earlier (see also Fig 1). For all four models tested, the model-based reliabilities and external validities of their factors were also examined. In terms of predictions, the BESEM model with target rotation and three specific factors was expected to show the best global fit, but it was also speculated that one or more of its specific factors might not be well defined.

Method

Participants

The sample comprised 738 adults from the general community, with ages ranging between 18 and 72 years (M = 25.29 years; SD = 7.61 years). There were 374 males (50.7%; M = 25.28 years, SD = 7.71 years) and 364 females (49.3%; M = 25.30 years, SD = 7.51 years). Table 1 shows additional background information. As shown in the table, more than half of the participants were employed and had completed technical or university education. Also, on average, participants had completed at least 12 years of education. The mean scores for the DASS-21 items ranged from 0.54 to 1.42. On a scale of 0 ("did not apply to me at all") to 3 ("applied to me very much or most of the time"), these scores indicate low levels for all the items.

Table 1. Frequencies and descriptive statistics for background variables, and descriptive statistics for the ADHD and DASS-21 scale scores.

Background variables Frequency/ Descriptive Statistics
Employed Frequency (percentage) = 529 (71.7%)
Highest Educational Level
    Primary Frequency (percentage) = 42 (5.7%)
    Secondary Frequency (percentage) = 268 (36.3%)
    Technical Frequency (percentage) = 193 (26.2%)
    University Frequency (percentage) = 235 (31.8%)
Years of education Mean (SD) = 12.01 (5.12)
ADHD—Inattention Mean (SD) = 14.43 (6.20)
ADHD—Hyperactivity/Impulsivity Mean (SD) = 14.28 (6.09)
DASS-21—Stress Mean (SD) = 6.86 (4.34)
DASS-21—Anxiety Mean (SD) = 5.36 (4.29)
DASS-21—Depression Mean (SD) = 7.41 (5.65)

Based on the normative scores for the DASS-21 published by Henry and Crawford [22], the participants' scores for depression, anxiety, and stress (see Table 1) were less than 1 SD from the mean, around 1.5 SD from the mean, and just above 1 SD from the mean, respectively. According to Kessler et al. [24], a total score of 24 or more on the inattention (IA) or hyperactivity/impulsivity (HI) symptom group of the Adult ADHD Self-Report Scale Symptom Checklist (ASRS) is indicative of a high likelihood of an attention deficit hyperactivity disorder (ADHD) diagnosis. In the present study, the mean total scores for IA and HI were 14.43 and 14.28, respectively (see Table 1). Therefore, on the whole, the participants in the present study can be seen as reasonably well adjusted, with no problematic levels of depression, anxiety, stress, or ADHD symptoms.

Measures

Depression Anxiety Stress Scales-21 (DASS-21; [3])

As noted above, the DASS-21 is a self-report instrument comprising 21 items, subdivided into three equal-sized sub-scales reflecting depression, anxiety, and stress. Items are rated on a 4-point Likert scale (0 = "did not apply to me at all" to 3 = "applied to me very much or most of the time") in terms of how often the individual experienced the behavior referred to during the past week. Past evidence has supported acceptable convergent and discriminant validities and high internal consistency reliabilities for the three DASS-21 sub-scales ([3]; Norton, 2007). The internal consistency reliabilities for the depression, anxiety, and stress sub-scales in the present sample were .84, .71, and .83, respectively.

Procedure

The study was approved by the Cairnmillar Institute Human Research Ethics Committee. Participants were recruited online via advertisements on various platforms, including social media (e.g., Facebook) and internet chatrooms (e.g., Discord). Individuals who clicked on the survey link were presented on the first page with a description of the study and its aims via the Plain Language Information Statement. Assurance that participants' data would be recorded anonymously was provided, together with a statement that they were free to stop participating in the survey at any point in time. Participants provided their informed consent digitally by clicking to proceed to the survey. Apart from being an adult, no other inclusion or exclusion criteria were applied. Participants were not compensated in any way for their participation, nor disadvantaged for withdrawing from the study.

Statistical analysis

All statistical analyses were conducted using Mplus Version 7.3 and R [25,26]. The weighted least square mean and variance adjusted (WLSMV) estimator was applied. WLSMV corrects for non-normality in the data and is suited to items with four or fewer response categories [26,27,28], as is the case for the DASS-21 items.
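As a rough illustration of this estimation choice, the sketch below shows how the three-factor CFA and bifactor CFA specified in the earlier sketches (cfa_3f and bcfa_3sf) could be fitted in R with lavaan, declaring the items as ordered so that polychoric correlations and a robust weighted least squares estimator (lavaan's analogue of WLSMV) are used. The data frame dass_data and the item names are hypothetical; the analyses reported in this article were run in Mplus 7.3.

```r
library(lavaan)

dass_items <- paste0(rep(c("d", "a", "s"), each = 7), 1:7)  # hypothetical names

# Declaring the items as ordered triggers polychoric correlations and a
# mean- and variance-adjusted weighted least squares estimator (WLSMV).
fit_cfa  <- cfa(cfa_3f, data = dass_data, estimator = "WLSMV",
                ordered = dass_items)
fit_bcfa <- cfa(bcfa_3sf, data = dass_data, estimator = "WLSMV",
                ordered = dass_items, orthogonal = TRUE)

summary(fit_cfa, fit.measures = TRUE, standardized = TRUE)
```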

To establish the best DASS-21 model, the integrated test of multidimensionality procedure proposed by Morin et al. [18] was closely followed. In the initial phase, the global fit of all the models tested was examined and compared. In a subsequent phase, the correlations between the factors in the CFA and ESEM with target rotation models were compared. The next phase involved evaluating the presence of construct-relevant psychometric multidimensionality due to the fallible nature of the indicators, by comparing the factor loadings in the CFA and ESEM with target rotation models. In the final phase, the presence of a hierarchically superior latent construct was evaluated by comparing the fit of the ESEM with target rotation and BESEM with target rotation models, and then examining the patterns of loadings and cross-loadings in the BESEM with target rotation model.

Because χ2 values, such as the WLSMVχ2, are sample-size dependent, fit was additionally evaluated using approximate fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the Tucker-Lewis Index (TLI). Following Hu and Bentler [29], RMSEA values of approximately .06 or lower were taken as good fit, .07 to .08 as acceptable fit, .08 to .10 as limited fit, and above .10 as poor fit. For the CFI and TLI, values of approximately .95 or higher were taken as indicative of good fit, between .90 and .95 as acceptable fit, and below .90 as poor fit. As the DASS-21 models assessed were nested, the chi-square difference test (Δχ2) was employed for examining differences in model fit. However, because the Δχ2 test is also highly sensitive to large sample sizes, models were additionally compared using changes in the RMSEA and CFI values. Generally, differences in CFI values of 0.01 or more and/or RMSEA values of 0.015 or more are interpreted as a meaningful difference between the models being compared [30,31]. For all models tested, model-based reliabilities for the different factors were computed. More specifically, the categorical omega (ω) values for the factors were computed alongside their explained common variance (ECV) [32,33]. The ECV of the general factor in a bi-factor model reflects the degree of uni-dimensionality of the indicators, with higher values indicating more support for uni-dimensionality; ECV values > .70 for the general factor indicate that the factor loadings on this factor are close to those expected for a one-factor model [32,33]. The ω coefficient should be viewed as an index describing the amount of variance in summed (standardized) scores attributable to the relevant dimension [34]. For a bi-factor model, the ω values for the general and specific factors are referred to as omega hierarchical (ωh) and omega subscale (ωs), respectively [32,34]. As our data were categorical, we computed categorical omega [33,35,36,37]. The confidence intervals for categorical omega were computed using the "ci.reliability" function in the MBESS R package, version 4.4.0 [33]. All types of ω values range from 0 to 1, with 0 indicating no reliability and 1 indicating perfect reliability [33]. According to Reise, Bonifay, and Haviland [38], in a bi-factor model, ωh values need to be at least .50, with values of at least .75 being preferable.
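Continuing the hypothetical lavaan example above, the following sketch illustrates how these approximate fit indices, a scaled chi-square difference test for nested models, and categorical omega with bootstrap confidence intervals could be obtained in R. The argument names for MBESS::ci.reliability follow the package documentation but may differ across versions, so this is a sketch rather than the authors' exact procedure.

```r
# Approximate fit indices (scaled/robust versions reported with WLSMV).
fitMeasures(fit_cfa, c("chisq.scaled", "df.scaled", "cfi.scaled", "tli.scaled",
                       "rmsea.scaled", "rmsea.ci.lower.scaled",
                       "rmsea.ci.upper.scaled"))

# Scaled chi-square difference test for nested models (lavaan's counterpart
# to the Mplus DIFFTEST used with the WLSMV estimator).
lavTestLRT(fit_cfa, fit_bcfa)

# Categorical omega with percentile bootstrap confidence intervals for one
# sub-scale at a time (here the hypothetical depression items d1-d7).
library(MBESS)
dep_items <- dass_data[, paste0("d", 1:7)]
ci.reliability(data = dep_items, type = "categorical",
               interval.type = "perc", B = 1000)
```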

The external validity of the factors in all the models tested was also examined. To test the validity of the DASS-21 factors, the ADHD IA and HI total scale scores were regressed on the relevant factors in the DASS-21 models. To control for possible confounding effects of age and gender, both variables were entered as covariates in the analyses. A significant positive prediction of either the IA or HI total score by a DASS-21 factor can be taken as support for the validity of that factor. In the bi-factor models, a significant positive prediction by the general factor can be interpreted as supporting the validity of that factor, and significant positive predictions of the IA or HI total scores by the specific/group factors (depression, anxiety, and stress) can be taken as support for the incremental validity of the relevant specific factors.
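A sketch of how such a structural model could be specified follows, again in lavaan with hypothetical variable names (IA and HI as the ASRS total scores, plus age and gender, held in a combined data frame survey_data); the reported analyses were run in Mplus, and cfa_3f is the three-factor measurement model from the earlier sketch. The standardized regression coefficients from this kind of model correspond to the beta values reported in Table 4.

```r
library(lavaan)

dass_items <- paste0(rep(c("d", "a", "s"), each = 7), 1:7)  # hypothetical names

# Regress the ADHD inattention (IA) and hyperactivity/impulsivity (HI) totals
# on the three DASS-21 factors, controlling for age and gender.
validity_model <- paste(cfa_3f, '
  IA ~ depression + anxiety + stress + age + gender
  HI ~ depression + anxiety + stress + age + gender
')

fit_validity <- sem(validity_model, data = survey_data, estimator = "WLSMV",
                    ordered = dass_items)
standardizedSolution(fit_validity)  # standardized path coefficients
```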

Results

There were no missing values in the data set.

Comparison of global fit

Table 2 shows the fit values for all the DASS-21 models tested in the study. For all models, the CFI, TLI, and RMSEA values indicated good fit. At the statistical level, the CFA 3-F model differed from the ESEM 3-F (Δdf = 36; ΔWLSMVχ2 = 152.04, p < .001), BCFA 3-s-F (Δdf = 18; ΔWLSMVχ2 = 80.31, p < .001), and BESEM 3-s-F (Δdf = 54; ΔWLSMVχ2 = 218.82, p < .001) models. The BCFA 3-s-F model differed from the ESEM 3-F (Δdf = 18; ΔWLSMVχ2 = 84.32, p < .001) and BESEM 3-s-F (Δdf = 36; ΔWLSMVχ2 = 152.30, p < .001) models. The ESEM 3-F model differed significantly from the BESEM 3-s-F (Δdf = 18; ΔWLSMVχ2 = 74.11, p < .001) model. This means that the fit values for all models differed significantly from each other, with the BESEM 3-s-F model showing the best fit, followed in sequence by the ESEM 3-F, BCFA 3-s-F, and CFA 3-F models. In relation to the approximate fit indices, the models did not differ from each other in terms of ΔRMSEA values (< 0.015). However, the ΔCFI was greater than 0.01 for the comparisons of the CFA 3-F model with the ESEM 3-F and BESEM 3-s-F models. There was no difference in the CFI values between the CFA 3-F and BCFA 3-s-F models, or between the ESEM 3-F and BESEM 3-s-F models. Taken together, the findings can be interpreted as showing that both ESEM models (ESEM 3-F and BESEM 3-s-F) showed better fit than both CFA-based models (CFA 3-F and BCFA 3-s-F).

Table 2. Fit of all the models tested in the study.

Models         χ2 (df)         CFI    TLI    RMSEA (90% CI)      D-A    D-S    A-S
CFA 3-F        515.31 (186)    .969   .965   .049 (.044-.054)    .73    .79    .90
ESEM 3-F       357.79 (150)    .980   .973   .043 (.038-.049)    .59    .53    .45
BCFA 3-s-F     438.08 (168)    .975   .968   .047 (.041-.052)    —      —      —
BESEM 3-s-F    281.99 (132)    .986   .978   .039 (.033-.046)    —      —      —

F = factor; D = depression; A = anxiety; S = stress; CI = confidence interval; χ2 = chi-square; df = degrees of freedom; CFA = confirmatory factor analysis; ESEM = exploratory structural equation modeling; BCFA = bi-factor confirmatory factor analysis; BESEM = bi-factor exploratory structural equation modeling; s = specific; RMSEA = root mean square error of approximation; CFI = comparative fit index; TLI = Tucker-Lewis Index.

CFA versus ESEM models

Correlations between latent factors

Table 2 also includes the correlations between the latent factors in the CFA and ESEM with target rotation models. As shown, the correlations between the latent factors were much lower in the ESEM with target rotation model than in the CFA model. Generally, when this is the case, the ESEM with target rotation model is considered better than the CFA model at reflecting the variances in the items [2,11].

Factor loadings

Table 3 shows the factor loadings for the CFA and ESEM with target rotation models. As shown, for the CFA model, all 21 items loaded significantly and saliently (> .30) on their targeted (designated) factors. For the ESEM with target rotation model, all seven depression items loaded on their targeted depression factor, and the loadings for all seven items were salient. Four anxiety items (Items 9, 15, 19, and 20) also had significant loadings on the depression factor, with the loading for one of them (Item 19) being negative. Also, with the exception of one item (Item 11), the other six stress items loaded significantly on the depression factor, with the loadings for three of them being positive and salient (Items 6, 8, and 12), one (Item 11) being close to salience, and one (Item 1) being salient and negative. In relation to the anxiety items, all seven items loaded significantly on their targeted anxiety factor. With the exception of one of them (Item 2), all the other items had salient and positive loadings; the loading for Item 2 was negative and not salient. Four depression items (Items 3, 13, 16, and 17) loaded significantly on the anxiety factor, with the loadings for two of them (Items 3 and 16) being negative. With the exception of one item (Item 14), the other six stress items loaded significantly on the anxiety factor, with three of them (Items 6, 8, and 12) also being salient and positive, one (Item 11) close to salience and positive, and one (Item 1) salient and negative. In relation to the stress items, all seven of them had significant loadings on the stress factor. With the exception of one item (Item 1), the loadings for the other items were all salient and positive. Four depression items (Items 3, 5, 16, and 21) loaded significantly on the stress factor. Five anxiety items (Items 2, 4, 7, 15, and 19) loaded significantly on the stress factor, with three of them (Items 2, 4, and 19) being salient.

Table 3. Factor loadings of the first-order and bi-factor CFA and ESEM models.
Items                              CFA three-factor    ESEM three-factor        Bi-factor CFA           BESEM three-factor
#  Brief description               Target loading      D       A       S        G       Specific        G       D       A       S
3 Experiencing positive feelings .74*** .74*** -.21*** .24*** .58*** .48*** .64*** .44*** -.19* -.21*
5 Working up initiative .65*** .44*** .09 .19*** .58*** .23*** .56*** .24*** -.03 .11*
10 Nothing to look forward to .80*** .80*** .03 -.03 .63*** .49*** .61*** .52*** .04 .10**
13 Felt down-hearted .80*** .74*** .09* -.02 .64*** .45*** .63*** .47*** .09* -.01
16 Unable to become enthusiastic .77*** .75*** -.11* .16** .60*** .48*** .63*** .45*** -.11* -.02
17 Not worth much .81*** .80*** .09* -.08 .64*** .49*** .60*** .53*** .10* .11**
21 Life was meaningless .81*** .92*** .00 -.15*** .60*** .59*** .57*** .62*** .08 .07
2 Dryness of mouth .40*** .08 -.14* .58*** .39*** -.01 .47*** -.06 -.32** -.11
4 Difficulty breathing .65*** -.04 .46*** .36*** .56*** -.11*** .66*** -.14*** .12 -.07
7 Trembling .52*** .09 .38*** .12* .48*** -.05*** .50*** .00 .18* -.03
9 Worry about embarrassing self .71*** .20*** .58*** .02 .67*** -.03* .62*** .08* .33*** .12
15 Felt close to panic .81*** .13*** .66*** .12* .75*** -.05*** .74*** .01 .34*** .05
19 Awareness of action of heart .58*** -.10* .47*** .34*** .50*** -.11*** .61*** -.18*** .13 -.05
20 Felt scared without reason .75*** .17*** .57*** .10 .70*** -.0*** .70*** .04 .32** -.06
1 Hard to wind down .09* .06 -.33*** .41*** .09* .02 .16** -.02 -.35*** -.18
6 Tended to over-react .69*** .14** .36*** .32*** .68*** .27** .64*** .01 .05 .32***
8 Using nervous energy .75*** .08 .49*** .33*** .74*** -.01 .72*** -.04 .13* .18**
11 Getting agitated .69*** .20*** .29*** .31*** .68*** .08 .63*** .06 .01 .28***
12 Difficult to relax .70*** .20*** .38*** .24*** .71*** -.19 .66*** .06 .11* .14**
14 Intolerant of restrictions .52*** .30*** .00 .30*** .51*** .23** .47*** .15** -.19** .29**
18 Felt rather touchy .63*** .18*** .21*** .36*** .62*** .16* .60*** .05 -.06 .22**

CFA = confirmatory factor analysis; ESEM = exploratory structural equation model; BCFA = bi-factor confirmatory factor analysis; BESEM = bi-factor exploratory structural equation model; G = general factor; D = depression; A = anxiety; S = stress.

*p < .05.

**p < .01.

***p < .001.

Therefore, taken as a whole, the considerable number of significant cross-loadings (many of which were also salient), as well as the negative loadings of some items on their targeted factors, indicates that not all of the factors were well defined. Psychometrically, the substantial overlap in variance across items belonging to different factors raises the possibility that a model with a general factor (such as the BCFA model) is needed to better capture the variance in the DASS-21 items.

BCFA versus BESEM models

Factor loadings and cross-loadings

Table 3 shows the factor loadings for the BCFA model. For this model, all 21 items loaded significantly on the general factor, with 20 of the loadings being salient; the exception was Item 1. All seven depression items loaded significantly and positively on the specific depression factor, with six items having salient loadings (the exception being Item 5). All seven anxiety items loaded negatively on the specific anxiety factor, with six of them (Items 4, 7, 9, 15, 19, and 20) being significant but not salient. Only three stress items (Items 6, 14, and 18) loaded significantly on the specific stress factor, and none of these loadings were salient. Also, two stress items (Items 8 and 12) loaded negatively. Taken together, the pattern of negative loadings and the strength of the loadings indicate that, although the BCFA model showed good global fit and the general factor and specific depression factor were at least adequately defined, the specific factors for anxiety and stress in this model were not well defined.

Table 3 also includes the factor loadings for the BESEM with target rotation model. For this model, all 21 items loaded significantly on the general factor, with 20 of the loadings being salient; the exception was Item 1. All seven depression items loaded significantly on the specific depression factor, with six items having salient loadings (the exception being Item 5). Two anxiety items (Items 14 and 19) loaded significantly and negatively, and one anxiety item (Item 9) loaded significantly and positively, on the specific depression factor. Also, one stress item (Item 15) loaded significantly and positively on the specific depression factor. Thus, the specific depression factor was at best moderately well defined. With the exception of two anxiety items (Items 4 and 19), the other five anxiety items loaded significantly on the specific anxiety factor; three of these (Items 9, 15, and 20) loaded positively and saliently, and one item (Item 2) loaded negatively and saliently. Three depression items (Items 3, 13, and 16) loaded significantly (but not saliently) on the specific anxiety factor, with two of them (Items 3 and 16) being negative. With the exception of one stress item (Item 6), the remaining six stress items loaded significantly on the specific anxiety factor; the loading for Item 1 was salient and negative, and the loading for Item 14 was negative. Thus, the specific anxiety factor was poorly defined. With the exception of one stress item (Item 1), the other six stress items loaded significantly and positively on the specific stress factor, with only one loading (Item 6) being salient. With the exception of three depression items (Items 13, 16, and 21), the other four depression items loaded significantly (but not saliently) on the specific stress factor, with one of them (Item 3) being negative. None of the anxiety items loaded significantly on the specific stress factor, and four of these items loaded negatively. Therefore, the specific stress factor can also be considered to be poorly defined.

Overall, like the ESEM with target rotation model, the BESEM with target rotation model also indicated poorly defined anxiety and stress factors. Additionally, several items in the BESEM with target rotation model lacked significant and salient positive loadings on their targeted factors, or loaded significantly and saliently on non-target factors. With the exception of Item 5, none of the depression items fell into any of these categories. For the anxiety items, Items 2, 4, 7, 19, and 20 fell into one or more of these categories. For the stress items, Items 1, 8, 11, 12, 14, and 18 fell into one or more of these categories.

Reliabilities of the factors in all the DASS-21 models tested

For the three-factor CFA model, the ECV values for depression, anxiety, and stress were .43, .30, and .26, respectively. The categorical omega reliability coefficients (with 95% percentile bootstrapped confidence intervals) for the factors in this model were depression = .88 (CI = .87–.90), anxiety = .78 (CI = .75–.81), and stress = .75 (CI = .71–.77). These values can be considered at least acceptable (≥ .70). For the BCFA model, the ECV values for the general factor and the specific factors for depression, anxiety, and stress were .81, .16, .00, and .02, respectively. The ωh value for the general factor was .92, and the ωs values for the specific factors for depression, anxiety, and stress were .26, .22, and .22, respectively. Thus, although the reliability of the general factor was good, the reliabilities of all three specific factors were low. Overall, therefore, across these models, the CFA model showed the best reliabilities for its factors.

Validities of the factors in all the DASS-21 models tested

The standardized path coefficients for the prediction of IA and HI symptoms by gender, age, general distress, depression, anxiety, and stress are shown in Table 4. In the CFA model, IA was associated with stress and anxiety, whereas HI was associated with stress and depression. The differential associations of the DASS-21 factors with IA and HI can be interpreted as supportive of the external validity of the DASS-21 factors, especially depression and anxiety. For the ESEM with target rotation model, IA was associated with stress, and HI was associated with stress and anxiety. Thus, there was support for the external validity of only the anxiety factor. For the BCFA model, IA and HI were both associated with the distress, depression, and anxiety factors, and HI was also associated with the stress factor. These findings indicate support for the external validity of only the distress factor and the specific stress factor. For the BESEM with target rotation model, both IA and HI were associated positively with distress and negatively with anxiety. These findings can be interpreted as supporting the external validity of the general distress factor, but not the specific factors for depression, anxiety, and stress. Taken together, the findings provide most support for the external validity of the factors in the CFA model.

Table 4. Standardized beta coefficients for the predictions of the ADHD inattention (IA) and hyperactivity/impulsivity (HI) scores by the factors in the CFA, ESEM, BCFA, and BESEM models, with gender and age as covariates.

CFA ESEM BCFA BESEM
IA HI IA HI IA HI IA HI
Gender 0.12** 0.08* 0.12** 0.08* 0.12 0.08 0.12** 0.08*
Age -0.12** -0.10** -0.12** -0.10** -0.12 -0.10 -0.12** -0.10**
General/Distress - - - - 0.79*** 0.86*** 0.60*** 0.59***
Depression 0.06 0.18** 0.10 -0.05 -0.33*** -0.56*** -0.07 -0.17
Anxiety 0.15** 0.06 0.06 0.18** -0.51** -0.64** -0.17** -0.12*
Stress 0.57*** 0.57*** 0.52*** 0.52*** 0.96 1.28* 0.08 0.14

IA = inattention; HI = hyperactivity/impulsivity; CFA = confirmatory factor analysis; ESEM = exploratory structural equation model; BCFA = bi-factor confirmatory factor analysis; BESEM = bi-factor exploratory structural equation model.

***p < .001

**p < .01

*p < .05

Discussion

Selection of the optimum DASS-21 model

The present study examined the structure of the DASS-21 items among adults from the general community using the CFA, BCFA, ESEM with target rotation, and BESEM with target rotation approaches. The three group/specific factors in all models were depression, anxiety, and stress, and a general factor was also included in the bi-factor models. In terms of global fit values, all the models showed good fit. However, the ESEM with target rotation, BCFA, and BESEM with target rotation models were deemed unacceptable because (i) one or more of their specific/group factors were poorly defined, (ii) the specific factors for depression, anxiety, and stress in the BESEM with target rotation model had low reliabilities, and (iii) there was a lack of support for their external validities. Because the CFA model (i) had good global fit, (ii) had loadings that were significant and salient, (iii) had factors with acceptable reliabilities and validities, and (iv) was the most parsimonious of all the models tested, it was selected as the preferred model.

The findings are partly consistent with past studies that have compared the factor structure of the DASS-21 using the CFA, BCFA, ESEM with target rotation, and BESEM with target rotation approaches [6,7]. Johnson et al. [6] found that although the ESEM with target rotation model showed the best fit (compared to the CFA and BCFA models), the anxiety factor was poorly defined, with several items loading significantly onto the depression or stress factors. They also found that although the BESEM with target rotation model, with specific factors for depression, anxiety, and stress and a general factor, showed slightly better fit than the ESEM model, the improvement was not sufficient to justify the loss of parsimony. Kyriazos et al. [7] found that although the CFA, ESEM with target rotation, and BCFA models showed good fit, the factor loadings in the BCFA and ESEM with target rotation models were unsatisfactory (low and/or negative item loadings on targeted factors). Consequently, as with the findings of the present study, they concluded that the CFA model was best supported. In addition to being comparable with these previous findings, the present study's findings also extend them because, unlike the two previous studies, the present study examined the factor structure of the DASS-21 in terms of the BESEM with target rotation model in a community-based sample. Despite being a more advanced model for testing the factor structure of a measure, the DASS-21 BESEM model was also not satisfactory because the anxiety and stress factors in this model were poorly defined, and there was a lack of support for the reliabilities and validities of its specific factors. However, it is worth noting that there was support for the general factor in this model.

Although the present study adopted the CFA three-factor model as the preferred model for the DASS-21, this model is not without limitations. First, the findings did provide some support for modeling a higher-order or general factor, because the correlations of the latent factors in the CFA model were higher than the corresponding correlations in the ESEM with target rotation model. Second, because the ESEM with target rotation model showed considerable cross-loadings, it follows that there is a need to include a general factor to capture the shared variance in the DASS-21 items. Despite the fact that the BCFA and BESEM with target rotation models included a general factor, both models showed poorly defined anxiety and stress factors. Consequently, it is possible that the description and/or content of some of the items in the DASS-21 may not be appropriate. Given that the BESEM model provides a more advanced method for modeling the variances among the DASS-21 items, the factor loadings of the items on the specific factors in this model can be used to identify problematic items. In this respect, items that (i) lack significant and salient positive loadings on their targeted factors, (ii) do not load significantly on their targeted factors, or (iii) load significantly and saliently on non-target factors can be considered problematic. The findings showed that, with the exception of Item 5, none of the other depression items fell into any of these categories. For the anxiety items, Items 2, 4, 7, 19, and 20 fell into one or more of these categories; and for the stress items, Items 1, 8, 11, 12, 14, and 18 fell into one or more of these categories. Therefore, Items 1, 2, 4, 5, 7, 8, 11, 12, 14, 18, 19, and 20 are probably the items that may need to be reviewed for inclusion in the DASS-21. If the suggested items were removed, the result would be a shorter questionnaire with just six depression items, three anxiety items, and one stress item. The depression items would include Items 3 ("experiencing positive feelings"), 10 ("nothing to look forward to"), 13 ("felt down-hearted"), 16 ("unable to become enthusiastic"), 17 ("not worth much"), and 21 ("life was meaningless"); and the anxiety items would include Items 9 ("worry about embarrassing self"), 15 ("felt close to panic"), and 20 ("felt scared without reason"). Therefore, as intended by the developers of the DASS-21 [3], and in line with the tripartite model [39,40], the remaining depression items would still assess dysphoria, low self-esteem, and lack of incentive, and the remaining anxiety items would still assess subjective responses to anxiety and fear. The one remaining stress item (Item 6, "tended to over-react") would, as intended, assess a negative affectivity response reflecting nervous tension and irritability.

Limitations and conclusions

The conclusions of the current research need to be viewed in the light of some limitations. Specifically, given that the ethics approval did not permit gathering information about individuals before inviting them to enroll in the study, no information is available about those who were exposed to the study invitation but decided not to participate. Thus, the final sample examined in the study may not be representative of the general population. Furthermore, as the current research involved a community sample, it is unclear how far the results apply to clinical samples. Third, because the DASS-21 is a self-report questionnaire, it is possible that the ratings were influenced by the method used to collect them, thereby introducing common method variance effects. Fourth, the conclusions are based on a single study; to establish the robustness of the findings, they need to be replicated. Given these limitations, the results of the study may be viewed as preliminary. Despite these limitations, it is argued that the present findings support further research in this area that accounts for the limitations identified here.

Supporting information

S1 Data

(SAV)

Data Availability

All relevant data are within the manuscript and its Supporting Information file.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Asparouhov T., & Muthén B. (2009). Exploratory structural equation modeling. Structural Equation Modeling, 16, 397–438. 10.1080/10705510903008204 [DOI] [Google Scholar]
  • 2.Marsh H. W., Muthén B., Asparouhov T., Lüdtke O., Robitzsch A., Morin A. J. S., et al. (2009). Exploratory structural equation modeling, integrating CFA and EFA: Application to students' evaluations of university teaching. Structural Equation Modeling, 16, 439–476. 10.1080/10705510903008220 [DOI] [Google Scholar]
  • 3.Lovibond S.H. & Lovibond P.F. (1995). Manual for the Depression Anxiety Stress Scales. (2nd. ed). Sydney: Psychology Foundation. [Google Scholar]
  • 4.Antony M. M., Bieling P. J., Cox B. J., Enns M. W., & Swinson R. P. (1998). Psychometric properties of the 42-item and 21-item versions of the Depression Anxiety Stress Scales in clinical groups and a community sample. Psychological Assessment, 10, 176–181. 10.1037/1040-3590.10.2.176 [DOI] [Google Scholar]
  • 5.Brown T. A., Chorpita B. F., Korotitsch W., & Barlow D. H. (1997). Psychometric properties of the Depression Anxiety Stress Scales (DASS) in clinical samples. Behaviour Research and Therapy, 35, 79–89. 10.1016/s0005-7967(96)00068-x [DOI] [PubMed] [Google Scholar]
  • 6.Johnson A. R., Lawrence B. J., Corti E. J., Booth L., Gasson N., Thomas M. G., et al. (2016). Suitability of the Depression, Anxiety, and Stress Scale in Parkinson's disease. Journal of Parkinson's Disease, 6, 609–616. 10.3233/JPD-160842 [DOI] [PubMed] [Google Scholar]
  • 7.Kyriazos T. A., Stalikas A., Prassa K., & Yotsidi V. (2018). Can the Depression Anxiety Stress Scales Short be shorter? Factor structure and measurement invariance of DASS-21 and DASS-9 in a Greek, non-clinical sample. Psychology, 9, 1095–1127. 10.4236/psych.2018.95069 [DOI] [Google Scholar]
  • 8.Oei T. P., Sawang S., Goh Y. W., & Mukhtar F. (2013). Using the Depression Anxiety Stress Scale 21 (DASS-21) across cultures. International Journal of Psychology, 48, 1018–1029. 10.1080/00207594.2012.755535 [DOI] [PubMed] [Google Scholar]
  • 9.Wise F. M., Harris D. W., & Olver J. H. (2017). The DASS-14: Improving the construct validity and reliability of the Depression, Anxiety, and Stress Scale in a Cohort of Health Professionals. Journal of Allied Health, 46, 85E–90E. 10.1037/t66738-000 [DOI] [PubMed] [Google Scholar]
  • 10.Joreskog K.G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34, 183–202. 10.1007/BF02289343 [DOI] [Google Scholar]
  • 11.Morin A. J. S., Marsh H. W., & Nagengast B. (2013). Exploratory structural equation modeling In Hancock G. R. & Mueller R. O. (Eds.), Structural equation modeling: A second course (pp. 395–436). Charlotte, NC: Information Age Publishing. [Google Scholar]
  • 12.Ruiz F. J., Garcia Martin M., Suarez Falcon J. C., & Odriozola Gonzalez P. (2017). The hierarchical factor structure of the Spanish version of Depression Anxiety and Stress Scale-21. International Journal of Psychology and Psychological Therapy, 17, 97–105. [Google Scholar]
  • 13.Scholten S., Velten J., Bieda A., Zhang X. C., & Margraf J. (2017). Testing measurement invariance of the Depression, Anxiety, and Stress Scales (DASS-21) across four countries. Psychological Assessment, 29, 1376–1390. 10.1037/pas0000440 [DOI] [PubMed] [Google Scholar]
  • 14.Sinclair S. J., Siefert C. J., Slavin-Mulford J. M., Stein M. B., Renna M., & Blais M. A. (2012). Psychometric evaluation and normative data for the depression, anxiety, and stress scales-21 (DASS-21) in a nonclinical sample of US adults. Evaluation & the Health Professions, 35, 259–279. 10.1177/0163278711424282 [DOI] [PubMed] [Google Scholar]
  • 15.Le M. T. H., Tran T. D., Holton S., Nguyen H. T., Wolfe R., & Fisher J. (2017). Reliability, convergent validity and factor structure of the DASS-21 in a sample of Vietnamese adolescents. PloS One, 12(7), e0180557 10.1371/journal.pone.0180557 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Moore S. A., Dowdy E., & Furlong M. J. (2017). Using the Depression, Anxiety, Stress Scales-21 with US adolescents: An alternate models analysis. Journal of Psychoeducational Assessment, 35, 581–598. 10.1177/0734282916651537 [DOI] [Google Scholar]
  • 17.Osman A., Wong J. L., Bagge C. L., Freedenthal S., Gutierrez P. M., & Lozano G. (2012). The Depression Anxiety Stress Scales-21 (DASS-21): Further examination of dimensions, scale reliability, and correlates. Journal of Clinical Psychology, 68, 1322–1338. 10.1002/jclp.21908 [DOI] [PubMed] [Google Scholar]
  • 18.Morin A., Tran A., & Caci H. (2016). Factorial validity of the ADHD Adult Symptom Rating Scale in a French community sample: Results from the ChiP-ARD Study. Journal of Attention Disorders, 20, 530–541. 10.1177/1087054713488825 [DOI] [PubMed] [Google Scholar]
  • 19.Marsh H. W., Morin A. J., Parker P. D., & Kaur G. (2014). Exploratory structural equation modeling: An integration of the best features of exploratory and confirmatory factor analysis. Annual Review of Clinical Psychology, 10, 85–110. 10.1146/annurev-clinpsy-032813-153700 [DOI] [PubMed] [Google Scholar]
  • 20.Bonifay W., Lane S. P., & Reise S. P. (2017). Three concerns with applying a bifactor model as a structure of psychopathology. Clinical Psychological Science, 5, 184–186. 10.1177/2167702616657069 [DOI] [Google Scholar]
  • 21.Park J. L., Silveira M. M., Elliott M., Savalei V. & Johnston C. H. (2018) Confirmatory factor analysis of the structure of adult ADHD symptoms. Journal of Psychopathology and Behavioral Assessment, 40, 573–585. 10.1007/s10862-018-9698-y [DOI] [Google Scholar]
  • 22.Henry J. D., & Crawford J. R. (2005). The short‐form version of the Depression Anxiety Stress Scales (DASS‐21): Construct validity and normative data in a large non‐clinical sample. British Journal of Clinical Psychology, 44(2), 227–239. [DOI] [PubMed] [Google Scholar]
  • 23.Tully P. J., Zajac I. T., & Venning A. J. (2009). The structure of anxiety and depression in a normative sample of younger and older Australian adolescents. Journal of Abnormal Child Psychology, 37(5), 717 10.1007/s10802-009-9306-4 [DOI] [PubMed] [Google Scholar]
  • 24.Kessler R. C., Adler L., Ames M., Barkley R. A., Birnbaum H., Greenberg P., et al. (2005). The prevalence and effects of adult attention deficit/hyperactivity disorder on work performance in a nationally representative sample of workers. Journal of Occupational and Environmental Medicine, 47(6), 565–572. 10.1097/01.jom.0000166863.33541.39 [DOI] [PubMed] [Google Scholar]
  • 25.Chambers J. (2008). Software for data analysis: programming with R. Springer Science & Business Media. [Google Scholar]
  • 26.Muthen L. K., & Muthen B. O. (2012). Mplus user's guide (7th ed). Los Angeles, CA: Author. [Google Scholar]
  • 27.DiStefano C. (2002) The impact of categorization with confirmatory factor analysis. Structural Equation Modeling, 9, 327–346. 10.1207/S15328007SEM0903_2 [DOI] [Google Scholar]
  • 28.Lubke G. H., & Muthen B. O. (2004). Applying multigroup confirmatory factor models for continuous outcomes to Likert scale data complicates meaningful group comparisons. Structural Equation Modeling, 11, 514–534. 10.1207/s15328007sem1104_2 [DOI] [Google Scholar]
  • 29.Hu L. T., & Bentler P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. 10.1080/10705519909540118 [DOI] [Google Scholar]
  • 30.Chen F.F. (2007) Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14, 464–504. 10.1080/10705510701301834 [DOI] [Google Scholar]
  • 31.Cheung G.W., Rensvold R.B. (2002) Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233–255. 10.1207/S15328007SEM0902_5 [DOI] [Google Scholar]
  • 32.Zinbarg R. E., Revelle W., Yovel I., & Li W. (2005). Cronbach's alpha, Revelle's beta, and McDonald's omega h: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123–133. 10.1007/s11336-003-0974-7 [DOI] [Google Scholar]
  • 33.Kelley K. (2007). Methods for the behavioral, educational, and social sciences: An R package. Behavior Research Methods, 39(4), 979–984. 10.3758/bf03192993 [DOI] [PubMed] [Google Scholar]
  • 34.McDonald R. P. (1970). The theoretical foundations of principal factor analysis, canonical factor analysis, and alpha factor analysis. British Journal of Mathematical and Statistical Psychology, 23, 1–21. 10.1111/j.2044-8317.1970.tb00432.x [DOI] [Google Scholar]
  • 35.Green S. B., & Yang Y. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 74, 155–167. 10.1007/s11336-008-9099-3 [DOI] [Google Scholar]
  • 36.Kelley K, Pornprasertmanit S. (2016). Confidence intervals for population reliability coefficients: Evaluation of methods, recommendations, and software for composite measures. Psychol Methods, 21, 69–92, 10.1037/a0040086 [DOI] [PubMed] [Google Scholar]
  • 37.Revelle W. (2011). An overview of the psych package. Department of Psychology Northwestern University; Accessed on March, 3(2012), 1–25. [Google Scholar]
  • 38.Reise S. P., Bonifay W. E., & Haviland M. G. (2013). Scoring and modeling psychological measures in the presence of multidimensionality. Journal of Personality Assessment, 95, 129–140. 10.1080/00223891.2012.725437 [DOI] [PubMed] [Google Scholar]
  • 39.Mineka S., Watson D., & Clark L. A. (1998). Comorbidity of anxiety and unipolar mood disorders. Annual Review of Psychology, 49, 377–412. 10.1146/annurev.psych.49.1.377 [DOI] [PubMed] [Google Scholar]
  • 40.Watson D., Clark L. A., & Tellegen A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 47, 1063–1070. 10.1037/0022-3514.54.6.1063 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Manuel Fernández-Alcántara

27 Jan 2020

PONE-D-19-32096

Confirmatory Factor Analysis and Exploratory Structural Equation Modeling of the Factor Structure of the Depression Anxiety Stress Scales-21

PLOS ONE

Dear Dr. Stavropoulos,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Mar 12 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Manuel Fernández-Alcántara, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that some of the participants in your study were under the age of 18. Please state in your methods section whether you obtained consent from parents or guardians of the minors included in the study or whether the research ethics committee or IRB approved the lack of parent or guardian consent.

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Overall, this was a robust and interesting evaluation of the measurement structure of the DASS-21. I have only minor comments for clarification.

The authors state several times that they are the first to test a bifactor ESEM of the DASS-21, but it should be noted that Johnson et al. (2016) also tested two Bifactor ESEM Models, including the BESEM Model tested in the present paper. See the following from the Methods section in Johnson et al. (2016):

Five established DASS-21 factor structures were fit first in CFA, and then in ESEM with ‘Target’ rotation:

1. 1-factor Model

2. 2-factor Model – Oblique model, two correlated factors representing physiological hyperarousal and generalised negativity, Duffy et al., (2005)

3. 3-factor Model – Oblique model, three correlated factors representing depression, anxiety, and stress, Lovibond and Lovibond (1995)

4. Bifactor Model A – Nested model, 3 independent factors representing depression, anxiety, and stress and a general negative affect factor, Henry and Crawford (2005)

5. Bifactor Model B – Nested model, 2 independent factors representing depression and stress and a general negative affect factor, Tully, Zajac, and Venning (2009)

A note on the wording: there is extensive reference to the method used as simply ‘ESEM’, but ESEM is a general framework for using EFA within structural models and does not indicate the type of EFA rotation used. The authors should note that the analyses used ‘Target’ rotation.
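
A minimal R sketch of the target-rotation idea itself (outside the full ESEM framework, and not the analysis reported in the paper) is given below. It assumes a hypothetical data frame dass with 21 item columns and the commonly used DASS-21 item-to-scale allocation; zeros in the target matrix mark cross-loadings rotated towards zero, while NA entries leave the items' own loadings free.

    # Oblique target rotation of an unrotated three-factor EFA solution (illustrative).
    library(psych)        # fa(): exploratory factor analysis
    library(GPArotation)  # targetQ(): oblique (partially specified) target rotation

    dep <- c(3, 5, 10, 13, 16, 17, 21)   # assumed depression items
    anx <- c(2, 4, 7, 9, 15, 19, 20)     # assumed anxiety items
    sts <- c(1, 6, 8, 11, 12, 14, 18)    # assumed stress items

    target <- matrix(0, nrow = 21, ncol = 3)   # 0 = loading targeted towards zero
    target[dep, 1] <- NA                       # NA = loading left free to be estimated
    target[anx, 2] <- NA
    target[sts, 3] <- NA

    efa <- fa(dass, nfactors = 3, rotate = "none", fm = "ml")   # unrotated loadings
    rotated <- targetQ(unclass(efa$loadings), Target = target)  # target rotation
    round(rotated$loadings, 2)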

The calculation of the Omega coefficient needs clarification. When using the WLSMV estimator, the Omega statistic should not be calculated in the same way as is done with the analysis of continuous items (Yang & Xia, 2019). A method for estimating coefficient omega for ordinal items has been proposed by Green and Yang (2009) and should be used instead.

The authors should clarify the extent of missingness present in the data. WLSMV is a limited-information estimator and handles missingness using pairwise deletion. This approach may not be suitable for samples with large proportions of missingness.
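
The extent of missingness the reviewer asks about can be summarized in a few lines of base R; the sketch below again assumes the hypothetical 21-column data frame dass.

    # Proportion of missing responses per item, overall, and per case (illustrative).
    item_missing <- colMeans(is.na(dass))           # proportion missing per item
    overall_missing <- mean(is.na(dass))            # proportion missing over all cells
    complete_cases <- mean(complete.cases(dass))    # proportion of complete cases
    round(c(worst_item = max(item_missing),
            overall = overall_missing,
            complete = complete_cases), 3)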

Minor correction: the estimator is not named ‘Weighted least square mean and variance adjusted chi-square’, just ‘Weighted least square mean and variance adjusted’, as both the standard errors and the chi-square statistic are adjusted.

While the authors provide fit indices for each model, they do not provide the model comparison statistics (especially the Δχ2). While the authors mention that ‘The models did not differ from each other because the ΔCFI and ΔRMSEA values between all model pairs were less than 0.01, and 0.015 respectively.’, this is not the case. The ΔCFI was greater than 0.01 for the comparison between the CFA 3-F and all models except the BCFA 3-s-F.
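
Purely as an illustration of the comparison described here (the paper's models were estimated in Mplus, not with the code below), a lavaan sketch of computing the change in scaled CFI and RMSEA between the three-factor CFA and a bifactor CFA might look as follows, using hypothetical item names dass1–dass21 and the assumed item allocation.

    # Three-factor CFA versus bifactor CFA under WLSMV, then the CFI and RMSEA changes.
    library(lavaan)

    items <- paste0("dass", 1:21)                            # hypothetical item names
    dep_items <- paste0("dass", c(3, 5, 10, 13, 16, 17, 21))
    anx_items <- paste0("dass", c(2, 4, 7, 9, 15, 19, 20))
    str_items <- paste0("dass", c(1, 6, 8, 11, 12, 14, 18))

    model_3f <- paste(
      paste("DEP =~", paste(dep_items, collapse = " + ")),
      paste("ANX =~", paste(anx_items, collapse = " + ")),
      paste("STR =~", paste(str_items, collapse = " + ")),
      sep = "\n")
    model_bi <- paste(paste("G =~", paste(items, collapse = " + ")),   # general factor
                      model_3f, sep = "\n")

    fit_3f <- cfa(model_3f, data = dass, ordered = items, estimator = "WLSMV")
    fit_bi <- cfa(model_bi, data = dass, ordered = items, estimator = "WLSMV",
                  orthogonal = TRUE)    # group/specific factors uncorrelated with G

    idx <- c("cfi.scaled", "rmsea.scaled")
    round(fitMeasures(fit_3f, idx) - fitMeasures(fit_bi, idx), 3)
    # differences below .01 (CFI) and .015 (RMSEA) read as equivalent fit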

The authors state in the discussion ‘Consequently, it is possible that there may be another non-modeled higher order factor needed for the DASS-21.’ This is a simple hypothesis to test, as the authors could add a higher-order 3-factor model (i.e. the 3 factors as indicators of another latent variable).
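
Continuing the lavaan sketch above (same hypothetical objects and data), the higher-order specification the reviewer suggests adds a single line of model syntax; note that with only three first-order factors the second-order part is just-identified.

    # Higher-order model: DEP, ANX, and STR load on a single second-order factor.
    model_ho <- paste(model_3f, "NEG =~ DEP + ANX + STR", sep = "\n")
    fit_ho <- cfa(model_ho, data = dass, ordered = items, estimator = "WLSMV")
    fitMeasures(fit_ho, c("cfi.scaled", "rmsea.scaled", "srmr"))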

The authors mention several times that BESEM ‘provides the most advanced method to model the variances among the DASS-21 items…’. However, this title would likely go to Bayesian SEM, which can estimate both cross-loadings and residual covariances (whereas ESEM & BESEM can only estimate cross-loadings).

The authors’ conclusion that ‘…the total scores of the depression, anxiety, and stress scales could be used as measures of these constructs’ is incongruent with the findings of multiple cross-loadings in the ESEM models. Using sum scores for scale totals assumes that the scale is measured only by the items being summed, and that severity on these items is indicative only of severity on the construct of interest (measurement error aside). By identifying several cross-loading items, the authors have indicated that these assumptions do not hold, and that total scores may not be reliable.
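
The assumption behind sum scoring can be made concrete with a short sketch reusing the hypothetical objects above: unit-weighted totals treat every item as reflecting only its own factor, whereas model-based factor scores are weighted by the estimated loadings (here taken from the CFA model for simplicity).

    # Unit-weighted sum scores versus model-based factor scores (illustrative only).
    sum_scores <- data.frame(DEP = rowSums(dass[, dep_items]),
                             ANX = rowSums(dass[, anx_items]),
                             STR = rowSums(dass[, str_items]))
    fac_scores <- lavPredict(fit_3f)    # factor scores from the fitted CFA model
    diag(cor(sum_scores, fac_scores, use = "pairwise.complete.obs"))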

Green, S. B., & Yang, Y. (2009). Reliability of Summed Item Scores Using Structural Equation Modeling: An Alternative to Coefficient Alpha. Psychometrika, 74(1), 155-167. doi:10.1007/s11336-008-9099-3

Yang, Y., & Xia, Y. (2019). Categorical Omega With Small Sample Sizes via Bayesian Estimation: An Alternative to Frequentist Estimators. Educational and Psychological Measurement, 79(1), 19-39. doi:10.1177/0013164417752008

Reviewer #2: The subject is of interest and the manuscript has been carried out with methodological quality. However, some aspects should be worked on, namely the quality of the writing and the content to be expressed in each part of the manuscript.

Introduction

- The authors have not provided literature to justify the need for testing the DASS-21. The beginning, "The DASS-21 has been derived from....", is not an appropriate way to start a manuscript. It should include a brief introduction of the topic to be addressed and of the importance of this scale, and not start with a psychometric approach. Along the same lines, concepts such as CFA and ESEM are first introduced but only explained in the final part of the introduction; this does not facilitate understanding of the subject.

- The authors make multiple statements throughout the manuscript that should be referenced; the lack of references creates confusion because these statements appear to be presented as given.

- The introduction is excessively long and confusing in its presentation. It needs to be worked on in depth. It is not properly structured, with ideas repeated throughout the text, which does not facilitate understanding or a sense of progression and organization.

- Acronyms must be specified the first time they appear. Even the name of a scale.

- On page 6, the authors refer to "minor misspecifications"; please clarify such general expressions.

Method:

- There is no need to indicate age by gender.

- Better express the statistical results (M = , SD = )

- The expression (t[736] = 0.05, ns) is incorrectly reported. Please report the t statistic correctly and include the exact p value.

- It is unnecessary to include the descriptives for the DASS items.

- It would be convenient to include other types of sociodemographic information in the table, such as age, gender, nationality, and socioeconomic level.

- The last paragraph of the participants is for discussion, not methods.

- More detailed information needs to be provided on the sample, recruitment, and procedure.

- It is recommended that the data analysis section be shortened.

- It has not been stated which type of statistical tests were carried out to assess the validity of the tool.

Results

- Summarize the results in the text if they are set out in the tables.

- It is not necessary to explain in detail in the text the factor loadings of each of the items. Summarize and specify the distinct and/or relevant results. Also, organize the presentation by factor within each of the models. As it is now, it is confusing and disorganized.

- The results of the statistical tests of the validation of the instrument are not properly expressed.

- The sentence: "Overall, these findings indicate support for the external validity..." is not methodologically correct. This is the internal validity of the instrument, not external validity (generalization of the results to the general population).

- The paragraph where the authors explain the preferred model is more appropriate in the discussion section. In the results section, the results should not be discussed; only a statement of the results should be made.

- In the tables, all abbreviations should be specified in notes

Discussion

- The discussion must be worked on, providing a greater comparison with other studies and more interpretation of the results. The authors should avoid re-summarizing the results and should avoid paragraphs in which not a single reference appears.

- The conclusion of the discussion section should not include interpretation of the results.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Andrew R Johnson

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: revision_PLOSone.docx

PLoS One. 2020 Jun 5;15(6):e0233998. doi: 10.1371/journal.pone.0233998.r002

Author response to Decision Letter 0


23 Feb 2020

4th February 2020

The Editor

PLOS ONE

Re: Revision of Manuscript PONE-D-19-32096

Confirmatory Factor Analysis and Exploratory Structural Equation Modeling of the Factor Structure of the Depression Anxiety Stress Scales-21

Dear Editor,

Thank you very much for arranging the review of the above-mentioned paper that was submitted to PLOS ONE. We would also like to thank the reviewers for their valuable comments, which have been extremely useful for the revision of the paper. In the revised paper we have considered these comments, and we believe that we have responded to all the revisions suggested. Our revisions are listed below and are highlighted in yellow and green in the revised paper.

Responses to Reviewer #1:

Overall, this was a robust and interesting evaluation of the measurement structure of the DASS-21. I have only minor comments for clarification.

Comment: The authors state several times that they are the first to test a bifactor ESEM of the DASS-21, but it should be noted that Johnson et al. (2016) also tested two Bifactor ESEM Models, including the BESEM Model tested in the present paper. See the following from the Methods section in Johnson et al. (2016):

Five established DASS-21 factor structures were fit first in CFA, and then in ESEM with ‘Target’ rotation:

1. 1-factor Model

2. 2-factor Model – Oblique model, two correlated factors representing physiological hyperarousal and generalised negativity, Duffy et al., (2005)

3. 3-factor Model – Oblique model, three correlated factors representing depression, anxiety, and stress, Lovibond and Lovibond (1995)

4. Bifactor Model A – Nested model, 3 independent factors representing depression, anxiety, and stress and a general negative affect factor, Henry and Crawford (2005)

5. Bifactor Model B – Nested model, 2 independent factors representing depression and stress and a general negative affect factor, Tully, Zajac, and Venning (2009)

Response. We have removed all instances where we previously claimed that we are the first to test a bifactor ESEM of the DASS-21. Related to this, we have cited and discussed the work of Johnson et al. (2016), in particular their exploration and findings related to the bifactor ESEM model (p. 10, yellow highlight in para 1).

Comment: A note on the wording: there is extensive reference to the method used as simply ‘ESEM’, but ESEM is a general framework for using EFA within structural models and does not indicate the type of EFA rotation used. The authors should note that the analyses used ‘Target’ rotation.

Response. As suggested, we no longer refer simply to ESEM, but to ESEM (and also BESEM) with target rotation (highlighted in yellow throughout the paper).

Comment: The calculation of the Omega coefficient needs clarification. When using the WLSMV estimator, the Omega statistic should not be calculated in the same way as is done with the analysis of continuous items (Yang & Xia, 2019). A method for estimating coefficient omega for ordinal items has been proposed by Green and Yang (2009) and should be used instead.

Response. We have not made any change in this respect, for the reasons given in p. 15 (yellow highlight).

Comment: The authors should clarify the extent of missingness present in the data. WLSMV is a limited-information estimator and handles missingness using pairwise deletion. This approach may not be suitable for samples with large proportions of missingness.

Response. We covered this in p. 15 (yellow highlight).

Comment Minor correction: the estimator is not named ‘Weighted least square mean and variance adjusted chi-square’, just ‘Weighted least square mean and variance adjusted’, as both the standard errors and the chi-square statistic are adjusted.

Response. Corrected (p. 13, yellow highlight).

Comment: While the authors provide fit indices for each model, they do not provide the model comparison statistics (especially the Δχ2). While the authors mention that ‘The models did not differ from each other because the ΔCFI and ΔRMSEA values between all model pairs were less than 0.01, and 0.015 respectively.’, this is not the case. The ΔCFI was greater than 0.01 for the comparison between the CFA 3-F and all models except the BCFA 3-s-F.

Response. This mistake has been corrected (p. 16, yellow highlight).

Comment: The authors state in the discussion ‘Consequently, it is possible that there may be another non-modeled higher order factor needed for the DASS-21.’ This is a simple hypothesis to test, as the authors could add a higher-order 3-factor model (i.e. the 3 factors as indicators of another latent variable).

Response. We now consider that, based on our findings, such a hypothesis is inappropriate, and we have removed this statement. Thus, the computation of the suggested model is irrelevant.

Comment: The authors mention several times that BESEM ‘provides the most advanced method to model the variances among the DASS-21 items…’. However, this title would likely go to Bayesian SEM, which can estimate both cross-loadings and residual covariances (whereas ESEM & BESEM can only estimate cross-loadings).

Response. We no longer refer to the BESEM approach as ‘the most advanced method to model the variances among the DASS-21 items…’. Instead, we say it “could potentially be a more sophisticated and arguably more advanced approach for studying the factor structure of the DASS-21” (p. 10, lines 11 & 12).

Comment: The authors’ conclusion that ‘…the total scores of the depression, anxiety, and stress scales could be used as measures of these constructs’ is incongruent with the findings of multiple cross-loadings in the ESEM models. Using sum scores for scale totals assumes that the scale is measured only by the items being summed, and that severity on these items is indicative only of severity on the construct of interest (measurement error aside). By identifying several cross-loading items, the authors have indicated that these assumptions do not hold, and that total scores may not be reliable.

Response. Given the suggestion by Reviewer 2 that the “conclusion of the discussion section should not include interpretation of the results”, we have removed the paragraph that contains the information referred to by this reviewer. Thus, this point is not made in the current version of the paper.

Responses to Reviewer #2:

The subject is of interest and the manuscript has been carried out with methodological quality. However, some aspects should be worked on, namely the quality of the writing and the content to be expressed in each part of the manuscript.

Comment: Introduction

- The authors have not provided literature to justify the need for testing the DASS-21. The beginning, "The DASS-21 has been derived from....", is not an appropriate way to start a manuscript. It should include a brief introduction of the topic to be addressed and of the importance of this scale, and not start with a psychometric approach. Along the same lines, concepts such as CFA and ESEM are first introduced but only explained in the final part of the introduction; this does not facilitate understanding of the subject.

Response: Justification for the study is now provided (p. 10, green highlights). We now begin the paper with a brief introduction of the topic to be addressed (p. 3, para 1, green highlights). We have revised the paper so that concepts such as CFA and ESEM are no longer introduced without explanation.

Comment: The authors make multiple statements throughout the manuscript that should be referenced; the lack of references creates confusion because these statements appear to be presented as given.

Response. Done throughout.

Comment: The introduction is excessively long and confusing in its presentation. It needs to be worked on in depth. It is not properly structured, with ideas repeated throughout the text, which does not facilitate understanding or a sense of progression and organization.

Response. We have reorganized the introduction, providing better structure, and removing repetitions.

Comment: Acronyms must be specified the first time they appear. Even the name of a scale.

Response. We have conformed to this throughout.

Comment: On page 6, the authors refer to "minor misspecifications"; please clarify such general expressions.

Response. This has been clarified (p. 5, green highlight).

Method:

Comment - There is no need to indicate age by gender.

- Better express the statistical results (M = , SD = )

- The expression (t[736] = 0.05, ns) is incorrectly reported. Please report the t statistic correctly and include the exact p value.

Response. All of the above have been revised (see the “Participants” section, p. 11).

Comment: It is unnecessary to include the descriptives for the DASS items.

Response: Removed (see Table 1)

Comment: It would be convenient to include other types of sociodemographic information in the table, such as age, gender, nationality, and socioeconomic level.

Response: As the information on age and gender has been provided in the text, we have not repeated it in the table. No information on nationality or socioeconomic level is available.

Comment: The last paragraph of the participants is for discussion, not methods.

Response: This paragraph remains under Participants; we feel that it provides information on the participants that is better placed at this point of the paper.

Comment: More detailed information needs to be provided on the sample, recruitment, and procedure.

Response: Additional information is now provided (p. 13, green highlight)

Comment: It is recommended that the data analysis section be shortened.

Response: We have done so.

Comment: It has not been stated which type of statistical tests were carried out to assess the validity of the tool.

Response: We have done so (p. 15, green highlight).

Results

Comment: Summarize the results in the text if they are set out in the tables.

Response: We have done so.

Comment: It is not necessary to explain in detail in the text the factor loadings of each of the items. Summarize and specify the distinct and/or relevant results. Also, organize the presentation by factor within each of the models. As it is now, it is confusing and disorganized.

Response: We have not made many changes in this respect. Our intention in the results section is to show how well the factors are defined by the individual items, and later to discuss which items are problematic. Discussion of these would make little sense if the details of the individual items were not covered in the results section.

Comment: The results of the statistical tests of the validation of the instrument are not properly expressed.

Response: We have rewritten this section (pp. 20 & 21, green highlight).

Comment: The sentence: "Overall, these findings indicate support for the external validity..." is not methodologically correct. This is the internal validity of the instrument, not external validity (generalization of the results to the general population).

Response. We think that examining the relations of the DASS-21 factors with IA and HI tests the external (and not internal) validity of the DASS-21 factors.

- The paragraph where the authors explain the preferred model is more appropriate in the discussion section. In the results section, the results should not be discussed; only a statement of the results should be made.

Response: This is now only in the discussion.

Comment: In the tables, all abbreviations should be specified in notes

Response: Done

Discussion

Comment: The discussion must be worked on, providing a greater comparison with other studies and more interpretation of the results. The authors should avoid re-summarizing the results and should avoid paragraphs in which not a single reference appears.

Response: In our discussion, we have provided a greater comparison with other studies and have interpreted the results. Although there are paragraphs without references, we consider these paragraphs essential to explaining our results.

Comment: The conclusion of the discussion section should not include interpretation of the results.

Response: The interpretation of the results has been removed.

As you will notice, we have addressed all the comments and suggestions made by the reviewers. We hope that the changes meet with your satisfaction and we look forward to hearing from you soon.

Thank you.

Yours sincerely,

Authors

Attachment

Submitted filename: 2020 Response to Reviewers feedbak.docx

Decision Letter 1

Manuel Fernández-Alcántara

20 Mar 2020

PONE-D-19-32096R1

Confirmatory Factor Analysis and Exploratory Structural Equation Modeling of the Factor Structure of the Depression Anxiety Stress Scales-21

PLOS ONE

Dear Dr. Stavropoulos,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Thank you for submitting your research to PLOS ONE. Before final acceptance of the manuscript, please address the minor revisions suggested by Reviewer 1. With this clarification, the manuscript will be ready for publication. Congratulations on your interesting and applied work.

We would appreciate receiving your revised manuscript by May 04 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Manuel Fernández-Alcántara, Ph.D.

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In their revision, the authors stated that:

"It may be worth noting that Yang and Green (2015) have argued that when

using the WLSMV estimator, the omega should not be calculated in the same way as is done

with the analysis of continuous items (Yang & Xia, 2019). This is especially so if the sample size

is less than 500 as omega values would be underestimated. Instead, Bayesian estimation methods

have been proposed (Yang & Xia, 2019). However, as our sample size was well above 500 (N =

738), we used the usual method applied to continuous items for computing omega values."

However, these are not the conclusions of the studies that they are citing. The referenced studies show that *categorical* Omega (i.e. the correct treatment of omega with WLSMV estimation) can be biased in small sample sizes, not that the standard (continuous) version is biased. In fact, the referenced studies do not predicate the appropriateness of the continuous method on sample size at all:

"However, x and factor scores have different metrics and their relationship is no longer linear. Describing the relation of the observed scores to the true scores using the Pearson correlation (or the square of the correlation) is thus inappropriate (Lord & Novick, 1968). Consequently, Definition 2 and Definition 3 are not appropriate for defining a reliability coefficient for the scale scores." (Yang & Xia, 2018, p. 21).

The authors need to use categorical Omega here, otherwise the assumptions on which they estimate reliability (continuous observations) are different from the assumptions used to estimate the models (ordinal observations). If unsure, the simplest way to achieve this is using the ci.reliability() function in the MBESS R package.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Andrew R. Johnson

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Jun 5;15(6):e0233998. doi: 10.1371/journal.pone.0233998.r004

Author response to Decision Letter 1


15 May 2020

15th May 2020

The Editor

PLOS ONE

Re: Revision of Manuscript PONE-D-19-32096

Confirmatory Factor Analysis and Exploratory Structural Equation Modeling of the Factor Structure of the Depression Anxiety Stress Scales-21

Dear Editor,

Thank you very much for arranging the second review of the above-mentioned paper that was submitted to PLOS ONE. We would also like to thank Reviewer 2 (who did not request any additional modifications) and Reviewer 1 for recommending the calculation of categorical omega via the ci.reliability() function in the R MBESS package. In the revised paper we have considered this comment raised by Reviewer 1, and we believe that we have now responded to this final suggested revision. The revision is listed below and is highlighted in yellow in this second revised version of the paper.

Responses to Reviewer #1:

Comment: In their revision, the authors stated that:

"It may be worth noting that Yang and Green (2015) have argued that when

using the WLSMV estimator, the omega should not be calculated in the same way as is done

with the analysis of continuous items (Yang & Xia, 2019). This is especially so if the sample size

is less than 500 as omega values would be underestimated. Instead, Bayesian estimation methods

have been proposed (Yang & Xia, 2019). However, as our sample size was well above 500 (N =

738), we used the usual method applied to continuous items for computing omega values."

However, these are not the conclusions of the studies that they are citing. The referenced studies show that *categorical* Omega (i.e. the correct treatment of omega with WLSMV estimation) can be biased in small sample sizes, not that the standard (continuous) version is biased. In fact, the referenced studies do not predicate the appropriateness of the continuous method on sample size at all:

"However, x and factor scores have different metrics and their relationship is no longer linear. Describing the relation of the observed scores to the true scores using the Pearson correlation (or the square of the correlation) is thus inappropriate (Lord & Novick, 1968). Consequently, Definition 2 and Definition 3 are not appropriate for defining a reliability coefficient for the scale scores." (Yang & Xia, 2018, p. 21).

The authors need to use categorical Omega here, otherwise the assumptions on which they estimate reliability (continuous observations) are different from the assumptions used to estimate the models (ordinal observations). If unsure, the simplest way to achieve this is using the ci.reliability() function in the MBESS R package.

Response. Thank you for your comment. The relevant part of the text has now been modified. Furthermore, we have computed and reported categorical omega using the ci.reliability() function in the MBESS R package with the percentile (“perc”) interval type (see p. 17, yellow highlight; and p. 22, yellow highlight). As we had been using Mplus exclusively until now, your review point has provided us with the opportunity to become more familiar with R, and we are genuinely grateful to you for that.
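
A minimal sketch of the kind of call described in this response is given below; the data frame dass and the depression-item allocation are hypothetical, and only the function, the categorical omega type, and the percentile interval type are taken from the response itself.

    # Categorical omega (Green & Yang, 2009) with a percentile bootstrap interval,
    # via MBESS::ci.reliability(), shown for the assumed depression items only.
    library(MBESS)

    dep_items <- paste0("dass", c(3, 5, 10, 13, 16, 17, 21))  # assumed allocation
    omega_dep <- ci.reliability(data = dass[, dep_items],
                                type = "categorical",    # ordinal-item omega
                                interval.type = "perc",  # percentile bootstrap CI
                                B = 1000)                # bootstrap replications
    c(estimate = omega_dep$est, lower = omega_dep$ci.lower, upper = omega_dep$ci.upper)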

Responses to Reviewer #2:

Comment: No additional comments.

Response: Thank you for the time you devoted to the consideration of our manuscript and the effort you made to help us improve it.

As you will notice, we have addressed all the comments and suggestions made by the reviewers. We hope that the changes meet with your satisfaction and we look forward to hearing from you soon.

Thank you.

Yours sincerely,

Authors

Attachment

Submitted filename: 2020 Response 2 to Reviewers feedbak.docx

Decision Letter 2

Manuel Fernández-Alcántara

18 May 2020

Confirmatory Factor Analysis and Exploratory Structural Equation Modeling of the Factor Structure of the Depression Anxiety Stress Scales-21

PONE-D-19-32096R2

Dear Dr. Stavropoulos,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Manuel Fernández-Alcántara, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Manuel Fernández-Alcántara

22 May 2020

PONE-D-19-32096R2

Confirmatory Factor Analysis and Exploratory Structural Equation Modelling of the Factor Structure of the Depression Anxiety Stress Scales-21

Dear Dr. Stavropoulos:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Manuel Fernández-Alcántara

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Data

    (SAV)

    Attachment

    Submitted filename: revision_PLOSone.docx

    Attachment

    Submitted filename: 2020 Response to Reviewers feedbak.docx

    Attachment

    Submitted filename: 2020 Response 2 to Reviewers feedbak.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information file.


    Articles from PLoS ONE are provided here courtesy of PLOS
