Author manuscript; available in PMC: 2022 Jan 31.
Published in final edited form as: J Affect Disord. 2021 Apr 30;295:1360–1370. doi: 10.1016/j.jad.2021.04.089

Cross-cultural and gender invariance of emotion regulation in the United States and India

Natalia Van Doren 1,*, Nur Hani Zainal 1, Michelle G Newman 1
PMCID: PMC8802756  NIHMSID: NIHMS1768090  PMID: 34706449

Abstract

Background:

The ability to effectively regulate one’s emotions has been established as an important transdiagnostic mechanism in the development and maintenance of psychopathology. To date, much of the research on emotion regulation (ER) has been conducted in Western, educated, industrialized, rich, and democratic (WEIRD) samples. Specifically, there is a dearth of cross-cultural construct equivalence studies on measures of ER. Establishing measurement equivalence is an important first step to facilitate future research on ER in culturally diverse samples.

Methods:

The present study sought to validate the latent structures of three commonly used ER measures: the Emotion Regulation Questionnaire (ERQ), Ruminative Responses Scale (RRS-10), and Acceptance subscale of the Five-Facet Mindfulness Questionnaire (FFMQ-AS). Measurement equivalence was examined across 123 American and 121 Indian participants (Mage = 36.60) and across gender.

Results:

Cross-cultural confirmatory factor analyses revealed configural equivalence (i.e., same factor structures) in both cultural groups across all three measures. The RRS-10 met weak invariance across cultures; however, factor loadings were not equal across the two samples for all items on the ERQ or FFMQ-AS. Consequently, a partial invariance solution was identified, and all measures subsequently met criteria for Level 2 strict cross-cultural invariance. Across gender, full invariance was found on all measures except the FFMQ-AS.

Conclusion:

Findings suggest that the structure of ER processes is largely invariant across these two cultural groups, with a few notable exceptions, pointing to the importance of continued work in this area.

Keywords: Emotion regulation, Culture, Measurement invariance, Structural equation modeling


The past two decades have witnessed a growing interest in a transdiagnostic approach to the study of psychopathology (e.g., Kring and Sloan, 2010). Transdiagnostic factors refer to pathological mechanistic processes that are shared across various mental disorders (Cuthbert, 2015). One such transdiagnostic construct is emotion regulation (ER; Cludius et al., 2020; Fernandez et al., 2016). Within Gross’ (1998) Process Model, ER is broadly defined as the strategies that individuals may use to increase, maintain or decrease their affective experience, including the feelings, behaviors or physiological responses that make up a given emotion (Gross, 1999). ER is viewed as central to the development and maintenance of psychopathology (Aldao et al., 2010, 2016; Carpenter and Trull, 2013; Newman and Llera, 2011). Accordingly, building a greater understanding of ER constructs is critical.

In particular, four ER strategies—cognitive reappraisal, acceptance, suppression, and rumination—feature prominently in the literature (Sloan et al., 2017), and are the focus of the present paper. Whereas cognitive reappraisal and acceptance have been widely considered more adaptive and are linked to lower rates of affective disorders, suppression and rumination are maladaptive processes that exacerbate symptoms and maintain depression and other forms of psychopathology (Aldao, 2012). Furthermore, reappraisal, acceptance, suppression, and rumination have been amongst the strategies that have received the most support as transdiagnostic processes in the literature (Cludius et al., 2020). As such, measures of these constructs warrant further investigation to refine transdiagnostic assessment and treatment.

Despite their widespread importance, few studies have examined measurement invariance of commonly used ER strategy assessments. Measurement invariance refers to the generalizability element of construct validity (Putnick and Bornstein, 2016). Evaluating measurement invariance answers the question: “Am I measuring the construct in a similar way for each group?” Establishing measurement invariance (degree of similarity of psychometric properties of assessments) is important, as it allows researchers to have confidence that mean cultural differences in a given construct are based on true group differences, rather than a product of cross-cultural differential item response tendencies (Dimitrov, 2017). This is particularly important when conducting cross-cultural ER research, as differences in mean scores are often interpreted without testing measurement invariance (e.g., Mehta et al., 2017), and could therefore lead to spurious conclusions.

In the present study, we tested three measures of ER that assess four ER strategies: the Emotion Regulation Questionnaire (ERQ; Gross and John, 2003), which measures reappraisal and suppression; the Ruminative Response Scale (RRS-10; Treynor et al., 2003), which measures rumination; and the acceptance (“non-judgement”) subscale of the Five-Facet Mindfulness Questionnaire (FFMQ-AS; Baer, 2006), which assesses acceptance. These widely used scales have been cited by more than 8,200 reports since their publication (based on Google Scholar citations), yet few studies have examined their measurement invariance across Western and non-Western populations. Specifically, the psychometric data collected on these measures have been derived mainly from Western, educated, industrialized, rich, and democratic (WEIRD) countries, which house just 12% of the world’s population (Henrich et al., 2010).

Furthermore, much of the non-Western research on ER has utilized samples from China and other East Asian nations (e.g., Japan, Korea) to draw broad generalizations and conclusions about transdiagnostic ER processes, whereas South Asian countries have been largely ignored. India is a case in point: it contains 17.9% of the world’s population (Patierno et al., 2019), yet remains understudied. This figure does not include the vast Indian diaspora of another 20 million people worldwide (Safran et al., 2008), which also warrants investigation if a generalizable science of ER processes is to be developed. Cross-cultural measurement equivalence studies are facilitated by the fact that India has the largest population of English speakers in the world outside of the U.S. (Parshad et al., 2016), enabling the recruitment of Indian participants who are fluent in English and thereby sidestepping the additional complexity of establishing linguistic equivalence.

In addition to the lack of studies on cross-cultural measurement invariance, there is a paucity of gender invariance studies. This is particularly problematic given the widespread use of these measures to compare mean levels of strategy use across gender and to draw conclusions about gender differences in ER processes (e.g., Nolen-Hoeksema and Aldao, 2011; Webb et al., 2012). Without testing measurement equivalence, one cannot determine whether such mean differences reflect true differences in the latent construct or are a product of cross-gender differential item response tendencies. Studies on gender differences in ER processes are predicated on the assumption that these measures are equivalent across gender. If this assumption does not hold, mean differences between groups, or group differences in patterns of correlations between other variables and the ER measures in question, could be artifactual and misleading (Whisman et al., 2013). For this reason, we also examined gender invariance in our study.

The present study aimed to examine the latent factor structures of transdiagnostic ER constructs in community samples from the U.S. and India. Specifically, we explored the degree of cross-cultural equivalence for the two-factor ERQ (Gross and John, 2003), two-factor RRS-10 (Treynor et al., 2003), and the one-factor FFMQ-AS (Baer, 2006). Given the dearth of research on ER constructs in Indian community samples, we did not have strong hypotheses as to whether these measures would be invariant, but rather, aimed to explore the factor structure of these measures in this understudied group. A secondary aim was to determine gender invariance of the factor models, given potential gender differences in ER (Martín-Albo et al., 2020). We hypothesized that our measures would be invariant across gender, as some research suggests this is the case (e.g., Preece et al., 2021; Whisman et al., 2020; Zainal et al., 2021).

1. Method

1.1. Participants and procedure

All study procedures were approved by the Institutional Review Board (IRB). Participants were recruited from Amazon’s Mechanical Turk (MTurk) using the TurkPrime feature (Litman et al., 2017). To be eligible, participants had to denote their current country of residence as either the U.S. or India. Following informed consent, participants completed emotion regulation and demographic measures on the Qualtrics survey software platform. They also completed an item pool for the development of a new measure not reported on in the present study. Originally, there were N = 174 participants in the Indian sample and N = 159 participants in the U.S. sample. To ensure data quality, we adhered to recommended practices for data integrity on crowdsourcing platforms, including filtering based on response times (Buchanan and Scofield, 2018; Höhne et al., 2017; Mason and Suri, 2012), incomplete responses (Behrend et al., 2011), and attention check responses (Lovett et al., 2018). Response times were captured in seconds and transformed into minutes. Average time to completion was 21.71 min. Outliers were identified using Z-scores for completion time, and we removed anyone with an absolute Z-score of ≥ 3. Five attention check items were interspersed within the questionnaire battery, such as “Please choose the second option for this question” (cf. Paolacci et al., 2010). Participants who completed less than 50% of the survey (n = 29 in the U.S. sample; n = 21 in the Indian sample), whose completion time deviated from the sample mean by three or more standard deviations (i.e., had an absolute Z-score of 3 or above; n = 2 in the U.S. sample; n = 3 in the Indian sample), or who failed any of the five attention check items (n = 7 in the U.S. sample; n = 29 in the Indian sample) were removed.
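
For illustration, the screening rules described above could be implemented roughly as follows in R (the software used for all analyses); this is a minimal sketch assuming a hypothetical data frame raw with columns duration_sec, pct_complete, and ac1–ac5 (attention-check pass indicators), not the authors' actual cleaning script.

# Hypothetical columns: duration_sec (completion time in seconds), pct_complete
# (percent of the survey completed), ac1-ac5 (1 = attention check passed, 0 = failed)
raw$duration_min <- raw$duration_sec / 60

# Z-score of completion time; absolute values of 3 or more flag outliers
z_time <- as.numeric(scale(raw$duration_min))

keep <- raw$pct_complete >= 50 &                    # completed at least half the survey
  abs(z_time) < 3 &                                 # within 3 SDs of the mean completion time
  rowSums(raw[, paste0("ac", 1:5)]) == 5            # passed all five attention checks

clean <- raw[keep, ]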

The final U.S. sample comprised 123 participants with a mean age of 43.98 years (SD = 12.46; range 23 to 74), 52.08% female, and 88.54% White, 6.25% Asian American, 2.08% Hispanic, and 1.04% Black or African American. The final Indian sample included 121 respondents with an average age of 29.96 years (SD = 6.34; range 21 to 60), 35.19% female, 83.33% Indian, 6.48% Southeast Asian (e.g., Vietnamese, Cambodian, Thai, etc.), 3.70% East Asian (e.g., Chinese, Korean, Japanese, etc.), 3.70% White, and 5.55% other races/ethnicities.

1.2. Measures

1.2.1. Suppression and reappraisal

The ERQ is a 10-item scale used to measure respondents’ tendency to regulate their emotions in two ways: (1) Cognitive Reappraisal and (2) Expressive Suppression (Gross and John, 2003). Respondents answered each item on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Gross and John’s (2003) original validation paper for the ERQ used exploratory and confirmatory factor analysis (EFA and CFA, respectively) to provide evidence for a two-factor structure (reappraisal and suppression), with average alpha reliabilities of .79 for reappraisal and .73 for suppression across four samples. Three-month retest reliability was .69 for both factors. Convergent validity was evidenced by high correlations with measures of regulatory success, coping, and negative mood regulation for both factors, whereas discriminant validity was evidenced by substantially smaller relations with theoretically unrelated constructs (e.g., cognitive ability; rs = −.09 to .10) than with conceptually related constructs (e.g., emotional attention, clarity, and repair; rs = −.41 to .36; Gross and John, 2003). In the present study, across both gender and countries, subscale Cronbach’s αs ranged from .78 to .99 and McDonald’s ωs ranged from .77 to .88.
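
As an illustration of how subscale reliabilities of this kind can be computed, the sketch below uses the psych package on a hypothetical data frame dat with ERQ item columns erq1–erq10 (items 2, 4, 6, and 9 forming the suppression subscale under the published scoring); it is not the authors' analysis code.

library(psych)

# Hypothetical item columns for the ERQ Expressive Suppression subscale
supp <- dat[, c("erq2", "erq4", "erq6", "erq9")]

# Cronbach's alpha with item-level statistics
psych::alpha(supp)

# McDonald's omega (total) from a single-factor solution
psych::omega(supp, nfactors = 1)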

1.2.2. Rumination

The RRS-10 is a shortened version of the original 22-item scale (Nolen-Hoeksema and Morrow, 1991), developed to eliminate content that overlaps with depression symptoms (Treynor et al., 2003). EFA and CFA revealed a two-factor structure: Brooding (pondering on negative mood and personal shortcomings; e.g., thinking “Why can’t I handle things better?”) and Reflection (active efforts to understand one’s negative feelings; e.g., “Analyze your personality to try to understand why you are depressed”). Items are rated on a 4-point scale ranging from 1 (never or almost never) to 4 (always or almost always), with higher scores indicating greater use of rumination. Internal consistency was adequate (αs = .72–.77), and 6-month retest reliability for both factors was r = .60–.62 (Treynor et al., 2003). Two-week retest reliability was > .70 for both factors (Lei et al., 2017). Convergent validity was evidenced by strong relationships with theoretically related constructs (e.g., depression, emotional distress, anxiety; rs = .42–.70; Nolen-Hoeksema and Morrow, 1991; Thanoi and Klainin-Yobas, 2015), and discriminant validity was supported in tests with non-theoretically-relevant constructs (e.g., motivation; Siegle et al., 2004). In the present study, across both gender and countries, subscale Cronbach’s αs ranged from .70 to .87 and McDonald’s ωs ranged from .70 to .91.

1.2.3. Acceptance

The FFMQ-AS includes eight items that assess acceptance of emotions (e.g., “I tell myself I shouldn’t be feeling the way that I’m feeling”) and thoughts (e.g., “I tell myself I shouldn’t be thinking the way that I’m thinking”). Items are rated from 1 (never or very rarely true) to 5 (very often or always true), where higher scores indicate greater acceptance. The FFMQ-AS demonstrated internal consistency when administered as part of the full scale (α = .86; Baer, 2006; Christopher et al., 2012) as well as when used as a stand-alone measure of emotional acceptance (αs = .89–.91; Ford et al., 2018). It showed convergent validity with related constructs (e.g., satisfaction with life, emotional intelligence; rs = .30–.53; Christopher et al., 2012) and discriminant validity with theoretically divergent constructs (e.g., depression, experiential avoidance, absent-mindedness, alexithymia; rs = −.61 to −.27; Baer, 2006). In the present study, Cronbach’s αs ranged from .69 to .87 and McDonald’s ωs ranged from .88 to .96.

1.3. Data analyses

Data analysis involved preliminary analyses, identification and evaluation of baseline measurement models, and tests of measurement invariance. Using R (Version 3.5.1; R Core Development Team, 2019), we first examined descriptive statistics (e.g., means, standard deviations). The remaining steps used CFA with the lavaan package (Version 0.6.4; Rosseel, 2012) in RStudio.

All indicators were rank-ordinal in nature. Therefore, to examine the latent factor structures of the ERQ, RRS-10, and FFMQ-AS, we used diagonally weighted least squares estimation with the mean- and variance-adjusted (WLSMV) χ2 statistic and theta parameterization, which analyzes the polychoric correlation matrix (Rhemtulla et al., 2012; Wang and Russell, 2016).
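
A minimal lavaan sketch of this estimation setup, assuming hypothetical ordinal item columns erq1–erq10 in a data frame dat; declaring the indicators as ordered leads lavaan to compute polychoric correlations and to use diagonally weighted least squares with the WLSMV test statistic.

library(lavaan)

# Two-factor ERQ model (reappraisal and suppression items per Gross and John, 2003)
erq_model <- '
  reappraisal =~ erq1 + erq3 + erq5 + erq7 + erq8 + erq10
  suppression =~ erq2 + erq4 + erq6 + erq9
'

fit_erq <- cfa(erq_model,
               data = dat,
               ordered = paste0("erq", 1:10),   # treat items as ordinal (polychorics)
               estimator = "WLSMV",             # DWLS with mean- and variance-adjusted chi-square
               parameterization = "theta")

summary(fit_erq, fit.measures = TRUE, standardized = TRUE)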

To judge each model’s goodness of fit, we used practical fit indices with heuristic cutoffs (Kline, 2016a, 2016b): the comparative fit index (CFI ≥ 0.95; Bentler, 1990; McDonald and Marsh, 1990), root mean square error of approximation (RMSEA ≤ 0.08; Browne and Cudeck, 1993; Steiger, 1990), and standardized root mean square residual (SRMR ≤ 0.08; Hu and Bentler, 1999). To set the metric of the latent constructs, we fixed one unstandardized factor loading per factor to 1.0. We first tested for configural invariance (i.e., the same factor structure without between-group constraints on any parameter estimates) by conducting CFAs in each country and gender separately (Muthén and Muthén, 2013). To test gender invariance, data were pooled across the U.S. and India, following others (Hong et al., 2017; Taylor et al., 2007; Whisman et al., 2020; Zainal et al., 2021); similarly, when testing cross-cultural equivalence, data were pooled across gender. Subsequently, we performed multiple-group CFAs across countries and across gender based on the factor structures specified in the original validation studies: the two-factor ERQ (Gross and John, 2003), two-factor RRS-10 (Treynor et al., 2003), and one-factor FFMQ-AS (Baer, 2006).
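
Continuing the hypothetical setup above, the configural (multiple-group) model and the practical fit indices can be obtained as in the following sketch, where country is an assumed grouping variable.

# Configural model: same factor structure in both countries, all parameters free across groups
fit_config <- cfa(erq_model,
                  data = dat,
                  group = "country",
                  ordered = paste0("erq", 1:10),
                  estimator = "WLSMV",
                  parameterization = "theta")

# Practical fit indices used to judge model adequacy
fitMeasures(fit_config, c("cfi.scaled", "rmsea.scaled",
                          "rmsea.ci.lower.scaled", "rmsea.ci.upper.scaled", "srmr"))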

Next, we tested progressively more restrictive multiple-group CFAs to determine whether factor loadings (λs) were equal across groups (weak/metric invariance) and whether both λs and item thresholds (τs) were equal across groups (strong/scalar invariance). We then examined whether λs, τs, and item residual variances (εs) were equal across groups (strict invariance). If scalar or strict invariance was attained for a scale, we assessed equality of factor variances and covariances as well as factor means (Steenkamp and Baumgartner, 1998). A statistically significant WLSMV Δχ2 difference test (i.e., the χ2 for the constrained model exceeding that of the unconstrained model) indicated that the constrained model fit substantively worse than the unconstrained model (Bollen, 1989). However, because Δχ2 is sensitive to sample size even when changes in misfit are trivial, we relied on changes in practical fit indices to evaluate measurement invariance at each step (Cheung and Rensvold, 2002; Meade and Bauer, 2007; Meade et al., 2008). Changes of ΔCFI > −.01, ΔRMSEA < +0.015, and ΔSRMR < +0.03 from the unconstrained to the constrained model indicated multiple-group measurement invariance (Chen, 2007; Cheung and Rensvold, 2002). Furthermore, model fit differences were judged nonsignificant if the models had overlapping 90% RMSEA confidence intervals (Wang and Russell, 2005).
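
The progressively constrained models described here can be specified with lavaan’s group.equal argument and compared with scaled χ2 difference tests and changes in the practical fit indices; the sketch below continues the hypothetical ERQ example and is not the authors' exact code.

# Helper that refits the ERQ model with a given set of equality constraints across countries
run_inv <- function(constraints) {
  cfa(erq_model, data = dat, group = "country",
      ordered = paste0("erq", 1:10), estimator = "WLSMV",
      parameterization = "theta", group.equal = constraints)
}

fit_weak   <- run_inv("loadings")                                  # weak (metric) invariance
fit_strong <- run_inv(c("loadings", "thresholds"))                 # strong (scalar) invariance
fit_strict <- run_inv(c("loadings", "thresholds", "residuals"))    # strict (Level 1) invariance

# Scaled chi-square difference tests across the nested models
lavTestLRT(fit_config, fit_weak, fit_strong, fit_strict)

# Change in CFI between adjacent models (computed analogously for RMSEA and SRMR)
fitMeasures(fit_strong, "cfi.scaled") - fitMeasures(fit_weak, "cfi.scaled")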

2. Results

Descriptive statistics and correlations amongst all study variables by country and gender can be found in Tables S1 and S2, respectively.

2.1. Multiple-group CFA across countries

Table 1 presents each step of the measurement invariance analyses for the three ER measures in the U.S. and India. First, we established baseline models with no equality constraints across the two samples to test configural invariance. Based on the pattern of fit indices, the ERQ, RRS-10, and FFMQ-AS showed good configural model fit in both samples. Furthermore, on all measures, most standardized factor loadings exceeded 0.60 and all surpassed 0.40 in both groups (all p values < 0.001). Table 2 displays the comparisons of the measurement invariance models shown in Table 1. Based on the multiple-group CFA across countries, the ERQ showed evidence of a lack of metric invariance. Compared to the configural invariance model, the weak invariance model produced a significant χ2 difference test (Δχ2(df = 7) = 65.75, p < .05). Although the two models did not differ substantially on CFI (Δ = −.008), RMSEA (Δ = .03) and SRMR (Δ = .02) were notably worse, indicating poorer model fit. Examination of factor loadings revealed that the loading was substantially lower in the U.S. sample than in the Indian sample for ERQ Item 4, “When I am feeling positive emotions, I am careful not to express them” (U.S.: 0.58 vs. India: 1.18). Accordingly, we fit a partial invariance model by freeing the factor loading for Item 4; the resulting model fit the data well (Table 2) and did not differ substantially from the full configural model, indicating partial metric (i.e., weak) invariance. With partial weak invariance satisfied, we tested subsequent levels of measurement invariance while leaving Item 4 unconstrained and found that the model met partial strong invariance as well as partial strict invariance at Level 1 (equal error variances) and Level 2 (equal factor variances). However, the ERQ (with the Item 4 factor loading freed) failed to meet partial Level 3 strict invariance, indicating that latent factor means differed substantially across the two groups.
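
A partial invariance model of the kind described above can be specified in lavaan by exempting the offending parameter via group.partial; the sketch below frees the Item 4 loading under the same hypothetical setup and is not the authors' exact code.

# Partial weak invariance: all loadings constrained equal across countries
# except the suppression loading of ERQ Item 4, which is freely estimated in each group
fit_partial_weak <- cfa(erq_model, data = dat, group = "country",
                        ordered = paste0("erq", 1:10), estimator = "WLSMV",
                        parameterization = "theta",
                        group.equal = "loadings",
                        group.partial = "suppression =~ erq4")

# Compare against the configural model; a nonsignificant difference and small changes
# in CFI, RMSEA, and SRMR support partial metric (weak) invariance
lavTestLRT(fit_config, fit_partial_weak)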

Table 1.

Configural, weak, strong, and strict partial invariance models for each of three emotion regulation measures in the United States and India.

Model WLSMV χ2 df p RMSEA (90% CI) CFI SRMR
Emotion Regulation Questionnaire (10 item, 2-factor model)
 1a. ERQ – India 22.84 26 .642 .000 (.000, .061) 1.000 .080
 1b. ERQ – United States 21.29 26 .727 .000 (.000, .054) 1.000 .074
 1. Configural: ERQ across countries 44.13 52 .773 .000 (.000, .041) 1.000 .071
 2. Weak (metric: loadings equal)* 59.02 58 .438 .012 (.000, .057) .999 .084
 3. Strong (scalar: thresholds equal)* 62.21 64 .540 .000 (.000, .051) 1.000 .086
 4. Error variances equal* 72.75 72 .453 .000 (.000, .051) .999 .086
 5. Factor variances equal* 108.68 74 .005 .062 (.035, .086) .959 .116
 6. Factor means equal* 232.76 75 <.001 .131 (.112, .151) .815 .164
Five-Facet Mindfulness Questionnaire-Acceptance Subscale (8-item; 1-Factor Model)
 1a. FFMQ-AS – India 9.53 20 .976 .000 (.000, .000) 1.000 .034
 1b. FFMQ-AS – United States 3.61 20 .727 .000 (.000, .000) 1.000 .042
 1. Configural: FFMQ-AS across countries 13.14 40 1.000 .000 (.000, .000) 1.000 .034
 2. Weak (metric: loadings equal)** 24.25 45 .995 .000 (.000, .000) .999 .057
 3. Strong (scalar: thresholds equal)** 27.93 50 .995 .000 (.000, .000) 1.000 .060
 4. Error variances equal** 33.66 56 .992 .000 (.000, .000) 1.000 .060
 5. Factor variances equal** 125.63 57 <.001 .108 (.083, .134) .963 .131
 6. Factor means equal** 404.72 63 <.001 .230 (.209, .252) .814 .225
Ruminative Response Scale (10-item; 2-Factor Model)
 1a. RRS – India 31.48 34 .592 .000 (.000, .063) 1.000 .080
 1b. RRS – United States 20.97 34 .961 .000 (.000, .000) 1.000 .067
 1. Configural: RRS across countries 52.45 68 .918 .000 (.000, .022) 1.000 .069
 2. Weak (metric: loadings equal) 76.70 97 .456 .010 (.000, .057) .999 .820
 3. Strong (scalar: thresholds equal)*** 86.37 83 .378 .020 (.000, .059) .997 .087
 4. Error variances equal*** 95.95 92 .368 .020 (.000, .058) .996 .093
 5. Factor variances equal*** 181.49 94 < .001 .096 (.075, .117) .920 .127
 6. Factor means equal*** 240.20 97 < .001 .121 (.102, .140) .869 .145

Note. N = 246. WLSMV = weighted least squares estimator with means and variances adjusted; RMSEA = root mean square error of approximation; CI = confidence interval; CFI = comparative fit index; SRMR = standardized root mean square residual.

* Item 4 was left free to vary: “When I am feeling positive emotions, I am careful not to express them.”

** Items 3 and 4 were left free to vary: “I believe some of my thoughts are abnormal or bad and I shouldn’t think that way.”; “I make judgments about whether my thoughts are good or bad.”

*** Item 5 was left free to vary: “Write down what you are thinking and analyze it.”

Table 2.

Tests of partial measurement invariance models for each of three emotion regulation measures across the United States and India.

Model Comparisons ΔWLSMV χ2 Δdf p ΔRMSEA ΔCFI ΔSRMR
Emotion Regulation Questionnaire (10-item; 2-Factor Model)
 Configural vs. Metric invariance 14.89 6.000 .080 .010 −.001 .010
 Metric invariance vs. Scalar invariance 3.18 6.000 .480 −.010 .001 .001
 Scalar invariance vs. Error variances equal 3.18 6.000 .070 .000 .001 .010
 Factor variances equal vs. Error variances equal 35.93 2.000 .080 .050 .040 .020
Five-Facet Mindfulness Questionnaire–Acceptance Subscale (8-item; 1-Factor Model)
 Configural vs. Metric invariance 11.12 5.000 .080 .000 .000 .010
 Metric invariance vs. Scalar invariance 3.68 5.000 .240 .000 .000 .003
 Scalar invariance vs. Error variances equal 5.73 6.000 .100 .000 .000 .007
 Factor variances equal vs. Error variances equal 91.97 1.000 .010 .110 .040 .060
Ruminative Response Scale (10-item; 2-Factor Model)
 Configural vs. Metric invariance 24.26 8.000 .040 .009 −.001 .010
 Metric invariance vs. Scalar invariance 9.67 7.000 .150 .010 −.002 .005
 Scalar invariance vs. Error variances equal 9.58 9.000 .150 .001 −.001 .006
 Factor variances equal vs. Error variances equal 85.54 2.000 < .001 .080 .080 .030

Note. N = 246. WLSMV = weighted least squares estimator with means and variances adjusted; RMSEA = root mean square error of approximation; CI = confidence interval; CFI = comparative fit index; SRMR = standardized root mean square residual. Bold values indicate significant changes in the practical fit indices (ΔCFI ≤ −.010, ΔRMSEA ≥ +.015, and ΔSRMR ≥ +.030 from the unconstrained to constrained model).

Results of the multiple-group CFA across countries for the RRS-10 comparing the configural model to the weak invariance model revealed a significant χ2 difference test (Δχ2(df = 8) = 76.70, p < .05). However, compared to the configural invariance model, the weak invariance model did not differ substantially in fit on RMSEA (Δ = .01), SRMR (Δ = .01), or CFI (Δ = .00), indicating that the factor loadings were equivalent across the two cultural groups. With weak invariance satisfied, we proceeded to test strong (scalar) measurement invariance. Compared to the weak invariance model, the strong invariance model produced a significant χ2 difference test (Δχ2(df = 8) = 50.55, p < .001) and differed substantially on CFI (Δ = −.04), RMSEA (Δ = .06), and SRMR (Δ = .02), indicating that the RRS-10 did not pass the test of strong cross-cultural measurement invariance. Examination of item thresholds revealed that the thresholds for Item 5, “Write down what you are thinking and analyze it,” differed significantly across the two groups. Specifically, the threshold (intercept) for Item 5 was significantly lower in the U.S. sample (b = 1.51, SE = 0.08) than in the Indian sample (b = 3.02, SE = 0.08). Accordingly, we tested the additional levels of invariance while leaving Item 5 freely estimated at each level. The RRS-10 showed evidence of partial strong invariance and partial strict invariance at Levels 1 and 2, although it failed to meet partial Level 3 strict invariance, indicating that factor means were not equal even after freeing Item 5.

Results of the multiple-group CFA across countries for the FFMQ-AS comparing the configural model to the weak invariance model produced a significant χ2 difference test (Δχ2(df = 7) = 60.68, p < .001). Changes in RMSEA (Δ = .07) and SRMR (Δ = .06) were substantial between the two models, although the change in CFI was less notable (Δ = −.01). Examination of λs in both groups indicated that factor loadings were lower in the Indian sample than in the U.S. sample on Item 3, “I believe some of my thoughts are abnormal or bad and I shouldn’t think that way,” and Item 4, “I make judgments about whether my thoughts are good or bad.” Thus, we fit a partial weak invariance model leaving Items 3 and 4 freely estimated. The resulting model fit well (Table 1) and did not differ substantially from the full configural invariance model (Table 2). With partial weak invariance satisfied, we proceeded to test strong and strict invariance while keeping Items 3 and 4 freely estimated throughout. Results revealed that the FFMQ-AS met partial strong invariance, partial Level 1 strict invariance, and partial Level 2 strict invariance. However, like the ERQ and RRS-10, the FFMQ-AS failed to meet partial Level 3 strict invariance, indicating that the factor means of all three measures were not invariant across countries. This also implies that the relations among latent factors within the RRS-10 and ERQ were not equivalent across groups. Latent inter-factor correlations (rs) within each multi-factor measure were as follows: ERQ (U.S.: r = −.141; India: r = .623); RRS-10 (U.S.: r = .228; India: r = .136).1,2

2.2. Multiple-group CFA across gender

Tables 3 and 4 display gender invariance findings. Overall, global fit indices showed good fit for all measures in both males and females. On all measures, most standardized λs exceeded .60 and all surpassed .40 in both samples (all p values < .001). Based on the multiple-group CFAs across gender, we found full strict measurement invariance for all ER measures (equivalent factor loadings, item thresholds, error variances, factor variances and covariances, and factor means) except the FFMQ-AS. Based on ΔRMSEA = .03, the FFMQ-AS failed to meet Level 2 strict invariance (equal factor variances) across gender. For each multi-factor ER scale (ERQ and RRS-10), the latent inter-factor correlations were as follows: ERQ (males: r = .312; females: r = .189); RRS-10 (males: r = .311; females: r = .414).

Table 3.

Configural, weak, strong, and strict invariance models for each of three emotion regulation measures across gender.

Model WLSMV χ2 df p RMSEA (90% CI) CFI SRMR
Emotion Regulation Questionnaire (10-item, 2-factor model)
 1a. ERQ – Males 23.73 26 .592 .000 (.000, .066) 1.000 .079
 1b. ERQ – Females 14.78 26 .961 .000 (.000, .000) 1.000 .069
 1. Configural: ERQ across gender 38.51 52 .918 .000 (.000, .024) 1.000 .068
 2. Weak (metric: loadings equal) 44.36 59 .922 .000 (.000, .022) 1.000 .073
 3. Strong (scalar: thresholds equal) 50.49 66 .921 .000 (.000, .021) 1.000 .077
 4. Error variances equal 54.53 75 .964 .000 (.000, .000) 1.000 .082
 5. Factor variances equal 68.84 77 .735 .000 (.000, .043) 1.000 .092
 6. Factor means equal 71.27 78 .692 .000 (.000, .046) 1.000 .094
Five-Facet Mindfulness Questionnaire–Acceptance Subscale (8-item; 1-Factor Model)
 1a. FFMQ-A – Males 7.52 20 .995 .000 (.000, .000) 1.000 .042
 1b. FFMQ-A – Females 4.49 20 1.000 .000 (.000, .000) 1.000 .034
 1. Configural: Acceptance across gender 12.01 40 1.000 .000 (.000, .000) 1.000 .035
 2. Weak (metric: loadings equal) 17.61 47 1.000 .000 (.000, .000) 1.000 .042
 3. Strong (scalar: thresholds equal) 19.75 54 1.000 .000 (.000, .000) 1.000 .045
 4. Error variances equal 22.28 62 1.000 .000 (.000, .000) 1.000 .048
 5. Factor variances equal 69.70 63 .262 .032 (.000, .070) 0.998 .083
 6. Factor means equal 69.70 63 .262 .032 (.000, .070) 0.998 .083
Ruminative Response Scale (10-item; 2-Factor Model)
 1a. RRS – Males 22.76 34 .929 .000 (.000, .021) 1.000 .061
 1b. RRS – Females 8.97 34 1.000 .000 (.000, .000) 1.000 .043
 1. Configural: RRS across gender 31.74 68 1.000 .000 (.000, .000) 1.000 .049
 2. Weak (metric: loadings equal) 44.96 76 .998 .000 (.000, .000) 1.000 .058
 3. Strong (scalar: thresholds equal) 49.12 84 .999 .000 (.000, .000) 1.000 .061
 4. Error variances equal 57.90 94 .999 .000 (.000, .000) 1.000 .067
 5. Factor variances equal 75.23 96 .942 .000 (.000, .010) 1.000 .076
 6. Factor means equal 98.51 97 .438 .000 (.012, .054) 1.000 .088

Note. N = 246. WLSMV = weighted least squares estimator with means and variances adjusted; RMSEA = root mean square error of approximation; CI = confidence interval; CFI = comparative fit index; SRMR = standardized root mean square residual.

Table 4.

Tests of measurement invariance models for emotion regulation measures across gender.

Model Comparisons ΔWLSMV χ2 Δdf p ΔRMSEA ΔCFI ΔSRMR
Emotion Regulation Questionnaire (10-item; 2-Factor Model)
 Configural vs. Metric invariance 5.85 7.000 .560 .000 .000 .010
 Metric invariance vs. Scalar invariance 6.13 7.000 .520 .000 .000 .000
 Scalar invariance vs. Error variances equal 4.04 9.000 .910 .000 .000 .010
 Factor variances equal vs. Error variances equal 14.31 2.000 < .001 .000 .000 .010
 Factor means equal vs. Factor variances equal 2.43 1.000 .120 .000 .000 .000
Five-Facet Mindfulness Questionnaire–Acceptance Subscale (8-item; 1-Factor Model)
 Configural vs. Metric invariance 5.59 7.000 .590 .000 .000 .010
 Metric invariance vs. Scalar invariance 2.14 7.000 .950 .000 .000 .000
 Scalar invariance vs. Error variances equal 2.54 8.000 .960 .000 .000 .000
 Factor variances equal vs. Error variances equal 47.42 1.000 < .001 .030 .000 .040
Ruminative Response Scale (10-item; 2-Factor Model)
 Configural vs. Metric invariance 13.22 8.000 .110 .000 .000 .010
 Metric invariance vs. Scalar invariance 4.16 8.000 .840 .000 .000 .000
 Scalar invariance vs. Error variances equal 8.79 10.000 .550 .000 .000 .010
 Factor variances equal vs. Error variances equal 17.32 2.000 < .001 .000 .000 .010
 Factor means equal vs. Factor variances equal 23.28 1.000 < .001 .010 .000 .010

Note. N = 246. WLSMV = weighted least squares estimator with means and variances adjusted; RMSEA = root mean square error of approximation; CI = confidence interval; CFI = comparative fit index; SRMR = standardized root mean square residual. Bold values indicate significant changes in the practical fit indices (ΔCFI ≤ −.010, ΔRMSEA ≥ +.015, and ΔSRMR ≥ +.030 from the unconstrained to constrained model).

3. Discussion

The present study provides the first investigation of cross-cultural measurement invariance of two established measures of ER (the ERQ and RRS-10) and one novel measure (the FFMQ-AS) across community samples in the U.S. and India. Establishing cross-cultural and gender measurement equivalence of transdiagnostic constructs is an essential yet overlooked part of developing a generalizable science of transdiagnostic factors in the study of psychopathology. The configural factor models provided a good and parsimonious representation of the data. Across countries, at least partially invariant factor loadings, item thresholds, and item residual variances were established for the ERQ, RRS-10, and FFMQ-AS. Overall, the measures exhibited a high degree of construct compatibility across the two cultural groups, in line with recent work replicating the factor structure of the Difficulties in Emotion Regulation Scale (DERS) in India (Bhatnagar et al., 2020). Such construct compatibility is likely due to the longstanding influence of the British occupation of India, as well as India’s rapid globalization, which has Westernized many aspects of Indian society while still embracing traditional collectivistic principles (Rao et al., 2013).

Nevertheless, no measure achieved full cross-cultural invariance. This result is unsurprising, as full measurement equivalence is an exceedingly stringent criterion that may not be applicable across diverse cultural contexts (De Beuckelaer and Swinnen, 2011; Hong et al., 2017). Indeed, whereas measurement invariance testing establishes sufficient similarity to lay the groundwork for cross-cultural research, it can also teach us how constructs may differ across cultures, thereby aiding theory building for future research. One particularly interesting result in this respect was that the latent inter-factor associations in the ERQ and RRS-10 differed across countries. Specifically, in the U.S. sample, suppression and reappraisal were modestly negatively correlated (r = −.14), whereas in the Indian sample they were strongly positively correlated (r = .62). These results dovetail with prior findings that culture can influence the strength of norms around emotion regulation strategy selection (e.g., as would be evidenced by mean differences across groups) as well as the functional relationship between emotion regulation strategies (e.g., as would be evinced by their interrelations; Matsumoto et al., 2008). For example, in India, a culture that values the maintenance of social order, using suppression as an initial strategy while one considers how to reappraise an event may foster the most socially appropriate emotional response, resulting in a stronger linkage between the two strategies. In the U.S., by contrast, a country that scores higher on values such as egalitarianism and autonomy, suppression and reappraisal may be more orthogonal to one another, as their usage may be linked more to individual preference and habit (Matsumoto et al., 2008). Another interesting implication of these findings is how differences in inter-factor correlations may be linked to differences in how these ER strategies relate to outcomes. For example, prior work suggests that suppression may be more maladaptive for European Americans than for Asian Americans or Hong Kong Chinese (Butler et al., 2007; Kwon and Kim, 2019; Nam et al., 2017; Soto et al., 2011; Su et al., 2012). Future work investigating the sequential and temporal relationship between suppression and reappraisal across cultures could shed light on these questions.

For each measure, at least one item was not invariant across cultures. Specifically, ERQ Item 4, which assesses suppression of positive emotions, was not invariant across the two cultural groups, despite the remaining items showing strong construct compatibility. The higher loading of this item on the suppression factor in the Indian sample may suggest that suppressing positive emotions plays a stronger role in the latent construct of emotional suppression for Indian respondents than for Americans. These findings dovetail with prior work on emotional expression and display rules across cultures, which suggests that Americans are more likely to express positive (vs. negative) emotions (Oishi, 2016; Tsai et al., 2006), whereas East Asians tend to suppress both positive and negative emotions (Markus and Kitayama, 1991). The present results may suggest that similar display rules vis-à-vis positive emotional expression operate in Indian culture as well, perhaps owing to similarity in self-construal across East Asia and India (Kapoor et al., 2003).

For the RRS-10, although the full measure met metric invariance, Item 5, “Write down what you are thinking and analyze it,” failed to meet criteria for strong (scalar) invariance, even though the remaining items displayed strong invariance as well as Level 1 and Level 2 strict invariance. For Item 5, the threshold (intercept) was higher in the Indian sample than in the U.S. sample. Future research should investigate the nature of this differential item functioning by collecting qualitative data on how the item is interpreted in each group. What the present study does show is that the full RRS-10 met metric invariance and that all items except Item 5 met criteria up through Level 2 strict invariance, suggesting the measure can be used in cross-cultural comparisons between India and the U.S. if Item 5 is removed.

The FFMQ-AS demonstrated the least equivalence of the measures examined. Specifically, two of the eight items failed to exhibit measurement invariance: Item 3, “I believe some of my thoughts are abnormal or bad and I shouldn’t think that way,” and Item 4, “I make judgments about whether my thoughts are good or bad,” did not meet metric or higher levels of measurement equivalence. We were, however, able to establish a partial invariance solution by leaving these items free to vary across groups. This indicates that the remaining six items had good, if not excellent, cross-national construct equivalence, having met criteria up through Level 2 strict invariance. Both of the non-invariant FFMQ-AS items dealt with accepting thoughts rather than emotions. Although the configural model showed excellent fit in both samples (indicating that the factor structure behaved similarly), factor loadings were lower in the Indian sample than in the U.S. sample for the two items noted above. One possible interpretation is that acceptance of thoughts and acceptance of emotions are more separable in the Indian sample than in the U.S. sample.

We did find full gender invariance for the ERQ and RRS-10, whereas the FFMQ-AS displayed invariance up through Level 1 strict invariance (equal error variances). These findings provide strong confidence that these assessments have equivalent measurement properties across men and women. They also parallel and extend prior research that found gender equivalence for the ERQ (Liu et al., 2017), RRS-10 (Arana and Rice, 2020; Whisman et al., 2020; Zainal et al., 2021), and FFMQ-AS (Abujaradeh et al., 2019). Our findings of partial cross-cultural measurement invariance and consistent measurement equivalence across gender are also similar to findings from the personality literature (Dong and Dumas, 2020).

The present study is tempered by several limitations. First, findings from community adults recruited through an online survey platform may not extend to other U.S. or Indian populations. Although MTurk samples have been shown to provide reasonable representations of the general populations of the U.S. and India (Boas et al., 2020; Redmiles et al., 2019), which together number roughly 1.66 billion people, our sample may not capture the experiences of, for example, non-English-speaking Indians or communities without internet access. Thus, future research should seek to replicate our work in additional Indian samples. Second, the sample sizes were relatively small for both groups, and the findings should be replicated in larger samples to confer greater confidence in their generalizability. Nonetheless, simulation studies suggest that our strong factor loadings (mostly ≥ 0.60), high indicator-to-factor ratio per construct (i.e., 6-item ERQ-Reappraisal, 4-item ERQ-Suppression, 10-item RRS-10, 8-item FFMQ-AS), low differential item functioning, and sample size of > 100 participants per group offer ≥ 80% power to detect measurement invariance (Meade and Bauer, 2007; Meade et al., 2008). Further, the current study parameters provided a 100% chance of attaining properly converging estimates (Marsh et al., 1998) and a very low probability (0.7–5%) of Type I error (French and Finch, 2006). Thus, our final sample sizes were adequately powered to test for varying degrees of measurement invariance. Third, the recruitment of an English-speaking sample enabled us to test the measures in the original language of validation, thereby sidestepping the issue of linguistic equivalence; however, research on the applicability of these latent factor structures in Indian populations who do not speak English should be pursued. Finally, the compared samples differed in age and gender composition. Although we found invariance across age and gender in the pooled sample, we were unable to conduct such analyses within each country sample due to insufficient power. Future studies should try to match samples on demographic variables when examining measurement invariance, and such efforts should also involve collecting data on education level.

Limitations notwithstanding, the present study has multiple strengths. It is the first study to offer preliminary support for measurement invariance of the ERQ, RRS-10, and FFMQ-AS across the U.S. and India. Notably, no other research group has conducted cross-cultural invariance studies of the ERQ or RRS-10 in India to date. Moreover, although prior work attempted to establish cross-cultural equivalence of the FFMQ-AS in Indian samples, the data failed to meet even configural invariance and were thus excluded from final analyses (Karl et al., 2020). We hope this study will catalyze further investigation of transdiagnostic ER processes across distinct cultural groups and contribute to building meaningful theories about ER across cultures. Such research can aid in advancing work on ER interventions and their cultural adaptations.

Supplementary Material

supplement

Acknowledgements

Funding

The first author was supported by the National Institute on Drug Abuse [T32 DA017629, PI: Linda M. Collins].

Role of funding source

The NIDA did not have any role in study design, collection, analysis, and interpretation of the data; writing the report; and the decision to submit the report for publication.

Footnotes

Declaration of Competing Interest

The authors have no conflicts of interest to report.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.jad.2021.04.089.

1. To address a reviewer’s concern about whether individuals who identified with ethnic/racial categories other than “Indian” should be included in the Indian sample, we dropped the participants in the Indian sample who self-identified with any racial/ethnic category other than “Indian” (N = 21) and re-ran the cross-cultural measurement invariance analyses. Results remained the same and are reported in the supplement (Tables S5 & S6).

2. Due to sample differences in age, we ran an additional age invariance analysis in which age was coded as a dichotomous variable using a median split (Mdnage = 33.00), with age < 33 coded as 0 and age > 33 coded as 1. Results showed that all measures met criteria up through Level 1 strict invariance across age, in addition to the results across culture and gender noted in the manuscript. The invariance analysis for age is included in the supplement (Tables S3 & S4).

References

  1. Abujaradeh H, Colaianne BA, Roeser RW, Tsukayama E, Galla BM, 2019. Evaluating a short-form Five Facet Mindfulness Questionnaire in adolescents: evidence for a four-factor structure and invariance by time, age, and gender. Int. J. Behav. Dev 44, 20–30. 10.1177/0165025419873039. [DOI] [Google Scholar]
  2. Aldao A, 2012. Emotion regulation strategies as transdiagnostic processes: a closer look at the invariance of their form and function [Estrategias de regulación emocional como procesos transdiagnosticos: una visión más detenida sobre la invarianza de su forma y función]. Revista de Psicopatologia y Psicologia Clinica 17. 10.5944/rppc.vol.17.num.3.2012.11843. [DOI] [Google Scholar]
  3. Aldao A, Nolen-Hoeksema S, Schweizer S, 2010. Emotion-regulation strategies across psychopathology: a meta-analytic review. Clin. Psychol. Rev 30, 217–237. 10.1016/j.cpr.2009.11.004. [DOI] [PubMed] [Google Scholar]
  4. Aldao A, Gee DG, De Los Reyes A, Seager I, 2016. Emotion regulation as a transdiagnostic factor in the development of internalizing and externalizing psychopathology: current and future directions. Dev. Psychopathol 28, 927–946. 10.1017/S0954579416000638. [DOI] [PubMed] [Google Scholar]
  5. Arana FG, Rice KG, 2020. Cross-Cultural Validity of the Ruminative Responses Scale in Argentina and the United States. Assessment 27, 309–320. 10.1177/1073191117729204. [DOI] [PubMed] [Google Scholar]
  6. Baer RA, 2006. Using self-report assessment methods to explore facets of mindfulness. Assessment 13, 27–45. 10.1177/1073191105283504. [DOI] [PubMed] [Google Scholar]
  7. Behrend TS, Sharek DJ, Meade AW, Wiebe EN, 2011. The viability of crowdsourcing for survey research. Behav. Res. Methods 43, 800–813. 10.3758/s13428-011-0081-0. [DOI] [PubMed] [Google Scholar]
  8. Bentler PM, 1990. Comparative fit indexes in structural models. Psychol. Bull 107, 238–246. 10.1037/0033-2909.107.2.238. [DOI] [PubMed] [Google Scholar]
  9. Bhatnagar P, Shukla M, Pandey R, 2020. Validating the factor structure of the Hindi version of the Difficulties in Emotion Regulation Scale. J. Psychopathol. Behav. Assess 42, 377–396. 10.1007/s10862-020-09796-6. [DOI] [Google Scholar]
  10. Boas TC, Christenson DP, Glick DM, 2020. Recruiting large online samples in the United States and India: facebook, Mechanical Turk, and Qualtrics. Political Sci. Res. Methods 8, 232–250. 10.1017/psrm.2018.28. [DOI] [Google Scholar]
  11. Bollen KA, 1989. Structural Equations With Latent Variables. Wiley, New York. [Google Scholar]
  12. Browne MW, Cudeck R, 1993. Alternate ways of assessing model fit. In: Bollen KA, Long JS (Eds.), Testing Equation Models. Sage, Newbury Park, CA, pp. 136–162. [Google Scholar]
  13. Buchanan EM, Scofield JE, 2018. Methods to detect low quality data and its implication for psychological research. Behav. Res. Methods 50, 2586–2596. 10.3758/s13428-018-1035-6. [DOI] [PubMed] [Google Scholar]
  14. Butler EA, Lee TL, Gross JJ, 2007. Emotion regulation and culture: are the social consequences of emotion suppression culture-specific? Emotion 7, 30–48. 10.1037/1528-3542.7.1.30. [DOI] [PubMed] [Google Scholar]
  15. Carpenter RW, Trull TJ, 2013. Components of emotion dysregulation in borderline personality disorder: a review. Curr. Psychiatry Rep 15, 335. 10.1007/s11920-012-0335-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Chen FF, 2007. Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Model 14, 464–504. 10.1080/10705510701301834. [DOI] [Google Scholar]
  17. Cheung GW, Rensvold RB, 2002. Evaluating goodness-of-fit indexes for testing measurement invariance. Struct. Equ. Model 9, 233–255. 10.1207/S15328007SEM0902_5. [DOI] [Google Scholar]
  18. Christopher MS, Neuser NJ, Michael PG, Baitmangalkar A, 2012. Exploring the psychometric properties of the Five Facet Mindfulness Questionnaire. Mindfulness 3, 124–131. 10.1007/s12671-011-0086-x. [DOI] [Google Scholar]
  19. Cludius B, Mennin D, Ehring T, 2020. Emotion regulation as a transdiagnostic process. Emotion 20, 37–42. 10.1037/emo0000646. [DOI] [PubMed] [Google Scholar]
  20. Cuthbert BN, 2015. Research domain criteria (RDoC): toward future psychiatric nosologies. Dialogues Clin. Neurosci 17, 89–97. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. De Beuckelaer A, Swinnen G, 2011. Biased latent variable mean comparisons due to measurement non-invariance: a simulation study. In Davidov E, Schmidt P & Billiet J (Eds.). Methods and Applications in Cross-Cultural Analysis. Taylor & Francis, New York, NY, pp. 117–148. [Google Scholar]
  22. Dimitrov DM, 2017. Testing for factorial invariance in the context of construct validation. Meas. Eval. Counsel. Dev 43, 121–149. 10.1177/0748175610373459. [DOI] [Google Scholar]
  23. Dong Y, Dumas D, 2020. Are personality measures valid for different populations? A systematic review of measurement invariance across cultures, gender, and age. Pers. Individual Diff 160. 10.1016/j.paid.2020.109956. [DOI] [Google Scholar]
  24. Fernandez KC, Jazaieri H, Gross JJ, 2016. Emotion regulation: a transdiagnostic perspective on a new RDoC domain. Cognit. Ther. Res 40, 426–440. 10.1007/s10608-016-9772-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Ford BQ, Lam P, John OP, Mauss IB, 2018. The psychological health benefits of accepting negative emotions and thoughts: laboratory, diary, and longitudinal evidence. J. Pers. Soc. Psychol 115, 1075–1092. 10.1037/pspp0000157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. French BF, Finch WH, 2006. Confirmatory factor analytic procedures for the determination of measurement invariance. Struct. Equ. Model.: A Multidiscip. J 13, 378–402. 10.1207/s15328007sem1303_3. [DOI] [Google Scholar]
  27. Gross JJ, 1999. Emotion regulation: past, present, future. Cognit. Emotion 13, 551–573. 10.1080/026999399379186. [DOI] [Google Scholar]
  28. Gross JJ, 1998. The emerging field of emotion regulation: an integrative review. Rev. Gen. Psychol 2, 271–299. 10.1037/1089-2680.2.3.271. [DOI] [Google Scholar]
  29. Gross JJ, John OP, 2003. Individual differences in two emotion regulation processes: implications for affect, relationships, and well-being. J. Pers. Soc. Psychol 85, 348–362. 10.1037/0022-3514.85.2.348. [DOI] [PubMed] [Google Scholar]
  30. Henrich J, Heine SJ, Norenzayan A, 2010. The weirdest people in the world? SSRN Electronic J. 10.2139/ssrn.1601785. [DOI] [PubMed] [Google Scholar]
  31. Höhne JK, Schlosser S, Krebs D, 2017. Investigating cognitive effort and response quality of question formats in web surveys using paradata. Field methods 29, 365–382. . [DOI] [Google Scholar]
  32. Hong RY, Riskind JH, Cheung MWL, Calvete E, González-Díez Z, Atalay AA, Kleiman EM, 2017. The Looming Maladaptive Style Questionnaire: measurement invariance and relations to anxiety and depression across 10 countries. J. Anxiety Disord 49, 1–11. 10.1016/j.janxdis.2017.03.004. [DOI] [PubMed] [Google Scholar]
  33. Hu L, Bentler PM, 1999. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model 6, 1–55. 10.1080/10705519909540118. [DOI] [Google Scholar]
  34. Kapoor S, Hughes PC, Baldwin JR, Blue J, 2003. The relationship of individualism–collectivism and self-construals to communication styles in India and the United States. Int. J. Intercultural Rel 27, 683–700. 10.1016/j.ijintrel.2003.08.002. [DOI] [Google Scholar]
  35. Karl JA, Prado SMM, Gračanin A, Verhaeghen P, Ramos A, Mandal SP, Fischer R, 2020. The cross-cultural validity of the Five-Facet Mindfulness Questionnaire across 16 countries. Mindfulness 11, 1226–1237. 10.1007/s12671-020-01333-6. [DOI] [Google Scholar]
  36. Kline RB, 2016a. Chapter 11: estimation and local fit testing. In Kline RB (Ed.). Principles and Practice of Structural Equation Modeling, 4th ed. The Guilford Press, New York, pp. 231–261. [Google Scholar]
  37. Kline RB, 2016b. Chapter 12: global fit testing. In Kline RB (Ed.). Principles and Practice of Structural Equation Modeling, 4th ed. The Guilford Press, New York, pp. 262–299. [Google Scholar]
  38. Kring AM, Sloan DM (Eds.), 2010. Emotion Regulation and psychopathology: A transdiagnostic Approach to Etiology and Treatment. Guilford Press, N.Y. [Google Scholar]
  39. Kwon H, Kim YH, 2019. Perceived emotion suppression and culture: effects on psychological well-being. Int. J. Psychol 54, 448–453. 10.1002/ijop.12486. [DOI] [PubMed] [Google Scholar]
  40. Lei X, Zhong M, Liu Y, Xi C, Ling Y, Zhu X, Yi J, 2017. Psychometric properties of the 10-item ruminative response scale in Chinese university students. BMC Psychiatry 17, 152. 10.1186/s12888-017-1318-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Litman L, Robinson J, Abberbock T, 2017. TurkPrime.com: a versatile crowdsourcing data acquisition platform for the behavioral sciences. Behav. Res. Methods 49, 433–442. 10.3758/s13428-016-0727-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Liu W, Chen L, Tu X, 2017. Chinese adaptation of Emotion Regulation Questionnaire for Children and Adolescents (ERQ-CCA): a psychometric evaluation in Chinese children. Int. J. Psychol 52, 398–405. 10.1002/ijop.12233. [DOI] [PubMed] [Google Scholar]
  43. Lovett M, Bajaba S, Lovett M, Simmering MJ, 2018. Data quality from crowdsourced surveys: a mixed method inquiry into perceptions of amazon’s mechanical turk masters. Appl. Psychol 67, 339–366. 10.1111/apps.12124. [DOI] [Google Scholar]
  44. Markus HR, Kitayama S, 1991. Culture and the self: implications for cognition, emotion, and motivation. Psychol. Rev 98, 224–253. 10.1037/0033-295X.98.2.224. [DOI] [Google Scholar]
  45. Marsh HW, Hau KT, Balla JR, Grayson D, 1998. Is more ever too much? The number of indicators per factor in confirmatory factor analysis. Multivariate Behav. Res 33, 181–220. 10.1207/s15327906mbr3302_1. [DOI] [PubMed] [Google Scholar]
  46. Martín-Albo J, Valdivia-Salas S, Lombas AS, Jiménez TI, 2020. Spanish validation of the Emotion Regulation Questionnaire for Children and Adolescents (ERQ-CA): introducing the ERQ-SpA. J. Res. Adolesc 30, 55–60. 10.1111/jora.12465. [DOI] [PubMed] [Google Scholar]
  47. Mason W, & Suri S (2012). Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, 44, 1–23. doi: 10.3758/s13428-011-0124-6. [DOI] [PubMed] [Google Scholar]
  48. Matsumoto D, Yoo SH, Nakagawa S, members of the Multinational Study of Cultural Display, R., 2008. Culture, emotion regulation, and adjustment. J. Pers. Soc. Psychol 94, 925–937. 10.1037/0022-3514.94.6.925. [DOI] [PubMed] [Google Scholar]
  49. McDonald RP, Marsh HW, 1990. Choosing a multivariate model: noncentrality and goodness of fit. Psychol. Bull 107, 247–255. 10.1037/0033-2909.107.2.247. [DOI] [Google Scholar]
  50. Meade AW, Bauer DJ, 2007. Power and precision in confirmatory factor analytic tests of measurement invariance. Struct. Equ. Model 14, 611–635. 10.1080/10705510701575461. [DOI] [Google Scholar]
  51. Meade AW, Johnson EC, Braddy PW, 2008. Power and sensitivity of alternative fit indices in tests of measurement invariance. J. Appl. Psychol 93, 568–592. 10.1037/0021-9010.93.3.568. [DOI] [PubMed] [Google Scholar]
  52. Mehta A, Young G, Wicker A, Barber S, Suri G, 2017. Emotion regulation choice: differences in US and Indian populations. Int. J. Indian Psychol 4, 203–219 doi: 18.01.160/20170402. [Google Scholar]
  53. Muthén L, Muthén B, 2013. Mplus: Statistical Analysis with Latent Variables. User’s Guide (Version 7.11). Muthén and Muthén, Los Angeles, CA. [Google Scholar]
  54. Nam Y, Kim YH, Tam KK, 2017. Effects of emotion suppression on life satisfaction in Americans and Chinese. J. Cross-Cultural Psychol 49, 149–160. 10.1177/0022022117736525. [DOI] [Google Scholar]
  55. Newman MG, Llera SJ, 2011. A novel theory of experiential avoidance in generalized anxiety disorder: a review and synthesis of research supporting a Contrast Avoidance Model of worry. Clin. Psychol. Rev 31, 371–382. 10.1016/j.cpr.2011.01.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Nolen-Hoeksema S, Aldao A, 2011. Gender and age differences in emotion regulation strategies and their relationship to depressive symptoms. Pers. Individ. Diff 51, 704–708. 10.1016/j.paid.2011.06.012. [DOI] [Google Scholar]
  57. Nolen-Hoeksema S, Morrow J, 1991. A prospective study of depression and posttraumatic stress symptoms after a natural disaster: the 1989 Loma Prieta earthquake. J. Pers. Soc. Psychol 61, 115–121. 10.1037/0022-3514.61.1.115. [DOI] [PubMed] [Google Scholar]
  58. Oishi S, 2016. The experiencing and remembering of well-being: a cross-cultural analysis. Pers. Soc. Psychol. Bull 28, 1398–1406. 10.1177/014616702236871. [DOI] [Google Scholar]
  59. Paolacci G, Chandler J, Ipeirotis PG, 2010. Running experiments on Amazon Mechanical Turk. Judgment Decis. Mak 5, 411–419. [Google Scholar]
  60. Parshad RD, Bhowmick S, Chand V, Kumari N, Sinha N, 2016. What is India speaking? Exploring the “Hinglish” invasion. Physica A 449, 375–389. 10.1016/j.physa.2016.01.015. [DOI] [Google Scholar]
  61. Patierno K, Kaneda T, Greenbaum C, 2019. World Population Data Sheet. Retrieved April 19, 2020, from https://www.prb.org/worldpopdata/.
  62. Preece DA, Becerra R, Hasking P, McEvoy PM, Boyes M, Sauer-Zavala S, Gross JJ, 2021. The Emotion Regulation Questionnaire: psychometric properties and relations with affective symptoms in a United States general community sample. J. Affect. Disord 284, 27–30. 10.1016/j.jad.2021.01.071. [DOI] [PubMed] [Google Scholar]
  63. Putnick DL, Bornstein MH, 2016. Measurement invariance conventions and reporting: the state of the art and future directions for psychological research. Dev. Rev 41, 71–90. 10.1016/j.dr.2016.06.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. R Core Development Team, 2019. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Retrieved from https://www.r-project.org. [Google Scholar]
  65. Rao MA, Berry R, Gonsalves A, Hastak Y, Shah M, Roeser RW, 2013. Globalization and the identity remix among urban adolescents in India. J. Res. Adolesc 23, 9–24. 10.1111/jora.12002. [DOI] [Google Scholar]
  66. Redmiles EM, Kross S, Mazurek ML, 2019. How well do my results generalize? Comparing security and privacy survey results from MTurk, web, and telephone samples. In: Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP). San Francisco, CA, pp. 1326–1343. 10.1109/sp.2019.00014. [DOI] [Google Scholar]
  67. Rhemtulla M, Brosseau-Liard PÉ, Savalei V, 2012. When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychol. Methods 17, 354–373. 10.1037/a0029315. [DOI] [PubMed] [Google Scholar]
  68. Rosseel Y, 2012. lavaan: an R package for structural equation modeling. J. Stat. Softw 48, 1–36. 10.18637/jss.v048.i02. [DOI] [Google Scholar]
  69. Safran W, Kumar Sahoo A, Lal BV, 2008. Indian diaspora in transnational contexts: introduction. J. Intercultural Stud 29, 1–5. 10.1080/07256860701759907. [DOI] [Google Scholar]
  70. Siegle GJ, Moore PM, Thase ME, 2004. Rumination: one construct, many features in healthy individuals, depressed individuals, and individuals with lupus. Cognit. Ther. Res 28, 645–668. 10.1023/B:COTR.0000045570.62733.9f. [DOI] [Google Scholar]
  71. Sloan E, Hall K, Moulding R, Bryce S, Mildred H, Staiger PK, 2017. Emotion regulation as a transdiagnostic treatment construct across anxiety, depression, substance, eating and borderline personality disorders: a systematic review. Clin. Psychol. Rev 57, 141–163. 10.1016/j.cpr.2017.09.002. [DOI] [PubMed] [Google Scholar]
  72. Soto JA, Perez CR, Kim YH, Lee EA, Minnick MR, 2011. Is expressive suppression always associated with poorer psychological functioning? A cross-cultural comparison between European Americans and Hong Kong Chinese. Emotion 11, 1450–1455. 10.1037/a0023340. [DOI] [PubMed] [Google Scholar]
  73. Steenkamp JBEM, Baumgartner H, 1998. Assessing measurement invariance in cross-national consumer research. J. Consumer Res 25, 78–107. 10.1086/209528. [DOI] [Google Scholar]
  74. Steiger JH, 1990. Structural model evaluation and modification: an interval estimation approach. Multivar. Behav. Res 25, 173–180. 10.1207/s15327906mbr2502_4. [DOI] [PubMed] [Google Scholar]
  75. Su JC, Lee RM, Oishi S, 2012. The role of culture and self-construal in the link between expressive suppression and depressive symptoms. J. Cross-Cultural Psychol 44, 316–331. 10.1177/0022022112443413. [DOI] [Google Scholar]
  76. Taylor S, Zvolensky MJ, Cox BJ, Deacon B, Heimberg RG, Ledley DR, Cardenas SJ, 2007. Robust dimensions of anxiety sensitivity: development and initial validation of the Anxiety Sensitivity Index-3. Psychol. Assess 19, 176–188. 10.1037/1040-3590.19.2.176. [DOI] [PubMed] [Google Scholar]
  77. Thanoi W, Klainin-Yobas P, 2015. Assessing rumination response style among undergraduate nursing students: a construct validation study. Nurse Educ. Today 35, 641–646. 10.1016/j.nedt.2015.01.001. [DOI] [PubMed] [Google Scholar]
  78. Treynor W, Gonzalez R, Nolen-Hoeksema S, 2003. Rumination reconsidered: a psychometric analysis. Cognit. Ther. Res 27, 247–259. 10.1023/a:1023910315561. [DOI] [Google Scholar]
  79. Tsai JL, Knutson B, Fung HH, 2006. Cultural variation in affect valuation. J. Pers. Soc. Psychol 90, 288–307. 10.1037/0022-3514.90.2.288. [DOI] [PubMed] [Google Scholar]
  80. Wang M, Russell SS, 2005. Measurement equivalence of the Job Descriptive Index across Chinese and American workers: results from confirmatory factor analysis and item response theory. Educ. Psychol. Meas 65, 709–732. 10.1177/0013164404272494. [DOI] [Google Scholar]
  82. Webb TL, Miles E, Sheeran P, 2012. Dealing with feeling: a meta-analysis of the effectiveness of strategies derived from the process model of emotion regulation. Psychol. Bull 138, 775–808. 10.1037/a0027600. [DOI] [PubMed] [Google Scholar]
  83. Whisman MA, Judd CM, Whiteford NT, Gelhorn HL, 2013. Measurement invariance of the Beck Depression Inventory–Second Edition (BDI-II) across gender, race, and ethnicity in college students. Assessment 20, 419–428. 10.1177/1073191112460273. [DOI] [PubMed] [Google Scholar]
  84. Whisman MA, Miranda R, Fresco DM, Heimberg RG, Jeglic EL, Weinstock LM, 2020. Measurement invariance of the Ruminative Responses Scale across gender. Assessment 27, 508–517. 10.1177/1073191118774131. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Zainal NH, Newman MG, Hong RY, 2021. Cross-cultural and gender invariance of transdiagnostic processes in the United States and Singapore. Assessment 28 (2), 485–502. 10.1177/1073191119869832. [DOI] [PMC free article] [PubMed] [Google Scholar]
