Author manuscript; available in PMC: 2023 May 1.
Published in final edited form as: Res Social Adm Pharm. 2021 Jun 19;18(5):2817–2829. doi: 10.1016/j.sapharm.2021.06.012

Developing a Short Screener for Acquiescent Respondents

Sunghee Lee 1, Fernanda Alvarado-Leiton 2, Wenshan Yu 3, Rachel Davis 4, Timothy P Johnson 5
PMCID: PMC8684561  NIHMSID: NIHMS1722199  PMID: 34244077

Abstract

Background:

Acquiescent response style (ARS) refers to survey respondents’ tendency to choose agreeing response categories regardless of question content and is hypothesized to be a stable respondent trait. While what underlies acquiescence is debatable, the effect of ARS on measurement is clear: bias through artificially inflated agreement ratings. Because certain population subgroups (e.g., racial/ethnic minorities in the U.S.) are associated with systematically higher ARS, it raises concerns for research involving those groups. For this reason, it may be necessary to classify respondents as acquiescers or nonacquiescers, which allows analyzing the groups independently or accounting for this stylistic artifact. However, this classification is challenging because ARS is latent, observed only as a by-product of collected data.

Objectives:

To propose a screener that identifies respondents as acquiescers.

Methods:

Using survey data collected for ARS research, various ARS classification methods were compared for validity as well as practicality of implementation.

Results:

The majority of respondents were classified consistently as acquiescers or nonacquiescers across the various classification methods.

Conclusions:

We propose a method based on illogical responses given to two balanced, theoretically distant multi-item measurement scales as a screener.

Introduction

Acquiescent Response Style and Measurement Bias

Social science researchers have long observed a tendency among some survey respondents to systematically choose positive response options on rating scales, regardless of item content or directionality. This survey response style is known as acquiescence (Baumgartner and Steenkamp, 2001). Acquiescence appears to be most problematic when survey items include Likert-formatted response scales (Javeline, 1999; Saris et al., 2010). These items ask respondents to rate the level of their agreement on questions formatted as statements. Because respondents who acquiesce do so in a systematic manner, acquiescence has the potential to invoke bias in survey responses, such as through artificially increased agreement ratings. Acquiescence has been shown to threaten data validity for individual items, as well as relationships among variables and scale statistics (Billiet and McClendon, 2000; Baumgartner and Steenkamp, 2001; Welkenhuysen-Gybels, Billiet and Cambré, 2003; Billiet and Davidov, 2008; Lechner and Rammstedt, 2015; Plieninger, 2017). The effects of acquiescence generally appear to be small (Hoffmann, Mai and Cristescu, 2013; Plieninger, 2017); however, acquiescence-related bias may be more substantial in specific circumstances, such as when acquiescence is correlated with the variables of interest (Plieninger, 2017) or when surveying populations with high rates of acquiescence.

Acquiescence among U.S. Latinos

Within the U.S., acquiescence may be of particular concern for surveys with Latino populations, as several studies have observed higher rates of acquiescence among Latino respondents than among non-Latino whites (Aday, Chiu and Andersen, 1980; Ross and Mirowsky, 1984; Marín, Gamba and Marín, 1992; Warnecke et al., 1997; Davis et al., 2018). These findings are consistent with several country-level studies, which have shown higher use of acquiescence in Spanish-speaking Latin American countries than English-speaking countries, including the U.S. (Harzing, 2006; Meisenberg and Williams, 2008; Hoffmann, Mai and Cristescu, 2013). Since Latinos are expected to comprise 29% of the U.S. population by 2050 (Ennis, Ríos-Vargas and Albert, 2011), the potential for this systematic difference in how Latino and non-Latino white respondents respond raises concerns for data validity among both Latino-focused and national surveys.

It is not yet known why Latino respondents are more likely to acquiesce, although research indicates that acquiescence is a cultural trait (de Jong, Steenkamp, Fox, & Baumgartner, 2008; Harzing, 2006; Hoffmann et al., 2013; Johnson, Kulesa, Cho, & Shavitt, 2005). As with all cultural traits, however, individuals within cultural groups vary in their use of acquiescence. Among Latinos, respondent-level characteristics such as speaking Spanish (Marín, Gamba and Marín, 1992; Davis, Resnicow and Couper, 2011; Davis et al., 2018), lower acculturation (Marín, Gamba and Marín, 1992), endorsement of specific cultural traits (Davis, Resnicow and Couper, 2011), older age (Ross and Mirowsky, 1984; Davis et al., 2018), and lower education (Marín, Gamba and Marín, 1992; Davis, Resnicow and Couper, 2011; Davis et al., 2018) have been associated with higher acquiescence. The relationship between gender and acquiescence remains inconclusive, with some studies reporting higher acquiescence among males than females (Ross and Mirowsky, 1984) and others the opposite (Marín, Gamba and Marín, 1992). Acquiescence among Latino respondents does not appear to be influenced by the social distance between respondents and interviewers or by interviewer characteristics (Davis et al., 2018).

Acquiescence Response Style in Pharmacy Research

Surveys are an important tool in pharmacy and pharmaceutical research. For risk evaluation and mitigation strategies assessment surveys, the Food and Drug Administration (FDA) recommends against using Likert scales for measuring knowledge (Food and Drug Administration, 2019). However, Likert scales remain essential for measuring the behaviors, satisfaction, and opinions of not only patients and consumers, but also pharmacy professionals (e.g., Brewer, Campagna and Morrato, 2019; Fang, Yang, Feng, Ni & Zhang, 2011; Schuessler, Ruisinger, Hare, Prohaska, Melton, 2015; Thompson, Rao, Hayes & Purtill, 2019; Lindfelt, Ip, Gomez, Barnett, 2018). For example, the FDA Health and Diet Survey includes a series of questions using a Likert scale, such as, “First, I am going to read two statements about food. As I read each statement, please tell me how much you agree or disagree with it. Do you strongly agree, somewhat agree, somewhat disagree, or strongly disagree with it? ‘I am confident that I know how to choose healthy foods.’” Under the influence of acquiescence bias, when a respondent chooses “strongly agree,” it is difficult to determine whether this response reflects true confidence or acquiescent response style. Further, suppose that Latinos and non-Latinos are compared on this item and Latinos score higher than non-Latinos. It is impossible to know whether the score difference is because Latinos are more confident in choosing healthy foods or because they are more prone to acquiesce than non-Latinos.

Need to Classify Acquiescent Respondents

Many researchers may find it useful to assess whether a given survey respondent is prone to acquiesce, or may need an indicator of acquiescence that can be used to adjust survey responses during data collection. For example, survey data are often used to provide personalized health materials in a health behavior intervention with ongoing enrollment (Kreuter et al., 2013), whereas most response style detection methods can only be used during post-hoc analyses. In other situations, researchers may desire a quick and easy method to determine whether their data may be affected by acquiescence bias when assessing response styles is not a data analysis priority. Some researchers may also desire a screening tool to identify acquiescent respondents, such as when studying acquiescent response behaviors. In all of these situations, the ability to easily identify respondents who are more or less likely to acquiesce may be valuable for deciding when to adjust for acquiescent response bias post hoc or when to use methods that reduce acquiescence during data collection.

Difficulties in Classifying Acquiescent Respondents

There is no single accepted method or measure for classifying respondents as “acquiescers” or “nonacquiescers.” The main difficulties stem from two facts: acquiescence is an inherently latent trait, observed only as a by-product of asking questions where agreement is a response option; and agreeing with questions may reflect true attitudes instead of, or in addition to, acquiescence. The latter makes it challenging to disentangle response style effects from substantive responses when measuring acquiescence.

There are a number of methods for detecting acquiescence in survey data. Some of the simpler methods include: 1) based on a series of heterogeneous items, creating a binary indicator (e.g., a score of 1 for response categories of “agree” and “strongly agree” and 0 for the rest) on each item and summarizing the indicators across items (Bachman and O’Malley, 1984; Baumgartner and Steenkamp, 2001; Billiet and Davidov, 2008); 2) similar to the first method, creating a summary score that reflects the degree of acquiescence by recoding “strongly agree” to a score of 2, “agree” to a score of 1, and all remaining responses to a score of 0 (Baumgartner and Steenkamp, 2001); and 3) based on one or more balanced measurement scales or item pairs that include items worded both in the positive direction of the measured content (e.g., high self-esteem) and in the negative direction (e.g., low self-esteem), comparing illogical responses (Baumgartner and Steenkamp, 2001). For the first two of these methods, the logic behind using a set of maximally heterogeneous items is that when items are not related content-wise, consistently giving agreeable responses to many unrelated items may be a better indication of acquiescence. For the third method, balanced scales may make examining acquiescence less debatable, because agreeing to statements worded in opposite directions (e.g., “The U.S. spends too much money on scientific research” versus “The U.S. should dedicate more money to finding new scientific discoveries”) directly equates to psychological inconsistency. However, several problems have been noted with balanced scales. For one, it is arguable whether it is possible to create positive and negative items that truly measure the same construct at the same level of nuance using different, even if only slightly altered, wording. In addition, many balanced scales include negated wording in negative items, which may increase cognitive difficulty (Weijters and Baumgartner, 2012; Kamoen et al., 2017). Balanced scales may also invoke other issues, such as reduced internal consistency and factorial validity, particularly among respondents with limited education (Salazar, 2015), although it can be argued that the internal consistency and factorial validity of non-balanced scales are themselves an artificial product of acquiescent response style (Baumgartner and Steenkamp, 2001).

More recently, advanced statistical models have been suggested for measuring acquiescence. Unlike the three methods described above, most of these modelling approaches require two or more balanced measurement scales and specify acquiescence as a latent construct influencing all items in the same way, in addition to latent substantive content constructs loading onto relevant items. These methods are: 1) confirmatory factor analysis (CFA) (Billiet and McClendon, 2000), 2) latent class analysis (LCA) (Moors, 2010; Kieruj and Moors, 2013), 3) multidimensional unfolding models (Javaras and Ripley, 2007), 4) the representative indicators for response styles (RIRS) approach (De Beuckelaer, Weijters and Rutten, 2010), and 5) representative indicators response style means and covariance structure (RIRSMACS) (Weijters, Schillewaert and Geuens, 2008; Thomas, Abts and Vander Weyden, 2014). These methods differ in how the acquiescent response style factor is specified. For example, under CFA, the style factor is a continuous variable, while under LCA, respondents are grouped into a number of classes, where the classes differentiate the level of acquiescent response tendency and the number of classes is determined through model fit. In addition to capturing psychological inconsistency through balanced scales, the main benefit of using these advanced models is that acquiescence can not only be measured but also corrected for in estimating substantive constructs. The major difficulty of implementing these approaches is that they require two or more multi-item scales that are also balanced. A modeling approach that does not require balanced scales is the Multidimensional Nominal Response Model (MNRM; Bolt & Johnson, 2009). The multidimensional nature of the model allows for the specification of a content dimension and a response style dimension. While based on the Item Response Theory (IRT) framework, the MNRM assumes that the measurement items are nominal. This specific feature allows for modelling response styles, as the ordering of the response categories can be specified for each dimension of the model. For example, for a 5-category response scale where 1 is strongly disagree and 5 is strongly agree, the order of the categories for the content dimension would be [1 2 3 4 5] from the lowest to the highest category. For the acquiescence dimension in that same model, the order could be [0 0 0 1 1], assigning response categories 4 and 5 as acquiescent responses and the rest as nonacquiescent responses. Van Vaerenbergh and Thomas (2013) provide an extensive summary of these methods, including their advantages and disadvantages. The wide range of methods to measure acquiescence, including those introduced above, is direct evidence that there is no gold-standard validation measure for acquiescence.
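The scoring-function idea above can be sketched in a few lines. This is an illustrative recoding in Python (not the authors’ actual Mplus specification), using the 5-category example from the text: each response category contributes its own position to the content dimension and a 0/1 indicator to the acquiescence dimension.

```python
# Scoring functions for a 5-category agreement scale, as in the MNRM example:
CONTENT = [1, 2, 3, 4, 5]       # content dimension keeps the category ordering
ACQUIESCENCE = [0, 0, 0, 1, 1]  # style dimension: categories 4 and 5 are acquiescent

def dimension_scores(category):
    """Return (content score, acquiescence score) for a 1-5 response category."""
    if not 1 <= category <= 5:
        raise ValueError("response category must be between 1 and 5")
    return CONTENT[category - 1], ACQUIESCENCE[category - 1]
```

A 7-category scale, as used later in this study, would simply extend the two vectors (e.g., [1 2 3 4 5 6 7] and [0 0 0 0 0 1 1]).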

While there are many approaches for measuring acquiescence, classifying whether a respondent is an acquiescer or a nonacquiescer requires subjective and somewhat arbitrary decisions across all methods introduced above, except for LCA with a 2-class acquiescent response style construct. In other words, acquiescer/nonacquiescer classification requires a cut-off point on the estimated acquiescence score (e.g., those scoring above the median as acquiescers and the remainder as nonacquiescers). Still, if acquiescence is a stable respondent trait (e.g., Weijters, Geuens and Schillewaert, 2010), there should be reasonable overlap across the different classifications. This study sought to identify a relatively easy, convenient, and proactive method for classifying acquiescers by using a telephone survey data set that included a diverse set of items designed to assess acquiescence, comparing how this method corresponds to other classification methods, and examining whether this method aligned with known characteristics of acquiescers in the existing literature. This research specifically focused on identifying acquiescers among a sample of primarily Latino telephone survey respondents, as this population has been shown to have significantly higher rates of acquiescence than non-Latino whites (Aday, Chiu and Andersen, 1980; Marín, Gamba and Marín, 1992; Ross and Mirowsky 1984). Findings from this research will enable researchers to determine when survey data are more or less susceptible to acquiescence bias, to screen out acquiescent respondents if so desired, and to predict or control for acquiescence during data analyses.

Data and Methods

Databases

A telephone survey was conducted with a total of 400 respondents, who were roughly equally split across four comparison groups: non-Latino Whites, Mexican Americans, Cuban Americans and Puerto Ricans (see Table 1). All respondents came from a sample that was randomly drawn from a list of U.S. landline and cell telephone numbers purchased from a commercial sampling vendor. Telephone numbers on the list were those with mailing addresses in the five largest U.S. markets for each of the three Latino groups noted above and in Puerto Rico and additionally linked to individuals with 12 years of education or less living in households with an annual income of $25,000 or less. These criteria were employed to increase the occurrence of acquiescent responses, as education is a well-known correlate of acquiescence (Marín, Gamba and Marín, 1992; Davis, Resnicow and Couper, 2011), and income is highly related to education. In order to be eligible for the study, respondents were required to be between the ages of 18 and 90, speak English or Spanish, and self-identify as one of the four ethnic or heritage groups. While we ascertained eligibility with the person who answered the call for cell phone numbers, we used an adapted within-household selection method for landline numbers to reduce the potential bias associated with gender because males are often underrepresented in landline telephone surveys (Lavrakas, 2008) and ascertained eligibility with the selected member of the household.

Table 1.

Survey Respondent Characteristics (n=400)

Characteristic                        %
Age:
 18–35 years old                   26.5
 36–50 years old                   21.0
 51–64 years old                   29.8
 65+ years old                     22.8
Gender:
 Female                            69.5
 Male                              30.5
Education a:
 1–6th grade                        9.5
 7–12th grade                      16.4
 High school/GED                   23.3
 Some college                      21.3
 College degree                    18.7
 Graduate degree                   10.8
Race/Ethnicity:
 Non-Latino White                  24.8
 Mexican American                  25.0
 Cuban American                    25.0
 Puerto Rican                      25.3
Interview Language:
 Spanish                           51.3
 English                           48.7

a Education is missing on 10 cases.

Interviews were conducted using computer-assisted telephone interviewing (CATI) between November 2015 and January 2016. Initially, the interview was conducted in either Spanish or English based on the language that a respondent used when answering the call. During eligibility screening, respondents’ language use was assessed based on three items adapted from the National Latino and Asian American Study (Massachusetts General Hospital, 2002). These items measured whether respondents used Spanish, English, or another language when thinking, talking with friends, and talking with family, respectively. Those who refused to answer or indicated languages other than Spanish or English on two or more of these three items were screened out as ineligible. Using responses to these questions, the interview language was chosen. Further, we verified the language choice by asking, “It sounds like you would be most comfortable if we conducted this interview in [‘Spanish’/’English’]. Does this sound okay to you?” Depending on the response, the language was chosen for the remainder of the interview. A total of 400 respondents participated in the survey, where almost half (n=195) of the interviews were conducted in Spanish. On average, the interview lasted 35 minutes. A $20 gift card was mailed to those who completed the interview. The overall response rate was 10.1% based on AAPOR RR 3 (American Association for Public Opinion Research, 2016). Table 1 includes the distributions of age, gender, education, race, ethnicity, Latino heritage, and interview language among the 400 participants in this study.

Besides questions on socio-demographic characteristics, the questionnaire included 100 items querying respondents’ attitudes about various social topics (for actual question wording on these 100 items, see Appendix 1). All 100 questions used a 7-pt agreement Likert scale with 1 indicating “strongly disagree” and 7 “strongly agree.” The content of the items can be grouped into four categories. The first category consisted of 20 items adapted from two, well-established, balanced psychosocial scales: the perceived stress scale (Cohen, Kamarck and Mermelstein, 1983) and the Rosenberg self-esteem scale (Rosenberg, 1965, 1979). Each of these scales measured a single, separate construct and was comprised of 10 items that included reverse-worded items. The second category of items consisted of paired items, which were designed to measure the same concept but worded in opposite directions within each pair (e.g., “The U.S. spends too much money on scientific research” versus “The U.S. should dedicate more money to finding new scientific discoveries”). There were a total of 44 such items, making 22 pairs. The third category consisted of 21 heterogeneous items concerned with a variety of topics, including several that asked about fictitious topics or issues (e.g., “The Trans-Atlantic Cultural Partnership would be a good thing for the United States”). The fourth group of items was comprised of 15 items that could not be classified into the previous item types. The 100 questions were grouped into subsets, and the ordering of both question subsets and questions within subsets was randomized across respondents to mitigate the effects of respondent fatigue or question order.

Acquiescer Classification Approaches

The main variable of this study was the classification of acquiescers versus nonacquiescers. All 100 items described above were used as they were asked (i.e., without any reverse coding) in eight classification approaches detailed below. It should be noted that these approaches are by no means an exhaustive list of methods available in the literature but cover a wide spectrum. Most of the approaches required an arbitrary cut-off point against which a given respondent was evaluated and classified. For such cases, we used the median score as a cut-off point. All analyses were done using R (Version 4.0.3; R Core Team); only the MNRM was estimated using Mplus (Version 8.4; Muthén & Muthén).

Because the classification methods below aggregate multiple items, item missing rates become relevant. We report the missing rate for each of the 100 items in Appendix 1. The average missing rate was 3.9%, and the median was 1.8%. Missing rates were higher for the heterogeneous items, with the highest observed for “The passage of Lambert’s Law would dramatically reduce school shootings in the U.S.” at 30.8%. For Classifications 1 to 5, we used scores based on nonmissing items. For the remaining methods, we describe the treatment of missing data separately within each method.

Classification 1. Proportion of Agree Responses across 100 items (Prop_All).

All 100 items were used for this approach. Given that these items were not expected to measure the same content, we first computed for each respondent the proportion of answers of 6 or 7 on the 7-pt scale (where 7 was labeled “strongly agree”) across the 100 items. When responses were missing, we used only non-missing data for computing the proportion. On average, respondents chose a response of 6 or 7 on 47.1% of the questions, with a minimum of 11.3%, a maximum of 100.0% and a median of 46.2%. We used the median proportion as a cut-point and classified respondents whose score was at or above the median as acquiescers and the remainder as nonacquiescers. The proportion of acquiescers was 50.0%, which was expected given the use of the median as a cut-off point. Note that we use this approach as a validation measure rather than as a suggested method for acquiescer classification, as administering 100 items for respondent classification may often be impractical.
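Classification 1 can be sketched as follows. This is an illustrative Python implementation (the authors’ analyses were done in R), assuming `responses` is a list of per-respondent answer lists on the 7-point scale, with `None` marking item missing data:

```python
import statistics

def prop_agree(answers):
    """Proportion of non-missing answers that are 6 or 7 on the 7-point scale."""
    valid = [a for a in answers if a is not None]
    return sum(a >= 6 for a in valid) / len(valid)

def classify_prop_all(responses):
    """Median split: respondents at or above the median proportion are acquiescers."""
    props = [prop_agree(r) for r in responses]
    cutoff = statistics.median(props)
    return [p >= cutoff for p in props]
```

Because the cut-off is the sample median, roughly half the respondents land in each group by construction, matching the 50.0% acquiescer rate reported above.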

Classification 2. Illogical Responses to Perceived Stress and Rosenberg Self Esteem Scales (Illogic_Balanced).

We only used responses to the perceived stress and the Rosenberg self-esteem scales for this classification. The perceived stress scale included four items in the positive direction and six in the negative direction, whereas the ten items in the Rosenberg scale were split evenly in each direction. Within each scale, if a respondent agreed with any of the statements written in the positive direction (i.e., choosing a response of 6 or 7) and also agreed with any of the statements written in the negative direction, we regarded that respondent as providing illogical responses. Almost half of respondents (48.5%) illogically agreed to items in the perceived stress scale, and a similar pattern of illogical responses (47.5%) was observed on the Rosenberg scale. When combined, one out of three respondents (33.5%) gave illogical responses to both scales and were classified as acquiescers.
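The illogical-response rule above can be sketched as follows. This is an assumption-level Python illustration (not the authors’ R code): the positively and negatively worded item positions within each scale are supplied as index sets, and agreement means a response of 6 or 7.

```python
AGREE = 6  # responses of 6 or 7 on the 7-point scale are treated as agreement

def illogical(answers, positive_idx, negative_idx):
    """True if a respondent agrees with at least one positively worded item
    AND at least one negatively worded item on the same balanced scale."""
    def agrees(i):
        return answers[i] is not None and answers[i] >= AGREE
    return any(agrees(i) for i in positive_idx) and any(agrees(i) for i in negative_idx)

def classify_illogic_balanced(stress, esteem,
                              stress_pos, stress_neg, esteem_pos, esteem_neg):
    """Acquiescer only if responses are illogical on BOTH balanced scales."""
    return (illogical(stress, stress_pos, stress_neg)
            and illogical(esteem, esteem_pos, esteem_neg))
```

Requiring illogical responses on both scales makes the rule stricter than either scale alone, which is why the combined rate (33.5%) is lower than the per-scale rates (48.5% and 47.5%).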

Classification 3. Illogical Responses to Item Pairs (Illogic_Pair).

For this classification, we used the 22 item pairs introduced previously, where each pair measures one construct with the two items worded in opposite directions, to create a dichotomous acquiescer variable. Similar to Classification 2, we created a binary variable for each item pair (a value of 1 for illogical responses and 0 otherwise). Respondents, on average, gave illogical responses to 3.7 of the 22 item pairs, with a minimum of 0 pairs, a maximum of 22 pairs, and a median of 3 pairs. The vast majority of respondents (82.5%) gave illogical responses to at least one pair and were classified as acquiescers under this approach.

Classification 4. Proportion of Illogical Responses to Item Pairs (Prop_Pair).

We used the illogical response binary variable created for each of the 22 item pairs as described above and computed the proportion of illogical responses across the 22 pairs. The median proportion (13.6%) was used as a cut-point, yielding roughly the same number of acquiescers (52.8%) as nonacquiescers (47.2%).
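Classifications 3 and 4 build on the same per-pair binary indicator. A hedged Python sketch (the authors used R), assuming each pair is given as a (positively worded, negatively worded) response tuple:

```python
import statistics

AGREE = 6  # 6 or 7 on the 7-point scale counts as agreement

def pair_illogical(pos, neg):
    """1 if the respondent agrees with both oppositely worded items in a pair."""
    return int(pos is not None and neg is not None and pos >= AGREE and neg >= AGREE)

def classify_any_pair(pairs):
    """Classification 3: acquiescer if at least one pair is answered illogically."""
    return any(pair_illogical(p, n) for p, n in pairs)

def classify_prop_pair(respondents):
    """Classification 4: median split on each respondent's proportion of illogical pairs."""
    props = [sum(pair_illogical(p, n) for p, n in pairs) / len(pairs)
             for pairs in respondents]
    cutoff = statistics.median(props)
    return [p >= cutoff for p in props]
```

The any-pair rule is generous (82.5% classified as acquiescers above), whereas the median split on the proportion forces a roughly even division.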

Classification 5. Summary Acquiescent Response Score on Heterogeneous Items (Score_Hetero).

For each of the 21 heterogeneous items, we assigned a score of 2 for each “strongly agree” response, 1 for each “agree” response, and 0 for all other responses, following the method used by Baumgartner and Steenkamp (2001) to differentiate the degree of agreement. The average score across the 21 items ranged from 0.05 to 1.90. We used the median score (0.78) as a cut-off point for classifying acquiescers. This resulted in 49.5% of the respondents being classified as acquiescers and the rest as nonacquiescers.
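A minimal sketch of this scoring scheme in Python, under the assumption that a 7 on the 7-point scale maps to “strongly agree” (score 2) and a 6 to “agree” (score 1):

```python
import statistics

def acq_score(answers):
    """Mean per-item score: 2 for a 7 ('strongly agree'), 1 for a 6, 0 otherwise."""
    valid = [a for a in answers if a is not None]
    return sum(2 if a == 7 else 1 if a == 6 else 0 for a in valid) / len(valid)

def classify_score_hetero(responses):
    """Median split on the summary score; at or above the median -> acquiescer."""
    scores = [acq_score(r) for r in responses]
    cutoff = statistics.median(scores)
    return [s >= cutoff for s in scores]
```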

Classification 6. Confirmatory Factor Analysis of Perceived Stress and Rosenberg Self Esteem Scales (CFA_Balanced).

Since both the perceived stress scale and the Rosenberg self-esteem scale are directionally balanced, we used more advanced statistical models for these scales. The first approach was a confirmatory factor analysis (CFA) model that included four latent constructs: 1) the perceived stress construct loading onto its corresponding 10 measurement items; 2) the self-esteem construct loading onto its corresponding 10 items; 3) a response style construct loading onto the 10 perceived stress items; and 4) a response style construct loading onto the 10 self-esteem items. All style construct loadings were set at 1, as we believed that the style should affect all items in the same way. Further, the covariances between the style and content constructs were set to zero, because response style is theorized to be present regardless of question content (Paulhus, 1984). Essentially, this was the same model as depicted in Figure 3 of Billiet and McClendon (2000, p. 614). We used the medians of the predicted scores of the two style constructs and classified those scoring above the median on both style scores as acquiescers and the rest as nonacquiescers. This approach classified 39.8% of the respondents as acquiescers (the fit of this CFA model was better than that of a CFA model with content constructs only; see the goodness-of-fit measures in Appendix 2). We used a full information maximum likelihood approach for estimation to compensate for item nonresponse. Furthermore, we used linear factor analysis even though our data were categorical, as previous simulation studies have shown that this approach is adequate when the number of categories exceeds five (Beauducel & Herzberg, 2006; Rhemtulla, Brosseau-Liard & Savalei, 2012). Appendix 3 provides estimates for the model parameters.

Classification 7. Latent Class Analysis of Perceived Stress and Rosenberg Self-Esteem Scales (LCA_Balanced).

Similar to Classification 6, this approach used the perceived stress and self-esteem scales and considered both content-related and response style constructs in a latent class analysis (LCA) (e.g., Kieruj and Moors, 2013; Liu, Lee and Conrad, 2015). Under this approach, each item was specified as a nominal variable, with each content construct loading onto its respective items. Instead of the broad response style factors in Classification 6, an acquiescence style construct was specified as loading onto all 20 items with a fixed loading. All latent constructs were specified with two classes. Further, the two content constructs were modelled to co-vary but independently of the acquiescence style construct, per response style theory (e.g., Paulhus, 1984). Of the latent acquiescence construct’s two classes, the first class included 41.3% of the sample with a higher acquiescence tendency, and the second class included the remainder with a lower acquiescence tendency. Respondents in the first class were considered acquiescers. The main caveat of this approach is that the LCA required complete responses on all 20 items. Therefore, we used mean imputation for the 30 cases that had missing data on one or more of the 20 items. Appendix 2 provides a comparison of LCA model fit between the models without and with the acquiescence style construct. Similar to the CFA in Classification 6, including the acquiescence factor improved the model fit. Appendix 3 provides estimates for the model parameters.

Classification 8. Multidimensional Nominal Response Model of Perceived Stress and Rosenberg Self Esteem Scales (MNRM).

This classification method used the perceived stress and self-esteem scales. We used only non-missing data to estimate the model parameters. The modeling approach for this classification method included three latent constructs: 1) the perceived stress construct loading onto its corresponding 10 measurement items and using a scoring function of [1 2 3 4 5 6 7]; 2) the self-esteem construct loading onto its corresponding 10 items and using a scoring function of [1 2 3 4 5 6 7]; and 3) a response style construct loading on the stress items and the self-esteem items with a scoring function of [0 0 0 0 0 1 1]. Covariances between style and content were set to zero, and a maximum likelihood estimation approach was utilized. This modeling approach produced an acquiescence score for each respondent. We used the median of those scores to classify respondents into acquiescers (equal or above the median) and non-acquiescers (below the median). Therefore, 50% of the respondents were classified as acquiescers. Appendix 2 provides the model fit information, and Appendix 3 provides estimates for the model parameters.

Acquiescer Classification Consistency Measure

Additionally, we computed the number of times a given respondent was classified as an acquiescer across the eight approaches by summing all acquiescer indicators. Naturally, this summary variable ranged from 0 to 8. Respondents with values of 0 to 1 were considered consistent nonacquiescers, those with values of 6 to 8 consistent acquiescers, and those with values of 2 to 5 inconsistent acquiescers/nonacquiescers.
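The consistency rule above can be sketched as a simple lookup over the eight 0/1 acquiescer indicators (an illustrative Python version, not the authors’ R code):

```python
def consistency_label(indicators):
    """Label a respondent from eight 0/1 acquiescer indicators:
    0-1 -> consistent nonacquiescer, 6-8 -> consistent acquiescer, else inconsistent."""
    total = sum(indicators)
    if total <= 1:
        return "consistent nonacquiescer"
    if total >= 6:
        return "consistent acquiescer"
    return "inconsistent"
```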

Acquiescer Classification Validation Measures

As noted previously, there is no standard way of measuring acquiescence. In the absence of validation measures for acquiescence, we used two indirect approaches to examine validity of the eight acquiescer classification methods: one focusing on internal consistency of balanced scales in the questionnaire through Cronbach’s α; and the other focusing on covariates of acquiescence reported in the literature.

Cronbach’s α.

If reverse-worded items are not reverse coded for a balanced scale, Cronbach’s α should be low. If Cronbach’s α is high, it may be due to respondents’ tendency to choose certain response categories, such as “agree,” regardless of item content. In other words, acquiescent response style artificially inflates the Cronbach’s α of balanced scales computed without reverse coding, and a high α here is therefore evidence of compromised internal consistency. Using the two balanced scales in our questionnaire (i.e., perceived stress and self-esteem), we computed Cronbach’s α for each scale without reverse coding, separately for acquiescers and nonacquiescers under each of the eight classification methods.
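Cronbach’s α can be computed directly from the raw (non-reverse-coded) item columns using the standard formula, α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal Python sketch; comparing α between classified groups would then amount to calling this once per group:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for item columns (here: deliberately WITHOUT reverse coding).
    items: k equal-length lists of responses, one list per item."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```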

Covariates of Acquiescent Response Style.

In order to ascertain whether the acquiescer classification approaches were conceptually sound, we examined the relationships between each classification approach and previously observed covariates of acquiescence: age (≤35 years old; 36–50 years old; 51–64 years old; ≥65 years old), gender (male versus female), education (less than high school, high school, some college or more) and ethnicity (Latino versus non-Latino White (NLW)). For Latinos only, we also assessed interview language (English versus Spanish), acculturation, and Latino heritage group (Mexican American, Cuban American, Puerto Rican). The acculturation variable assessed each respondent’s overall personal cultural orientation and was constructed using a slightly adapted version of the Acculturation Rating Scale for Mexican Americans-II (Cuellar, Arnold and Maldonado, 1995) for use with the three Latino subgroups included in our study. This measure comprised 27 items distributed between two subscales: (1) a Latino orientation subscale (15 items; Cronbach’s α=.84 in our study); and (2) a NLW orientation subscale (12 items; Cronbach’s α=.90 in our study). Using the subscale scores, respondents were classified into: (1) more towards Latino (high Latino subscale score/low or medium NLW subscale score); (2) more towards NLW (low or medium Latino/high NLW); (3) high bicultural (high Latino/high NLW); and (4) low bicultural (medium Latino/medium NLW). No respondents scored low on both subscales.

Analysis Procedures

We first examined the distribution of acquiescers across the eight classification approaches introduced earlier, as well as the consistency in classification. To validate our classification methods further, the next analyses were carried out in three steps. First, the relationships across classifications were examined through 𝜑 coefficients to accommodate the binary nature of the acquiescer classification variables. Here, because acquiescent response style was hypothesized to be a trait, the relationships were expected to be positive and significant if the classification methods tapped into the same trait. Because multiple 𝜑 coefficients were tested, a Bonferroni adjustment was applied to the significance tests (Curtin and Schulz, 1998). We then compared Cronbach’s α of the Rosenberg self-esteem scale and the perceived stress scale between acquiescers and nonacquiescers under each classification method. Here, because we intentionally did not reverse code these balanced scales, if our acquiescer classification methods were valid, higher Cronbach’s α would be expected for acquiescers than for nonacquiescers. In the last step, we examined the proportion of acquiescers under each classification approach by respondent characteristics that were known covariates of acquiescence, listed previously, through χ2 tests in contingency tables, first for the overall sample and then separately for Latino respondents only. This step also estimated whether consistency across classification methods varied by respondent characteristics.
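For the first step, the 𝜑 coefficient between two binary classification vectors can be computed directly from the 2×2 contingency counts. A minimal sketch (hypothetical function name), along with the Bonferroni-adjusted threshold implied by the 8 × 7 / 2 = 28 pairwise comparisons:

```python
from math import sqrt

def phi_coefficient(x, y):
    """Phi coefficient between two binary (0/1) classification vectors,
    computed from the four cells of the 2x2 contingency table."""
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    d = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 0)
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Eight methods yield 8 * 7 / 2 = 28 pairwise tests, so a Bonferroni-
# adjusted significance threshold would be 0.05 / 28.
bonferroni_alpha = 0.05 / 28
```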

Results

Distribution of Acquiescers across Classification Approaches

Table 2 provides a summary of the eight classification approaches and the proportion of acquiescers for the overall sample, non-Latino whites, Latinos interviewed in English, and Latinos interviewed in Spanish. As expected, classification approaches using the median as a cut-off point (e.g., Prop_All) categorized roughly half of the overall sample as acquiescers.

Table 2.

Acquiescer/Non-Acquiescer Classification Methods and Corresponding Proportion of Acquiescers

Classification Label Classification Method Detail Items (Count; Type) Acquiescers (n=400)
1. Prop_All Proportion of acquiescent responses (“agree” and “strongly agree”); Classified as acquiescer if above the median proportion 100; All items 50.00%
2. Illogic_Balanced Classified as acquiescer if given illogical acquiescent responses to both balanced scales 20; PSS, RSES 33.50%
3. Illogic_Pair Classified as acquiescer if given illogical acquiescent responses to any of 22 item pairs 44; Item pairs 82.50%
4. Prop_Pair Proportion of illogical acquiescent responses across 22 item pairs; Classified as acquiescer if above the median proportion 44; Item pairs 52.80%
5. Score_Hetero Acquiescent response score (2=”strongly agree”; 1=”agree”; 0=Rest) for each item and average the score across items; Classified as acquiescer if above the median score 21; heterogeneous topics 49.50%
6. CFA_Balanced Score for response style factor from confirmatory factor analysis with two content factors and one style factor per content; Classified as acquiescer if above the median score on both style factors 20; PSS, RSES 39.80%
7. LCA_Balanced Acquiescent respondents from latent class analysis with two content-related latent variables and an acquiescent response latent variable (2 classes); Classified as acquiescer if in the higher acquiescent score class 20; PSS, RSES 41.30%
8. MNRM Score for acquiescence response style factor from multidimensional nominal response model with two content factors and one style factor; Classified as acquiescer if above the median score on style factor 20; PSS, RSES 50.00%

Note. PSS: Perceived Stress Scale; RSES: Rosenberg Self-Esteem Scale.

Relationship across Acquiescer Classification Approaches

The relationships across classification approaches are reported in Table 3. All 𝜑 coefficients indicate positive associations across classifications, mostly ranging from 0.3 to 0.5. Prop_All generated higher 𝜑 coefficients than the other classification approaches. However, this was somewhat expected, as items used for the other classifications were subsets of the items used for Prop_All. Some classifications that used completely different items also showed a strong relationship (e.g., Score_Hetero and MNRM with φ=0.565).

Table 3.

𝛗 Coefficients across Acquiescer Classification Methods1

Prop_ All Illogic_ Balanced Illogic_ Pair Prop_ Pair Score_ Hetero CFA_ Balanced LCA_ Balanced
Illogic_Balanced 0.381 (p< 0.001)
Illogic _Pair 0.434 (p< 0.001) 0.257 (p< 0.001)
Prop_Pair 0.726 (p< 0.001) 0.364 (p< 0.001) 0.487 (p< 0.001)
Score_Hetero 0.630 (p< 0.001) 0.346 (p< 0.001) 0.324 (p< 0.001) 0.526 (p< 0.001)
CFA_Balanced 0.383 (p< 0.001) 0.430 (p< 0.001) 0.267 (p< 0.001) 0.339 (p< 0.001) 0.361 (p< 0.001)
LCA_Balanced 0.442 (p< 0.001) 0.513 (p< 0.001) 0.319 (p< 0.001) 0.396 (p< 0.001) 0.379 (p< 0.001) 0.720 (p< 0.001)
MNRM 0.603 (p< 0.001) 0.483 (p< 0.001) 0.339 (p< 0.001) 0.493 (p< 0.001) 0.565 (p< 0.001) 0.375 (p< 0.001) 0.432 (p< 0.001)
1

Classification methods defined in Table 2.

Cronbach’s α of the Rosenberg Self-Esteem Scale and the Perceived Stress Scale

For the two balanced scales, Cronbach’s α was computed separately for acquiescers and nonacquiescers for each of the eight classification methods. As noted previously, these two scales were not reverse coded. Hence, higher Cronbach’s α observed for acquiescers than for nonacquiescers may indicate falsely increased internal consistency due to acquiescent response style. As shown in Table 4, Cronbach’s α was generally higher for acquiescers than nonacquiescers throughout the classification methods. For example, Cronbach’s α for the self-esteem scale for those classified as acquiescers under the proportion of acquiescent responses across all 100 items was 0.47, compared to an α of 0.28 for nonacquiescers. This acquiescer-nonacquiescer difference was generally greater for the perceived stress scale than the self-esteem scale, with the exception of the results obtained from the CFA method, in which α was larger for nonacquiescers than acquiescers for both scales.

Table 4.

Cronbach’s α for the Rosenberg Self-Esteem Scale and the Perceived Stress Scale without Reverse Coding for Acquiescers and Non-Acquiescers under Different Acquiescer Classification Methods

Acquiescer Classification Method 1 Rosenberg Self-Esteem Scale Perceived Stress Scale
Acquiescer Non-Acquiescer Acquiescer Non-Acquiescer
Prop_All 0.47 0.28 0.66 0.34
Illogic_Balanced 0.47 0.33 0.56 0.45
Illogic_Pair 0.51 0.08 0.63 0.21
Prop_Pair 0.48 0.33 0.67 0.38
Score_Hetero 0.47 0.43 0.63 0.51
CFA_Balanced 0.12 0.22 0.41 0.50
LCA_Balanced 0.24 0.15 0.46 0.38
MNRM 0.48 0.46 0.65 0.47
1

Classification methods defined in Table 2.

Relationship between Acquiescer Classification Approaches and Covariates of Acquiescent Response Style

The relationship between the acquiescer classification and the proposed covariates of ARS is included in Table 5.A for the overall sample and in Table 5.B for the Latino sample. Overall, the proportion of acquiescers increased with age: for example, under Prop_All, 31.1% of the respondents aged 18 to 35 years old were classified as acquiescers, and this proportion increased to 44.0%, 56.3% and 69.2% for ages 36–50 years old, 51–64 years old and 65 years old or older, respectively (p<0.001). This association between age and acquiescer/nonacquiescer classification was significant, except for the Illogic_Pair method. The acquiescer distribution did not differ significantly between male and female respondents, except when using the Prop_All and Prop_Pair classifications, where females were more likely to be acquiescers than males. The proportion of acquiescers also varied by education: the higher the education category, the lower the proportion of acquiescers. A significant association was also observed between acquiescer classification and ethnicity/language. Non-Latino whites were least likely to be classified as acquiescers, followed by Latinos interviewed in English and then Latinos interviewed in Spanish. For instance, based on LCA, only one out of five non-Latino white respondents (19.2%) was classified as an acquiescer, while more than one third of the English-interviewed Latino respondents (35.0%) and more than half of the Spanish-interviewed Latino respondents (55.2%) were classified as acquiescers (p<0.001). The relationships between these respondent characteristics and acquiescer classification held consistently when modeling each acquiescer classification variable on the set of covariates in Table 5 (results not shown).

Table 5.

Estimated Proportions of Acquiescers by Respondent Characteristics under Different Classification Methods

A. Overall Sample
Respondent Characteristics n Acquiescer Classification Method 1
Prop_All Illogic_Balanced Illogic_Pair Prop_Pair Score_Hetero CFA_Balanced LCA_Balanced MNRM
Age
 ≤35 years old 106 31.10% 22.60% 77.40% 36.80% 32.10% 33.00% 34.90% 33.96%
 36–50 years old 84 44.00% 23.80% 78.60% 50.00% 46.40% 31.00% 32.10% 44.05%
 51–64 years old 119 56.30% 37.00% 85.70% 55.50% 52.10% 39.50% 41.20% 61.34%
 ≥65 years old 91 69.20% 50.50% 87.90% 70.30% 69.20% 56.00% 57.10% 59.34%
χ2 test2 p<0.001 p<0.001 p=0.136 p<0.001 p<0.001 p=0.002 p=0.003 p<0.001
Gender
 Male 122 41.00% 30.30% 82.80% 43.40% 44.30% 35.20% 41.00% 45.08%
 Female 278 54.00% 34.90% 82.40% 56.80% 51.80% 41.70% 41.40% 52.16%
χ2 test p=0.023 p=0.438 p=1.000 p=0.018 p=0.201 p=0.268 p=1.000 p=0.193
Education
 <High school 101 80.20% 55.40% 97.00% 82.20% 75.20% 58.40% 57.40% 66.34%
 High school/GED 91 56.00% 35.20% 92.30% 60.40% 51.60% 39.60% 44.00% 56.04%
 ≥Some college 198 31.80% 21.20% 70.20% 34.80% 35.40% 30.30% 31.80% 38.38%
χ2 test p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p<0.001
Ethnicity/Language
 Non-Latino White 99 20.20% 16.20% 65.70% 22.20% 32.30% 23.20% 19.20% 30.30%
 Latinos interviewed in English 100 37.00% 25.00% 79.00% 36.00% 34.00% 31.00% 35.00% 39.00%
 Latino interviewed in Spanish 201 71.10% 46.30% 92.50% 76.10% 65.70% 52.20% 55.20% 65.17%
χ2 test p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p<0.001
B. Latino Respondents Only
Respondent Characteristics n Acquiescer Classification Method1
Prop_All Illogic_Balanced Illogic_Pair Prop_Pair Score_Hetero CFA_Balanced LCA_Balanced MNRM
Age
 ≤35 years old 80 40.00% 26.30% 81.30% 45.00% 37.50% 33.80% 40.00% 42.50%
 36–50 years old 61 60.70% 31.10% 88.50% 62.30% 55.70% 39.30% 42.60% 54.10%
 51–64 years old 88 62.50% 42.00% 89.80% 64.80% 56.80% 45.50% 45.50% 64.77%
 ≥65 years old 72 77.80% 56.90% 93.10% 80.60% 72.20% 62.50% 66.70% 63.89%
χ2 test p<0.001 p<0.001 p=0.139 p<0.001 p<0.001 p=0.003 p=0.005 p=0.014
Gender
 Male 90 51.10% 35.60% 88.90% 54.40% 53.30% 41.10% 47.80% 50.00%
 Female 211 63.50% 40.80% 87.70% 66.40% 55.90% 46.90% 48.80% 59.24%
χ2 test p=0.060 p=0.473 p=0.918 p=0.068 p=0.774 p=0.423 p=0.969 p=0.139
Education
 <High school 96 82.30% 56.30% 96.90% 84.40% 75.00% 59.40% 58.30% 68.75%
 High school/GED 75 60.00% 38.70% 94.70% 68.00% 54.70% 42.70% 50.70% 56.00%
 ≥Some college 125 42.40% 25.60% 76.80% 44.80% 40.00% 35.20% 39.20% 46.40%
χ2 test p<0.001 p<0.001 p<0.001 p<0.001 p<0.001 p=0.001 p=0.017 p=0.004
Latino Subgroup
 Mexican 100 58.00% 35.00% 89.00% 57.00% 56.00% 41.00% 44.00% 47.00%
 Cuban 100 60.00% 39.00% 86.00% 68.00% 62.00% 49.00% 51.00% 61.00%
 Puerto Rican 101 61.40% 43.60% 89.10% 63.40% 47.50% 45.50% 50.50% 61.39%
χ2 test p=0.886 p=0.461 p=0.744 p=0.271 p=0.116 p=0.522 p=0.543 p=0.065
Acculturation
 More NLW 50 24.00% 18.00% 76.00% 26.00% 30.00% 30.00% 36.00% 32.00%
 More Latino 181 70.20% 47.00% 92.80% 74.60% 62.40% 53.60% 56.40% 64.64%
 Hi Bicultural 43 65.10% 30.20% 83.70% 60.50% 58.10% 34.90% 37.20% 60.47%
 Low Bicultural 27 48.10% 40.70% 85.20% 55.60% 48.10% 33.30% 37.00% 40.74%
χ2 test p<0.001 p=0.001 p=0.008 p<0.001 p=0.001 p=0.004 p=0.011 p<0.001
1

Classification methods defined in Table 2.

2

Chi-square statistics test bivariate relationships between each classification method and each respondent characteristic.

Focusing only on Latinos (Table 5.B), age, gender and education showed similar associations with being classified as an acquiescer versus a nonacquiescer as were found in the overall sample. Latino heritage subgroups did not differ in their use of acquiescence. Acculturation showed a significant association with acquiescent response style: respondents who had a stronger Latino orientation were most likely to be classified as acquiescers, and those with a stronger non-Latino white orientation were least likely to be classified as acquiescers. This acculturation-acquiescence pattern was consistent across the classification methods.

When examining the consistency of the classification methods, the majority of respondents (70%) were consistently classified as acquiescers or nonacquiescers across all eight classification methods (results not shown). None of the respondent characteristics in Table 5 was associated with differential consistency. For example, 56.7%, 59.0% and 62.7% of non-Latino white, English-interviewed, and Spanish-interviewed Latino respondents, respectively, were consistently classified as an acquiescer or a nonacquiescer across all eight methods.

Discussion

We examined eight different methods for categorizing survey respondents as acquiescers versus non-acquiescers. Overall, the relationships among all classification approaches were positive and significant, and, at the respondent level, the majority of respondents were classified consistently into the same category across the different classification approaches. However, this study also highlights the subjective elements of some of these classification methods, as well as the substantial variability in the proportion of acquiescers identified using different methods. The use of a standardized approach to classifying acquiescers would therefore facilitate future comparisons of results across studies.

Data from this study indicated that the method of classifying a respondent as an acquiescer if they provided illogical responses to one or more paired items (Illogic_Pair) yielded the highest proportion of acquiescers, with more than eight out of 10 respondents being classified as acquiescers. A more stringent classification criterion for this method – for example, requiring four illogical responses to paired items, rather than an illogical response to one pair of items – would have naturally yielded a lower proportion of acquiescers. It is also possible that so-called “illogical” responses were, in fact, logical, as respondents may have interpreted the paired items in ways different from what we researchers intended. This may be why this approach did not produce proportions of acquiescers in the pattern expected for different age groups, unlike the other classification methods. For these reasons, the illogical-response-to-one-or-more-item-pairs method is likely to be of relatively little use for comparing the extent of acquiescence across studies in which the number of items, content of items, item wording, and illogical response criteria vary. Thus, though easy to execute during data collection, using illogical answers to item pairs appears to be the least attractive post-hoc approach for assessing acquiescence among the eight methods tested in this study.

The four methods using various median split strategies yielded predictably balanced proportions of acquiescers and non-acquiescers and, as such, yielded the next-highest proportions of acquiescers. By balancing study samples, these classification approaches may be useful for examining differences between study groups exhibiting more versus less acquiescence, as is often the goal in response style research. The use of a median also grounds the classification in the sample norms, such that comparisons may be made between two groups of respondents from a population with an overall high or overall low rate of acquiescence. However, using a median as a cut-off point is rather arbitrary, and defending it may be difficult. Another potential drawback of these methods is that, because medians are computed using data from an entire sample, respondents cannot be classified until data collection is complete. Moreover, this categorization is relative to the sample: in a hypothetical sample composed entirely of nonacquiescers, half of the respondents would still be categorized as acquiescers.
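A median-split classification such as Prop_All can be sketched as follows. This is a minimal illustration: the input layout and function name are assumptions, and "agree"/"strongly agree" are taken as codes 6 and 7 on the 1-7 scale used in Appendix 1.

```python
from statistics import median

def median_split_acquiescers(responses):
    """Prop_All-style classification: the proportion of 'agree' or
    'strongly agree' responses (6 or 7 on the 1-7 scale) across all
    items, split at the sample median.

    `responses[r]` is respondent r's list of item ratings (an assumed
    input format).  Returns a 0/1 acquiescer flag per respondent."""
    props = [sum(1 for v in row if v >= 6) / len(row) for row in responses]
    cut = median(props)
    # Above the sample median counts as an acquiescer, so the split
    # is inherently relative to the sample at hand.
    return [1 if p > cut else 0 for p in props]
```

Note that the flags cannot be computed until all responses are in, which is the practical limitation discussed above.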

Among the classification methods used in this study, the method utilizing illogical responses across two balanced scales yielded the lowest, or most conservative, proportion of acquiescers. Since acquiescers likely comprise a minority in most study populations, this approach has conceptual appeal. This approach has several other advantages: it is easy to conduct; it can be applied at any time during data collection, including as an acquiescence screener; it requires only 20 items (as opposed to ideally larger sets of heterogeneous items); the items used from the perceived stress and self-esteem scales assess different latent variables; and it captures the strength of paired items and balanced scales for measuring acquiescence without being overly affected by the extent to which the items in any one item pair do or do not succeed in mirroring one another (i.e., the design allows for items to be interpreted differently). For these reasons, this approach appeared to be the most desirable method, particularly as a screener, for classifying acquiescers among the methods tested in this study.
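A sketch of such a screener, based on the two balanced scales in Appendix 1, might look as follows. The agreement threshold (a rating of 6 or 7) and the rule that agreeing with at least one positively worded and at least one reverse-worded item counts as an illogical pattern are illustrative assumptions, not the paper's exact criterion; the reverse-item positions follow the asterisked items in Appendix 1.

```python
# Reverse-worded item positions (0-based) from Appendix 1:
PSS_REVERSE = {3, 4, 6, 7}      # PSS Q4, Q5, Q7, Q8
RSES_REVERSE = {1, 4, 5, 7, 8}  # RSES Q2, Q5, Q6, Q8, Q9

def illogical(ratings, reverse_idx, agree=(6, 7)):
    """True if a respondent agrees with both positively worded and
    reverse-worded items on one balanced scale (assumed criterion:
    at least one agreement in each wording direction)."""
    pos = any(v in agree for i, v in enumerate(ratings) if i not in reverse_idx)
    rev = any(v in agree for i, v in enumerate(ratings) if i in reverse_idx)
    return pos and rev

def screen_acquiescer(stress, esteem):
    """Illogic_Balanced-style screener: flag a respondent who answers
    BOTH balanced scales illogically."""
    return illogical(stress, PSS_REVERSE) and illogical(esteem, RSES_REVERSE)
```

Because each respondent's flag depends only on their own 20 ratings, the rule can run immediately after the two scales are administered, which is what makes it usable as a real-time screener.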

Modeling approaches using these two balanced scales are useful for understanding the dynamics of acquiescent response style after data are collected. They explain a response to an item as a function of the associated latent content variable and the latent response style, and they model the response style as having a constant effect across items. Hence, this conceptually mirrors the definition of response style closely, as response style is viewed as content irrelevant. However, these modeling approaches are not practical for screening in or out acquiescent respondents unless real-time prediction can be implemented and the data for such prediction are available prior to data collection.

When examining acquiescer classification as a function of respondents’ characteristics known to be associated with acquiescent response style in the literature, this study observed relatively consistent associations between respondent characteristics and acquiescer status across the eight classification methods. Respondents who were older, had less education or self-identified as Latino showed significantly higher odds of being classified as an acquiescer than their counterparts, regardless of the classification method used. When examining Latino respondents only, in addition to age and education, interview language and acculturation were important in differentiating between acquiescers and non-acquiescers in that more acquiescers were observed among Spanish-interviewed respondents than English-interviewed respondents, as well as among high bicultural respondents as compared to respondents with weaker Latino cultural orientations. In sum, these approaches provided consistent classifications of acquiescers versus non-acquiescers.

Of course, our study is not without limitations. First, the sample, although randomly drawn, was designed to increase the yield of acquiescers by targeting phone numbers associated with lower education and income levels. This does not mean that our sample included only those with low education and income levels, because the commercial data used for this targeting are imperfect, with non-trivial false-positive rates (Pasek et al., 2014). Indeed, only about a quarter of our sample reported less than a high school education. Hence, although the sample may overrepresent acquiescers, which was necessary to conduct the analyses, this does not limit our ability to compare across methods for assessing acquiescence, which was the primary goal of this study. Second, our study focused on non-Latino white, Mexican American, Cuban American and Puerto Rican respondents. Thus, the findings may not be generalizable to other Latino subgroups or other racially or ethnically defined populations. However, it should be noted that very few studies examine acquiescent response style among multiple Latino heritage groups. Even though we did not elaborate on this, our results indicated no clear evidence of Latino subgroup differences in acquiescent response style but clear evidence of differences between Latinos and non-Latino whites. Third, the survey was conducted over the telephone, a mode in which higher acquiescence has been observed compared to face-to-face (Jordan, Marcus and Reeder, 1980; Holbrook, Green and Krosnick, 2003) or self-administered surveys (Weijters, Schillewaert and Geuens, 2008). While the telephone mode may have increased acquiescence, it is difficult to imagine that the mode affected the categorization approaches or altered the implications of the findings. Fourth, this study does not report the validity of our approaches to measuring acquiescence and classifying acquiescent respondents.
This is an inherent problem with acquiescence being a concept that can only be measured indirectly, as a byproduct of questions designed to measure other constructs. To overcome this problem, we instead examined indirect validation measures, such as the internal consistency of measurement scales and covariates of acquiescence, and found that these indirect measures supported our approaches. We view these limitations as unlikely to undermine the implications for the acquiescer categorization tools examined in this study. Fifth, one may wonder whether the results are generalizable beyond the U.S. context. However, because the definition of acquiescence is universal, the geographic scope of our sample is not likely to limit the generalizability of the methods tested in our study. Sixth, some of the classification methods used the median as a cut-off point. As this arbitrary choice may raise concerns, we tested these methods alongside those that do not require such arbitrary decisions. Our study shows that respondents are classified as acquiescers or non-acquiescers consistently across methods with and without arbitrary cut-offs; therefore, we are confident that this issue is unlikely to hamper generalizability. Lastly, it is possible that the “illogical” response patterns observed among respondents are attributable to other factors, such as item comprehension, respondent effort, or fatigue. However, the randomized question administration order should have mitigated any influence of fatigue on responses.

By comparing eight methods of assessing acquiescence, this study indicates that when adequate resources are available and the goal is solely a post-hoc assessment of acquiescence, latent modeling approaches may be preferred for conceptual illustration of response style. However, when resources are lacking or a screener for acquiescence is needed, results from this study indicate that identifying illogical responses across two balanced scales is a relatively simple method for categorizing acquiescers versus non-acquiescers. This method can be implemented in any data collection platform and may serve multiple purposes for social science research: screening in/out acquiescers in research studies, measuring acquiescence after data collection is complete, and adjusting for acquiescent response style during data analysis. Thus, the balanced scale approach appears to be the most flexible yet conservative classification method, which may make it a better choice for comparisons across a wide range of study designs and populations. The screening function also provides researchers with a new tool for proactively recruiting acquiescers for more focused study of, and experimentation with, reducing acquiescence, particularly among Latino respondents. Such research is needed to understand why some survey respondents acquiesce, when such behaviors are most likely to occur, the meaning respondents may be conveying when this response style is used, and how to cope with acquiescence in cross-cultural comparisons of survey data.

Acknowledgments

Funding Disclosure

This work was supported by the National Cancer Institute at the National Institutes of Health [grant number R01 CA172283].

Appendix 1. Survey Items

Please tell me how much you disagree or agree with each statement by choosing a number between 1 and 7, where 1 means “strongly disagree” and 7 means “strongly agree.”

A. Perceived Stress Scale Items

Q1. In the last 30 days, I often felt upset because of something that happened unexpectedly. (Missing rate: 1.0%)

Q2. In the last 30 days, I often felt that I was unable to control the important things in my life. (Missing rate: 1.0%)

Q3. In the last 30 days, I often felt nervous and stressed. (Missing rate: 0.3%)

Q4. In the last 30 days, I often felt confident about my ability to deal with my personal problems.* (Missing rate: 0.3%)

Q5. In the last 30 days, I often felt that things were going my way.* (Missing rate: 0.8%)

Q6. In the last 30 days, I often found that I could not cope with all the things that I had to do. (Missing rate: 0.5%)

Q7. In the last 30 days, I was often able to control things that irritate me in my life.* (Missing rate: 1.0%)

Q8. In the last 30 days, I often felt that I was on top of things.* (Missing rate: 1.0%)

Q9. In the last 30 days, I often felt angry because of things that were outside of my control. (Missing rate: 0.8%)

Q10. In the last 30 days, I often felt that difficulties were piling up so high that I could not overcome them. (Missing rate: 0.8%)

Adapted from

Cohen, S., Kamarck, T., Mermelstein, R.: A global measure of perceived stress. J. Health Soc. Behav. 24, 385–396. (1983)

Ferrando, P.J., Lorenzo-Seva, U., Chico, E.: A general factor-analytic procedure for assessing response bias in questionnaire measures. Struct. Equ. Model. 16, 364–381. (2009)

Items marked with an asterisk (*) were reverse worded.

B. Rosenberg Self-Esteem Scale Items

Q1. On the whole, I am satisfied with myself. (Missing rate: 0.0%)

Q2. At times I think I am no good at all.* (Missing rate: 1.0%)

Q3. I feel that I have a number of good qualities. (Missing rate: 0.5%)

Q4. I am able to do things as well as most other people. (Missing rate: 0.8%)

Q5. I feel I do not have much to be proud of.* (Missing rate: 0.5%)

Q6. I feel useless at times.* (Missing rate: 0.3%)

Q7. I feel that I am a person of worth, at least on an equal plane with others. (Missing rate: 0.0%)

Q8. I wish I could have more respect for myself. * (Missing rate: 1.8%)

Q9. All in all, I am inclined to feel that I am a failure.* (Missing rate: 1.5%)

Q10. I take a positive attitude toward myself. (Missing rate: 0.3%)

Obtained from:

Rosenberg, M.: Society and the adolescent self-image. Princeton University Press, Princeton, NJ (1965)

Items marked with an asterisk (*) were reverse worded.

C. Illogical Paired Items

Q1. The U.S. spends too much money on scientific research.1 (Missing rate: 3.5%)

Q2. The U.S. should dedicate more money to finding new scientific discoveries.1 (Missing rate: 2.0%)

Q3. It is healthier to eat while standing up.1 (Missing rate: 4.8%)

Q4. It is better for your body to sit down while you eat.1 (Missing rate: 3.0%)

Q5. People should be knowledgeable about important events in our country.1 (Missing rate: 1.0%)

Q6. It’s okay for people to live their lives without knowing much about what’s going on outside their local community.1 (Missing rate: 1.8%)

Q7. Children should help out around the house without expecting to be paid.1 (Missing rate: 0.5%)

Q8. Children should not be asked to do chores without receiving an allowance.1 (Missing rate: 1.5%)

Q9. Video games are not harmful to human health.1 (Missing rate: 1.3%)

Q10. For health reasons, people should limit the amount of time they spend playing video games.1 (Missing rate: 1.8%)

Q11. Money can solve almost any problem.1 (Missing rate: 0.3%)

Q12. The only problems money can solve are money problems.1 (Missing rate: 0.8%)

Q13. Only foolish people forgive and forget.1 (Missing rate: 1.0%)

Q14. A wise person forgives but does not forget.1 (Missing rate: 1.5%)

Q15. It is sometimes necessary to discipline a child with a good spanking.2 (Missing rate: 0.5%)

Q16. Children should never be spanked.1 (Missing rate: 1.5%)

Q17. Humans have existed in their present form since the beginning of time.3 (Missing rate: 6.0%)

Q18. Humans have evolved from more primitive beings over billions of years.3 (Missing rate: 5.2%)

Q19. Divorce should be avoided unless it is an extreme situation.3 (Missing rate: 1.0%)

Q20. Divorce is painful, but it is preferable to staying in a bad marriage.3 (Missing rate: 2.3%)

Q21. Only women who have been raped should be allowed to have an abortion.1 (Missing rate: 3.5%)

Q22. All women should have the option of having an abortion.1 (Missing rate: 1.8%)

Q23. The U.S. should put more limits on who can own guns.3 (Missing rate: 2.3%)

Q24. The U.S. should do more to protect the rights of people to own guns.3 (Missing rate: 3.5%)

Q25. It should be illegal to use animals in all types of medical research.1 (Missing rate: 1.5%)

Q26. It is okay for scientists to use animals in medical research to find life-saving treatments for humans.1 (Missing rate: 2.5%)

Q27. Society is better off if people make marriage and having children a priority.3 (Missing rate: 2.5%)

Q28. It is okay for society if people prioritize other things over getting married and having children.3 (Missing rate: 4.3%)

Q29. The growing influence of American culture is harmful to cultures in other countries.3 (Missing rate: 5.8%)

Q30. It’s good that American ideas and customs are spreading around the world.3 (Missing rate: 2.0%)

Q31. Vegetarianism is harmful to the environment.1 (Missing rate: 3.8%)

Q32. A vegetarian diet is better for the health of the planet.1 (Missing rate: 2.8%)

Q33. People should floss their teeth every day.1 (Missing rate: 0.8%)

Q34. As long as people brush their teeth regularly, it is okay for them to skip flossing.1 (Missing rate: 1.5%)

Q35. A person should be fair in all situations.1 (Missing rate: 0.5%)

Q36. In some situations, it is more important to be compassionate than fair.1 (Missing rate: 1.8%)

Q37. Education is more important than experience.1 (Missing rate: 0.8%)

Q38. Personal experience and common sense are more important than formal education.4 (Missing rate: 2.5%)

Q39. The U.S. government should use laws to limit adults from eating too many unhealthy foods.1 (Missing rate: 1.5%)

Q40. Adults should be allowed to decide on their own what foods they want to eat.1 (Missing rate: 0.8%)

Q41. In general, it is good for our society when mothers of young children work outside the home.3 (Missing rate: 1.5%)

Q42. It is good for our society when mothers of young children do not have to work outside the home.1 (Missing rate: 2.0%)

Q43. Gay marriage should not be legal.1 (Missing rate: 3.5%)

Q44. Gay people should have the same marriage rights as straight people.1 (Missing rate: 3.5%)

D. Heterogeneous Single Items

Q1. The Porino whales should be protected.1 (Missing rate: 12.3%)

Q2. Chocolate is healthier than vanilla.1 (Missing rate: 4.5%)

Q3. Global warming is a myth.1 (Missing rate: 4.3%)

Q4. Whether a person gets cancer or not depends more on genes than anything else.5 (Missing rate: 6.8%)

Q5. I support the National Food Act.1 (Missing rate: 20.0%)

Q6. I trust social movements.1 (Missing rate: 6.3%)

Q7. I agree with the political views of the Independent Citizens Movement.1 (Missing rate: 26.8%)

Q8. Honesty is more important than sincerity.1 (Missing rate: 4.5%)

Q9. British art is more sophisticated than German art.1 (Missing rate: 26.0%)

Q10. Children are smarter than we think.1 (Missing rate: 0.0%)

Q11. The Trans-Atlantic Cultural Partnership would be a good thing for the United States.1 (Missing rate: 26.5%)

Q12. Millionaires should be allowed to have as many pets as they want in their homes.1 (Missing rate: 3.5%)

Q13. Holiday celebrations should not be allowed in public schools.1 (Missing rate: 2.8%)

Q14. People who use the Internet should pay to keep it running.1 (Missing rate: 3.8%)

Q15. The passage of Lambert’s Law would dramatically reduce school shootings in the U.S.1 (Missing rate: 30.8%)

Q16. There is no evidence for the uncertainty principle in today’s world.1 (Missing rate: 19.3%)

Q17. Wisdom is found, not sought.1 (Missing rate: 4.8%)

Q18. Dramatic events unfold in unforeseen ways.1 (Missing rate: 8.0%)

Q19. The U.S. should reform tax laws from deductible corporate expenses.1 (Missing rate: 18.5%)

Q20. Celebrities should be allowed to change their names.1 (Missing rate: 3.3%)

Q21. Tree frogs need greater protection.1 (Missing rate: 16.3%)

Item Sources

1 = Items created by the study team.

2 = Item adapted from items in the NORC 2014 General Social Survey questionnaire. Retrieved from http://gss.norc.org/Get-Documentation/questionnaires.

3 = Adapted from items drawn from the Pew Research Center’s online question bank. Retrieved in 2016 from http://www.pewresearch.org/question-search/.

4 = Item obtained from:

  • Kreuter, M.W., Buskirk, T.D., Holmes, K., Clark, E.M., Robinson, L., Si, X., Rath, S., Erwin, D., Philipneri, A., Cohen, E., Mathews, K.: What makes cancer survivor stories work? An empirical study among African American women. J. Cancer Surviv. 2 (1), 33–44. (2008)

5 = Adapted from an item in the National Cancer Institute’s 2003 Health Information National Trends Survey (HINTS). Retrieved from http://hints.cancer.gov/default.aspx.

Appendix 2.

Model Fit from Analysis of Self-Esteem and Stress Scales: Confirmatory Factor Analysis with and without Response Style Factor and Latent Class Analysis with and without Acquiescent Response Style Classes

                      Confirmatory Factor Analysis                Latent Class Analysis c                     Multidimensional Nominal
                                                                                                              Response Model
                      Content only     Content & response         Content only     Content & acquiescent      Content & acquiescent
                                       style factors a, b                          response style factors     response style factors
                                                                                   b, d
df                    169              168                        259              258
−2 Log likelihood     30945            30703                      21991            21709                      20448
AIC                   31067            30827                      22273            21993                      20684
BIC (LL-based)        31311            31074                      22836            22560                      21155
χ2                    711.901          469.192
SRMR                  0.092            0.212
GFI                   0.989            0.978
AGFI                  0.985            0.970
RMSEA (90% CI)        0.090            0.067
                      (0.083–0.096)    (0.060–0.074)
CFI                   0.692            0.829
TLI                   0.653            0.807
a

Specified similarly to Figure 3 in Billiet & McClendon (2000).

b

Model fit was significantly better than that of the model with content factors only, based on log-likelihood ratio tests (p < 0.001).

c

Two classes are imposed for each latent variable.

d

Modelled similarly to Kieruj & Moors (2013). The difference is that our model includes only acquiescent response style, whereas Kieruj & Moors (2013) include both acquiescent and extreme response styles.
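The log-likelihood ratio tests referenced in footnote b can be reproduced from the table’s −2 log-likelihood and df values: each comparison adds one parameter (df drops by 1), and the CFA values are entered as positive −2 log-likelihoods, consistent with the reported AIC and BIC. A minimal sketch using only the Python standard library (for a 1-df comparison, the chi-square survival function reduces to erfc(√(x/2))):

```python
import math

def lr_test_1df(neg2ll_restricted, neg2ll_full):
    """Likelihood-ratio test for nested models differing by one parameter.
    The LR statistic is the drop in -2 log-likelihood; with 1 df, the
    chi-square tail probability equals erfc(sqrt(stat / 2))."""
    stat = neg2ll_restricted - neg2ll_full
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# CFA: content-only (df 169) vs. content + response style factor (df 168)
stat, p = lr_test_1df(30945, 30703)   # LR = 242 on 1 df, p far below 0.001

# LCA: content-only (df 259) vs. content + ARS classes (df 258)
stat, p = lr_test_1df(21991, 21709)   # LR = 282 on 1 df, p far below 0.001
```

Both comparisons reject the content-only model decisively, matching footnote b.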

Appendix 3.

Model Parameters from Analysis of Self-Esteem and Stress Scales: Confirmatory Factor Analysis and Latent Class Analysis with Response Style Factor

Columns per item: CFA loadings and intercepts, followed by LCA loadings and LCA intercepts for Categories 1–7.

Content: Self-Esteem Scale
Q41 0.55 6.17 −0.53 −1.08 −2.90 −1.59 0.01 1.01 1.51 3.04
Q42 −1.36 2.17 0.92 4.26 1.77 0.47 −0.57 −1.06 −2.14 −2.73
Q43 0.44 6.43 −1.15 −3.49 −3.88 −2.39 −0.51 1.98 3.24 5.05
Q44 0.50 6.27 −0.54 −1.57 −1.99 −1.58 −0.90 1.01 1.92 3.10
Q45 −1.14 2.55 0.59 3.25 1.38 0.06 −0.88 −0.97 −1.60 −1.24
Q46 −1.12 2.63 0.60 3.04 1.48 0.06 −0.76 −0.42 −1.84 −1.57
Q47 0.31 6.48 −0.54 −1.77 −2.28 −1.85 −0.64 1.16 1.77 3.60
Q48 −1.07 3.83 0.43 2.29 0.70 −0.69 −1.10 −0.06 −1.20 0.06
Q49 −1.17 1.93 1.07 4.94 2.36 0.58 −0.13 −1.26 −3.13 −3.37
Q50 0.60 6.35 −1.39 −4.30 −4.45 −2.43 −0.33 2.42 3.66 5.43
Content: Stress Factor
Q91 1.33 3.77 −0.62 0.53 −0.05 −0.38 −0.16 0.11 −0.53 0.48
Q92 1.18 3.13 −0.74 1.01 0.19 −0.40 −0.24 0.14 −0.64 −0.05
Q93 1.42 3.83 −0.72 0.24 −0.16 −0.24 −0.22 0.22 −0.31 0.46
Q94 −0.74 5.92 0.85 0.07 −1.19 −0.98 −0.02 0.45 0.63 1.03
Q95 −0.36 5.10 0.18 0.25 −0.66 −0.47 −0.08 0.70 −0.13 0.39
Q96 1.35 3.26 −0.88 0.67 0.23 −0.19 −0.20 0.14 −0.75 0.10
Q97 −0.31 5.50 0.22 0.00 −1.01 −0.60 0.05 0.51 0.27 0.78
Q98 −0.60 5.74 0.56 −0.26 −1.03 −0.65 0.08 0.60 0.48 0.79
Q99 1.32 3.73 −0.66 0.45 −0.07 −0.33 −0.26 0.26 −0.40 0.35
Q100 1.48 2.92 −1.45 0.85 0.50 −0.03 −0.47 0.18 −0.92 −0.11
Covariances
Self-Esteem Scale ~ Stress Factor: 0.72 (CFA), −2.01 (LCA)
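The CFA parameterization in Appendix 3 can be read off directly: an observed rating is modeled as the item’s intercept plus its content loading times the content trait, plus an acquiescence factor whose loadings are fixed at 1 for every item, following Billiet & McClendon (2000). A hypothetical sketch with Q41/Q42 parameters from the table (the helper function and the trait values are illustrative, not the authors’ estimation code):

```python
# Illustrative sketch of the Billiet & McClendon (2000)-style CFA:
# observed rating ~ intercept + content loading * content trait
#                   + 1 * acquiescence trait (style loadings fixed at 1).
def expected_rating(intercept, loading, theta_content, theta_style=0.0):
    """Model-implied mean rating for one item (hypothetical helper)."""
    return intercept + loading * theta_content + 1.0 * theta_style

# Q41 (positively worded) vs. Q42 (negatively worded), parameters from the table
pos = expected_rating(6.17, 0.55, theta_content=1.0)   # rises with self-esteem
neg = expected_rating(2.17, -1.36, theta_content=1.0)  # falls with self-esteem

# An acquiescent respondent (theta_style > 0) rates BOTH items higher --
# the stylistic artifact the extra factor is meant to absorb.
pos_acq = expected_rating(6.17, 0.55, 1.0, theta_style=1.0)
neg_acq = expected_rating(2.17, -1.36, 1.0, theta_style=1.0)
```

Note how the sign flip of the loading encodes the reversed wording, while the style trait shifts all items in the same direction.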

Appendix 4.

Model Parameters from Analysis of Self-Esteem and Stress Scales: Multidimensional Nominal Response Model with Response Style Factor

Loadings
Category 1 Category 2 Category 3 Category 4 Category 5 Category 6 Category 7
Content: For all items 0 1 2 3 4 5 6
Style: For all items 0 0 0 0 1 1 1
Intercepts
Category 1 Category 2 Category 3 Category 4 Category 5 Category 6 Category 7
Q41 NA −4.36 −6.40 −4.98 −3.18 −2.05 −1.50
Q42 NA −3.57 −3.89 −3.64 −3.48 −3.00 −2.02
Q43 NA −5.38 −6.63 −5.72 −4.32 −2.34 −1.54
Q44 NA −4.85 −5.50 −5.01 −4.14 −2.10 −1.17
Q45 NA −2.63 −3.52 −3.38 −3.33 −2.62 −1.50
Q46 NA −2.72 −3.58 −2.43 −2.89 −2.29 −1.16
Q47 NA −5.57 −6.28 −5.81 −4.40 −2.47 −1.80
Q48 NA −0.63 −2.30 −1.66 −2.66 −2.56 −1.29
Q49 NA −4.01 −4.85 −4.04 −3.34 −3.42 −2.08
Content: Stress Factor
Q91 NA −0.76 −1.89 −1.14 −1.08 −1.10 −0.65
Q92 NA −1.95 −2.67 −1.72 −1.78 −1.74 −0.97
Q93 NA −0.68 −1.58 −0.90 −1.01 −0.80 −0.50
Q94 NA −3.83 −4.85 −4.21 −2.65 −1.57 −0.89
Q95 NA −2.31 −3.04 −2.22 −1.30 −0.06 −0.62
Q96 NA −1.62 −2.61 −1.56 −1.58 −1.34 −0.67
Q97 NA −3.14 −3.94 −2.91 −1.72 −0.74 −0.66
Q98 NA −3.79 −4.33 −3.38 −2.07 −0.96 −0.64
Q99 NA −0.90 −1.78 −0.95 −1.15 −1.02 −0.61
Q100 NA −2.20 −3.16 −1.95 −2.33 −1.71 −0.88
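The loading pattern above defines a multinomial logit over the seven response categories: the content trait is weighted 0–6 across categories, while the style trait loads only on the agreement-side categories 5–7. A minimal sketch of how these parameters yield category probabilities (the NA entries are treated as reference intercepts fixed at 0; the function name and trait values are illustrative):

```python
import math

# Loadings from the table: identical for all items.
CONTENT_LOADINGS = [0, 1, 2, 3, 4, 5, 6]   # content trait, by category
STYLE_LOADINGS   = [0, 0, 0, 0, 1, 1, 1]   # ARS trait loads on agree-side only

def category_probs(intercepts, theta_content, theta_style):
    """Category probabilities under the multidimensional nominal response
    model; `intercepts` lists Categories 1-7 with the reference (NA) as 0."""
    logits = [a_c * theta_content + a_s * theta_style + c
              for a_c, a_s, c in zip(CONTENT_LOADINGS, STYLE_LOADINGS, intercepts)]
    m = max(logits)                          # stabilize the softmax
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Q41's intercepts from the table, with the NA reference set to 0
q41 = [0, -4.36, -6.40, -4.98, -3.18, -2.05, -1.50]
probs = category_probs(q41, theta_content=0.0, theta_style=0.0)
```

Raising `theta_style` shifts probability mass toward Categories 5–7 for every item, which is how the model separates acquiescence from content.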

Footnotes

Compliance with Ethical Standards

The authors have no conflicts of interest to declare. This study was approved and monitored by a university-affiliated Institutional Review Board.

1

One respondent came from a random-digit-dial (RDD) sample, bringing the total sample size to 401. To keep the sample consistent, this RDD respondent was excluded from the study; removing this individual did not alter the results.


Contributor Information

Sunghee Lee, Institute for Social Research, University of Michigan 426 Thompson St. Ann Arbor, MI 48104, U.S.A.

Fernanda Alvarado-Leiton, Institute for Social Research, University of Michigan 426 Thompson St. Ann Arbor, MI 48104, U.S.A.

Wenshan Yu, Institute for Social Research, University of Michigan 426 Thompson St. Ann Arbor, MI 48104, U.S.A.

Rachel Davis, Department of Health Promotion, Education, and Behavior, University of South Carolina 915 Greene St. Columbia, S.C. 29208, U.S.A.

Timothy P. Johnson, Department of Public Administration, University of Illinois-Chicago, 412 S Peoria St, Chicago, IL 60607, U.S.A.

References

  1. Aday LA, Chiu GY and Andersen R (1980) ‘Methodological issues in health care surveys of the Spanish heritage population.’, American Journal of Public Health, 70(4), pp. 367–374. doi: 10.2105/AJPH.70.4.367. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. American Association for Public Opinion Research (2016) Standard Definitions Final Dispositions of Case Codes and Outcome Rates for Surveys. Available at: https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf (Accessed: 16 October 2018).
  3. Bachman JG and O’Malley PM (1984) ‘Black-White Differences in Self-Esteem: Are They Affected by Response Styles?’, American Journal of Sociology, 90(3), pp. 624–639. doi: 10.1086/228120. [DOI] [Google Scholar]
  4. Baumgartner H and Steenkamp J-BEM (2001) ‘Response Styles in Marketing Research: A Cross-National Investigation’, Journal of Marketing Research, 38(2), pp. 143–156. doi: 10.1509/jmkr.38.2.143.18840. [DOI] [Google Scholar]
  5. Beauducel A, & Herzberg PY (2006). On the performance of maximum likelihood versus means and variance adjusted weighted least squares estimation in CFA. Structural Equation Modeling, 13(2), 186–203. [Google Scholar]
  6. De Beuckelaer A, Weijters B and Rutten A (2010) ‘Using ad hoc measures for response styles: a cautionary note’, Quality & Quantity, 44(4), pp. 761–775. doi: 10.1007/s11135-009-9225-z. [DOI] [Google Scholar]
  7. Billiet JB and Davidov E (2008) ‘Testing the Stability of an Acquiescence Style Factor Behind Two Interrelated Substantive Variables in a Panel Design’, Sociological Methods & Research, 36(4), pp. 542–562. doi: 10.1177/0049124107313901. [DOI] [Google Scholar]
  8. Billiet JB and McClendon MJ (2000) ‘Modeling Acquiescence in Measurement Models for Two Balanced Sets of Items’, Structural Equation Modeling: A Multidisciplinary Journal, 7(4), pp. 608–628. doi: 10.1207/S15328007SEM0704_5. [DOI] [Google Scholar]
  9. Bolt DM, & Johnson TR (2009). Addressing score bias and differential item functioning due to individual differences in response style. Applied Psychological Measurement, 33(5), 335–352. [Google Scholar]
  10. Brewer SE, Campagna EJ and Morrato EH (2019) ‘Advancing regulatory science and assessment of FDA REMS programs: A mixed-methods evaluation examining physician survey response’, Journal of Clinical and Translational Science. Cambridge University Press (CUP), 3(4), pp. 199–209. doi: 10.1017/cts.2019.400. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Cohen S, Kamarck T and Mermelstein R (1983) ‘A Global Measure of Perceived Stress’, Journal of Health and Social Behavior, 24(4), p. 385. doi: 10.2307/2136404. [DOI] [PubMed] [Google Scholar]
  12. Cuellar I, Arnold B and Maldonado R (1995) ‘Acculturation Rating Scale for Mexican Americans-II: A Revision of the Original ARSMA Scale’, Hispanic Journal of Behavioral Sciences, 17(3), pp. 275–304. doi: 10.1177/07399863950173001. [DOI] [Google Scholar]
  13. Curtin F, & Schulz P (1998). Multiple correlations and Bonferroni’s correction. Biological psychiatry, 44(8), 775–777. [DOI] [PubMed] [Google Scholar]
  14. Davis RE et al. (2018) ‘Why Do Latino Survey Respondents Acquiesce? Respondent and Interviewer Characteristics as Determinants of Cultural Patterns of Acquiescence Among Latino Survey Respondents’, Cross-Cultural Research, p. 106939711877450. doi: 10.1177/1069397118774504. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Davis RE, Resnicow K and Couper MP (2011) ‘Survey Response Styles, Acculturation, and Culture Among a Sample of Mexican American Adults’, Journal of Cross-Cultural Psychology, 42(7), pp. 1219–1236. doi: 10.1177/0022022110383317. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Ennis SR, Ríos-Vargas M and Albert NG (2011) The hispanic population: 2010. Available at: http://www.census.gov/prod/cen2010/briefs/c2010br-04.pdf (Accessed: 16 October 2018).
  17. Fang Y, Yang S, Feng B, Ni Y, & Zhang K (2011). Pharmacists’ perception of pharmaceutical care in community pharmacy: a questionnaire survey in Northwest China. Health & social care in the community, 19(2), 189–197. [DOI] [PubMed] [Google Scholar]
  18. Food and Drug Administration (2019) Survey Methodologies to Assess REMS Goals That Relate to Knowledge Guidance for Industry: Draft Guidance. Available at: https://www.fda.gov/media/119789/download (Accessed: 7 December 2020).
  19. Harzing A-W (2006) ‘Response Styles in Cross-national Survey Research’, International Journal of Cross Cultural Management, 6(2), pp. 243–266. doi: 10.1177/1470595806066332. [DOI] [Google Scholar]
  20. Hoffmann S, Mai R and Cristescu A (2013) ‘Do culture-dependent response styles distort substantial relationships?’, International Business Review, 22(5), pp. 814–827. doi: 10.1016/j.ibusrev.2013.01.008. [DOI] [Google Scholar]
  21. Holbrook AL, Green MC and Krosnick JA (2003) ‘Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires’, Public Opinion Quarterly. Oxford University Press, 67(1), pp. 79–125. doi: 10.1086/346010. [DOI] [Google Scholar]
  22. Javaras KN and Ripley BD (2007) ‘An “Unfolding” Latent Variable Model for Likert Attitude Data’, Journal of the American Statistical Association. Taylor & Francis, 102(478), pp. 454–463. doi: 10.1198/016214506000000960. [DOI] [Google Scholar]
  23. Javeline D (1999) ‘Response Effects in Polite Cultures: A Test of Acquiescence in Kazakhstan’, Public Opinion Quarterly. Oxford University Press, 63(1), p. 1. doi: 10.1086/297701. [DOI] [Google Scholar]
  24. de Jong MG, Steenkamp JBE, Fox JP, & Baumgartner H (2008). Using item response theory to measure extreme response style in marketing research: A global investigation. Journal of marketing research, 45(1), 104–115. [Google Scholar]
  25. Johnson T, Kulesa P, Cho YI, & Shavitt S (2005). The relation between culture and response styles: Evidence from 19 countries. Journal of Cross-cultural psychology, 36(2), 264–277. [Google Scholar]
  26. Jordan LA, Marcus AC and Reeder LG (1980) ‘Response Styles in Telephone and Household Interviewing: A Field Experiment’, Public Opinion Quarterly. Oxford University Press, 44(2), pp. 210–222. doi: 10.1086/268585. [DOI] [Google Scholar]
  27. Kamoen N et al. (2017) ‘Why Are Negative Questions Difficult to Answer? On the Processing of Linguistic Contrasts in Surveys’, Public Opinion Quarterly. Oxford University Press, 81(3), pp. 613–635. doi: 10.1093/poq/nfx010. [DOI] [Google Scholar]
  28. Kieruj ND and Moors G (2013) ‘Response style behavior: question format dependent or personal style?’, Quality & Quantity, 47(1), pp. 193–211. doi: 10.1007/s11135-011-9511-4. [DOI] [Google Scholar]
  29. Kreuter MW et al. (2013) Tailoring Health Messages. Routledge. doi: 10.4324/9781315045382. [DOI] [Google Scholar]
  30. Lavrakas PJ (2008) Within-household respondent selection: How best to reduce total survey error. Available at: http://www.mediaratingcouncil.org/MRCPointofView-WithinHHRespondentSelectionMethods.pdf (Accessed: 17 October 2018).
  31. Lechner C and Rammstedt B (2015) ‘Cognitive ability, acquiescence, and the structure of personality in a sample of older adults.’, Psychological Assessment, 27(4), pp. 1301–1311. Available at: http://psycnet.apa.org/fulltext/2015-23329-001.html (Accessed: 17 October 2018). [DOI] [PubMed] [Google Scholar]
  32. Lindfelt T, Ip EJ, Gomez A, & Barnett MJ (2018). The impact of work-life balance on intention to stay in academia: results from a national survey of pharmacy faculty. Research in Social and Administrative Pharmacy, 14(4), 387–390. [DOI] [PubMed] [Google Scholar]
  33. Liu M, Lee S and Conrad FG (2015) ‘Comparing Extreme Response Styles between Agree-Disagree and Item-Specific Scales’, Public Opinion Quarterly. Oxford University Press, 79(4), pp. 952–975. doi: 10.1093/poq/nfv034. [DOI] [Google Scholar]
  34. Marín G, Gamba RJ and Marín BV (1992) ‘Extreme Response Style and Acquiescence among Hispanics’, Journal of Cross-Cultural Psychology. Sage PublicationsSage CA: Thousand Oaks, CA, 23(4), pp. 498–509. doi: 10.1177/0022022192234006. [DOI] [Google Scholar]
  35. Massachusetts General Hospital (2002) National Latino and Asian American Study. Available at: https://www.massgeneral.org/disparitiesresearch/Research/pastresearch/NLAAS-documents.aspx (Accessed: 17 October 2018).
  36. Meisenberg G and Williams A (2008) ‘Are acquiescent and extreme response styles related to low intelligence and education?’, Personality and Individual Differences. Pergamon, 44(7), pp. 1539–1550. doi: 10.1016/J.PAID.2008.01.010. [DOI] [Google Scholar]
  37. Moors G (2010) ‘Ranking the Ratings: A Latent-Class Regression Model to Control for Overall Agreement in Opinion Research’, International Journal of Public Opinion Research. Oxford University Press, 22(1), pp. 93–119. doi: 10.1093/ijpor/edp036. [DOI] [Google Scholar]
  38. Muthén LK and Muthén BO (1998-2017). Mplus. [Computer software]. Los Angeles, CA. [Google Scholar]
  39. Pasek J et al. (2014) ‘Can marketing data aid survey research? Examining accuracy and completeness in consumer-file data’, Public Opinion Quarterly, 78(4), pp. 889–916. doi: 10.1093/poq/nfu043. [DOI] [Google Scholar]
  40. Paulhus DL (1984) ‘Two-component models of socially desirable responding.’, Journal of Personality and Social Psychology, 46(3), pp. 598–609. doi: 10.1037/0022-3514.46.3.598. [DOI] [Google Scholar]
  41. Plieninger H (2017) ‘Mountain or Molehill? A Simulation Study on the Impact of Response Styles’, Educational and Psychological Measurement. SAGE PublicationsSage CA: Los Angeles, CA, 77(1), pp. 32–53. doi: 10.1177/0013164416636655. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. R Core Team (2020). R: A language and environment for statistical computing. [Computer Software]. R Foundation for Statistical Computing, Vienna, Austria: URL https://www.R-project.org/. [Google Scholar]
  43. Rhemtulla M, Brosseau-Liard PÉ, & Savalei V (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological methods, 17(3), 354. [DOI] [PubMed] [Google Scholar]
  44. Rosenberg M (1965) Society and the Adolescent Self-Image. Princeton, NJ: Princeton University Press. [Google Scholar]
  45. Rosenberg M (1979) Conceiving the self. Basic Books. Available at: https://books.google.com/books?id=nUJqAAAAMAAJ&q=Conceiving+the+Self&dq=Conceiving+the+Self&hl=en&sa=X&ved=0ahUKEwiQh9uU1Y3eAhUl_IMKHU7DCbcQ6AEIJzAA (Accessed: 17 October 2018).
  46. Ross CE and Mirowsky J (1984) ‘Socially-Desirable Response and Acquiescence in a Cross-Cultural Survey of Mental Health’, Journal of Health and Social Behavior, 25(2), pp. 189–197. doi: 10.2307/2136668. [DOI] [PubMed] [Google Scholar]
  47. Salazar MS (2015) ‘El dilema de combinar ítems positivos y negativos en escalas’, Psicothema, 27(2), pp. 192–199. doi: 10.7334/psicothema2014.266. [DOI] [PubMed] [Google Scholar]
  48. Saris W et al. (2010) ‘Comparing Questions with Agree/Disagree Response Options to Questions with Item-Specific Response Options’, Survey Research Methods, 4(1), pp. 61–79. doi: 10.18148/srm/2010.v4i1.2682. [DOI] [Google Scholar]
  49. Schuessler TJ, Ruisinger JF, Hare SE, Prohaska ES, & Melton BL (2015). Patient satisfaction with pharmacist-led chronic disease state management programs. Journal of pharmacy practice, 29(5), 484–489. [DOI] [PubMed] [Google Scholar]
  50. Thomas TD, Abts K and Vander Weyden P (2014) ‘Measurement Invariance, Response Styles, and Rural–Urban Measurement Comparability’, Journal of Cross-Cultural Psychology. SAGE PublicationsSage CA: Los Angeles, CA, 45(7), pp. 1011–1027. doi: 10.1177/0022022114532359. [DOI] [Google Scholar]
  51. Thompson EL, Rao PSS, Hayes C, & Purtill C (2019). Dispensing naloxone without a prescription: survey evaluation of Ohio pharmacists. Journal of pharmacy practice, 32(4), 412–421. [DOI] [PubMed] [Google Scholar]
  52. Van Vaerenbergh Y and Thomas TD (2013) ‘Response Styles in Survey Research: A Literature Review of Antecedents, Consequences, and Remedies’, International Journal of Public Opinion Research. Oxford University Press, 25(2), pp. 195–217. doi: 10.1093/ijpor/eds021. [DOI] [Google Scholar]
  53. Warnecke RB et al. (1997) ‘Improving question wording in surveys of culturally diverse populations.’, Annals of epidemiology, 7(5), pp. 334–342. Available at: http://www.ncbi.nlm.nih.gov/pubmed/9250628 (Accessed: 17 October 2018). [DOI] [PubMed] [Google Scholar]
  54. Weijters B and Baumgartner H (2012) ‘Misresponse to Reversed and Negated Items in Surveys: A Review’, Journal of Marketing Research. American Marketing Association, 49(5), pp. 737–747. doi: 10.1509/jmr.11.0368. [DOI] [Google Scholar]
  55. Weijters B, Geuens M and Schillewaert N (2010) ‘The stability of individual response styles.’, Psychological Methods, 15(1), pp. 96–110. doi: 10.1037/a0018721. [DOI] [PubMed] [Google Scholar]
  56. Weijters B, Schillewaert N and Geuens M (2008) ‘Assessing response styles across modes of data collection’, Journal of the Academy of Marketing Science. Springer US, 36(3), pp. 409–422. doi: 10.1007/s11747-007-0077-6. [DOI] [Google Scholar]
  57. Welkenhuysen-Gybels J, Billiet J and Cambré B (2003) ‘Adjustment for Acquiescence in the Assessment of the Construct Equivalence of Likert-Type Score Items’, Journal of Cross-Cultural Psychology. SAGE Publications, 34(6), pp. 702–722. doi: 10.1177/0022022103257070. [DOI] [Google Scholar]
